SELF-ORGANIZATION OF ASSOCIATIVE DATABASE
AND ITS APPLICATIONS
Hisashi Suzuki and Suguru Arimoto
Osaka University, Toyonaka, Osaka 560, Japan
ABSTRACT
An efficient method of self-organizing associative databases is proposed together with applications to robot eyesight systems. The proposed databases can associate any input with some output. In the first half of the discussion, an algorithm of self-organization is proposed. From the aspect of hardware, it produces a new style of neural network. In the latter half, applicability to handwritten letter recognition and to an autonomous mobile robot system is demonstrated.
INTRODUCTION
Let a mapping f : X → Y be given. Here, X is a finite or infinite set, and Y is another finite or infinite set. A learning machine observes any set of pairs (x, y) sampled randomly from X × Y. (X × Y means the Cartesian product of X and Y.) It then computes some estimate f̂ : X → Y of f so as to make the estimation error small in some measure.
Usually we say that the faster the estimation error decreases as the number of samples increases, the better the learning machine. However, such a statement of performance is incomplete, since it ignores the set of candidates for f̂ that is assumed in advance. How, then, should we find good learning machines? To clarify this conception, let us discuss for a while some types of learning machines, and thereby advance the understanding of the self-organization of associative databases.
Parameter Type
An ordinary type of learning machine assumes an equation relating x's and y's with indefinite parameters, namely, a structure of f. This is equivalent to implicitly defining a set F of candidates for f̂. (F is some subset of the mappings from X to Y.) The machine then computes values of the parameters from the observed samples. We call such a type a parameter type.
For a well-defined learning machine, if F ∋ f, then f̂ approaches f as the number of samples increases. In the alternative case, however, some estimation error remains forever. Thus, the problem of designing a learning machine reduces to finding a proper structure of f in this sense.
On the other hand, the assumed structure of f should be as compact as possible to achieve fast learning. In other words, the number of parameters should be small, since if the parameters are few, some f̂ can be uniquely determined even when the observed samples are few. However, this demand of being proper contradicts that of being compact. Consequently, in the parameter type, the better the compactness of an assumed structure that is still proper, the better the learning machine. This is the most elementary consideration when we design learning machines.
Universality and Ordinary Neural Networks
Now suppose that sufficient knowledge of f is given, though f itself is unknown. In this case, it is comparatively easy to find proper and compact structures of f. In the alternative case, however, it is sometimes difficult. A possible solution is to give up compactness and assume an almighty structure that can cover various f's. A combination of orthogonal bases of infinite dimension is such a structure. Neural networks1,2 are its approximations, obtained by truncating the dimension finitely for implementation.
A main topic in designing neural networks is to establish such desirable structures of f. This work includes developing practical procedures that compute the values of the coefficients from the observed samples. Such discussions have flourished since about 1980, and many efficient methods have been proposed. Recently, even hardware units that compute the coefficients in parallel for speed-up have been sold, e.g., ANZA, Mark III, Odyssey and E-1.
Nevertheless, in neural networks there always remains a danger of some error persisting forever in estimating f. Precisely speaking, suppose that a combination of a finite number of the bases can essentially define a structure of f; in other words, suppose that F ∋ f, or f is located near F. In such a case, the estimation error is zero or negligible. However, if f is distant from F, the estimation error never becomes negligible. Indeed, many researchers report that the following situation appears when f is too complex: once the estimation error converges to some value (> 0) as the number of samples increases, it hardly decreases even when the dimension is raised. This property is sometimes a considerable defect of neural networks.
Recursive Type
The recursive type is founded on another methodology of learning, which runs as follows. At the initial stage, with no sample, the set F₀ (instead of the notation F) of candidates for f equals the set of all mappings from X to Y. After observing the first sample (x₁, y₁) ∈ X × Y, F₀ is reduced to F₁ so that f(x₁) = y₁ for any f ∈ F₁. After observing the second sample (x₂, y₂) ∈ X × Y, F₁ is further reduced to F₂ so that f(x₁) = y₁ and f(x₂) = y₂ for any f ∈ F₂. Thus, the candidate set becomes gradually smaller as the observation of samples proceeds. The estimate after observing i samples, which we write f̂ᵢ, is one of the most likely estimates of f selected in Fᵢ. Hence, contrary to the parameter type, the recursive type guarantees that f̂ approaches f as the number of samples increases.
The recursive type, on observing a sample (xᵢ, yᵢ), rewrites the values f̂ᵢ₋₁(x) to f̂ᵢ(x) for those x's correlated with the sample. Hence, this type has an architecture composed of a rule for rewriting and a free memory space. Such an architecture naturally forms a kind of database that builds up its data management in a self-organizing way. However, this database differs from ordinary ones in the following sense: it does not merely record the samples already observed, but computes some estimate of f(x) for any x ∈ X. We call such a database an associative database.
The first subject in constructing associative databases is how to establish the rule for rewriting. For this purpose, we adopt a measure called the dissimilarity. Here, a dissimilarity means a mapping d : X × X → {reals ≥ 0} such that for any (x, x̃) ∈ X × X, d(x, x̃) > 0 whenever f(x) ≠ f(x̃). It is not necessarily defined by a single formula; it is definable with, for example, a collection of rules written in the form "if ... then ...".
The dissimilarity d defines a structure of f locally in X × Y. Hence, even when the knowledge of f is imperfect, we can reflect it in d in some heuristic way. Hence, contrary to neural networks, it is possible to accelerate the speed of learning by establishing d well. In particular, we can easily find simple d's for those f's which process information analogically, as a human does. (See the applications in this paper.) For such f's, the recursive type shows its effectiveness strongly.
We denote a sequence of observed samples by (x₁, y₁), (x₂, y₂), …. One of the simplest constructions of an associative database after observing i samples (i = 1, 2, …) is as follows.

Algorithm 1. At the initial stage, let S₀ be the empty set. For every i = 1, 2, …, let f̂ᵢ₋₁(x) for any x ∈ X equal some y* such that (x*, y*) ∈ Sᵢ₋₁ and

d(x, x*) = min_{(x̃, ỹ) ∈ Sᵢ₋₁} d(x, x̃).   (1)

Furthermore, add (xᵢ, yᵢ) to Sᵢ₋₁ to produce Sᵢ, i.e., Sᵢ = Sᵢ₋₁ ∪ {(xᵢ, yᵢ)}.
Another version, improved to economize memory, is as follows.

Algorithm 2. At the initial stage, let S₀ be composed of an arbitrary element of X × Y. For every i = 1, 2, …, let f̂ᵢ₋₁(x) for any x ∈ X equal some y* such that (x*, y*) ∈ Sᵢ₋₁ and

d(x, x*) = min_{(x̃, ỹ) ∈ Sᵢ₋₁} d(x, x̃).

Furthermore, if f̂ᵢ₋₁(xᵢ) = yᵢ then let Sᵢ = Sᵢ₋₁; otherwise, add (xᵢ, yᵢ) to Sᵢ₋₁ to produce Sᵢ, i.e., Sᵢ = Sᵢ₋₁ ∪ {(xᵢ, yᵢ)}.
In either construction, f̂ᵢ approaches f as i increases. However, the computation time grows in proportion to the size of Sᵢ. The second subject in constructing associative databases is what addressing rule to employ to economize the computation time. In the subsequent chapters, a construction of an associative database for this purpose is proposed. It manages data in the form of a binary tree.
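To make the construction concrete, here is a minimal Python sketch of Algorithms 1 and 2, assuming the dissimilarity d is supplied as an ordinary function; the class and variable names are illustrative and not taken from the paper.

class AssociativeDatabase:
    """Flat associative database: recall by nearest-neighbour search under d."""

    def __init__(self, dissimilarity, economize=False):
        self.d = dissimilarity          # d(x, x_other) >= 0
        self.economize = economize      # False: Algorithm 1, True: Algorithm 2
        self.samples = []               # the stored set S_i of (x, y) pairs

    def estimate(self, x):
        """Return the y* of the stored pair whose x* minimizes d(x, x*)."""
        if not self.samples:
            return None
        x_star, y_star = min(self.samples, key=lambda pair: self.d(x, pair[0]))
        return y_star

    def observe(self, x, y):
        """Process one observed sample (x_i, y_i)."""
        if self.economize and self.estimate(x) == y:
            return                      # the estimate is already correct: do not store
        self.samples.append((x, y))

# Toy usage with a scalar dissimilarity; any d with d(x, x_other) > 0
# whenever f(x) != f(x_other) would serve.
db = AssociativeDatabase(lambda a, b: abs(a - b), economize=True)
for x, y in [(1, 'odd'), (2, 'even'), (3, 'odd'), (5, 'odd')]:
    db.observe(x, y)
print(db.estimate(4))                   # recalls the label of the nearest stored x

Passing economize=True gives the memory-economizing behaviour of Algorithm 2; with economize=False every observed sample is stored, as in Algorithm 1.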
SELF-ORGANIZATION OF ASSOCIATIVE DATABASE
Given a sample sequence (x₁, y₁), (x₂, y₂), …, the algorithm for constructing the associative database is as follows.
Algorithm 3.

Step 1 (Initialization): Let (x[root], y[root]) = (x₁, y₁). Here, x[·] and y[·] are variables assigned to the respective nodes to memorize data. Furthermore, let t = 1.

Step 2: Increase t by 1, and put xₜ in. After resetting a pointer n to the root, repeat the following until n arrives at some terminal node, i.e., a leaf. Let n′ and n″ denote the descendant nodes of n. If d(xₜ, x[n′]) ≤ d(xₜ, x[n″]), let n = n′; otherwise, let n = n″.

Step 3: Display y[n] as the related information. Next, put yₜ in. If y[n] = yₜ, go back to step 2. Otherwise, first establish new descendant nodes n′ and n″. Secondly, let

(x[n′], y[n′]) = (x[n], y[n]),   (2)
(x[n″], y[n″]) = (xₜ, yₜ).   (3)

Finally, go back to step 2. Here, the loop of steps 2-3 can be stopped at any time and also can be continued.
Now, suppose that gate elements, namely artificial "synapses" that play the role of branching by d, are prepared. Then this algorithm yields a new style of neural network in which the gate elements are randomly connected.
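A minimal Python sketch of Algorithm 3 follows; the node layout and names are illustrative assumptions, and ties in the comparison are broken toward the first descendant.

class Node:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.children = None            # None for a leaf, otherwise a pair of nodes

class TreeAssociativeDatabase:
    """Self-organizing binary tree in the spirit of Algorithm 3."""

    def __init__(self, dissimilarity):
        self.d = dissimilarity
        self.root = None

    def _descend(self, x):
        n = self.root
        while n.children is not None:   # step 2: branch toward the less dissimilar child
            left, right = n.children
            n = left if self.d(x, left.x) <= self.d(x, right.x) else right
        return n

    def recall(self, x):
        return None if self.root is None else self._descend(x).y

    def observe(self, x, y):
        if self.root is None:           # step 1: initialization with the first sample
            self.root = Node(x, y)
            return
        leaf = self._descend(x)
        if leaf.y != y:                 # step 3: a wrong recall splits the leaf
            leaf.children = (Node(leaf.x, leaf.y), Node(x, y))

db = TreeAssociativeDatabase(lambda a, b: abs(a - b))
for x, y in [(0, 'low'), (10, 'high'), (5, 'mid')]:
    db.observe(x, y)
print(db.recall(7))                     # descends the tree and prints 'high'

Each query now touches only one node per level, which is the point of the binary-tree addressing: the cost grows with the depth of the tree rather than with the size of Sᵢ.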
LETTER RECOGNITION
Recently, the vertical slitting method for recognizing typographic English letters3, the elastic matching method for recognizing handwritten discrete English letters4, the global training and fuzzy logic search method for recognizing Chinese characters written in square style5, etc., have been published. The self-organization of an associative database realizes the recognition of handwritten continuous English letters.
Fig. 1. Source document.
Fig. 2. Windowing.
Fig. 3. An experiment result (number of nodes and recognition rate versus number of samples).
An image scanner takes a document image (Fig. 1). The letter recognizer uses a parallelogram window that can at least cover the largest letter (Fig. 2), and processes the sequence of letters while shifting the window. That is, the recognizer scans a word in a slanted direction, and it places the window so that its left edge lies on the first black point detected. The window then catches a letter and some part of the succeeding letter. Once recognition of the head letter is performed, its end position, namely the boundary line between the two letters, becomes known. Hence, by starting the scanning from this boundary and repeating the above operations, the recognizer accomplishes the task recursively. Thus the major problem comes down to identifying the head letter in the window.
Considering this, we define the following (a sketch of the dissimilarity appears after this list).
• Regard window images as x's, and define X accordingly.
• For a pair (x, x̃) ∈ X × X, denote by B̃ a black point in the area to the left of the boundary on window image x̃. Project each B̃ onto window image x. Then measure the Euclidean distance δ between B̃ and the black point B on x closest to B̃. Let d(x, x̃) be the sum of the δ's over all black points B̃ on x̃ divided by the number of B̃'s.
• Regard couples of the "reading" and the position of the boundary as y's, and define Y accordingly.
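The following Python sketch illustrates the window dissimilarity just defined, under the assumption that a window image is represented as a list of (row, column) coordinates of its black points and that the boundary column of the stored image is known; the representation and names are illustrative, not the authors' implementation.

import math

def window_dissimilarity(x, x_ref, boundary_col):
    """Average distance from each black point of x_ref left of its boundary
    to the nearest black point of the query image x."""
    ref_points = [p for p in x_ref if p[1] < boundary_col]
    if not ref_points or not x:
        return float('inf')             # degenerate comparison: treat as maximally dissimilar
    total = 0.0
    for (r, c) in ref_points:
        total += min(math.hypot(r - r2, c - c2) for (r2, c2) in x)
    return total / len(ref_points)

# Two nearly identical letters give a small value; a displaced stroke raises it.
a = [(0, 0), (1, 0), (2, 0), (2, 1)]
b = [(0, 0), (1, 0), (2, 0), (2, 2)]
print(window_dissimilarity(a, a, boundary_col=3))   # 0.0
print(window_dissimilarity(b, a, boundary_col=3))   # 0.25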
An operator interactively teaches the recognizer the relation between window images and reading & boundary using Algorithm 3. Precisely, if the recalled reading is incorrect, the operator teaches the correct reading via the console; moreover, if the boundary position is incorrect, he teaches the correct position via the mouse.
Fig. 1 partially shows a document image used in this experiment. Fig. 3 shows the change in the number of nodes and in the recognition rate, defined as the relative frequency of correct answers over the past 1000 trials. Specifications of the window are height = 20 dots, width = 10 dots, and slant angle = 68 deg. In this example, the levels of the tree were distributed over 6-19 at time 4000, and the recognition rate converged to about 74%. Experimentally, the recognition rate converges to about 60-85% in most cases, and to 95% in rare cases. However, it does not attain 100% since, e.g., "c" and "e" are not distinguishable because of excessive fluctuation in writing. If the consistency of the x-y relation is not assured, as here, the number of nodes increases endlessly (cf. Fig. 3). Hence, it is sensible to stop the learning when the recognition rate attains some upper limit. To improve the recognition rate further, we must consider the spelling of words; this is one of the future subjects.
OBSTACLE AVOIDING MOVEMENT
Various systems of camera-based autonomous mobile robots have been actively reported6-10. The system built by the authors (Fig. 4) also belongs to this category. In mathematical methodologies, the problem of obstacle-avoiding movement is usually solved as a cost-minimization problem under some artificially established cost criterion. In contrast, the self-organization of an associative database reproduces faithfully the cost criterion of an operator; therefore, the motion of the robot after learning becomes very natural.
The length, width and height of the robot are all about 0.7 m, and the weight is about 30 kg. The visual angle of the camera is about 55 deg. The robot has the following three factors of motion: it turns less than ±30 deg, advances less than 1 m, and controls its speed below 3 km/h. The experiment was done on a passageway of width 2.5 m inside the building in which the authors' laboratories are located (Fig. 5). For experimental purposes, we arranged boxes, smoking stands, gas cylinders, stools, handcarts, etc. on the passageway at random. We let the robot take an image through the camera, recall a similar image, and trace the route previously recorded on it. For this purpose, we define the following.
• Let the camera face 28 deg downward to take an image, and process it through a low-pass filter. Scanning the filtered image vertically from bottom to top, find the first point C where the luminance changes excessively. Then substitute white for all points from the bottom to C, and black for all points from C to the top (Fig. 6). (If no obstacle exists just in front of the robot, the white area shows the "free" area where the robot can move around.) Regard binary 32 × 32 dot images processed in this way as x's, and define X accordingly.
• For every (x, x̃) ∈ X × X, let d(x, x̃) be the number of black points in the exclusive-or image between x and x̃ (a sketch appears after this list).
• Regard as y's the images obtained by drawing routes on the images x, and define Y accordingly.
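A sketch of this exclusive-or count on binary images, using NumPy arrays of zeros and ones; sizes and names here are purely illustrative.

import numpy as np

def xor_dissimilarity(x, x_other):
    """Number of pixels that differ between two binary images."""
    return int(np.count_nonzero(np.logical_xor(x, x_other)))

img_a = np.zeros((32, 32), dtype=np.uint8)
img_b = np.zeros((32, 32), dtype=np.uint8)
img_b[:4, :] = 1                         # a small strip of "obstacle" pixels differs
print(xor_dissimilarity(img_a, img_b))   # 128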
The robot superimposes, on the current camera image x, the route recalled for x, and asks the operator for instructions. The operator judges subjectively whether the suggested route is appropriate or not. If it is not, he draws a desirable route on x with the mouse to teach a new y to the robot. This operation implicitly defines a sample sequence of (x, y) reflecting the cost criterion of the operator.
Fig. 4. Configuration of the autonomous mobile robot system (stationary unit and mobile unit).
Fig. 5. Experimental environment (rooms, passageways and section labels 11-14 and 21-24).
Fig. 6. Processing for obstacle avoiding movement (camera image, preprocessing, course suggestion).
Fig. 7. Processing for position identification (camera image, preprocessing, search).
We define the satisfaction rate as the relative frequency of acceptable route suggestions over the past 100 trials. In a typical experiment, the change of the satisfaction rate showed a tendency similar to Fig. 3, and it attained about 95% around time 800. Note that the remaining 5% does not directly mean the percentage of collisions. (In practice, we prevent collisions by adopting a supplementary measure.) At time 800, the number of nodes was 145, and the levels of the tree were distributed over 6-17.
The proposed method delicately reflects various characteristics of the operator. For example, a robot trained by an operator O moves slowly with ample space against obstacles, while one trained by another operator O' brushes quickly past obstacles. This fact gives us a hint on a method of printing "characters" into machines.
POSITION IDENTIFICATION
The robot can identify its position by recalling, for a camera image, a similar landscape together with its position data. For this purpose, in principle, it suffices to regard camera images and position data as x's and y's, respectively. However, memory capacity is finite in actual computers; hence we cannot avoid compressing the camera images at a slight loss of information. Such compression is admissible as long as the precision of position identification remains in an acceptable range. Thus, the major problem comes down to finding a suitable compression method.
In the experimental environment (Fig. 5), juts occur on the passageway at intervals of 3.6 m, and each section between adjacent juts has at most one door. The robot roughly identifies, from the surrounding landscape, which section it is in, and it temporarily uses a triangular surveying technique if an exact measure is necessary. To realize the former task, we define the following.
• Turn the camera to take a panorama image of 360 deg. Scanning the center line horizontally, substitute black for the points where the luminance changes excessively and white for the other points (Fig. 7). Regard binary 360-dot line images processed in this way as x's, and define X accordingly.
• For every (x, x̃) ∈ X × X, project each black point A on x onto x̃, and measure the Euclidean distance δ between A and the black point Ã on x̃ closest to A. Let the summation of the δ's be S. Similarly, calculate S̃ by exchanging the roles of x and x̃. Denoting the numbers of A's and Ã's respectively by n and ñ, define

d(x, x̃) = (1/2) (S/n + S̃/ñ).   (4)

• Regard the positive integers labeled on the sections as y's (cf. Fig. 5), and define Y accordingly. (A sketch of Eq. (4) appears after this list.)
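Below is a small Python sketch of Eq. (4), assuming each 360-dot line image is given simply as the list of positions of its black points; distances are measured along the line for brevity, whereas a circular (mod 360) distance would be more faithful to a panorama.

def one_sided_sum(points_a, points_b):
    """Sum over the points of A of the distance to the nearest point of B."""
    return sum(min(abs(a - b) for b in points_b) for a in points_a)

def line_dissimilarity(x, x_other):
    if not x or not x_other:
        return float('inf')
    s = one_sided_sum(x, x_other)       # S : points of x matched into x_other
    s_other = one_sided_sum(x_other, x) # S~: points of x_other matched into x
    n, n_other = len(x), len(x_other)
    return 0.5 * (s / n + s_other / n_other)   # Eq. (4)

print(line_dissimilarity([10, 90, 200], [12, 95, 198]))   # 3.0 for these similar scenes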
In the learning mode, the robot checks its exact position with a counter that is reset periodically by the operator. The robot runs arbitrarily on the passageways within an 18 m area and learns the relation between landscapes and position data. (Position identification beyond the 18 m area is achieved by crossing plural databases with one another.) This task is automatic except for the periodic reset of the counter; that is, it is a kind of learning without a teacher.
We define the identification rate as the relative frequency of correct recalls of position data over the past 100 trials. In a typical example, it converged to about 83% around time 400. At time 400, the number of nodes was 202, and the levels of the tree were distributed over 5-22. Since the 17% of identification failures can be rejected by considering the trajectory, no problem arises in practical use. To improve the identification rate, the compression ratio of the camera images must be loosened; such a possibility depends on improvements in hardware in the future.
Fig. 8 shows an example of the actual motion of the robot based on the database for obstacle-avoiding movement and that for position identification. This example corresponds to a case of moving from 14 to 23 in Fig. 5. Here, the time interval per frame is about 40 sec.
Fig. 8. Actual motion of the robot.
CONCLUSION
A method of self-organizing associative databases was proposed together with its application to robot eyesight systems. The machine decomposes an unknown global structure into a set of known local structures and learns universally any input-output response. This framing of the problem implies a wide area of application beyond the examples shown in this paper.
A defect of Algorithm 3 for self-organization is that the tree is well balanced only for a subclass of structures of f. A remaining subject is to widen this class. A probable solution is to abolish the addressing rule that depends directly on the values of d and, instead, to establish another rule depending on the distribution function of the values of d. This is now under investigation.
REFERENCES
1. Hopfield, J. J. and D. W. Tank, "Computing with Neural Circuits: A Model," Science 233 (1986), pp. 625-633.
2. Rumelhart, D. E. et al., "Learning Representations by Back-Propagating Errors," Nature 323 (1986), pp. 533-536.
3. Hull, J. J., "Hypothesis Generation in a Computational Model for Visual Word Recognition," IEEE Expert, Fall (1986), pp. 63-70.
4. Kurtzberg, J. M., "Feature Analysis for Symbol Recognition by Elastic Matching," IBM J. Res. Develop. 31-1 (1987), pp. 91-95.
5. Wang, Q. R. and C. Y. Suen, "Large Tree Classifier with Heuristic Search and Global Training," IEEE Trans. Pattern Anal. & Mach. Intell. PAMI-9-1 (1987), pp. 91-102.
6. Brooks, R. A. et al., "Self Calibration of Motion and Stereo Vision for Mobile Robots," 4th Int. Symp. of Robotics Research (1987), pp. 267-276.
7. Goto, Y. and A. Stentz, "The CMU System for Mobile Robot Navigation," 1987 IEEE Int. Conf. on Robotics & Automation (1987), pp. 99-105.
8. Madarasz, R. et al., "The Design of an Autonomous Vehicle for the Disabled," IEEE Jour. of Robotics & Automation RA-2-3 (1986), pp. 117-125.
9. Triendl, E. and D. J. Kriegman, "Stereo Vision and Navigation within Buildings," 1987 IEEE Int. Conf. on Robotics & Automation (1987), pp. 1725-1730.
10. Turk, M. A. et al., "Video Road-Following for the Autonomous Land Vehicle," 1987 IEEE Int. Conf. on Robotics & Automation (1987), pp. 273-279.
A MEAN FIELD THEORY OF LAYER IV OF VISUAL CORTEX
AND ITS APPLICATION TO ARTIFICIAL NEURAL NETWORKS*
Christopher L. Scofield
Center for Neural Science and Physics Department
Brown University
Providence, Rhode Island 02912
and
Nestor, Inc., 1 Richmond Square, Providence, Rhode Island,
02906.
ABSTRACT
A single cell theory for the development of selectivity and ocular dominance in visual cortex has been presented previously by Bienenstock, Cooper and Munro1. This has been extended to a network applicable to layer IV of visual cortex2. In this paper we present a mean field approximation that captures in a fairly transparent manner the qualitative, and many of the quantitative, results of the network theory. Finally, we consider the application of this theory to artificial neural networks and show that a significant reduction in architectural complexity is possible.
A SINGLE LAYER NETWORK AND THE MEAN FIELD
APPROXIMATION
We consider a single layer network of ideal neurons which receive signals from outside of the layer and from cells within the layer (Figure 1). The activity of the ith cell in the network is

c_i = m_i · d + Σ_j L_ij c_j.   (1)

Here d is a vector of afferent signals to the network. Each cell receives input from n fibers outside of the cortical network through the matrix of synapses m_i. Intra-layer input to each cell is then transmitted through the matrix of cortico-cortical synapses L.
Figure 1: The general single layer recurrent network. Light circles are the LGN-cortical synapses. Dark circles are the (non-modifiable) cortico-cortical synapses.
We now expand the response of the ith cell into individual terms describing the number of cortical synapses traversed by the signal d before arriving through synapse L_ij at cell i. Expanding c_j in (1), the response of cell i becomes

c_i = m_i·d + Σ_j L_ij m_j·d + Σ_j Σ_k L_ij L_jk m_k·d + Σ_j Σ_k Σ_n L_ij L_jk L_kn m_n·d + …   (2)

Note that each term contains a factor of the form Σ_j L_qj m_j·d. This factor describes the first-order effect, on cell q, of the cortical transformation of the signal d. The mean field approximation consists of estimating this factor to be a constant, independent of cell location:

Σ_j L_qj m_j·d = constant.   (3)
This assumption does not imply that each cell in the network is selective to the same pattern (and thus that m_i = m_j). Rather, the assumption is that the vector sum is a constant: each cell in the network is surrounded by a population of cells which represent, on average, all possible pattern preferences, so the vector sum of the afferent synaptic states describing these pattern preferences is a constant independent of location.
Finally, if we assume that the lateral connection strengths are a function only of i - j, then L_ij becomes a circulant matrix, so that Σ_j L_ij = Σ_j L_ji = N L_0 = constant, where N is the average number of intracortical synapses per cell and L_0 their average strength.
Then the response of cell i becomes

c_i = m_i·d + (N L_0 / (1 - N L_0)) c̄,   for |N L_0| < 1,   (4)

where we define the spatial average of cortical cell activity c̄ = m̄·d, and N is the average number of intracortical synapses.
Here, in a manner similar to that in the theory of magnetism, we have replaced the effect of individual cortical cells by their average effect (as though all other cortical cells could be replaced by a single 'effective' cell, Figure 2). Note that we have retained all orders of synaptic traversal of the signal d.
Thus, we now focus on the activity of the layer after 'relaxation' to equilibrium. In the mean field approximation we can therefore write

c_i = (m_i - a)·d,   (5)

where the mean field

a = α m̄,   with α = -N L_0 / (1 - N L_0),

and we assume that L_0 < 0 (the network is, on average, inhibitory).
Figure 2: The single layer mean field network. Detailed connectivity between all cells of the network is replaced with a single (non-modifiable) synapse from an 'effective' cell.
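The numerical sketch below compares the exact equilibrium of Eq. (1) with the mean-field form c_i ≈ (m_i - a)·d for random afferent synapses and random inhibitory intracortical weights whose row sums scatter around N L_0 < 0; the parameter values and the use of NumPy are illustrative assumptions.

import numpy as np

# Compare the exact fixed point of c = M d + L c (Eq. 1) with the mean-field
# response c_i ~ (m_i - a).d, where a = alpha * m_bar and
# alpha = -N*L0 / (1 - N*L0) follows from summing the series in Eq. (2).

rng = np.random.default_rng(0)
n_cells, n_inputs = 200, 50
M = rng.normal(size=(n_cells, n_inputs))             # afferent synapses m_i (rows)
d = rng.normal(size=n_inputs)                        # afferent signal vector

NL0 = -0.5                                           # average total intracortical weight (< 0)
L = (NL0 / n_cells) * rng.uniform(0.5, 1.5, size=(n_cells, n_cells))

c_exact = np.linalg.solve(np.eye(n_cells) - L, M @ d)   # exact equilibrium of Eq. (1)

m_bar = M.mean(axis=0)                               # spatial average of the afferent synapses
alpha = -NL0 / (1.0 - NL0)
c_mf = (M - alpha * m_bar) @ d                       # mean-field response

err = np.abs(c_exact - c_mf)
print(err.max(), np.abs(c_exact).max())              # residual error is small relative to c

In this setting the mean-field response differs from the exact one only by a small residual, which is the content of the approximation.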
LEARNING IN THE CORTICAL NETWORK
We will first consider evolution of the network according to a synaptic modification rule that has been studied in detail, for single cells, elsewhere1,3. We consider the LGN-cortical synapses to be the site of plasticity and assume, for maximum simplicity, that there is no modification of cortico-cortical synapses. Then

ṁ_i = φ(c_i, c̄_i) d,   L̇_ij = 0.   (6)

In what follows c̄ denotes the spatial average over cortical cells, while c̄_i denotes the time-averaged activity of the ith cortical cell. The function φ has been discussed extensively elsewhere. Here we note that φ describes a function of the cell response that has both hebbian and anti-hebbian regions.
This leads to a very complex set of non-linear stochastic equations that have been partially analyzed elsewhere2. In general, the afferent synaptic state has fixed points that are stable and selective, and unstable fixed points that are non-selective1,2. These arguments may now be generalized for the network. In the mean field approximation,

ṁ_i(a) = φ(c_i(a), c̄_i(a)) d.   (7)

The mean field a has a time-dependent component m̄. This varies as the average over all of the network's modifiable synapses and, in most environmental situations, should change slowly compared to the change of the modifiable synapses of a single cell. Then in this approximation we can write

d/dt (m_i(a) - a) = φ[(m_i(a) - a)·d] d.   (8)

We see that there is a mapping

m_i' ↔ m_i(a) - a   (9)

such that for every m_i(a) there exists a corresponding (mapped) point m_i' which satisfies the original equation of the zero-mean-field theory. It can be shown2,4 that for every fixed point of m_i(a = 0), there exists a corresponding fixed point m_i(a) with the same selectivity and stability properties. The fixed points are available to the neurons if there is sufficient inhibition in the network (|L_0| is sufficiently large).
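The sketch below illustrates the mapping of Eqs. (8)-(9) numerically: with a constant mean field a, the trajectory of m(a) - a coincides with the zero-mean-field trajectory. The particular φ used, φ(c, θ) = c(c - θ) with a sliding threshold θ, is the usual BCM choice and is an assumption here, since the text only requires hebbian and anti-hebbian regions; all parameter values are arbitrary.

import numpy as np

def simulate(a, m_init, patterns, steps=5000, eta=0.01, tau=0.02, seed=3):
    """Evolve one cell under m_dot = phi(c, theta) d with c = (m - a).d."""
    rng = np.random.default_rng(seed)    # identical pattern sequence for every call
    m, theta = m_init.astype(float).copy(), 0.0
    for _ in range(steps):
        d = patterns[rng.integers(len(patterns))]
        c = (m - a) @ d                  # mean-field response of Eq. (5)
        theta += tau * (c * c - theta)   # sliding modification threshold
        m += eta * c * (c - theta) * d   # BCM-style update, phi(c, theta) * d
    return m

patterns = np.array([[1.0, 0.0], [0.0, 1.0]])
a = np.array([0.4, 0.4])                 # constant inhibitory mean field
m0 = np.array([0.5, 0.2])

m_zero = simulate(np.zeros(2), m0, patterns)      # no mean field
m_field = simulate(a, m0 + a, patterns)           # same initial state written as m' + a
print(np.max(np.abs((m_field - a) - m_zero)))     # ~0: the shifted trajectories coincide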
APPLICATION OF THE MEAN FIELD NETWORK TO
LAYER IV OF VISUAL CORTEX
Neurons in the primary visual cortex of normal adult cats are sharply tuned for the orientation of an elongated slit of light, and most are activated by stimulation of either eye. Both of these properties--orientation selectivity and binocularity--depend on the type of visual environment experienced during a critical period of early postnatal development. For example, deprivation of patterned input during this critical period leads to loss of orientation selectivity, while monocular deprivation (MD) results in a dramatic shift in the ocular dominance of cortical neurons such that most will be responsive exclusively to the open eye. The ocular dominance shift after MD is the best known and most intensively studied type of visual cortical plasticity.
The behavior of visual cortical cells under various rearing conditions suggests that some cells respond more rapidly to environmental changes than others. In monocular deprivation, for example, some cells remain responsive to the closed eye in spite of the very large shift of most cells to the open eye. Singer et al.5 found, using intracellular recording, that geniculo-cortical synapses on inhibitory interneurons are more resistant to monocular deprivation than are synapses on pyramidal cell dendrites. Recent work suggests that the density of inhibitory GABAergic synapses in kitten striate cortex is also unaffected by MD during the critical period6,7.
These results suggest that some LGN-cortical synapses modify rapidly, while others modify relatively slowly, with slow modification of some cortico-cortical synapses. Excitatory LGN-cortical synapses onto excitatory cells may be those that modify primarily. To embody these facts we introduce two types of LGN-cortical synapses: those (m_i) that modify and those (z_k) that remain relatively constant. In a simple limit we have

ṁ_i = φ(c_i, c̄_i) d   and   ż_k = 0.   (10)

We assume, for simplicity and consistent with the above physiological interpretation, that these two types of synapses are confined to two different classes of cells and that both the left and right eyes have similar synapses (both m_i or both z_k) on a given cell. Then, for binocular cells, in the mean field approximation,

c_i = (m_i^l - a^l)·d^l + (m_i^r - a^r)·d^r,   (11)

where d^{l(r)} are the explicit left (right) eye time-averaged signals arriving from the LGN. Note that a^{l(r)} contains terms from the modifiable and non-modifiable synapses:

a^{l(r)} = α (m̄^{l(r)} + z̄^{l(r)}).
Under conditions of monocular deprivation, the animal is reared with one eye closed. For the sake of analysis, assume that the right eye is closed and that only noise-like signals arrive at cortex from the right eye. Then the environment of the cortical cells is

d = (d^l, n).   (12)

Further, assume that the left eye synapses have reached their selective fixed point, selective to pattern d_1^l. Then (m_i^l, m_i^r) → (m_i^{l*}, x_i) with |x_i| << |m_i^{l*}|. Following the methods of BCM, a local linear analysis of the φ-function is employed to show that for the closed eye

x_i = α (1 - λα)^{-1} z̄^r,   (13)
where λ = N_m/N is the ratio of the number of modifiable cells to the total number of cells in the network. That is, the asymptotic state of the closed-eye synapses is a scaled function of the mean field due to the non-modifiable (inhibitory) cortical cells. The scale of this state is set not only by the proportion of non-modifiable cells but, in addition, by the averaged intracortical synaptic strength L_0.
Thus, contrasted with the zero-mean-field theory, the deprived-eye LGN-cortical synapses do not go to zero. Rather, they approach a constant value that depends on the average inhibition produced by the non-modifiable cells, in such a way that the asymptotic output of the cortical cell is zero (it cannot be driven by the deprived eye). However, lessening the effect of inhibitory synapses (e.g. by application of an inhibitory blocking agent such as bicuculline) reduces the magnitude of a, so that one could once more obtain a response from the deprived eye.
We find, consistent with previous theory and experiment, that most learning can occur at the LGN-cortical synapses, since the inhibitory (cortico-cortical) synapses need not modify. Some non-modifiable LGN-cortical synapses are required.
THE MEAN FIELD APPROXIMATION AND
ARTIFICIAL NEURAL NETWORKS
The mean field approximation may be applied to networks in
which the cortico-cortical feedback is a general function of cell
activity. In particular, the feedback may measure the difference
between the network activity and memories of network activity.
In this way, a network may be used as a content addressable
memory.
We have been discussing the properties of a mean
field network after equilibrium has been reached. We now focus
on the detailed time dependence of the relaxation of the cell
activity to a state of equilibrium.
Hopfield8 introduced a simple formalism for the analysis of the time dependence of network activity. In this model, network activity is mapped onto a physical system in which the state of neuron activity is considered as a 'particle' on a potential energy surface. Identification of the pattern occurs when the activity 'relaxes' to a nearby minimum of the energy. Thus minima are employed as the sites of memories. For a Hopfield network of N neurons, the intra-layer connectivity required is of order N^2. This connectivity is a significant constraint on the practical implementation of such systems for large-scale problems. Further, the Hopfield model allows a storage capacity which is limited to m < N memories8,9. This is a result of the proliferation of unwanted local minima in the 'energy' surface.
Recently, Bachmann et al.10 have proposed a model for the relaxation of network activity in which memories of activity patterns are the sites of negative 'charges', and the activity caused by a test pattern is a positive test 'charge'. In this model, the energy function is the electrostatic energy of the (unit) test charge with the collection of charges at the memory sites:

E = -(1/L) Σ_j Q_j |μ - x_j|^(-L),   (14)

where μ(0) is a vector describing the initial network activity caused by a test pattern, and x_j is the site of the jth memory. L is a parameter related to the network size.
This model has the advantage that the storage density is not restricted by the network size as it is in the Hopfield model, and in addition, the architecture employs a connectivity of order m × N.
Note that at each stage in the settling of μ(t) to a memory (of network activity) x_j, the only feedback from the network to each cell is the scalar

Σ_j Q_j |μ - x_j|^(-L).   (15)

This quantity is an integrated measure of the distance of the current network state from the stored memories. Importantly, this measure is the same for all cells; it is as if a single virtual cell were computing the distance in activity space between the current state and the stored states. The result of the computation is then broadcast to all of the cells in the network. This is a generalization of the idea that the detailed activity of each cell in the network need not be fed back to each cell. Rather, some global measure, performed by a single 'effective' cell, is all that is needed in the feedback.
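A toy relaxation under the energy of Eq. (14) can be written in a few lines; the normalized descent step, the charges and the dimensions below are illustrative assumptions for numerical convenience rather than the procedure of Bachmann et al.

import numpy as np

# E(mu) = -(1/L) * sum_j Q_j |mu - x_j|^(-L); the test activity mu is relaxed
# downhill and settles at the nearest stored memory x_j.

rng = np.random.default_rng(2)
N, m, L = 16, 4, 8                                  # cells, memories, exponent L
memories = rng.normal(size=(m, N))                  # stored activity patterns x_j
charges = np.ones(m)                                # the Q_j

def grad_E(mu):
    g = np.zeros_like(mu)
    for Q, x in zip(charges, memories):
        diff = mu - x
        r = np.linalg.norm(diff)
        g += Q * r ** (-L - 2) * diff               # gradient of -(1/L) * Q * r**(-L)
    return g

mu = memories[1] + 0.3 * rng.normal(size=N)         # noisy version of memory 1
for _ in range(300):
    g = grad_E(mu)
    mu -= 0.05 * g / (np.linalg.norm(g) + 1e-12)    # fixed-length step downhill

dists = np.linalg.norm(memories - mu, axis=1)
print(int(np.argmin(dists)), float(dists.min()))    # index 1, distance close to zero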
DISCUSSION
We have been discussing a formalism for the analysis of
networks of ideal neurons based on a mean field approximation
of the detailed activity of the cells in the network. We find that
a simple assumption concerning the spatial distribution of the
pattern preferences of the cells allows a great simplification of
the analysis. In particular, the detailed activity of the cells of
the network may be replaced with a mean field that in effect is
computed by a single 'effective' cell.
Further, the application of this formalism to the cortical layer
IV of visual cortex allows the prediction that much of learning in
cortex may be localized to the LGN-cortical synaptic states, and
that cortico-cortical plasticity is relatively unimportant. We find,
in agreement with experiment, that monocular deprivation of
the cortical cells will drive closed-eye responses to zero, but
chemical blockage of the cortical inhibitory pathways would
reveal non-zero closed-eye synaptic states.
Finally, the mean field approximation allows the development of single-layer models of memory storage that are unrestricted in storage density but require a connectivity of order m × N. This is significant for the fabrication of practical content addressable memories.
ACKNOWLEDGEMENTS
I would like to thank Leon Cooper for many helpful discussions and the contributions he made to this work.
*This work was supported by the Office of Naval Research and the Army Research Office under contracts #N00014-86-K-0041 and #DAAG-29-84-K-0202.
REFERENCES
[1] Bienenstock, E. L., Cooper, L. N. & Munro, P. W. (1982) J. Neuroscience 2, 32-48.
[2] Scofield, C. L. (1984) Unpublished Dissertation.
[3] Cooper, L. N., Munro, P. W. & Scofield, C. L. (1985) in Synaptic Modification, Neuron Selectivity and Nervous System Organization, ed. C. Levy, J. A. Anderson & S. Lehmkuhle (Erlbaum Assoc., N. J.).
[4] Cooper, L. N. & Scofield, C. L. (to be published) Proc. Natl. Acad. Sci. USA.
[5] Singer, W. (1977) Brain Res. 134, 508-000.
[6] Bear, M. F., Schmechel, D. M., & Ebner, F. F. (1985) J. Neurosci. 5, 1262-0000.
[7] Mower, G. D., White, W. F., & Rustad, R. (1986) Brain Res. 380, 253-000.
[8] Hopfield, J. J. (1982) Proc. Natl. Acad. Sci. USA 79, 2554-2558.
[9] Hopfield, J. J., Feinstein, D. I., & Palmer, R. G. (1983) Nature 304, 158-159.
[10] Bachmann, C. M., Cooper, L. N., Dembo, A. & Zeitouni, O. (to be published) Proc. Natl. Acad. Sci. USA.
STORING COVARIANCE BY THE ASSOCIATIVE
LONG-TERM POTENTIATION AND DEPRESSION
OF SYNAPTIC STRENGTHS IN THE HIPPOCAMPUS
Patric K. Stanton* and Terrence J. Sejnowski†
Department of Biophysics
Johns Hopkins University
Baltimore, MD 21218
ABSTRACT
In modeling studies of memory based on neural networks, both the selective enhancement and depression of synaptic strengths are required for efficient storage of information (Sejnowski, 1977a,b; Kohonen, 1984; Bienenstock et al, 1982; Sejnowski and Tesauro, 1989). We have tested this assumption in the hippocampus, a cortical structure of the brain that is involved in long-term memory. A brief, high-frequency activation of excitatory synapses in the hippocampus produces an increase in synaptic strength known as long-term potentiation, or LTP (Bliss and Lomo, 1973), that can last for many days. LTP is known to be Hebbian since it requires the simultaneous release of neurotransmitter from presynaptic terminals coupled with postsynaptic depolarization (Kelso et al, 1986; Malinow and Miller, 1986; Gustafsson et al, 1987). However, a mechanism for the persistent reduction of synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic trees of hippocampal pyramidal cells of field CA1, and found that a low-frequency input which, by itself, does not persistently change synaptic strength, can either increase (associative LTP) or decrease in strength (associative long-term depression or LTD) depending upon whether it is positively or negatively correlated in time with a second, high-frequency bursting input. LTP of synaptic strength is Hebbian, and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity. Thus, associative LTP and associative LTD are capable of storing information contained in the covariance between separate, converging hippocampal inputs.
*Present address: Departments of Neuroscience and Neurology, Albert Einstein College of Medicine, 1410 Pelham Parkway South, Bronx, NY 10461 USA.
†Present address: Computational Neurobiology Laboratory, The Salk Institute, P.O. Box 85800, San Diego, CA 92138 USA.
INTRODUCTION
Associative LTP can be produced in some hippocampal neurons when low-frequency (weak) and high-frequency (strong) inputs to the same cells are simultaneously activated (Levy and Steward, 1979; Levy and Steward, 1983; Barrionuevo and Brown, 1983). When stimulated alone, a weak input does not have a long-lasting effect on synaptic strength; however, when paired with stimulation of a separate strong input sufficient to produce homosynaptic LTP of that pathway, the weak pathway is associatively potentiated. Neural network modeling studies have predicted that, in addition to this Hebbian form of plasticity, synaptic strength should be weakened when weak and strong inputs are anti-correlated (Sejnowski, 1977a,b; Kohonen, 1984; Bienenstock et al, 1982; Sejnowski and Tesauro, 1989). Evidence for heterosynaptic depression in the hippocampus has been found for inputs that are inactive (Levy and Steward, 1979; Lynch et al, 1977) or weakly active (Levy and Steward, 1983) during the stimulation of a strong input, but this depression did not depend on any pattern of weak input activity and was not typically as long-lasting as LTP.
Therefore, we searched for conditions under which stimulation of a hippocampal pathway, rather than its inactivity, could produce either long-term depression or potentiation of synaptic strengths, depending on the pattern of stimulation. The stimulus paradigm that we used, illustrated in Fig. 1, is based on the finding that bursts of stimuli at 5 Hz are optimal for eliciting LTP in the hippocampus (Larson and Lynch, 1986). A high-frequency burst (STRONG) stimulus was applied to Schaffer collateral axons and a low-frequency (WEAK) stimulus was given to a separate subicular input coming from the opposite side of the recording site but terminating on the dendrites of the same population of CA1 pyramidal neurons. Due to the rhythmic nature of the strong input bursts, each weak input shock could be either superimposed on the middle of each burst of the strong input (IN PHASE) or placed symmetrically between bursts (OUT OF PHASE).
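As a toy numerical illustration of why the two pairings carry opposite covariance, the following sketch lays out one 2-second train with the timing of Fig. 1 and correlates a weak 5 Hz input with the strong burst envelope; the 1 ms discretization and the envelope representation are assumptions made only for this illustration.

import numpy as np

dt_ms, train_ms = 1, 2000
t = np.arange(0, train_ms, dt_ms)

strong = np.zeros_like(t, dtype=float)
for burst_start in range(0, train_ms, 200):          # one burst every 200 ms
    strong[burst_start:burst_start + 50] = 1.0       # 5 pulses at 100 Hz span ~50 ms

def weak_train(offset_ms):
    w = np.zeros_like(t, dtype=float)
    w[offset_ms::200] = 1.0                           # single shocks at 5 Hz
    return w

weak_in_phase = weak_train(25)                        # lands in the middle of each burst
weak_out_of_phase = weak_train(125)                   # lands midway between bursts

def covariance(a, b):
    return float(np.mean((a - a.mean()) * (b - b.mean())))

print(covariance(weak_in_phase, strong))              # positive: in phase pairing
print(covariance(weak_out_of_phase, strong))          # negative: out of phase pairing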
RESULTS
Extracellular evoked field potentials were recorded from the apical dendritic and somatic layers of CA1 pyramidal cells. The weak stimulus train was first applied alone and did not itself induce long-lasting changes. The strong site was then stimulated alone, which elicited homosynaptic LTP of the strong pathway but did not significantly alter the amplitude of responses to the weak input. When weak and strong inputs were activated IN PHASE, there was an associative LTP of the weak input synapses, as shown in Fig. 2a. Both the synaptic excitatory post-synaptic potential (e.p.s.p.) (Δe.p.s.p. = +49.8 ± 7.8%, n=20) and the population action potential (Δspike = +65.4 ± 16.0%, n=14) were significantly enhanced for at least 60 min and up to 180 min following stimulation.
In contrast, when weak and strong inputs were applied OUT OF PHASE, they elicited an associative long-term depression (LTD) of the weak input synapses, as shown in Fig. 2b. There was a marked reduction in the population spike (-46.5 ± 11.4%, n=10) with smaller decreases in the e.p.s.p. (-13.8 ± 3.5%, n=13). Note that the stimulus patterns applied to each input were identical in these two experiments, and only the relative
phase of the weak and strong stimuli was altered. With these stimulus patterns, synaptic strength could be repeatedly enhanced and depressed in a single slice, as illustrated in Fig. 2c. As a control experiment to determine whether information concerning the covariance between the inputs was actually a determinant of plasticity, we combined the in phase and out of phase conditions, giving both the weak input shocks superimposed on the bursts and those between the bursts, for a net frequency of 10 Hz. This pattern, which resulted in zero covariance between weak and strong inputs, produced no net change in weak input synaptic strength measured by extracellular evoked potentials. Thus, the associative LTP and LTD mechanisms appear to be balanced in a manner ideal for the storage of temporal covariance relations.
b
A.SSOCIA.TIVE STIMULUS PA.RA.DIGMS
POSJTIVE.LY CORKELA TED ? "IN PHASE"
~K~~ _I~__~I____~I____~I_
SI1IONG,NJO\IT
. u.Jj1l 11l. -1---1&1111.....
11 ---1&1
111.....
11 ---,I~IIII
NEGATIVELY CORRELATED? 'our OF PHASE"
W[AKIN'lTf
STIONG 'N''''
~I
11111
--,-;
11111
11111
Figure 1. Hippocampal slice preparation and stimulus paradigms. a: The in vitro hippocampal slice showing recording sites in CAl pyramidal cell somatic (stratum pyramidale) and dendritic (stratum radiatum) layers. and stimulus sites activating Schaffer collateral (STRONG) and commissural (WEAK) afferents. Hippocampal slices (400 Jlm
thick) were incubated in an interface slice chamber at 34-35 0 C. Extracellular (1-5 M!l
resistance, 2M NaCI filled) and intracellular (70-120 M 2M K-acetate filled) recording electrodes. and bipolar glass-insulated platinum wire stimulating electrodes (50 Jlm
tip diameter). were prepared by standard methods (Mody et al, 1988). b: Stimulus paradigms used. Strong input stimuli (STRONG INPUT) were four trains of 100 Hz bursts.
Each burst had 5 stimuli and the interburst interval was 200 msec. Each train lasted 2
seconds for a total of 50 stimuli. Weak input stimuli (WEAK INPUT) were four trains of
shocks at 5 Hz frequency. each train lasting for 2 seconds. When these inputs were IN
PHASE. the weak single shocks were superimposed on the middle of each burst of the
strong input. When the weak input was OUT OF PHASE. the single shocks were placed
symmetrically between the bursts.
The simultaneous depolarization of the postsynaptic membrane and activation of
glutamate receptors of the N-methyl-D-aspartate (NMDA) subtype appears to be necessary for LTP induction (Collingridge et ai, 1983; Harris et al, 1984; Wigstrom and Gustaffson, 1984). The SJ?read of current from strong to weak synapses in the dendritic tree,
d
ASSOCIATIVE
LON(;.TE~
I'OTENTIATION
LONG-TE~
DE,/tESSION
-
!!Ll!!!!.
b
ASSOCIATIVE
I
11111
?
11111.
I
c
e...
I
I
I
I
Figure 2. Illustration of associative long-term potentiation (LTP) and associative long-term depression (LTD) using extracellular recordings. a: Associative LTP of evoked excitatory postsynaptic potentials (e.p.s.p.'s) and population action potential responses in the weak input. Test responses are shown before (Pre) and 30 min after (post) application of weak stimuli in phase with the coactive strong input. b: Associative LTD of evoked e.p.s.p.'s and population spike responses in the weak input. Test responses are shown before (Pre) and 30 min after (post) application of weak stimuli out of phase with the coactive strong input. c: Time course of the changes in population spike amplitude observed at each input for a typical experiment. Test responses from the strong input (S, open circles), show that the high-frequency bursts (5 pulses/100 Hz, 200 msec interburst interval as in Fig. 1) elicited synapse-specific LTP independent of other input activity. Test responses from the weak input (W, filled circles) show that stimulation of the weak pathway out of phase with the strong one produced associative LTD (Assoc LTD) of this input. Associative LTP (Assoc LTP) of the same pathway was then elicited following in phase stimulation. Amplitude and duration of associative LTD or LTP could be increased by stimulating input pathways with more trains of shocks.
coupled with release of glutamate from the weak inputs, could account for the ability of
the strong pathway to associatively potentiate a weak one (Kelso et al, 1986; Malinow
and Miller, 1986; Gustafsson et al, 1987). Consistent with this hypothesis, we find that the NMDA receptor antagonist 2-amino-5-phosphonovaleric acid (AP5, 10 μM) blocks induction of associative LTP in CA1 pyramidal neurons (data not shown, n=5). In contrast, the application of AP5 to the bathing solution at this same concentration had no
significant effect on associative LTD (data not shown, n=6). Thus, the induction of LTD
seems to involve cellular mechanisms different from associative LTP.
The conditions necessary for LTD induction were explored in another series of
experiments using intracellular recordings from CA1 pyramidal neurons made using
standard techniques (Mody et al, 1988). Induction of associative LTP (Fig 3; WEAK
S+W IN PHASE) produced an increase in amplitude of the single cell evoked e.p.s.p. and
a lowered action potential threshold in the weak pathway, as reported previously (Barrionuevo and Brown, 1983). Conversely, the induction of associative LTD (Fig. 3;
WEAK S+W OUT OF PHASE) was accompanied by a long-lasting reduction of e.p.s.p.
amplitude and reduced ability to elicit action potential firing. As in control extracellular
experiments, the weak input alone produced no long-lasting alterations in intracellular
e.p.s.p.'s or firing properties, while the strong input alone yielded specific increases of
the strong pathway e.p.s.p. without altering e.p.s.p.'s elicited by weak input stimulation.
[Figure 3 trace labels: PRE; 30 min POST, S+W OUT OF PHASE; 30 min POST, S+W IN PHASE]
Figure 3. Demonstration of associative LTP and LTD using intracellular recordings from
a CA1 pyramidal neuron. Intracellular e.p.s.p.'s prior to repetitive stimulation (pre), 30
min after out of phase stimulation (S+W OUT OF PHASE), and 30 min after subsequent in phase stimuli (S+W IN PHASE). The strong input (Schaffer collateral side,
lower traces) exhibited LTP of the evoked e.p.s.p. independent of weak input activity.
Out of phase stimulation of the weak (Subicular side, upper traces) pathway produced a
marked, persistent reduction in e.p.s.p. amplitude. In the same cell, subsequent in phase
stimuli resulted in associative LTP of the weak input that reversed the LTD and enhanced
amplitude of the e.p.s.p. past the original baseline. (RMP = -62 mV, RN = 30 MΩ)
A weak stimulus that is out of phase with a strong one arrives when the postsynaptic neuron is hyperpolarized as a consequence of inhibitory postsynaptic potentials and afterhyperpolarization from mechanisms intrinsic to pyramidal neurons. This suggests that postsynaptic hyperpolarization coupled with presynaptic activation may trigger LTD.
To test this hypothesis, we injected current with intracellular microelectrodes to hyperpolarize or depolarize the cell while stimulating a synaptic input. Pairing the injection of
depolarizing current with the weak input led to LTP of those synapses (Fig. 4a; STIM;
[Figure 4 trace labels: a: PRE, 30 min POST, STIM + DEPOL, CONTROL; b: PRE, 30 min POST, STIM + HYPERPOL]
Figure 4. Pairing of postsynaptic hyperpolarization with stimulation of synapses on CA1 hippocampal pyramidal neurons produces LTD specific to the activated pathway, while pairing of postsynaptic depolarization with synaptic stimulation produces synapse-specific LTP. a: Intracellular evoked e.p.s.p.'s are shown at stimulated (STIM) and unstimulated (CONTROL) pathway synapses before (Pre) and 30 min after (post) pairing a 20 mV depolarization (constant current +2.0 nA) with 5 Hz synaptic stimulation. The stimulated pathway exhibited associative LTP of the e.p.s.p., while the control, unstimulated input showed no change in synaptic strength. (RMP = -65 mV; RN = 35 MΩ) b: Intracellular e.p.s.p.'s are shown evoked at stimulated and control pathway synapses before (Pre) and 30 min after (post) pairing a 20 mV hyperpolarization (constant current -1.0 nA) with 5 Hz synaptic stimulation. The input (STIM) activated during the hyperpolarization showed associative LTD of synaptic evoked e.p.s.p.'s, while synaptic strength of the silent input (CONTROL) was unaltered. (RMP = -62 mV; RN = 38 MΩ)
+64.0 ± 9.7%, n=4), while a control input inactive during the stimulation did not change (CONTROL), as reported previously (Kelso et al, 1986; Malinow and Miller, 1986; Gustafsson et al, 1987). Conversely, prolonged hyperpolarizing current injection paired with the same low-frequency stimuli led to induction of LTD in the stimulated pathway (Fig. 4b; STIM; -40.3 ± 6.3%, n=6), but not in the unstimulated pathway (CONTROL). The application of either depolarizing current, hyperpolarizing current, or the weak 5 Hz synaptic stimulation alone did not induce long-term alterations in synaptic strengths. Thus, hyperpolarization and simultaneous presynaptic activity supply sufficient conditions for the induction of LTD in CA1 pyramidal neurons.
CONCLUSIONS
These experiments identify a novel form of anti-Hebbian synaptic plasticity in the
hippocampus and confirm predictions made from modeling studies of information storage
in neural networks. Unlike previous reports of synaptic depression in the hippocampus,
the plasticity is associative, long-lasting, and is produced when presynaptic activity
occurs while the postsynaptic membrane is hyperpolarized. In combination with Hebbian
mechanisms also present at hippocampal synapses, associative LTP and associative LTD may allow neurons in the hippocampus to compute and store covariance between inputs (Sejnowski, 1977a,b; Stanton and Sejnowski, 1989). These findings make temporal as
well as spatial context an important feature of memory mechanisms in the hippocampus.
Elsewhere in the brain, the receptive field properties of cells in cat visual cortex
can be altered by visual experience paired with iontophoretic excitation or depression of
cellular activity (Fregnac et al, 1988; Greuel et al, 1988). In particular, the chronic hyperpolarization of neurons in visual cortex coupled with presynaptic transmitter release leads
to a long-term depression of the active, but not inactive, inputs from the lateral geniculate nucleus (Reiter and Stryker, 1988). Thus, both Hebbian and anti-Hebbian mechanisms found in the hippocampus seem to also be present in other brain areas, and covariance of firing patterns between converging inputs is a likely key to understanding higher cognitive
function.
This research was supported by grants from the National Science Foundation and
the Office of Naval Research to TJS. We thank Drs. Charles Stevens and Richard Morris
for discussions about related experiments.
References
Bienenstock, E., Cooper, L.N. and Munro, P. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2, 32-48 (1982).
Barrionuevo, G. and Brown, T.H. Associative long-term potentiation in hippocampal slices. Proc. Natl. Acad. Sci. USA 80, 7347-7351 (1983).
Bliss, T.V.P. and Lomo, T. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J. Physiol. (Lond.) 232, 331-356 (1973).
Collingridge, G.L., Kehl, S.J. and McLennan, H. Excitatory amino acids in synaptic transmission in the Schaffer collateral-commissural pathway of the rat hippocampus. J. Physiol. (Lond.) 334, 33-46 (1983).
Fregnac, Y., Shulz, D., Thorpe, S. and Bienenstock, E. A cellular analogue of visual cortical plasticity. Nature (Lond.) 333, 367-370 (1988).
Greuel, J.M., Luhmann, H.J. and Singer, W. Pharmacological induction of use-dependent receptive field modifications in visual cortex. Science 242, 74-77 (1988).
Gustafsson, B., Wigstrom, H., Abraham, W.C. and Huang, Y.Y. Long-term potentiation in the hippocampus using depolarizing current pulses as the conditioning stimulus to single volley synaptic potentials. J. Neurosci. 7, 774-780 (1987).
Harris, E.W., Ganong, A.H. and Cotman, C.W. Long-term potentiation in the hippocampus involves activation of N-methyl-D-aspartate receptors. Brain Res. 323, 132-137 (1984).
Kelso, S.R., Ganong, A.H. and Brown, T.H. Hebbian synapses in hippocampus. Proc. Natl. Acad. Sci. USA 83, 5326-5330 (1986).
Kohonen, T. Self-Organization and Associative Memory. (Springer-Verlag, Heidelberg, 1984).
Larson, J. and Lynch, G. Synaptic potentiation in hippocampus by patterned stimulation involves two events. Science 232, 985-988 (1986).
Levy, W.B. and Steward, O. Synapses as associative memory elements in the hippocampal formation. Brain Res. 175, 233-245 (1979).
Levy, W.B. and Steward, O. Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neuroscience 8, 791-797 (1983).
Lynch, G.S., Dunwiddie, T. and Gribkoff, V. Heterosynaptic depression: a postsynaptic correlate of long-term potentiation. Nature (Lond.) 266, 737-739 (1977).
Malinow, R. and Miller, J.P. Postsynaptic hyperpolarization during conditioning reversibly blocks induction of long-term potentiation. Nature (Lond.) 320, 529-530 (1986).
Mody, I., Stanton, P.K. and Heinemann, U. Activation of N-methyl-D-aspartate (NMDA) receptors parallels changes in cellular and synaptic properties of dentate gyrus granule cells after kindling. J. Neurophysiol. 59, 1033-1054 (1988).
Reiter, H.O. and Stryker, M.P. Neural plasticity without postsynaptic action potentials: Less-active inputs become dominant when kitten visual cortical cells are pharmacologically inhibited. Proc. Natl. Acad. Sci. USA 85, 3623-3627 (1988).
Sejnowski, T.J. and Tesauro, G. Building network learning algorithms from Hebbian synapses, in: Brain Organization and Memory, J.L. McGaugh, N.M. Weinberger, and G. Lynch, Eds. (Oxford Univ. Press, New York, in press).
Sejnowski, T.J. Storing covariance with nonlinearly interacting neurons. J. Math. Biology 4, 303-321 (1977).
Sejnowski, T.J. Statistical constraints on synaptic plasticity. J. Theor. Biology 69, 385-389 (1977).
Stanton, P.K. and Sejnowski, T.J. Associative long-term depression in the hippocampus: Evidence for anti-Hebbian synaptic plasticity. Nature (Lond.), in review.
Wigstrom, H. and Gustafsson, B. A possible correlate of the postsynaptic condition for long-lasting potentiation in the guinea pig hippocampus in vitro. Neurosci. Lett. 44, 327-332 (1984).
| 100 |@word determinant:1 longterm:1 unaltered:1 middle:2 hippocampus:22 seems:1 hyperpolarized:2 open:1 pulse:2 covariance:12 lowfrequency:2 reduction:4 series:1 past:1 coactive:2 current:9 activation:5 yet:1 john:1 physiol:2 subsequent:2 hyperpolarizing:2 plasticity:8 aps:2 alone:6 patric:1 math:1 tpresent:1 burst:12 become:1 supply:1 persistent:2 pairing:6 gustafsson:2 pathway:18 manner:1 pharmacologically:1 ra:1 brier:1 brain:6 terminal:1 inrormation:2 prolonged:1 depolarization:4 contiguity:1 finding:2 jlm:2 temporal:3 bipolar:1 assoc:2 control:9 subtype:1 ly:1 grant:1 appear:1 before:4 consequence:1 acad:3 receptor:4 id:2 oxford:1 firing:4 path:1 plus:1 studied:1 bursting:1 weakened:1 evoked:9 conversely:2 suggests:1 patterned:1 block:3 i____:2 area:2 elicit:1 significantly:2 kelso:4 pre:8 induce:2 specificity:1 onto:1 cal:9 storage:3 context:1 demonstrated:1 chronic:1 duration:1 l:1 rabbit:1 methyl:2 subicular:2 population:6 insulated:1 diego:1 enhanced:3 trigger:1 hypothesis:2 pa:1 persistently:1 element:1 observed:1 decrease:2 balanced:1 gustaffson:3 terminating:1 weakly:1 depend:1 ror:3 upon:1 negatively:2 lto:2 neurophysiol:1 cat:1 neurotransmitter:1 train:6 univ:1 sejnowski:14 formation:1 ability:2 itself:1 associative:35 net:2 interaction:2 coming:1 kohonen:3 enhancement:1 electrode:2 transmission:1 requirement:1 produce:5 depending:2 dep:1 strong:26 predicted:1 involves:2 thick:1 stevens:1 coi:1 mclennan:1 potentiation:13 activating:1 dendritic:4 theor:1 dentate:2 mo:1 proc:3 geniculate:1 barrionuevo:3 platinum:1 lynch:5 rather:1 office:1 release:3 usedependent:1 lon:1 naval:1 oflong:1 transmitter:1 superimposed:3 lasted:1 contrast:2 baseline:1 glass:1 typically:1 bienenstock:4 relation:1 i1m:1 selective:1 orientation:1 development:1 spatial:1 field:4 ted:1 identical:1 biology:2 unstimulated:3 alter:1 report:1 stimulus:23 richard:1 inhibited:1 thorpe:1 shulz:1 microelectrodes:1 simultaneously:1 resulted:2 national:1 phase:26 ents:1 organization:2 homo:1 activated:4 natl:2 tj:2 capable:1 necessary:2 experience:1 collateral:4 tree:2 filled:3 circle:2 re:2 increased:1 modeling:3 altering:1 apical:1 reported:2 my:3 combined:1 stratum:2 terrence:1 tip:1 hopkins:1 fregnac:2 na:2 recorded:1 huang:1 cognitive:1 luhmann:1 account:1 potential:11 de:1 accompanied:1 alteration:2 afterhyperpolarization:1 bliss:1 afferent:1 mv:2 elicited:6 parallel:1 depolarizing:3 acid:2 miller:4 identify:1 weak:42 produced:7 reversibly:1 simultaneous:3 synapsis:12 synaptic:36 ed:1 frequency:8 involved:1 commissural:2 nmda:3 amplitude:7 actually:1 appears:1 higher:1 day:1 response:7 synapse:1 box:1 binocular:1 usa:5 effect:2 perforant:1 brown:4 depolarize:1 building:1 read:1 laboratory:1 reiter:2 illustrated:2 round:1 ll:1 during:4 pharmacological:1 self:1 excitation:1 larson:2 rat:1 hippocampal:11 antagonist:1 interface:1 novel:1 charles:1 stimulation:19 hyperpolarization:8 vitro:2 conditioning:2 jl:1 rmp:3 significant:1 potentiate:1 ai:2 depressed:1 had:2 lowered:1 cortex:4 dominant:1 inpul:1 showed:2 tesauro:3 store:1 steward:6 selectivity:1 verlag:1 determine:1 paradigm:3 pelham:1 hebbian:11 long:24 concerning:1 post:8 paired:3 biophysics:1 converging:2 prediction:1 ae:1 bronx:1 albert:1 repetitive:1 mustration:1 cell:10 addition:1 iiii:1 interval:2 baltimore:1 pyramidal:10 unlike:1 exhibited:2 south:1 associatively:2 hz:8 ltp:30 recording:6 seem:1 symmetrically:2 ideal:1 njo:1 opposite:1 silent:1 inactive:3 whether:2 munro:1 ltd:18 inactivity:1 akin:1 resistance:1 york:1 pike:1 jj:1 action:5 
depression:14 repeatedly:1 involve:1 prepared:1 morris:1 diameter:1 reduced:1 gyrus:1 inhibitory:1 neuroscience:1 key:1 four:2 threshold:1 shock:6 interburst:2 injected:1 heterosynaptic:2 tjs:1 layer:2 yielded:1 activity:7 strength:19 constraint:1 homosynaptic:1 min:10 lond:6 injection:2 extracellular:5 department:1 combination:1 membrane:2 smaller:1 postsynaptic:15 modification:1 lasting:9 ln:1 bus:1 previously:2 mechanism:7 singer:1 drs:1 collingridge:2 einstein:1 chamber:1 weinberger:1 original:1 medicine:1 giving:1 granule:1 eliciting:1 anaesthetized:1 spike:3 occurs:1 receptive:2 concentration:1 stryker:2 md:1 highfrequency:1 reversed:1 separate:4 thank:1 lateral:1 sci:3 presynaptic:6 cellular:4 induction:10 stim:4 balance:1 demonstration:1 trace:2 lomo:2 potentiated:1 i_:1 neuron:12 wire:1 upper:1 anti:5 neurobiology:1 rn:3 interacting:1 somatic:2 schaffer:4 tive:1 nonlinearly:1 required:1 address:2 pattern:6 pig:1 memory:6 analogue:1 event:1 glutamate:2 stanton:7 altered:2 coupled:4 prior:1 understanding:1 review:1 wigstrom:3 relative:1 radiatum:1 foundation:1 nucleus:1 sufficient:3 consistent:1 storing:7 excitatory:4 course:1 elsewhere:1 placed:2 last:1 supported:1 gl:1 guinea:1 side:3 allow:1 institute:1 rhythmic:1 slice:6 lett:1 cortical:3 made:2 san:1 ganong:2 correlate:2 sj:2 l00:1 confirm:1 active:3 parkway:1 neurology:1 stimulated:6 nature:5 ca:1 dendrite:1 heidelberg:1 did:5 pk:1 intracellular:8 neurosci:3 abraham:1 amino:2 positively:1 fig:9 site:4 ny:1 salk:1 axon:1 cooper:1 msec:2 volley:1 levy:6 specific:3 showing:1 aspartate:3 explored:1 evidence:2 mcgaugh:1 intrinsic:1 te:2 nat:1 led:2 likely:1 visual:7 contained:1 malinow:4 springer:1 harris:2 stimulating:3 marked:2 change:7 heinemann:1 typical:1 total:1 rererences:1 college:1 searched:1 preparation:1 tested:1 kitten:1 correlated:3 |
3 | 1,000 | Bayesian Query Construction for Neural
Network Models
Gerhard Paass
Jorg Kindermann
German National Research Center for Computer Science (GMD)
D-53757 Sankt Augustin, Germany
paass@gmd.de
kindermann@gmd.de
Abstract
If data collection is costly, there is much to be gained by actively selecting particularly informative data points in a sequential way. In
a Bayesian decision-theoretic framework we develop a query selection criterion which explicitly takes into account the intended use
of the model predictions. By Markov Chain Monte Carlo methods
the necessary quantities can be approximated to a desired precision. As the number of data points grows, the model complexity
is modified by a Bayesian model selection strategy. The properties of two versions of the criterion are demonstrated in numerical
experiments.
1
INTRODUCTION
In this paper we consider the situation where data collection is costly, as when
for example, real measurements or technical experiments have to be performed. In
this situation the approach of query learning ('active data selection', 'sequential
experimental design', etc.) has a potential benefit. Depending on the previously
seen examples, a new input value ('query') is selected in a systematic way and
the corresponding output is obtained. The motivation for query learning is that
random examples often contain redundant information, and the concentration on
non-redundant examples must necessarily improve generalization performance.
We use a Bayesian decision-theoretic framework to derive a criterion for query construction. The criterion reflects the intended use of the predictions by an appropriate
loss function. We limit our analysis to the selection of the next data point, given a
set of data already sampled. The proposed procedure derives the expected loss for
candidate inputs and selects a query with minimal expected loss.
There are several published surveys of query construction methods [Ford et al. 89,
Plutowski White 93, Sollich 94]. Most current approaches, e.g. [Cohn 94], rely
on the information matrix of parameters. Then however, all parameters receive
equal attention regardless of their influence on the intended use of the model
[Pronzato Walter 92]. In addition, the estimates are valid only asymptotically. Bayesian approaches have been advocated by [Berger 80], and applied to neural networks
[MacKay 92]. In [Sollich Saad 95] their relation to maximum information gain is
discussed. In this paper we show that by using Markov Chain Monte Carlo methods it is possible to determine all quantities necessary for the selection of a query.
This approach is valid in small sample situations, and the procedure's precision can
be increased with additional computational effort. With the square loss function,
the criterion is reduced to a variant of the familiar integrated mean square error
[Plutowski White 93].
In the next section we develop the query selection criterion from a decision-theoretic
point of view. In the third section we show how the criterion can be calculated using
Markov Chain Monte Carlo methods and we discuss a strategy for model selection.
In the last section, the results of two experiments with MLPs are described.
2
A DECISION-THEORETIC FRAMEWORK
Assume we have an input vector x and a scalar output y distributed as y ~ p(y | x, w), where w is a vector of parameters. The conditional expected value is a deterministic function f(x, w) := E(y | x, w), where y = f(x, w) + ε and ε is a zero-mean error term. Suppose we have iteratively collected observations D(n) := ((x_1, y_1), ..., (x_n, y_n)). We get the Bayesian posterior p(w | D(n)) = p(D(n) | w) p(w) / ∫ p(D(n) | w) p(w) dw and the predictive distribution p(y | x, D(n)) = ∫ p(y | x, w) p(w | D(n)) dw if p(w) is the prior distribution.
We consider the situation where, based on some data x, we have to perform an
action a whose result depends on the unknown output y. Some decisions may have
more severe effects than others. The loss function L(y, a) ∈ [0, ∞) measures the loss if y is the true value and we have taken the action a ∈ A. In this paper we consider real-valued actions, e.g. setting the temperature a in a chemical process. We have to select an a ∈ A only knowing the input x. According to the Bayes Principle [Berger 80, p.14] we should follow a decision rule d : x → a such that the average risk ∫ R(w, d) p(w | D(n)) dw is minimal, where the risk is defined as R(w, d) := ∫ L(y, d(x)) p(y | x, w) p(x) dy dx. Here p(x) is the distribution of future
inputs, which is assumed to be known.
For the square loss function L(y, a) = (y - a)², the conditional expectation d(x) := E(y | x, D(n)) is the optimal decision rule. In a control problem the loss may be larger at specific critical points. This can be addressed with a weighted square loss function L(y, a) := h(y)(y - a)², where h(y) ≥ 0 [Berger 80, p.111]. The expected loss for an action is ∫ (y - a)² h(y) p(y | x, D(n)) dy. Replacing the predictive density p(y | x, D(n)) with the weighted predictive density
p̃(y | x, D(n)) := h(y) p(y | x, D(n)) / G(x), where G(x) := ∫ h(y) p(y | x, D(n)) dy, we get the optimal decision rule d(x) := ∫ y p̃(y | x, D(n)) dy and the average loss G(x) ∫ (y - Ẽ(y | x, D(n)))² p̃(y | x, D(n)) dy for a given input x. With these modifications, all later derivations for the square loss function may be applied to the weighted square loss.
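As a minimal sketch of this rule (our code, not from the paper; the function name and the choice of h are illustrative), the h-weighted predictive mean can be estimated from Monte Carlo draws of the predictive density:

```python
import numpy as np

def weighted_decision(y_draws, h):
    """Optimal action under the weighted square loss h(y)(y - a)^2:
    the h-weighted mean of the predictive density, estimated from draws."""
    y = np.asarray(y_draws, dtype=float)
    hv = h(y)                      # h(y) evaluated at each predictive draw
    return float(np.sum(hv * y) / np.sum(hv))

# Example: penalize errors twice as heavily for outputs above 1.0
rng = np.random.default_rng(0)
draws = rng.normal(loc=0.8, scale=0.3, size=5000)   # stand-in predictive draws
print(weighted_decision(draws, h=lambda y: np.where(y > 1.0, 2.0, 1.0)))
```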
The aim of query sampling is the selection of a new observation x̃ in such a way that the average risk will be maximally reduced. Together with its still unknown y-value, x̃ defines a new observation (x̃, ỹ) and new data D(n) ∪ (x̃, ỹ). To determine this risk for some given x̃ we have to perform the following conceptual steps for a candidate query x̃:

1. Future Data: Construct the possible sets of 'future' observations D(n) ∪ (x̃, ỹ), where ỹ ~ p(y | x̃, D(n)).

2. Future posterior: Determine a 'future' posterior distribution of parameters p(w | D(n) ∪ (x̃, ỹ)) that depends on ỹ in the same way as though it had actually been observed.

3. Future Loss: Assuming d_{ỹ,x̃}(x) is the optimal decision rule for given values of x̃, ỹ, and x, compute the resulting loss as

   r̃_{ỹ,x̃}(x) := ∫ L(y, d_{ỹ,x̃}(x)) p(y | x, w) p(w | D(n) ∪ (x̃, ỹ)) dy dw    (1)

4. Averaging: Integrate this quantity over the future trial inputs x distributed as p(x) and the different possible future outputs ỹ, yielding

   r̃_{x̃} := ∫ r̃_{ỹ,x̃}(x) p(x) p(ỹ | x̃, D(n)) dx dỹ.
This procedure is repeated until an x with minimal average risk is found. Since local
optima are typical, a global optimization method is required. Subsequently we then
try to determine whether the current model is still adequate or whether we have to
increase its complexity (e.g. by adding more hidden units).
3
COMPUTATIONAL PROCEDURE
Let us assume that the real data D(n) was generated according to a regression model y = f(x, w) + ε with i.i.d. Gaussian noise ε ~ N(0, σ²(w)). For example f(x, w) may be a multilayer perceptron or a radial basis function network. Since the error terms are independent, the posterior density is p(w | D(n)) ∝ p(w) ∏_{i=1}^{n} p(y_i | x_i, w) even in the case of query sampling [Ford et al. 89].
As the analytic derivation of the posterior is infeasible except in trivial cases, we
have to use approximations. One approach is to employ a normal approximation
[MacKay 92], but this is unreliable if the number of observations is small compared to the number of parameters. We use Markov Chain Monte Carlo procedures
[Paass 91, Neal 93] to generate a sample W(B) := {w_1, ..., w_B} of parameters distributed according to p(w | D(n)). If the number of sampling steps approaches infinity, the distribution of the simulated w_b approximates the posterior arbitrarily well.
To take into account the range of future y-values, we create a set of them by simulation. For each w_b ∈ W(B) a number of ỹ ~ p(y | x̃, w_b) is generated. Let Ỹ(x̃,R) := {ỹ_1, ..., ỹ_R} be the resulting set. Instead of performing a new Markov Chain Monte Carlo run to generate a new sample according to p(w | D(n) ∪ (x̃, ỹ)), we use the old set W(B) of parameters and reweight them (importance sampling).
In this way we may approximate integrals of some function g(w) with respect to p(w | D(n) ∪ (x̃, ỹ)) [Kalos Whitlock 86, p.92]:

   ∫ g(w) p(w | D(n) ∪ (x̃, ỹ)) dw ≈ Σ_{b=1}^{B} g(w_b) p(ỹ | x̃, w_b) / Σ_{b=1}^{B} p(ỹ | x̃, w_b)    (2)

The approximation error approaches zero as the size of W(B) increases.
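A minimal sketch of this reweighting step, assuming the likelihood values p(ỹ | x̃, w_b) have already been computed for the stored sample (function and argument names are ours):

```python
import numpy as np

def reweighted_expectation(g_values, lik_new):
    """Eq. (2): approximate the expectation of g(w) under the 'future'
    posterior p(w | D(n) U (x~, y~)) by importance-reweighting the
    existing sample W(B) drawn from p(w | D(n)).

    g_values : g(w_b) for each stored parameter vector w_b
    lik_new  : p(y~ | x~, w_b), the likelihood of the hypothetical observation
    """
    g = np.asarray(g_values, dtype=float)
    lik = np.asarray(lik_new, dtype=float)
    return float(np.sum(g * lik) / np.sum(lik))
```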
3.1
APPROXIMATION OF FUTURE LOSS
Consider the future loss r̃_{ỹ,x̃}(x_t) given new observation (x̃, ỹ) and trial input x_t. In the case of the square loss function, (1) can be transformed to

   r̃_{ỹ,x̃}(x_t) = ∫ [f(x_t, w) - E(y | x_t, D(n) ∪ (x̃, ỹ))]² p(w | D(n) ∪ (x̃, ỹ)) dw + ∫ σ²(w) p(w | D(n) ∪ (x̃, ỹ)) dw    (3)

where σ²(w) := Var(y | x, w) is independent of x. Assume a set X_T = {x_1, ..., x_T} is given, which is representative of trial inputs for the distribution p(x). Define S(x̃, ỹ) := Σ_{b=1}^{B} p(ỹ | x̃, w_b) for ỹ ∈ Ỹ(x̃,R). Then from equations (2) and (3) we get E(y | x_t, D(n) ∪ (x̃, ỹ)) := (1/S(x̃, ỹ)) Σ_{b=1}^{B} f(x_t, w_b) p(ỹ | x̃, w_b) and

   r̃_{ỹ,x̃}(x_t) ≈ (1/S(x̃, ỹ)) Σ_{b=1}^{B} σ²(w_b) p(ỹ | x̃, w_b) + (1/S(x̃, ỹ)) Σ_{b=1}^{B} [f(x_t, w_b) - E(y | x_t, D(n) ∪ (x̃, ỹ))]² p(ỹ | x̃, w_b)    (4)
The final value of r̃_{x̃} is obtained by averaging over the different ỹ ∈ Ỹ(x̃,R) and different trial inputs x_t ∈ X_T. To reduce the variance, the trial inputs x_t should be selected by importance sampling (2) to concentrate them on regions with high current loss (see (5) below). To facilitate the search for an x̃ with minimal r̃_{x̃} we reduce the extent of random fluctuations of the ỹ values. Let (v_1, ..., v_R) be a vector of random numbers v_r ~ N(0, 1), and let j_r be randomly selected from {1, ..., B}. Then for each x̃ the possible observations ỹ_r ∈ Ỹ(x̃,R) are defined as ỹ_r := f(x̃, w_{j_r}) + v_r σ(w_{j_r}). In this way the difference between neighboring inputs is not affected by noise, and search procedures can exploit gradients.
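The following sketch puts Eqs. (3)-(4) together for one candidate query (our arrangement; array shapes and names are assumptions): the networks' predictions at the trial inputs and their noise variances come from the stored MCMC sample, and the likelihood of each simulated outcome ỹ acts as the importance weight.

```python
import numpy as np

def future_loss_one_outcome(f_trial, sigma2, lik_new):
    """Eq. (4) for one candidate query x~ and one simulated outcome y~.

    f_trial : (B, T) array of f(x_t, w_b) at the trial inputs
    sigma2  : (B,) array of noise variances sigma^2(w_b)
    lik_new : (B,) array of p(y~ | x~, w_b)
    Returns a (T,) array with the estimated loss at each trial input.
    """
    f_trial = np.asarray(f_trial, dtype=float)
    lik = np.asarray(lik_new, dtype=float)
    S = lik.sum()
    mean_pred = (lik[:, None] * f_trial).sum(axis=0) / S
    noise = (lik * np.asarray(sigma2, dtype=float)).sum() / S
    spread = (lik[:, None] * (f_trial - mean_pred) ** 2).sum(axis=0) / S
    return noise + spread

def expected_future_loss(f_trial, sigma2, lik_per_outcome):
    """Step 4: average over the simulated outcomes in Y(x~,R) and over the
    trial inputs (assumed drawn from p(x))."""
    losses = [future_loss_one_outcome(f_trial, sigma2, lik)
              for lik in lik_per_outcome]
    return float(np.mean(losses))
```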
3.2
CURRENT LOSS
As a proxy for the future loss, we may use the current loss at x,

   r_curr(x) = p(x) ∫ L(y, d*(x)) p(y | x, D(n)) dy    (5)
where p(x) weights the inputs according to their relevance. For the square loss function the average loss at x is the conditional variance Var(y | x, D(n)). We get

   r_curr(x) = p(x) ∫ [f(x, w) - E(y | x, D(n))]² p(w | D(n)) dw + p(x) ∫ σ²(w) p(w | D(n)) dw    (6)

If E(y | x, D(n)) ≈ (1/B) Σ_{b=1}^{B} f(x, w_b) and the sample W(B) := {w_1, ..., w_B} is representative of p(w | D(n)), we can approximate the current loss with

   r_curr(x) ≈ p(x) (1/B) Σ_{b=1}^{B} [f(x, w_b) - E(y | x, D(n))]² + p(x) (1/B) Σ_{b=1}^{B} σ²(w_b)    (7)
If the input distribution p( x) is uniform, the second term is independent of x.
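A short sketch of the estimate (7), assuming the posterior sample has already been evaluated at the candidate input (names are ours):

```python
import numpy as np

def current_loss(f_cand, sigma2, p_x=1.0):
    """Eq. (7): current loss at a candidate input x.

    f_cand : (B,) array of f(x, w_b) for the MCMC sample
    sigma2 : (B,) array of noise variances sigma^2(w_b)
    p_x    : input density p(x) at x (constant for a uniform distribution)
    """
    f = np.asarray(f_cand, dtype=float)
    s2 = np.asarray(sigma2, dtype=float)
    return p_x * (np.mean((f - f.mean()) ** 2) + np.mean(s2))
```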
3.3
COMPLEXITY REGULARIZATION
Neural network models can represent arbitrary mappings between finite-dimensional
spaces if the number of hidden units is sufficiently large [Hornik Stinchcombe 89].
As the number of observations grows, more and more hidden units are necessary to catch the details of the mapping. Therefore we use a sequential procedure to increase the capacity of our networks during query learning. White and
Wooldridge call this approach the "method of sieves" and provide some asymptotic results on its consistency [White Wooldridge 91]. Gelfand and Dey compare Bayesian approaches for model selection and prove that, in the case of nested models M_1 and M_2, model choice by the ratio of popular Bayes factors p(D(n) | M_i) := ∫ p(D(n) | w, M_i) p(w | M_i) dw will always choose the full model regardless of the data as n → ∞ [Gelfand Dey 94]. They show that the pseudo-Bayes factor, a Bayesian variant of cross-validation, is not affected by this paradox

   A(M_1, M_2) := ∏_{j=1}^{n} p(y_j | x_j, D(n,j), M_1) / ∏_{j=1}^{n} p(y_j | x_j, D(n,j), M_2)    (8)

Here D(n,j) := D(n) \ (x_j, y_j). As the difference between p(w | D(n)) and p(w | D(n,j)) is usually small, we use the full posterior as the importance function (2) and get

   p(y_j | x_j, D(n,j), M_i) = ∫ p(y_j | x_j, w, M_i) p(w | D(n,j), M_i) dw ≈ B / (Σ_{b=1}^{B} 1 / p(y_j | x_j, w_b, M_i))    (9)
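A sketch of the model comparison in (8)-(9), assuming the per-example likelihoods under each model's posterior sample are available (names and array layout are ours):

```python
import numpy as np

def log_pseudo_bayes_factor(lik_1, lik_2):
    """log A(M_1, M_2) from Eqs. (8)-(9); a positive value favours M_1.

    lik_i : (n, B) array with p(y_j | x_j, w_b, M_i) over the MCMC sample of
            model M_i; each row is turned into a leave-one-out predictive
            density by the harmonic-mean approximation of Eq. (9).
    """
    def log_loo(lik):
        lik = np.asarray(lik, dtype=float)
        B = lik.shape[1]
        return float(np.sum(np.log(B) - np.log(np.sum(1.0 / lik, axis=1))))
    return log_loo(lik_1) - log_loo(lik_2)
```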
4
NUMERICAL DEMONSTRATION
In a first experiment we tested the approach for a small 1-2-1 MLP target function with Gaussian noise N(0, 0.05²). We assumed the square loss function and a uniform input distribution p(x) over [-5,5]. Using the "true" architecture for the
approximating model we started with a single randomly generated observation. We
Figure 1: Future loss exploration: predicted posterior mean, future loss and current
loss for 12 observations (left), and root mean square error of prediction (right) .
estimated the future loss by (4) for 100 different inputs and selected the input with
smallest future loss as the next query. B = 50 parameter vectors were generated requiring 200,000 Metropolis steps. Simultaneously we approximated the current loss
criterion by (7). The left side of figure 1 shows the typical relation of both measures.
In most situations the future loss is low in the same regions where the current loss
(posterior standard deviation of mean prediction) is high. The queries are concentrated in areas of high variation and the estimated posterior mean approximates
the target function quite well.
In the right part of figure 1 the RMSE of prediction averaged over 12 independent
experiments is shown. After a few observations the RMSE drops sharply. In our
example there is no marked difference between the prediction errors resulting from
the future loss and the current loss criterion (also averaged over 12 experiments).
Considering the substantial computing effort this favors the current loss criterion.
The dots indicate the RMSE for randomly generated data (averaged over 8 experiments) using the same Bayesian prediction procedure. Because only few data points
were located in the critical region of high variation the RMSE is much larger.
In the second experiment, a 2-3-1 MLP defined the target function f(x, w_0), to which Gaussian noise of standard deviation 0.05 was added. f(x, w_0) is shown in the left part of figure 2. We used five MLPs with 2-6 hidden units as candidate models M_1, ..., M_5 and generated B = 45 samples W(B) of the posterior p(w | D(n), M_i),
where D(n) is the current data. We started with 30,000 Metropolis steps for small
values of n and increased this to 90,000 Metropolis steps for larger values of n.
For a network with 6 hidden units and n = 50 observations, 10,000 Metropolis
steps took about 30 seconds on a Sparc10 workstation. Next, we used equation (9)
to compare the different models, and then used the optimal model to calculate the
current loss (7) on a regular grid of 41 x 41 = 1681 query points x. Here we assumed
the square loss function and a uniform input distribution p(x) over [-5,5] x [-5,5].
We selected the query point with maximal current loss and determined the final
query point with a hillclimbing algorithm. In this way we were rather sure to get
close to the true global optimum.
The main result of the experiment is summarized in the right part of figure 2. It
[Figure 2, right panel: root mean square error vs. No. of Observations (0-100), for active exploration and random sampling]
Figure 2: Current loss exploration: MLP target function and root mean square error.
shows - averaged over 3 experiments - the root mean square error between the true mean value and the posterior mean E(y | x) on the grid of 1681 inputs in relation to
the sample size. Three phases of the exploration can be distinguished (see figure 3).
In the beginning a search is performed with many queries on the border of the
input area. After about 20 observations the algorithm knows enough detail about
the true function to concentrate on the relevant parts of the input space. This leads
to a marked reduction of the mean square error. After 40 observations the systematic
part of the true function has been captured nearly perfectly. In the last phase of
the experiment the algorithm merely reduces the uncertainty caused by the random
noise. In contrast , the data generated randomly does not have sufficient information
on the details of f(x , w), and therefore the error only gradually decreases. Because
of space constraints we cannot report experiments with radial basis functions which
led to similar results.
Acknowledgements
This work is part of the joint project 'REFLEX' of the German Fed. Department
of Science and Technology (BMFT), grant number 01 IN 111Aj4. We would like to
thank Alexander Linden, Mark Ring, and Frank Weber for many fruitful discussions.
References
[Berger 80] Berger, J. (1980): Statistical Decision Theory, Foundations, Concepts, and
Methods. Springer Verlag, New York.
[Cohn 94] Cohn, D. (1994): Neural Network Exploration Using Optimal Experimental
Design. In J. Cowan et al. (eds.): NIPS 5. Morgan Kaufmann, San Mateo.
[Ford et al. 89] Ford, I. , Titterington, D.M., Kitsos, C.P. (1989): Recent Advances in Nonlinear Design. Technometrics, 31, p.49-60.
[Gelfand Dey 94] Gelfand, A.E., Dey, D.K. (1994): Bayesian Model Choice: Asymptotics
and Exact Calculations. J. Royal Statistical Society B, 56, pp.501-514.
450
Gerhard Paass, Jorg Kindermann
Figure 3: Square root of current loss (upper row) and absolute deviation from true function (lower row) for 10, 25, and 40 observations (which are indicated by dots).
[Hornik Stinchcombe 89] Hornik, K., Stinchcombe, M. (1989): Multilayer Feedforward
Networks are Universal Approximators. Neural Networks 2, p.359-366.
[Kalos Whitlock 86] Kalos, M.H., Whitlock, P.A. (1986): Monte Carlo Methods, Wiley,
New York.
[MacKay 92] MacKay, D. (1992): Information-Based Objective Functions for Active Data
Selection. Neural Computation 4, p .590-604.
[Neal 93] Neal, R.M. (1993): Probabilistic Inference using Markov Chain Monte Carlo
Methods. Tech. Report CRG-TR-93-1, Dep. of Computer Science, Univ. of Toronto.
[Paass 91] Paass, G. (1991): Second Order Probabilities for Uncertain and Conflicting Evidence. In: P.P. Bonissone et al. (eds.) Uncertainty in Artificial Intelligence 6. Elsevier,
Amsterdam, pp. 447-456.
[Plutowski White 93] Plutowski, M., White, H. (1993): Selecting Concise Training Sets
from Clean Data. IEEE Tr. on Neural Networks, 4, p.305-318.
[Pronzato Walter 92] Pronzato, L., Walter, E. (1992): Nonsequential Bayesian Experimental Design for Response Optimization. In V. Fedorov, W.G. Müller, I.N. Vuchkov
(eds.): Model Oriented Data-Analysis. Physica Verlag, Heidelberg, p. 89-102.
[Sollich 94] Sollich, P. (1994): Query Construction, Entropy and Generalization in Neural
Network Models. To appear in Physical Review E.
[Sollich Saad 95] Sollich, P., Saad, D. (1995): Learning from Queries for Maximum Information Gain in Unlearnable Problems. This volume.
[White Wooldridge 91] White, H., Wooldridge, J. (1991): Some Results for Sieve Estimation with Dependent Observations. In W. Barnett et al. (eds.) : Nonparametric and
Semiparametric Methods in Econometrics and Statistics, New York, Cambridge Univ.
Press.
| 1000 |@word trial:5 wcb:3 version:1 simulation:1 concise:1 tr:2 reduction:1 selecting:2 current:16 ixj:1 must:1 numerical:2 informative:1 dydx:1 analytic:1 drop:1 intelligence:1 selected:5 yr:3 beginning:1 toronto:1 wir:2 five:1 prove:1 expected:4 considering:1 project:1 sankt:1 titterington:1 control:1 unit:5 grant:1 yn:1 appear:1 local:1 limit:1 fluctuation:1 mateo:1 range:1 averaged:4 lf:1 procedure:8 asymptotics:1 area:2 universal:1 radial:2 regular:1 get:6 cannot:1 close:1 selection:10 risk:5 influence:1 fruitful:1 deterministic:1 demonstrated:1 center:1 dcn:19 attention:1 regardless:2 survey:1 m2:3 rule:4 dw:9 variation:2 construction:6 suppose:1 gerhard:5 target:4 exact:1 approximated:2 particularly:1 located:1 econometrics:1 observed:1 calculate:1 region:3 decrease:1 substantial:1 complexity:3 ycx:3 predictive:3 basis:2 joint:1 derivation:2 walter:3 univ:2 monte:7 query:25 artificial:1 whose:1 gelfand:4 larger:3 valued:1 quite:1 bonissone:1 favor:1 statistic:1 ford:4 final:2 rr:1 took:1 maximal:1 fr:1 neighboring:1 tu:1 relevant:1 crossvalidation:1 optimum:2 ring:1 depending:1 develop:2 derive:1 ex:1 dep:1 advocated:1 predicted:1 indicate:1 concentrate:2 subsequently:1 exploration:4 generalization:2 d_:1 crg:1 physica:1 sufficiently:1 normal:1 mapping:2 smallest:1 whitlock:3 estimation:1 nonsequential:1 iw:2 augustin:1 kindermann:6 wl:1 create:1 reflects:1 weighted:3 gaussian:3 yix:1 aim:1 modified:1 always:1 rather:1 tech:1 contrast:1 elsevier:1 inference:1 dependent:1 integrated:1 hidden:5 relation:3 transformed:1 selects:1 germany:1 mackay:4 equal:1 construct:2 sampling:5 barnett:1 nearly:1 paass:6 future:18 others:1 t2:4 report:2 employ:1 few:2 widen:1 randomly:4 oriented:1 simultaneously:1 national:1 familiar:1 intended:3 phase:2 technometrics:1 mlp:3 severe:1 yielding:1 chain:5 integral:1 necessary:3 old:1 desired:1 minimal:4 uncertain:1 increased:2 wb:18 deviation:3 uniform:3 ju:1 density:3 systematic:2 probabilistic:1 together:1 choose:1 yp:1 actively:1 li:1 account:2 potential:1 wooldridge:4 de:2 summarized:1 explicitly:1 caused:1 depends:2 vi:1 performed:2 view:1 later:1 try:1 root:3 bayes:2 rmse:4 mlps:2 square:14 ir:1 variance:2 kaufmann:1 ofthe:1 bayesian:14 carlo:7 published:1 ed:4 pp:2 mi:6 workstation:1 sampled:1 gain:2 popular:1 actually:1 follow:1 response:1 maximally:1 though:1 dey:4 until:1 web:3 replacing:1 cohn:3 nonlinear:1 defines:1 indicated:1 grows:2 facilitate:1 effect:1 contain:1 true:7 requiring:1 concept:1 regularization:1 sieve:2 chemical:1 iteratively:1 wp:1 neal:3 white:8 during:1 criterion:10 m5:1 theoretic:4 temperature:1 weber:1 mt:1 physical:1 volume:1 discussed:1 approximates:2 measurement:1 cambridge:1 pew:1 consistency:1 grid:2 had:1 dot:2 etc:1 posterior:12 recent:1 verlag:2 arbitrarily:1 approximators:1 yi:2 plutowski:4 seen:1 captured:1 additional:1 dxdy:1 morgan:1 determine:4 redundant:2 ii:2 full:2 reduces:1 technical:1 calculation:1 prediction:7 variant:2 regression:1 multilayer:2 expectation:1 represent:1 ion:1 receive:1 addition:1 semiparametric:1 addressed:1 saad:3 sure:1 cowan:1 call:1 feedforward:1 iii:1 enough:1 architecture:1 perfectly:1 reduce:2 knowing:1 whether:2 effort:2 wo:2 miiller:1 york:3 action:4 adequate:1 ylx:5 nonparametric:1 concentrated:1 gmd:3 reduced:2 generate:2 estimated:2 affected:2 clean:1 asymptotically:1 merely:1 run:1 uncertainty:2 squareroot:1 decision:10 dy:5 pronzato:3 infinity:1 sharply:1 constraint:1 performing:1 department:1 according:5 jr:1 ate:1 sollich:6 wi:2 metropolis:4 modification:1 den:13 
gradually:1 taken:1 equation:2 previously:1 discus:1 german:2 know:1 fed:1 appropriate:1 distinguished:1 kalos:3 exploit:1 approximating:1 society:1 objective:1 already:1 quantity:3 added:1 strategy:2 costly:2 concentration:1 gradient:1 thank:1 simulated:1 capacity:1 collected:1 extent:1 trivial:1 assuming:1 berger:5 ratio:1 demonstration:1 lg:1 frank:1 reweight:1 jorg:3 design:4 unknown:2 perform:2 upper:1 observation:17 fedorov:1 markov:6 finite:1 situation:5 paradox:1 lb:1 arbitrary:1 tuo:1 required:1 conflicting:1 nip:1 below:1 usually:1 royal:1 stinchcombe:3 critical:2 rely:1 improve:1 technology:1 started:2 catch:1 prior:1 review:1 acknowledgement:1 asymptotic:1 loss:42 var:2 foundation:1 integrate:1 sufficient:1 proxy:1 principle:1 row:2 last:2 infeasible:1 side:1 perceptron:1 absolute:1 benefit:1 distributed:3 calculated:1 xn:1 valid:2 collection:2 san:1 approximate:2 unreliable:1 ml:3 global:2 active:2 conceptual:1 assumed:3 xi:1 search:3 hornik:3 heidelberg:1 necessarily:1 ylxt:1 main:1 motivation:1 noise:5 border:1 repeated:1 representative:2 vr:2 wiley:1 precision:2 xl:2 candidate:3 third:1 ix:1 specific:1 xt:11 linden:1 evidence:1 derives:1 sequential:3 adding:1 gained:1 importance:3 bmft:1 entropy:1 led:1 hillclimbing:1 amsterdam:1 scalar:1 reflex:1 springer:1 nested:1 conditional:3 marked:2 typical:2 except:1 determined:1 averaging:2 experimental:3 select:1 mark:1 alexander:1 relevance:1 tested:1 unlearnable:1 |
4 | 1,001 | Neural Network Ensembles, Cross
Validation, and Active Learning
Anders Krogh*
Nordita
Blegdamsvej 17
2100 Copenhagen, Denmark
Jesper Vedelsby
Electronics Institute, Building 349
Technical University of Denmark
2800 Lyngby, Denmark
Abstract
Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity
is defined as the variation of the output of ensemble members averaged over unlabeled data, so it quantifies the disagreement among
the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble
generalization error, and how this type of ensemble cross-validation
can sometimes improve performance. It is shown how to estimate
the optimal weights of the ensemble members using unlabeled data.
By a generalization of query by committee, it is finally shown how
the ambiguity can be used to select new training data to be labeled
in an active learning scheme.
1
INTRODUCTION
It is well known that a combination of many different predictors can improve predictions. In the neural networks community "ensembles" of neural networks have been investigated by several authors, see for instance [1, 2, 3]. Most often the networks in the ensemble are trained individually and then their predictions are combined. This combination is usually done by majority (in classification) or by simple averaging (in regression), but one can also use a weighted combination of the networks.
* Author to whom correspondence should be addressed. Email: krogh@nordita.dk
At the workshop after the last NIPS conference (December, 1993) an entire session
was devoted to ensembles of neural networks ("Putting it all together", chaired by Michael Perrone). Many interesting papers were given, and it showed that this area is getting a lot of attention.
A combination of the output of several networks (or other predictors) is only useful
if they disagree on some inputs. Clearly, there is no more information to be gained
from a million identical networks than there is from just one of them (see also
[2]). By quantifying the disagreement in the ensemble it turns out to be possible
to state this insight rigorously for an ensemble used for approximation of realvalued functions (regression). The simple and beautiful expression that relates the
disagreement (called the ensemble ambiguity) and the generalization error is the
basis for this paper, so we will derive it with no further delay.
2
THE BIAS-VARIANCE TRADEOFF
Assume the task is to learn a function f from R^N to R for which you have a sample of p examples, (x^μ, y^μ), where y^μ = f(x^μ) and μ = 1, ..., p. These examples are assumed to be drawn randomly from the distribution p(x). Anything in the following is easy to generalize to several output variables.

The ensemble consists of N networks and the output of network α on input x is called V^α(x). A weighted ensemble average is denoted by a bar, like

   V̄(x) = Σ_α w_α V^α(x).    (1)

This is the final output of the ensemble. We think of the weight w_α as our belief in network α and therefore constrain the weights to be positive and sum to one. The constraint on the sum is crucial for some of the following results.

The ambiguity on input x of a single member of the ensemble is defined as a^α(x) = (V^α(x) - V̄(x))². The ensemble ambiguity on input x is

   ā(x) = Σ_α w_α a^α(x) = Σ_α w_α (V^α(x) - V̄(x))².    (2)
It is simply the variance of the weighted ensemble around the weighted mean, and it measures the disagreement among the networks on input x. The quadratic error of network α and of the ensemble are

   ε^α(x) = (f(x) - V^α(x))²    (3)
   e(x) = (f(x) - V̄(x))²    (4)

respectively. Adding and subtracting f(x) in (2) yields

   ā(x) = Σ_α w_α ε^α(x) - e(x)    (5)

(after a little algebra using that the weights sum to one). Calling the weighted average of the individual errors ε̄(x) = Σ_α w_α ε^α(x) this becomes

   e(x) = ε̄(x) - ā(x).    (6)
All these formulas can be averaged over the input distribution. Averages over the input distribution will be denoted by capital letters, so

   E^α = ∫ dx p(x) ε^α(x)    (7)
   A^α = ∫ dx p(x) a^α(x)    (8)
   E = ∫ dx p(x) e(x)    (9)

The first two of these are the generalization error and the ambiguity respectively for network α, and E is the generalization error for the ensemble. From (6) we then find for the ensemble generalization error

   E = Ē - Ā.    (10)
The first term on the right is the weighted average of the generalization errors of the individual networks (Ē = Σ_α w_α E^α), and the second is the weighted average of the ambiguities (Ā = Σ_α w_α A^α), which we refer to as the ensemble ambiguity.
The beauty of this equation is that it separates the generalization error into a term
that depends on the generalization errors of the individual networks and another
term that contains all correlations between the networks. Furthermore, the correlation term Ā can be estimated entirely from unlabeled data, i.e., no knowledge is
required of the real function to be approximated. The term "unlabeled example" is
borrowed from classification problems, and in this context it means an input x for
which the value of the target function f( x) is unknown.
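The decomposition (1)-(6) is easy to check numerically; the sketch below (our code, with assumed array shapes) evaluates the ensemble output, the ambiguity and the error identity on a batch of inputs:

```python
import numpy as np

def ensemble_decomposition(preds, target, weights):
    """Eqs. (1)-(6): weighted ensemble output, ambiguity, and the identity
    e(x) = eps_bar(x) - a_bar(x).

    preds   : (N, P) array of member outputs V^alpha(x) on P inputs
    target  : (P,) array of f(x), needed only for the error terms
    weights : (N,) array of w_alpha, non-negative and summing to one
    """
    V = np.asarray(preds, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    V_bar = (w * V).sum(axis=0)                        # Eq. (1)
    a_bar = (w * (V - V_bar) ** 2).sum(axis=0)         # Eq. (2)
    eps_bar = (w * (target - V) ** 2).sum(axis=0)      # weighted member errors
    e = (target - V_bar) ** 2                          # Eq. (4)
    assert np.allclose(e, eps_bar - a_bar)             # Eq. (6)
    return V_bar, a_bar, e
```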
Equation (10) expresses the tradeoff between bias and variance in the ensemble, but in a different way than the common bias-variance relation [4] in which the
averages are over possible training sets instead of ensemble averages. If the ensemble
is strongly biased the ambiguity will be small , because the networks implement very
similar functions and thus agree on inputs even outside the training set. Therefore
the generalization error will be essentially equal to the weighted average of the
generalization errors of the individual networks. If, on the other hand , there is a
large variance , the ambiguity is high and in this case the generalization error will
be smaller than the average generalization error . See also [5].
From this equation one can immediately see that the generalization error of the ensemble is always smaller than the (weighted) average of the ensemble errors, E ≤ Ē. In particular for uniform weights:

   E ≤ (1/N) Σ_α E^α    (11)

which has been noted by several authors, see e.g. [3].
3
THE CROSS-VALIDATION ENSEMBLE
From (10) it is obvious that increasing the ambiguity (while not increasing individual
generalization errors) will improve the overall generalization. We want the networks
to disagree! How can we increase the ambiguity of the ensemble? One way is to
use different types of approximators like a mixture of neural networks of different
topologies or a mixture of completely different types of approximators. Another
Figure 1: An ensemble of five networks was trained to approximate the square wave target function f(x). The final ensemble output (solid smooth curve) and the outputs of the individual networks (dotted curves) are shown. Also the square root of the ambiguity is shown (dash-dot line). For training 200 random examples
were used, but each network had a cross-validation set of size 40, so they were each
trained on 160 examples.
obvious way is to train the networks on different training sets. Furthermore, to be
able to estimate the first term in (10) it would be desirable to have some kind of
cross-validation. This suggests the following strategy.
Choose a number K ≤ p. For each network in the ensemble hold out K examples for testing, where the N test sets should have minimal overlap, i.e., the N training sets should be as different as possible. If, for instance, K ≤ p/N it is possible to choose the K test sets with no overlap. This enables us to estimate the generalization error E^α of the individual members of the ensemble, and at the same time make sure that the ambiguity increases. When holding out examples the generalization errors for the individual members of the ensemble, E^α, will increase, but the conjecture
is that for a good choice of the size of the ensemble (N) and the test set size
(K), the ambiguity will increase more and thus one will get a decrease in overall
generalization error.
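A sketch of this hold-out scheme (our code; the splitting policy is one simple choice, not prescribed above), together with the generalization estimate from (10):

```python
import numpy as np

def cv_ensemble_splits(n_examples, n_networks, K, seed=0):
    """Reserve K test examples per ensemble member with minimal overlap
    (no overlap when K <= n_examples / n_networks).
    Returns lists of train and test index arrays."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_examples)
    train, test = [], []
    for a in range(n_networks):
        start = (a * K) % n_examples
        held = perm[start:start + K]
        if len(held) < K:                       # wrap around if overlap is unavoidable
            held = np.concatenate([held, perm[:K - len(held)]])
        test.append(held)
        train.append(np.setdiff1d(perm, held))
    return train, test

def estimate_ensemble_error(E_alpha, A_alpha, weights):
    """Eq. (10): E = sum_a w_a E^a - sum_a w_a A^a, with E^a estimated on the
    hold-out sets and A^a on unlabeled inputs."""
    w = np.asarray(weights, dtype=float)
    return float(w @ np.asarray(E_alpha, float) - w @ np.asarray(A_alpha, float))
```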
This conjecture has been tested experimentally on a simple square wave function
of one variable shown in Figure 1. Five identical feed-forward networks with one
hidden layer of 20 units were trained independently by back-propagation using 200
random examples. For each network a cross-validation set of K examples was held
out for testing as described above. The "true" generalization and the ambiguity were
estimated from a set of 1000 random inputs. The weights were uniform, w_α = 1/5 (non-uniform weights are addressed later).
In Figure 2 average results over 12 independent runs are shown for some values of
Figure 2: The solid line shows the generalization error for uniform weights as
a function of K, where K is the size
of the cross-validation sets. The dotted
line is the error estimated from equation (10). The dashed line is for the optimal weights estimated by the use of the generalization errors for the individual networks estimated from the cross-validation sets as described in the text.
The bottom solid line is the generalization error one would obtain if the individual generalization errors were known
exactly (the best possible weights).
[Figure 2 plot: generalization error (0.02-0.08) vs. size of CV set (0-80)]
K (top solid line). First, one should note that the generalization error is the same
for a cross-validation set of size 40 as for size 0, although not lower, so it supports
the conjecture in a weaker form. However, we have done many experiments, and
depending on the experimental setup the curve can take on almost any form, sometimes the error is larger at zero than at 40 or vice versa. In the experiments shown,
only ensembles with at least four converging networks out of five were used . If all
the ensembles were kept, the error would have been significantly higher at K = 0 than for K > 0 because in about half of the runs none of the networks in the ensemble converged - something that seldom happened when a cross-validation set
was used. Thus it is still unclear under which circumstances one can expect a drop
in generalization error when using cross-validation in this fashion.
The dotted line in Figure 2 is the error estimated from equation (10) using the
cross-validation sets for each of the networks to estimate E^α, and one notices a
good agreement.
4
OPTIMAL WEIGHTS
The weights w_α can be estimated as described in e.g. [3]. We suggest instead to use unlabeled data and estimate them in such a way that they minimize the generalization error given in (10).
There is no analytical solution for the weights, but something can be said about the minimum point of the generalization error. Calculating the derivative of E as given in (10) subject to the constraints on the weights and setting it equal to zero shows that

   E^α - A^α = E  or  w_α = 0.    (12)
(The calculation is not shown because of space limitations, but it is easy to do.)
That is, E^α - A^α has to be the same for all the networks. Notice that A^α depends on the weights through the ensemble average of the outputs. It shows that the optimal weights have to be chosen such that each network contributes exactly w_α E
to the generalization error. Note, however, that a member of the ensemble can have
such a poor generalization or be so correlated with the rest of the ensemble that it
is optimal to set its weight to zero.
The weights can be "learned" from unlabeled examples, e.g. by gradient descent
minimization of the estimate of the generalization error (10). A more efficient
approach to finding the optimal weights is to turn it into a quadratic optimization
problem. That problem is non-trivial only because of the constraints on the weights
(Σ_α w_α = 1 and w_α ≥ 0). Define the correlation matrix,

   C^{αβ} = ∫ dx p(x) V^α(x) V^β(x).    (13)

Then, using that the weights sum to one, equation (10) can be rewritten as

   E = Σ_α w_α E^α + Σ_{αβ} w_α C^{αβ} w_β - Σ_α w_α C^{αα}.    (14)
Having estimates of E^α and C^{αβ} the optimal weights can be found by linear programming or other optimization techniques. Just like the ambiguity, the correlation
matrix can be estimated from unlabeled data to any accuracy needed (provided that
the input distribution p is known).
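One way to carry out this optimization is sketched below (our code, assuming SciPy is available; the experiments reported here used a linear-programming implementation): minimize (14) directly over the simplex.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_ensemble_weights(E_alpha, C):
    """Minimize Eq. (14) subject to w >= 0 and sum(w) = 1.

    E_alpha : (N,) estimated generalization errors of the members
    C       : (N, N) correlation matrix of Eq. (13), estimated on unlabeled
              inputs as C[a, b] = mean over x of V^a(x) * V^b(x)
    """
    E_alpha = np.asarray(E_alpha, dtype=float)
    C = np.asarray(C, dtype=float)
    N = len(E_alpha)

    def ensemble_error(w):                      # Eq. (14)
        return w @ E_alpha + w @ C @ w - w @ np.diag(C)

    res = minimize(
        ensemble_error,
        x0=np.full(N, 1.0 / N),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * N,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    )
    return res.x
```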
In Figure 2 the results from an experiment with weight optimization are shown.
The dashed curve shows the generalization error when the weights are optimized as
described above using the estimates of E^α from the cross-validation (on K examples). The lowest solid curve is for the idealized case, when it is assumed that the errors E^α are known exactly, so it shows the lowest possible error. The performance
improvement is quite convincing when the cross-validation estimates are used.
It is important to notice that any estimate of the generalization error of the individual networks can be used in equation (14). If one is certain that the individual
networks do not overfit, one might even use the training errors as estimates for
Ea (see [3]). It is also possible to use some kind of regularization in (14), if the
cross-validation sets are small.
5
ACTIVE LEARNING
In some neural network applications it is very time consuming and/or expensive
to acquire training data, e.g., if a complicated measurement is required to find the
value of the target function for a certain input. Therefore it is desirable to only use
examples with maximal information about the function. Methods where the learner
points out good examples are often called active learning.
We propose a query-based active learning scheme that applies to ensembles of networks with continuous-valued output. It is essentially a generalization of query by
committee [6, 7] that was developed for classification problems. Our basic assumption is that those patterns in the input space yielding the largest error are those
points we would benefit the most from including in the training set.
Since the generalization error is always non-negative, we see from (6) that the
weighted average of the individual network errors is always larger than or equal to
the ensemble ambiguity,
   ε̄(x) ≥ ā(x),    (15)
[Figure 3 plots: generalization error vs. training set size (0-50), two panels]
Figure 3: In both plots the full line shows the average generalization for active
learning, and the dashed line for passive learning as a function of the number of
training examples. The dots in the left plot show the results of the individual
experiments contributing to the mean for the active learning. The dots in right plot
show the same for passive learning.
which tells us that the ambiguity is a lower bound for the weighted average of the
squared error. An input pattern that yields a large ambiguity will always have a
large average error. On the other hand, a low ambiguity does not necessarily imply
a low error. If the individual networks are trained to a low training error on the
same set of examples then both the error and the ambiguity are low on the training
points. This ensures that a pattern yielding a large ambiguity cannot be in the close
neighborhood of a training example. The ambiguity will to some extent follow the
fluctuations in the error. Since the ambiguity is calculated from unlabeled examples
the input-space can be scanned for these areas to any detail. These ideas are well
illustrated in Figure 1, where the correlation between error and ambiguity is quite
strong, although not perfect.
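The bound can be checked numerically with a few lines of code (our own sketch; the outputs, weights and target below are arbitrary stand-ins): the weighted average error at a point exceeds the ambiguity by exactly the squared error of the ensemble output, which is non-negative.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.full(4, 0.25)              # ensemble weights, summing to one
V = rng.normal(size=4)            # individual network outputs at one input x
f = 1.3                           # target value at x (arbitrary)

V_bar = w @ V                     # ensemble output
ambiguity = w @ (V - V_bar) ** 2  # weighted spread of the members around the ensemble
avg_error = w @ (V - f) ** 2      # weighted average of the individual squared errors

# The gap is exactly the ensemble's own squared error, so avg_error >= ambiguity (eq. 15).
assert np.isclose(avg_error - ambiguity, (V_bar - f) ** 2)
```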
The results of an experiment with the active learning scheme are shown in Figure 3.
An ensemble of 5 networks was trained to approximate the square-wave function
shown in Figure 1, but in this experiment the function was restricted to the interval
from -2 to 2. The curves show the final generalization error of the ensemble in a
passive (dashed line) and an active learning test (solid line). For each training set
size, 2 x 40 independent tests were made, all starting with the same initial training
set of a single example. Examples were generated and added one at a time. In the
passive test examples were generated at random, and in the active one each example
was selected as the input that gave the largest ambiguity out of 800 random ones.
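The selection step can be sketched as follows (our own illustration; the toy ensemble, the candidate sampler and the pool size are assumptions): compute the ambiguity of each candidate from unlabeled forward passes and query the label of the most ambiguous one.

```python
import numpy as np

def select_query(ensemble_outputs, weights, candidates):
    """Return the candidate input with the largest ensemble ambiguity.

    ensemble_outputs: function mapping an input x to the vector of member outputs.
    weights:          (num_members,) ensemble weights summing to one.
    candidates:       iterable of candidate inputs, e.g. 800 random points."""
    def ambiguity(x):
        V = ensemble_outputs(x)
        V_bar = weights @ V
        return weights @ (V - V_bar) ** 2

    return max(candidates, key=ambiguity)

# Toy usage: three fixed "members" standing in for trained networks on a 1-d input.
members = [np.sin, np.cos, np.tanh]
outputs = lambda x: np.array([m(x) for m in members])
query = select_query(outputs, np.full(3, 1 / 3), np.random.uniform(-2, 2, size=800))
```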
Figure 3 also shows the distribution of the individual results of the active and
passive learning tests. Not only do we obtain significantly better generalization by
active learning, there is also less scatter in the results. It seems to be easier for the
ensemble to learn from the actively generated set.
6
CONCLUSION
The central idea in this paper was to show that there is a lot to be gained from
using unlabeled data when training in ensembles. Although we dealt with neural
networks, all the theory holds for any other type of method used as the individual
members of the ensemble.
It was shown that apart from getting the individual members of the ensemble to
generalize well, it is important for generalization that the individuals disagree as
much as possible, and we discussed one method to make even identical networks
disagree. This was done by training the individuals on different training sets by
holding out some examples for each individual during training. This had the added
advantage that these examples could be used for testing, and thereby one could
obtain good estimates of the generalization error.
It was discussed how to find the optimal weights for the individuals of the ensemble.
For our simple test problem the weights found improved the performance of the
ensemble significantly.
Finally a method for active learning was described, which was based on the method
of query by committee developed for classification problems. The idea is that if the
ensemble disagrees strongly on an input, it would be good to find the label for that
input and include it in the training set for the ensemble. It was shown how active
learning improves the learning curve a lot for a simple test problem.
Acknowledgements
We would like to thank Peter Salamon for numerous discussions and for his implementation of linear programming for optimization of the weights. We also thank
Lars Kai Hansen for many discussions and great insights, and David Wolpert for
valuable comments.
References
[1] L. K. Hansen and P. Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(10):993-1001, Oct. 1990.
[2] D. H. Wolpert. Stacked generalization. Neural Networks, 5(2):241-259, 1992.
[3] Michael P. Perrone and Leon N. Cooper. When networks disagree: Ensemble method for neural networks. In R. J. Mammone, editor, Neural Networks for Speech and Image Processing. Chapman-Hall, 1993.
[4] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1-58, Jan. 1992.
[5] Ronny Meir. Bias, variance and the combination of estimators; the case of linear least squares. Preprint (in Neuroprose), Technion, Haifa, Israel, 1994.
[6] H. S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In Proceedings of the Fifth Workshop on Computational Learning Theory, pages 287-294, San Mateo, CA, 1992. Morgan Kaufmann.
[7] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Information, prediction, and query by committee. In Advances in Neural Information Processing Systems, volume 5, San Mateo, California, 1993. Morgan Kaufmann.
5 | 1,002 | Using a neural net to instantiate a
deformable model
Christopher K. I. Williams, Michael D. Revow and Geoffrey E. Hinton
Department of Computer Science, University of Toronto
Toronto, Ontario, Canada M5S lA4
Abstract
Deformable models are an attractive approach to recognizing nonrigid objects which have considerable within class variability. However, there are severe search problems associated with fitting the
models to data. We show that by using neural networks to provide
better starting points, the search time can be significantly reduced.
The method is demonstrated on a character recognition task.
In previous work we have developed an approach to handwritten character recognition based on the use of deformable models (Hinton, Williams and Revow, 1992a;
Revow, Williams and Hinton, 1993). We have obtained good performance with this
method, but a major problem is that the search procedure for fitting each model to
an image is very computationally intensive, because there is no efficient algorithm
(like dynamic programming) for this task. In this paper we demonstrate that it is
possible to "compile down" some of the knowledge gained while fitting models to
data to obtain better starting points that significantly reduce the search time.
1
DEFORMABLE MODELS FOR DIGIT RECOGNITION
The basic idea in using deformable models for digit recognition is that each digit has
a model, and a test image is classified by finding the model which is most likely to
have generated it. The quality of the match between model and test image depends
on the deformation of the model, the amount of ink that is attributed to noise and
the distance of the remaining ink from the deformed model.
*Current address: Department of Computer Science and Applied Mathematics, Aston
University, Birmingham B4 7ET, UK.
More formally, the two important terms in assessing the fit are the prior probability distribution for the instantiation parameters of a model (which penalizes very
distorted models), and the imaging model that characterizes the probability distribution over possible images given the instantiated model.^1 Let I be an image, M
be a model and z be its instantiation parameters. Then the evidence for model M
is given by
$$P(I \mid M) = \int P(z \mid M)\, P(I \mid M, z)\, dz \qquad (1)$$
The first term in the integrand is the prior on the instantiation parameters and the
second is the imaging model, i.e., the likelihood of the data given the instantiated
model. $P(M \mid I)$ is directly proportional to $P(I \mid M)$, as we assume a uniform prior
on each digit.
Equation 1 is formally correct, but if z has more than a few dimensions the evaluation of this integral is very computationally intensive. However, it is often possible
to make an approximation based on the assumption that the integrand is strongly
peaked around a (global) maximum value z*. In this case, the evidence can be approximated by the highest peak of the integrand times a volume factor $\Delta(z \mid I, M)$,
which measures the sharpness of the peak.^2
$$P(I \mid M) \approx P(z^* \mid M)\, P(I \mid z^*, M)\, \Delta(z \mid I, M) \qquad (2)$$
By Taylor expanding around z* to second order it can be shown that the volume
factor depends on the determinant of the Hessian of $\log P(z, I \mid M)$. Taking logs
of equation 2, defining $E_{def}$ as the negative log of $P(z^* \mid M)$ and $E_{fit}$ as the corresponding term for the imaging model, the aim of the search is to find the
minimum of $E_{tot} = E_{def} + E_{fit}$. Of course the total energy will have many local
minima; for the character recognition task we aim to find the global minimum by
using a continuation method (see section 1.2).
1.1
SPLINES, AFFINE TRANSFORMS AND IMAGING MODELS
This section presents a brief overview of our work on using deformable models for
digit recognition. For a fuller treatment, see Revow, Williams and Hinton (1993) .
Each digit is modelled by a cubic B-spline whose shape is determined by the positions of the control points in the object-based frame. The models have eight control
points, except for the one model which has three, and the seven model which has
five. To generate an ideal example of a digit the control points are positioned at
their "home" locations. Deformed characters are produced by perturbing the control points away from their home locations. The home locations and covariance
matrix for each model were adapted in order to improve the performance.
The deformation energy only penalizes shape deformations. Affine transformations,
i.e., translation, rotation, dilation, elongation, and shear, do not change the underlying shape of an object so we want the deformation energy to be invariant under
them . We achieve this by giving each model its own "object-based frame" and
computing the deformation energy relative to this frame.
^1 This framework has been used by many authors, e.g. Grenander et al. (1991).
^2 The Gaussian approximation has been popularized in the neural net community by MacKay (1992).
The data we used consists of binary-pixel images of segmented handwritten digits.
The general flavour of an imaging model for this problem is that there should be a
high probability of inked pixels close to the spline, and lower probabilities further
away. This can be achieved by spacing out a number of Gaussian "ink generators"
uniformly along the contour; we have found that it is also useful to have a uniform
background noise process over the area of the image that is able to account for
pixels that occur far away from the generators. The ink generators and background
process define a mixture model. Using the assumption that each data point is
generated independently given the instantiated model, $P(I \mid z^*, M)$ factors into the
product of the probability density of each black pixel under the mixture model.
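A rough sketch of such an imaging model (our own, with made-up argument names; the spline machinery is abstracted into a set of generator centres): the log-likelihood of the inked pixels is a sum of logs of a mixture of isotropic Gaussians spaced along the spline plus a uniform background term.

```python
import numpy as np

def imaging_log_likelihood(ink_pixels, generator_centres, variance, bg_mix, image_area):
    """Approximate log P(I | z): each black pixel is scored under a mixture of
    Gaussian ink generators along the spline plus a uniform background process.

    ink_pixels:        (P, 2) coordinates of the inked pixels.
    generator_centres: (B, 2) bead positions spaced uniformly along the spline.
    variance:          shared isotropic variance of the generators (annealed).
    bg_mix:            mixing proportion of the uniform background noise."""
    d2 = ((ink_pixels[:, None, :] - generator_centres[None, :, :]) ** 2).sum(-1)
    gauss = np.exp(-d2 / (2 * variance)) / (2 * np.pi * variance)       # (P, B)
    density = (1 - bg_mix) * gauss.mean(axis=1) + bg_mix / image_area   # mixture density
    return np.log(density).sum()
```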
1.2
RECOGNIZING ISOLATED DIGITS
For each model, the aim of the search is to find the instantiation parameters that
minimize $E_{tot}$. The search starts with zero deformations and an initial guess for
the affine parameters which scales the model so as to lie over the data with zero
skew and rotation. A small number of generators with the same large variance are
placed along the spline, forming a broad, smooth ridge of high ink-probability along
the spline. We use a search procedure similar to the (iterative) Expectation Maximization (EM) method of fitting an unconstrained mixture of Gaussians, except
that (i) the Gaussians are constrained to lie on the spline (ii) there is a deformation energy term and (iii) the affine transformation must be recalculated on each
iteration. During the search the number of generators is gradually increased while
their variance decreases according to a predetermined "annealing" schedule.^3
After fitting all the models to a particular image, we wish to evaluate which of the
models best "explains" the data. The natural measure is the sum of $E_{fit}$, $E_{def}$
and the volume factor. However, we have found that performance is improved by
including four additional terms which are easily obtained from the final fits of the
model to the image. These are (i) a measure which penalizes matches in which
there are beads far from any inked pixels (the "beads in white space" problem),
and (ii) the rotation, shear and elongation of the affine transform. It is hard to
decide in a principled way on the correct weightings for all of these terms in the
evaluation function. We estimated the weightings from the data by training a
simple postprocessing neural network. These inputs are connected directly to the
ten output units. The output units compete using the "softmax" function which
guarantees that they form a probability distribution, summing to one.
2
PREDICTING THE INSTANTIATION PARAMETERS
The search procedure described above is very time consuming. However, given many
examples of images and the corresponding instantiation parameters obtained by the
slow method, it is possible to train a neural network to predict the instantiation
parameters of novel images. These predictions provide better starting points, so the
search time can be reduced.
^3 The schedule starts with 8 beads increasing to 60 beads in six steps, with the variance
decreasing from 0.04 to 0.0006 (measured in the object frame). The scale is set in the
object-based frame so that each model is 1 unit high.
2.1
PREVIOUS WORK
Previous work on hypothesizing instantiation parameters can be placed into two
broad classes, correspondence based search and parameter space search. In correspondence based search, the idea is to extract features from the image and identify
corresponding features in the model. Using sufficient correspondences the instantiation parameters of the model can be determined. The problem is that simple, easily
detectable image features have many possible matches, and more complex features
require more computation and are more difficult to detect. Grimson (1990) shows
how to search the space of possible correspondences using an interpretation tree.
An alternative approach, which is used in Hough transform techniques, is to directly work in parameter space. The Hough transform was originally designed for
the detection of straight lines in images, and has been extended to cover a number
of geometric shapes, notably conic sections. Ballard (1981) further extended the
approach to arbitrary shapes with the Generalized Hough Transform . The parameter space for each model is divided into cells ("binned"), and then for each image
feature a vote is added to each parameter space bin that could have produced that
feature. After collecting votes from all image features we then search for peaks in
the parameter space accumulator array, and attempt to verify pose. The Hough
transform can be viewed as a crude way of approximating the logarithm of the
posterior distribution $P(z \mid I, M)$ (e.g. Hunt et al., 1988).
However, these two techniques have only been used on problems involving rigid
models, and are not readily applicable to the digit recognition problem. For the
Hough space method, binning and vote collection is impractical in the high dimensional parameter space, and for the correspondence based approach there is a
lack of easily identified and highly discriminative features. The strengths of these
two techniques, namely their ability to deal with arbitrary scalings, rotations and
translations of the data, and their tolerance of extraneous features, are not really
required for a task where the input data is fairly well segmented and normalized.
Our approach is to use a neural network to predict the instantiation parameters for
each model, given an input image. Zemel and Hinton (1991) used a similar method
with simple 2-d objects, and more recently, Beymer et al (1993) have constructed
a network which maps from a face image to a 2-d parameter space spanning head
rotations and a smile/no-smile dimension. However, their method does not directly
map from images to instantiation parameters; they use a computer vision correspondence algorithm to determine the displacement field of pixels in a novel image
relative to a reference image, and then use this field as the input to the network.
This step limits the use of the approach to images that are sufficiently similar so
that the correspondence algorithm functions well.
2.2
INSTANTIATING DIGIT MODELS USING NEURAL
NETWORKS
The network which is used to predict the model instantiation parameters is shown
in figure 1. The (unthinned) binary images are normalized to give 16 x 16 8-bit
greyscale images which are fed into the neural network. The network uses a standard
three-layer architecture; each hidden unit computes a weighted sum of its inputs,
and then feeds this value through a sigmoidal nonlinearity $\sigma(x) = 1/(1 + e^{-x})$. The
[Figure 1: the input image feeds a layer of hidden units, with separate groups of output units giving the control points for the 0 model through the 9 model.]
Figure 1: The prediction network architecture. "cps" stands for control points.
output values are a weighted linear combination of the hidden unit activities plus
output biases. The targets are the locations of the control points in the normalized
image, found from fitting models as described in section 1.2.
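A minimal numpy model of this architecture (our own sketch; the weight initialization is an assumption, and the output count assumes eight control points per model except the one model with three and the seven model with five, two coordinates each):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PredictionNet:
    """16x16 greyscale image -> sigmoid hidden layer -> linear control-point outputs."""
    def __init__(self, n_hidden=20, n_outputs=2 * (8 * 8 + 3 + 5), seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (256, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_outputs))
        self.b2 = np.zeros(n_outputs)   # the biases alone correspond to the "trivial net"

    def forward(self, image):
        h = sigmoid(image.reshape(-1) @ self.W1 + self.b1)
        return h @ self.W2 + self.b2    # predicted (x, y) for every control point

cps = PredictionNet().forward(np.zeros((16, 16)))   # a blank image returns roughly the biases
```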
The network was trained with backpropagation to minimize the squared error, using
900 training images and 200 validation images of each digit drawn from the br
set of the CEDAR CDROM 1 database of Cities, States, ZIP Codes, Digits, and
Alphabetic Characters.^4 Two test sets were used; one was obtained from data in the
br dataset, and the other was the (official) bs test set. After some experimentation
we chose a network with twenty hidden units, which means that the net has over
8,000 weights. With such a large number of weights it is important to regularize the
solution obtained by the network by using a complexity penalty; we used a weight
penalty $\lambda \sum_j w_j^2$ and optimized $\lambda$ on a validation set. Targets were only set for the
correct digit at the output layer; nothing was backpropagated from the other output
units. The net took 440 epochs to train using the default conjugate gradient search
method in the Xerion neural network simulator.^5 It would be possible to construct
ten separate networks to carry out the same task as the net described above, but
this would intensify the danger of overfitting, which is reduced by giving the network
a common pool of hidden units which it can use as it decides appropriate.
For comparison with the prediction net described above, a trivial network which
just consisted of output biases was trained; this network simply learns the average
value of the control point locations. On a validation set the squared error of the
prediction net was over three times smaller than the trivial net. Although this is
encouraging, the acid test is to compare the performance of elastic models settled
from the predicted positions using a shortened annealing schedule; if the predictions
are good, then only a short amount of settling will be required.
^4 Made available by the United States Postal Service Office of Advanced Technology.
^5 Xerion was designed and implemented by Drew van Camp, Tony Plate and Geoffrey
Hinton at the University of Toronto.
Figure 2: A comparison of the initial instantiations due to the prediction net (top row)
and the trivial net (bottom row) on an image of a 2. Notice that for the two model the
prediction net is much closer to the data. The other digit models may or may not be greatly
affected by the input data; for example, the predictions from both nets seem essentially
the same for the zero, but for the seven the prediction net puts the model nearer to the
data.
The feedforward net predicts the position of the control points in the normalized
image. By inverting the normalization process, the positions of the control points
in the un-normalized image are determined. The model deformation and affine
transformation corresponding to these image control point locations can then be
determined by running a part of one iteration of the search procedure. Experiments
were then conducted with a number of shortened annealing schedules; for each one,
data obtained from settling on a part of the training data was used to train the
postprocessing net. The performance was then evaluated on the br test set.
The full annealing schedule has six stages. The shortened annealing schedules are:
1. No settling at all
2. Two iterations at the final variance of 0.0006
3. One iteration at 0.0025 and two at 0.0006
4. The full annealing schedule (for comparison)
The results on the br test set are shown in table 1. The general trends are that the
performance obtained using the prediction net is consistently better than the trivial
net, and that longer annealing schedules lead to better performance. A comparison
of schedules 3 and 4 in table 1 indicates that the performance of the prediction
net/schedule 3 combination is similar to (or slightly better than) that obtained
with the full annealing schedule, and is more than a factor of two faster. The
results with the full schedule are almost identical to the results obtained with the
default "box" initialization described in section 1.2. Figure 2 compares the outputs
of the prediction and trivial nets on a particular example. Judging from the weight
Schedule number    Trivial net    Prediction net    Average time required to settle one model (s)
1                  427            200               0.12
2                  329            58                0.25
3                  160            32                0.49
4                  40             36                1.11
Table 1: Errors on the internal test set of 2000 examples for different annealing schedules.
The timing trials were carried out on a R-4400 machine.
vectors and activity patterns of the hidden units, it does not seem that some of the
units are specialized for a particular digit class.
A run on the bs test set using schedule 3 gave an error rate of 4.76 % (129 errors),
which is very similar to the 125 errors obtained using the full annealing schedule
and the box initialization. A comparison of the errors made on the two runs shows
that only 67 out of the 129 errors were common to the two sets. This suggests that
it would be very sensible to reject cases where the two methods do not agree.
3
DISCUSSION
The prediction net used above can be viewed as an interpolation scheme in the
control point position space of each digit, $z(I) = z_0 + \sum_i a_i(I) z_i$, where $z(I)$ is
the predicted position in the control point space, $z_0$ is the contribution due to the
biases, $a_i$ is the activity of hidden unit $i$ and $z_i$ is its location in the control point
position space (learned from the data). If there are more hidden units than output
dimensions, then for any particular image there are an infinite number of ways to
make this equation hold exactly. However, the network will tend to find solutions
so that the $a_i(I)$'s will vary smoothly as the image is perturbed.
The nets described above output just one set of instantiation parameters for a
given model. However, it may be preferable to be able to represent a number of
guesses about model instantiation parameters; one way of doing this is to train a
network that has multiple sets of output parameters, as in the "mixture of experts"
architecture of Jacobs et al. (1991). The outputs can be interpreted as a mixture
distribution in the control point position space, conditioned on the input image.
Another approach to providing more information about the posterior distribution
is described in (Hinton, Williams and Revow, 1992b), where $P(z \mid I)$ is approximated
using a fixed set of basis functions whose weighting depends on the input image I.
The strategies described above directly predict the instantiation parameters in parameter space. It is also possible to use neural networks to hypothesize correspondences, i.e. to predict an inked pixel's position on the spline given a local window
of context in the image. With sufficient matches it is then possible to compute
the instantiation parameters of the model. We have conducted some preliminary
experiments with this method (described in Williams, 1994), which indicate that
good performance can be achieved for the correspondence prediction task.
We have shown that we can obtain a significant speedup using the prediction net.
The schemes outlined above which allow multimodal predictions in instantiation
parameter space may improve performance and deserve further investigation. We
are also interested in improving the performance of the prediction net, for example
by outputting a confidence measure which could be used to adjust the length of
the elastic models' search appropriately. We believe that using machine learning
techniques like neural networks to help reduce the amount of search required to fit
complex models to data may be useful for many other problems.
Acknowledgements
This research was funded by Apple and by the Ontario Information Technology Research
Centre. We thank Allan Jepson, Richard Durbin, Rich Zemel, Peter Dayan, Rob Tibshirani
and Yann Le Cun for helpful discussions. Geoffrey Hinton is the Noranda Fellow of the
Canadian Institute for Advanced Research.
References
Ballard, D. H. (1981). Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition, 13(2):111-122.
Beymer, D., Shashua, A., and Poggio, T. (1993). Example Based Image Analysis and Synthesis. AI Memo 1431, AI Laboratory, MIT.
Grenander, U., Chow, Y., and Keenan, D. M. (1991). Hands: A pattern theoretic study of biological shapes. Springer-Verlag.
Grimson, W. E. L. (1990). Object recognition by computer. MIT Press, Cambridge, MA.
Hinton, G. E., Williams, C. K. I., and Revow, M. D. (1992a). Adaptive elastic models for hand-printed character recognition. In Moody, J. E., Hanson, S. J., and Lippmann, R. P., editors, Advances in Neural Information Processing Systems 4. Morgan Kaufmann.
Hinton, G. E., Williams, C. K. I., and Revow, M. D. (1992b). Combining two methods of recognizing hand-printed digits. In Aleksander, I. and Taylor, J., editors, Artificial Neural Networks 2. Elsevier Science Publishers.
Hunt, D. J., Nolte, L. W., and Ruedger, W. H. (1988). Performance of the Hough Transform and its Relationship to Statistical Signal Detection Theory. Computer Vision, Graphics and Image Processing, 43:221-238.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive mixtures of local experts. Neural Computation, 3(1).
MacKay, D. J. C. (1992). Bayesian Interpolation. Neural Computation, 4(3):415-447.
Revow, M. D., Williams, C. K. I., and Hinton, G. E. (1993). Using mixtures of deformable models to capture variations in hand printed digits. In Srihari, S., editor, Proceedings of the Third International Workshop on Frontiers in Handwriting Recognition, pages 142-152, Buffalo, New York, USA.
Williams, C. K. I. (1994). Combining deformable models and neural networks for handprinted digit recognition. PhD thesis, Dept. of Computer Science, University of Toronto.
Zemel, R. S. and Hinton, G. E. (1991). Discovering viewpoint-invariant relationships that characterize objects. In Lippmann, R. P., Moody, J. E., and Touretzky, D. S., editors, Advances in Neural Information Processing Systems 3, pages 299-305. Morgan Kaufmann Publishers.
6 | 1,003 | Plasticity-Mediated Competitive Learning
Terrence J. Sejnowski
terry@salk.edu
Nicol N. Schraudolph
nici@salk.edu
Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
San Diego, CA 92186-5800
and
Computer Science & Engineering Department
University of California, San Diego
La Jolla, CA 92093-0114
Abstract
Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited
to sparse representations, while decorrelation and factorization
schemes that support distributed representations are computationally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique
to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski,
1993); the same approach could be used to improve other learning
algorithms.
1 INTRODUCTION
Unsupervised neural networks frequently employ sets of nodes or subnetworks
with identical architecture and objective function. Some form of competitive interaction is then needed for these nodes to differentiate and efficiently complement
each other in their task.
[Figure 1: the activity f(y) and the (scaled) plasticity f'(y) plotted for net input y from -4 to 4.]
Figure 1: Activity f and plasticity f' of a logistic node as a function of its net input
y. Vertical lines indicate those values of y whose pre-images in input space are
depicted in Figure 2.
Inhibition is the simplest competitive mechanism: the most active nodes suppress
the ability of their peers to learn, either directly or by depressing their activity.
Since inhibition can be implemented by diffuse, nonadaptive mechanisms, it is an
attractive solution from both neurobiological and computational points of view.
However, inhibition can only form either localized (unary) or sparse distributed
representations, in which each output has only one state with significant information content.
For fully distributed representations, schemes to decorrelate (Barlow and Foldiak,
1989; Leen, 1991) and even factorize (Schmidhuber, 1992; Bell and Sejnowski, 1995)
node activities do exist. Unfortunately these require specific, weighted lateral
connections whose adaptation is computationally expensive and may interfere
with feedforward learning. While they certainly have their place as competitive
learning algorithms, the capability of biological neurons to implement them seems
questionable.
In this paper, we suggest an alternative approach: we extend the advantages of
simple inhibition to distributed representations by decoupling the competition
from the activation vector. In particular, we use neural plasticity - the derivative
of a logistic activation function - as a medium for competition.
Plasticity is low for both high and low activation values but high for intermediate
ones (Figure 1); distributed patterns of activity may therefore have localized plasticity. If competition is controlled by plasticity, then simple competitive mechanisms
will constrain us to localized plasticity but allow representations with distributed
activity.
The next section reintroduces the binary information gain optimization (BINGO)
algorithm for a single node; we then discuss how plasticity-mediated competition
improves upon the decorrelation mechanism used in our original extension to
multiple nodes. Finally, we establish a close relationship between the plasticity
and the entropy of a logistic node that provides an intuitive interpretation of
plasticity-mediated competitive learning in this context.
2 BINARY INFORMATION GAIN OPTIMIZATION
In (Schraudolph and Sejnowski, 1993), we proposed an unsupervised learning rule
that uses logistic nodes to seek out binary features in its input. The output
$$z = f(y), \quad \text{where } f(y) = \frac{1}{1 + e^{-y}} \text{ and } y = \vec{w} \cdot \vec{x} \qquad (1)$$
of each node is interpreted stochastically as the probability that a given feature is
present. We then search for informative directions in weight space by maximizing
the information gained about an unknown binary feature through observation of
z. This binary information gain is given by
$$\Delta H(z) = H(\bar{z}) - H(z), \qquad (2)$$
where H(z) is the entropy of a binary random variable with probability z, and $\bar{z}$
is a prediction of z based on prior knowledge. Gradient ascent in this objective
results in the learning rule
$$\Delta \vec{w} \propto f'(y) \cdot (y - \bar{y}) \cdot \vec{x}, \qquad (3)$$
where $\bar{y}$ is a prediction of y. In the simplest case, $\bar{y}$ is an empirical average $\langle y \rangle$ of past
activity, computed either over batches of input data or by means of an exponential
trace; this amounts to a nonlinear version of the covariance rule (Sejnowski, 1977).
Using just the average as prediction introduces a strong preference for splitting the
data into two equal-sized clusters. While such a bias is appropriate in the initial
phase of learning, it fails to take the nonlinear nature of f into account. In order
to discount data in the saturated regions of the logistic function appropriately, we
weigh the average by the node's plasticity f'(y):
$$\bar{y} = \frac{\langle y \cdot f'(y) \rangle}{\langle f'(y) \rangle + c}, \qquad (4)$$
where c is a very small positive constant introduced to ensure numerical stability
for large values of y. Now the bias for splitting the data evenly is gradually relaxed
as the network's weights grow and data begins to fall into saturated regions of f.
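In code, one step of this rule might look as follows (our own sketch; the learning rate, the exponential-trace form of the averages and the trace constant are assumptions):

```python
import numpy as np

def bingo_step(w, x, num, den, lr=0.05, decay=0.99, c=1e-6):
    """One BINGO update: dw ~ f'(y) (y - y_bar) x, with y_bar the
    plasticity-weighted running average of equation (4)."""
    y = w @ x
    z = 1.0 / (1.0 + np.exp(-y))
    plasticity = z * (1.0 - z)                    # f'(y)

    y_bar = num / (den + c)                       # equation (4)
    w = w + lr * plasticity * (y - y_bar) * x     # equation (3)

    num = decay * num + (1 - decay) * y * plasticity   # trace of <y f'(y)>
    den = decay * den + (1 - decay) * plasticity       # trace of <f'(y)>
    return w, num, den
```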
3
PLASTICITY-MEDIATED COMPETITION
For multiple nodes the original BINGO algorithm used a decorrelating predictor
as the competitive mechanism:
$$\vec{\bar{y}} = \vec{y} + (Q_{\vec{y}} - 2I)(\vec{y} - \langle \vec{y} \rangle), \qquad (5)$$
where $Q_{\vec{y}}$ is the autocorrelation matrix of $\vec{y}$, and I the identity matrix. Note that
$Q_{\vec{y}}$ is computationally expensive to maintain; in connectionist implementations it
Figure 2: The "three cigars" problem. Each plot shows the pre-image of zero net
input, superimposed on a scatter plot of the data set, in input space. The two
flanking lines delineate the "plastic region" where the logistic is not saturated,
providing an indication of weight vector size. Left, two-node BINGO network
using decorrelation (Equations 3 & 5) fails to separate the three data clusters. Right,
same network using plasticity-mediated competition (Equations 4 & 6) succeeds.
is often approximated by lateral anti-Hebbian connections whose adaptation must
occur on a faster time scale than that of the feedforward weights (Equation 3) for
reasons of stability (Leen, 1991). In practice this means that learning is slowed
significantly.
In addition, decorrelation can be inappropriate when nonlinear objectives are optimized - in our case, two prominent binary features may well be correlated.
Consider the "three cigars" problem illustrated in Figure 2: the decorrelating predictor (left) forces the two nodes into a near-orthogonal arrangement, interfering
with their ability to detect the parallel gaps separating the data clusters.
For our purposes, decorrelation is thus too strong a constraint on the discriminants:
all we require is that the discovered features be distinct. We achieve this by reverting
to the simple predictor of Equation 4 while adding a global, plasticity-mediated
excitation^1 factor to the weight update:
$$\Delta \vec{w}_i \propto f'(y_i) \cdot (y_i - \bar{y}_i) \cdot \vec{x} \cdot \sum_j f'(y_j) \qquad (6)$$
As Figure 2 (right) illustrates, this arrangement solves the "three cigars" problem. In the high-dimensional environment of hand-written digit recognition, this
algorithm discovers a set of distributed binary features that preserve most of the
information needed to classify the digits, even though the network was never given
any class labels (Figure 3).
^1 The interaction is excitatory rather than inhibitory since a node's plasticity is inversely
correlated with the magnitude of its net input.
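A layer-level sketch of the resulting update (our own; the learning rate and the bookkeeping for the per-node predictions follow the single-node code above and are likewise assumptions):

```python
import numpy as np

def bingo_layer_step(W, x, y_bar, lr=0.05):
    """Equation (6): every node's weight change is scaled by the summed plasticity
    of the whole layer, a diffuse, nonadaptive excitation signal.

    W: (num_nodes, num_inputs) weight matrix; y_bar: (num_nodes,) predictions of y."""
    y = W @ x
    z = 1.0 / (1.0 + np.exp(-y))
    plasticity = z * (1.0 - z)            # f'(y_i) for every node
    excitation = plasticity.sum()         # global factor shared by all nodes

    return W + lr * excitation * (plasticity * (y - y_bar))[:, None] * x[None, :]
```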
Figure 3: Weights found by a four-node network running the improved BINGO algorithm (Equations 4 & 6) on a set of 1200 handwritten digits due to (Guyon et al., 1989). Although the network is unsupervised, its four-bit output conveys most of the information necessary to classify the digits.
4 PLASTICITY AND BINARY ENTROPY
It is possible to establish a relationship between the plasticity f' of a logistic node
and its entropy that provides an intuitive account of plasticity-mediated competition as applied to BINGO. Consider the binary entropy
$$H(z) = -z \log z - (1 - z) \log(1 - z) \qquad (7)$$
A well-known quadratic approximation is
$$\tilde{H}(z) = 8e^{-1} z(1 - z) \approx H(z) \qquad (8)$$
Now observe that the plasticity of a logistic node
$$f'(y) = \frac{\partial}{\partial y}\, \frac{1}{1 + e^{-y}} = \cdots = z(1 - z) \qquad (9)$$
is in fact proportional to $\tilde{H}(z)$ - that is, a logistic node's plasticity is in effect
a convenient quadratic approximation to its binary output entropy. The overall
entropy in a layer of such nodes equals the sum of individual entropies less their
redundancy:
$$H(\vec{z}) = \sum_j H(z_j) - R(\vec{z}) \qquad (10)$$
The plasticity-mediated excitation factor in Equation 6,
$$\sum_j f'(y_j) \propto \sum_j \tilde{H}(z_j) \approx \sum_j H(z_j), \qquad (11)$$
is thus proportional to an approximate upper bound on the entropy of the layer,
which in turn indicates how much more information remains to be gained by
learning from a particular input. In the context of BINGO, plasticity-mediated
competition thus scales weight changes according to a measure of the network's
ignorance: the less it is able to identify a given input in terms of its set of binary
features, the more it tries to learn doing so.
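A short numerical check of equations (8) and (9) (our own; natural logarithms are assumed in the entropy):

```python
import numpy as np

z = np.linspace(1e-6, 1 - 1e-6, 1001)
H = -z * np.log(z) - (1 - z) * np.log(1 - z)     # binary entropy, equation (7)
H_tilde = 8 * np.exp(-1) * z * (1 - z)           # quadratic approximation, equation (8)
plasticity = z * (1 - z)                         # f'(y) written in terms of z, equation (9)

assert np.allclose(plasticity, np.exp(1) / 8 * H_tilde)  # exact proportionality
print("largest gap between H and its approximation:", np.max(np.abs(H - H_tilde)))
```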
5 CONCLUSION
By using the derivative of a logistic activation function as a medium for competitive
interaction, we were able to obtain differentiated, fully distributed representations
without resorting to computationally expensive decorrelation schemes. We have
demonstrated this plasticity-mediated competition approach on the BINGO feature
extraction algorithm, which is significantly improved by it. A close relationship
between the plasticity of a logistic node and its binary output entropy provides an
intuitive interpretation of this unusual form of competition.
Our general approach of using a nonmonotonic function of activity - rather than
activity itself - to control competitive interactions may prove valuable in other
learning schemes, in particular those that seek distributed rather than local representations.
Acknowledgements
We thank Rich Zemel and Paul Viola for stimulating discussions, and the McDonnell-Pew Center for Cognitive Neuroscience in San Diego for financial support.
References
Barlow, H. B. and Foldiak, P. (1989). Adaptation and decorrelation in the cortex. In
Durbin, R. M., Miall, C., and Mitchison, G. J., editors, The Computing Neuron,
chapter 4, pages 54-72. Addison-Wesley, Wokingham.
Bell, A. J. and Sejnowski, T. J. (1995). A non-linear information maximisation
algorithm that performs blind separation. In Advances in Neural Information
Processing Systems, volume 7, Denver 1994.
Guyon, I., Poujaud, I., Personnaz, L., Dreyfus, G., Denker, J., and Le Cun, Y. (1989).
Comparing different neural network architectures for classifying handwritten
digits. In Proceedings of the International Joint Conference on Neural Networks,
volume II, pages 127-132. IEEE.
Leen, T. K. (1991). Dynamics of learning in linear feature-discovery networks.
Network, 2:85-105.
Schmidhuber, J. (1992). Learning factorial codes by predictability minimization.
Neural Computation, 4(6):863-879.
Schraudolph, N. N. and Sejnowski, T. J. (1993). Unsupervised discrimination of
clustered data via optimization of binary information gain. In Hanson, S. J.,
Cowan, J. D., and Giles, C. L., editors, Advances in Neural Information Processing Systems, volume 5, pages 499-506, Denver 1992. Morgan Kaufmann, San
Mateo.
Sejnowski, T. J. (1977). Storing covariance with nonlinearly interacting neurons.
Journal of Mathematical Biology, 4:303-321.
7 | 1,004 | ICEG Morphology Classification using an
Analogue VLSI Neural Network
Richard Coggins, Marwan Jabri, Barry Flower and Stephen Pickard
Systems Engineering and Design Automation Laboratory
Department of Electrical Engineering J03,
University of Sydney, 2006, Australia.
Email: richardc@sedal.su.oz.au
Abstract
An analogue VLSI neural network has been designed and tested
to perform cardiac morphology classification tasks. Analogue techniques were chosen to meet the strict power and area requirements
of an Implantable Cardioverter Defibrillator (ICD) system. The robustness of the neural network architecture reduces the impact of
noise, drift and offsets inherent in analogue approaches. The network is a 10:6:3 multi-layer perceptron with on chip digital weight
storage, a bucket brigade input to feed the Intracardiac Electrogram (ICEG) to the network and has a winner take all circuit
at the output. The network was trained in loop and included a
commercial ICD in the signal processing path. The system has successfully distinguished arrhythmia for different patients with better
than 90% true positive and true negative detections for dangerous
rhythms which cannot be detected by present ICDs. The chip was
implemented in 1.2um CMOS and consumes less than 200nW maximum average power in an area of 2.2 x 2.2mm2.
1
INTRODUCTION
To the present time, most ICDs have used timing information from ventricular
leads only to classify rhythms, which has meant some dangerous rhythms cannot
be distinguished from safe ones, limiting the use of the device. Even two lead
Figure 1: The Morphology of ST and VT retrograde 1:1.
atrial/ventricular systems fail to distinguish some rhythms when timing information alone is used [Leong and Jabri, 1992]. A case in point is the separation of Sinus Tachycardia (ST) from Ventricular Tachycardia with 1:1 retrograde conduction.
ST is a safe arrhythmia which may occur during vigorous exercise and is characterised by a heart rate of approximately 120 beats/minute. VT retrograde 1:1 also
occurs at the same low rate but can be a potentially fatal condition. False negative
detections can cause serious heart muscle injury while false positive detections deplete the batteries, cause patient suffering and may lead to costly transplantation
of the device. Figure 1 shows, however, the way in which the morphology changes
on the ventricular lead for these rhythms. Note that the morphology change is
predominantly in the "QRS complex" where the letters QRS are the conventional
labels for the different points in the conduction cycle during which the heart is
actually pumping blood.
For a number of years, researchers have studied template matching schemes in order
to try and detect such morphology changes. However, techniques such as correlation
waveform analysis [Lin et. al., 1988], though quite successful are too computationally intensive to meet power requirements. In this paper, we demonstrate that
an analogue VLSI neural network can detect such morphology changes while still
meeting the strict power and area requirements of an implantable system. The
advantages of an analogue approach are born out when one considers that an energy efficient analogue to digital converter such as [Kusumoto et. al., 1993] uses
1.5nJ per conversion implying 375nW power consumption for analogue to digital
conversion of the ICEG alone. Hence, the integration of a bucket brigade device and
analogue neural network provides a very efficient way of interfacing to the analogue
domain. Further, since the network is trained in loop with the ICD in real time,
the effects of device offsets, noise, QRS detection jitter and signal distortion in the
analogue circuits are largely alleviated.
The next section discusses the chip circuit designs. Section 3 describes the method
Figure 2: Floor Plan and Photomicrograph of the chip
used to train the network for the morphology classification task. Section 4 describes
the classifier performance on seven patients with arrhythmia which can not be
distinguished using the heart rate only. Section 5 summarises the results, remaining
problems and future directions for the work .
2
ARCHITECTURE
The neural network chip consists of a 10:6:3 multilayer perceptron, an input bucket
brigade device (BBD) and a winner take all (WTA) circuit at the output. A floor
plan and photomicrograph of the chip appears in figure 2. The BBD samples the
incoming ICEG at a rate of 250Hz. For three class problems, the winner take all
circuit converts the winning class to a digital signal. For the two class problem
considered in this paper , a simple thresholding function suffices. The following
subsections briefly describe the functional elements of the chip . The circuit diagrams
for the chip building blocks appear in figure 3.
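Purely as a reference for the circuit blocks described in the following subsections, this is a software sketch of the signal flow the chip realises in analogue form: a 10-sample input window feeding a 10:6:3 multi-layer perceptron, with a winner-take-all (or, for two classes, a simple threshold) at the output. The tanh non-linearity, the gain value and the random weights are illustrative assumptions, not measurements of the chip.

```python
import numpy as np

def classify_window(x, w_hidden, w_out, gain=1.0):
    """Software sketch of the 10:6:3 network: tanh-like hidden layer,
    linear output layer, winner-take-all decision on the three outputs."""
    h = np.tanh(gain * (w_hidden @ x))       # 6 hidden units
    y = w_out @ h                            # 3 output units
    return int(np.argmax(y)), y              # WTA: index of the largest output

# Illustrative use with random, untrained weights (placeholders only).
rng = np.random.default_rng(0)
x = rng.standard_normal(10)                  # one 10-sample ICEG window
winner, y = classify_window(x, rng.standard_normal((6, 10)),
                            rng.standard_normal((3, 6)))
print(winner, y)
```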
2.1
BUCKET BRIGADE DEVICE
One stage of the bucket brigade circuit is shown in figure 3. The BBD uses a
two phase clock to shift charge from cell to cell and is based on a design by
Leong [Leong, 1992] . The BBD operates by transferring charge deficits from S
to D in each of the cells. PHI1 and PHI2 are two-phase non-overlapping clocks.
The cell is buffered from the synapse array to maintain high charge transfer efficiency. A sample and hold facility is provided to store the input on the gates of the
synapses. The BBD clocks are generated off chip and are controlled by the QRS
complex detector in the ICD.
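Functionally the BBD acts as a ten-stage analogue shift register whose clocks are halted by the QRS detector. A rough software analogue is sketched below; only the 250 Hz sample rate comes from the text, the rest is illustrative.

```python
from collections import deque

FS_HZ = 250.0   # ICEG sample rate stated in the text

class BucketBrigadeModel:
    """Toy model of the 10-stage input BBD: samples shift through the cells
    until the QRS detector freezes the clocks, at which point the stored
    window is held for the synapse array."""
    def __init__(self, stages=10):
        self.cells = deque([0.0] * stages, maxlen=stages)
        self.frozen = False

    def clock_in(self, sample):
        if not self.frozen:              # clocks running: shift in a new sample
            self.cells.appendleft(sample)

    def qrs_detected(self):              # freeze and present the window
        self.frozen = True
        return list(self.cells)

bbd = BucketBrigadeModel()
for i in range(25):
    bbd.clock_in(float(i) / FS_HZ)
print(bbd.qrs_detected())                # the last 10 samples, newest first
```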
2.2
SYNAPSE
This synapse has been used on a number of neural network chips previously.
e.g . [Coggins et. al., 1994] . The synapse has five bits plus sign weight storage which
Figure 3: Neuron, Bucket Brigade and Synapse Circuit Diagrams.
sets the bias to a differential pair which performs the multiplication. The bias references for the weights are derived from a weighted current source in the corner of
the chip. A four quadrant multiplication is achieved by the four switches at the top
of the differential pair.
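Because the weight is held digitally, each synapse can realise only one of a small set of signed levels. The sketch below shows one plausible way to map an ideal real-valued weight onto a five-bit-plus-sign code and to apply it as a four-quadrant multiply; the linear scaling is an assumption, not the chip's actual bias-DAC law.

```python
def quantize_weight(w, w_max=1.0, bits=5):
    """Map a real-valued weight to a sign and a 5-bit magnitude code (0..31)."""
    levels = 2 ** bits - 1
    mag = min(abs(w) / w_max, 1.0)
    return (-1 if w < 0 else 1), round(mag * levels)

def synapse_output(x, sign, code, w_max=1.0, bits=5):
    """Four-quadrant multiply of the input signal by the stored signed level."""
    levels = 2 ** bits - 1
    return sign * (code / levels) * w_max * x

sign, code = quantize_weight(-0.37)
print(sign, code, synapse_output(0.8, sign, code))
```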
2.3
NEURON
Due to the low power requirements, the bias currents of the synapse arrays are of
the order of hundreds of nano amps, hence the neurons must provide an effective
resistance of many mega ohms to feed the next synapse layer while also providing
gain control. Without special high resistance polysilicon, simple resistive neurons
use prohibitive area, However, for larger networks with fan-in much greater than
ten, an additional problem of common mode cancellation is encountered, That is,
as the fan-in increases, a larger common mode range is required or a cancellation
scheme using common mode feedback is needed.
The neuron of figure 3 implements such a cancellation scheme, The mirrors MO/M2
and Ml/M3 divide the input current and facilitate the sum at the drain of M7.
M7/M8 mirrors the sum so that it may be split into two equal currents by the
mirrors formed by M4, M5 and M6 which are then subtracted from the input
currents. Thus, the differential voltage vp - Vm is a function of the transistor
transconductances, the common mode input current and the feedback factor , The
gain of the neuron can be controlled by varying the width to length ratio of the
mirror transistors MO and Ml. The implementation in this case allows seven gain
combinations, using a three-bit RAM cell to store the gain.
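In signal terms, the cancellation removes the average (common-mode) part of the two input currents so that only their difference is passed to the next layer. A minimal numerical sketch of that idea, ignoring all transistor-level detail, is:

```python
def cancel_common_mode(i_plus, i_minus):
    """Remove the common-mode component of a pair of input currents, leaving
    a purely differential pair (their sum is zero afterwards)."""
    i_cm = 0.5 * (i_plus + i_minus)
    return i_plus - i_cm, i_minus - i_cm

# A large common-mode current carrying a small differential signal:
print(cancel_common_mode(1.00e-6, 0.98e-6))   # -> (+10 nA, -10 nA)
```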
Figure 4: Block Diagram of the Training and Testing System.
The importance of a common mode cancellation scheme for large networks can
be seen when compared to the straight forward approach of resistive or switched
capacitor neurons. This may be illustrated by considering the energy usage of
the two approaches. Firstly, we need to define the required gain of the neuron
as a function of its fan-in . If we assume that useful inputs to the network are
mostly sparse, i.e. with a small fraction of non-zero values, then the gain is largely
independent of the fan-in, yet the common mode signal increases linearly with fanin. For the case of a neuron which does not cancel the common mode, the power
supply voltage must be increased to accommodate the common mode signal, thus
leading to a quadratic increase in energy use with fan-in. A common mode cancelling
neuron on the other hand , suffers only a linear increase in energy use with fan-in
since extra voltage range is not required and the increased energy use arises only
due to the linear increase in common mode current.
3
TRAINING SYSTEM
The system used to train and test the neural network is shown in figure 4. Control
of training and testing takes place on the PC. The PC uses a PC-LAB card to
provide analogue and digital I/O . The PC plays the ICEG signal to the input of
the commercial ICD in real time. Note, that the PC is only required for initially
training the network and in this case as a source of the heart signal. The commercial
ICD performs the function of QRS complex detection using analogue circuits. The
QRS complex detection signal is then used to freeze the BBD clocks of the chip, so
that a classification can take place.
When training, a number of examples of the arrhythmia to be classified are selected
from a single patient data base recorded during an electrophysiological study and
previously classified by a cardiologist. Since most of the morphological information
is in the QRS complex, only these segments of the data are repeatedly presented to
                    % Training Attempts Converged
Patient   Run 1, H=3   Run 1, H=6   Run 2, H=3   Run 2, H=6   Average Iterations
1             80           10           60           60               62
2             80          100            0           10               86
3              0            0            0           10              101
4             60           10           40           40               77
5            100           80            0           60               44
6            100           40           60           60               46
7             80          100           40          100               17
Table 1: Training Performance of the system on seven patients.
the network. The weights are adjusted according to the training algorithm running
on the PC using the analogue outputs of the network to reduce the output error .
The PC writes weights to the chip via the digital I/Os of the PC-LAB card and the
serial weight bus of the network. The software package implementing the training and
testing, called MUME [Jabri et. al ., 1992], provides a suite of training algorithms
and control options. Online training was used due to its success in training small
networks and because the presentation of the QRS complexes to the network was
the slowest part of the training procedure. The algorithm used for weight updates
in this paper was summed weight node perturbation [Flower and Jabri, 1993].
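The attraction of perturbation-style training for analogue hardware is that gradients are estimated from measured changes in the output error rather than computed analytically through the circuit. The loop below is a much-simplified single-weight perturbation update, not the actual summed weight node perturbation algorithm of Flower and Jabri; it is only meant to show the measure-perturb-update structure, with a toy error function standing in for the chip measurement.

```python
import numpy as np

def perturbation_step(weights, measured_error, delta=1e-3, lr=0.1):
    """One simplified perturbation update: estimate each weight's gradient
    from the change in measured error, then take a small gradient step."""
    base = measured_error(weights)
    grad = np.zeros_like(weights)
    for i in range(weights.size):
        w = weights.copy()
        w.flat[i] += delta                       # perturb a single weight
        grad.flat[i] = (measured_error(w) - base) / delta
    return weights - lr * grad

# Toy demonstration: in the real system measured_error would be the output
# error observed from the chip for the presented QRS complexes.
target = np.array([0.6, -0.6, 0.2, -0.2, 0.0, 0.3])
measured_error = lambda w: float(np.sum((w - target) ** 2))
w = np.zeros(6)
for _ in range(20):
    w = perturbation_step(w, measured_error)
print(round(measured_error(w), 4))
```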
The system was trained on seven different patients separately all of whom had
VT with 1: 1 retrograde conduction. Note, that patient independent training has
been tried but with mixed results [Tinker, 1992] . Table 1 summarises the training
statistics for the seven patients. For each patient and each architecture, five training
runs were performed starting from a different random initial weight set. Each
of the patients was trained with eight of each class of arrhythmia. The network
architecture used was 10:H:1, where H is the number of hidden layer neurons and
the unused neurons being disabled by setting their input weights to zero. Two sets
of data were collected denoted Run 1 and Run 2. Run 1 corresponded to output
target values of ?0.6V within margin 0.45V and Run 2 to output target values of
?0.2V within margin 0.05V. A training attempt was considered to have converged
when the training set was correctly classified within two hundred training iterations.
Once the morphologies to be distinguished have been learned for a given patient,
the remainder of the patient data base is played back in a continuous stream and
the outputs of the classifier at each QRS complex are logged and may be compared
to the classifications of a cardiologist. The resulting generalisation performance is
discussed in the next section.
4
MORPHOLOGY CLASSIFIER GENERALISATION
PERFORMANCE
Table 2 summarises the generalisation performance of the system on the seven
patients for the training attempts which converged. Most of the patients show a
correct classification rate better than 90% for at least one architecture on one of the
Patient
1
2
3
4
5
6
7
No. of
Complexes
ST
VT
440
61
57
94
67
146
166
65
61
96
61
99
28
80
1
2
3
4
5
6
7
440
94
67
166
61
61
28
61
57
146
65
96
99
80
% Correct Classifications Run 1
H = 6
H = 3
VT
ST
ST
VT
89±10 89±3
58±0
99±0
99±1
99±1
100±0 99±1
66±44 76±37
99±1
50±3
82±1 75±13
89±9
94±6
84±8
97±1
90±5
99±1
97±3
98±5
99±1
99±1
% Correct Classifications Run 2
86±14 99±1
88±2
99±1
94±6
94±3
84±2
99±1
76±18 59±2
87±7 100±0
88±2
49±5
84±1
82±5
92±6 90±10
99±1
99±1
94±3
99±0
94±3
92±3
Table 2: Generalisation Performance of the system on seven patients.
runs, whereas a timing-based classifier cannot separate these arrhythmias at all.
For each convergent weight set the network classified the test set five times. Thus,
the "% Correct" columns denote the mean and standard deviation of the classifier
performance with respect to both training and testing variations. By duty cycling
the bias to the network and buffers, the chip dissipates less than 200n W power for
a nominal heart rate of 120 beats/minute during generalisation.
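The arithmetic behind the duty-cycled power figure can be sketched as follows; only the 120 beats/minute rate and the sub-200 nW average come from the text, while the peak bias power and the active window per beat are invented illustrative values.

```python
# Back-of-the-envelope duty-cycle arithmetic. Only the 120 beats/minute rate
# and the <200 nW average are from the text; peak power and the active
# window per beat are assumed, illustrative values.
heart_rate_hz = 120 / 60.0        # one classification every 0.5 s
active_window_s = 2e-3            # assumed time the bias currents are on per beat
peak_power_w = 40e-6              # assumed peak bias power while classifying

duty_factor = heart_rate_hz * active_window_s      # fraction of time powered up
average_power_w = peak_power_w * duty_factor       # 40 uW * 0.004 = 160 nW
print(duty_factor, average_power_w)
```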
5
DISCUSSION
Referring to table 1 we see that the patient 3 data was relatively difficult to train.
However, for the one occasion when training converged generalisation performance
was quite acceptable. Inspection of this patients data showed that typically, the
morphologies of the two rhythms were very similar. The choice of output targets,
margins and architecture appear to be patient dependent and possibly interacting
factors. Although larger margins make training easier for some patients they appear
to also introduce more variability in generalisation performance. This may be due
to the non-linearity of the neuron circuit. Further experiments are required to
optimise the architecture for a given patient and to clarify the effect of varying
targets, margins and neuron gain. Penalty terms could also be added to the error
function to minimise the possibility of missed detections of the dangerous rhythm.
The relatively slow rate of the heart results in the best power consumption being
obtained by duty cycling the bias currents to the synapses and the buffers. Hence,
the bias settling time of the weighted current source is the limiting factor for reducing power consumption further for this design. By modifying the connection of the
current source to the synapses using a bypassing technique to reduce transients in
the weighted currents, still lower power consumption could be achieved.
6
CONCLUSION
The successful classification of a difficult cardiac arrhythmia problem has been
demonstrated using an analogue VLSI neural network approach. Furthermore, the
chip developed has shown very low power consumption of less than 200n W, meeting the requirements of an implantable system. The chip has performed well, with
over 90% classification performance for most patients studied and has proved to be
robust when the real world influence of analogue QRS detection jitter is introduced
by a commercial implantable cardioverter defibrillator placed in the signal path to
the classifier.
Acknowledgements
The authors acknowledge the funding for the work in this paper provided under
Australian Generic Technology Grant Agreement No. 16029 and thank Dr. Phillip
Leong of the University of Sydney and Dr. Peter Nickolls of Telectronics Pacing
Systems Ltd., Australia for their helpful suggestions and advice.
References
[Castro et. al., 1993] H.A. Castro, S.M. Tam, M.A. Holler, "Implementation and
Performance of an analogue Nonvolatile Neural Network," Analogue Integrated
Circuits and Signal Processing, vol. 4(2), pp. 97-113, September 1993.
[Lin et. al., 1988] D. Lin, L.A. Dicarlo, and J.M. Jenkins, "Identification of Ventricular Tachycardia using Intracavitary Electrograms: analysis of time and frequency domain patterns," Pacing & Clinical Electrophysiology, pp. 1592-1606,
November 1988.
[Leong, 1992] P.H.W. Leong, Arrhythmia Classification Using Low Power VLSI,
PhD Thesis, University of Sydney, Appendix B, 1992.
[Kusumoto et. al., 1993] K. Kusumoto et. al., "A 10-bit 20MHz 30mW Pipelined
Interpolating ADC," ISSCC, Digest of Technical Papers, pp. 62-63, 1993.
[Leong and Jabri, 1992] P.H.W. Leong and M. Jabri, "MATIC - An Intracardiac Tachycardia Classification System", Pacing & Clinical Electrophysiology,
September 1992.
[Coggins et. al., 1994] R.J. Coggins and M.A. Jabri, "WATTLE: A Trainable Gain
Analogue VLSI Neural Network", NIPS6, Morgan Kauffmann Publishers, 1994.
[Jabri et. al., 1992] M.A. Jabri, E.A. Tinker and L. Leerink, "MUME- A MultiNet-Multi-Architecture Neural Simulation Environment", Neural Network Simulation Environments, Kluwer Academic Publications, January, 1994.
[Flower and Jabri, 1993] B. Flower and M. Jabri, "Summed Weight Neuron Perturbation: an O(N) improvement over Weight Perturbation," NIPS5, Morgan
Kauffmann Publishers, pp. 212-219, 1993.
[Tinker, 1992] E.A. Tinker, "The SPASM Algorithm for Ventricular Lead Timing and Morphology Classification," SEDAL ICEG-RPT-016-92, Department of
Electrical Engineering, University of Sydney, 1992.
| 1004 |@word briefly:1 simulation:2 tried:1 accommodate:1 initial:1 born:1 amp:1 current:11 yet:1 must:2 icds:2 designed:1 update:1 alone:2 implying:1 prohibitive:1 device:6 selected:1 inspection:1 provides:2 node:1 ron:1 firstly:1 tinker:4 five:3 differential:3 m7:2 supply:1 consists:1 resistive:2 isscc:1 introduce:1 arrhythmia:8 morphology:15 multi:2 m8:1 considering:1 provided:2 linearity:1 circuit:11 developed:1 adc:1 nj:1 suite:1 axl:1 charge:3 um:1 classifier:6 control:3 grant:1 appear:3 positive:2 engineering:3 timing:4 pumping:1 meet:2 path:2 approximately:1 plus:1 au:1 studied:2 range:2 testing:4 block:2 implement:1 writes:1 procedure:1 area:4 matching:1 alleviated:1 quadrant:1 cannot:1 pipelined:1 storage:2 influence:1 conventional:1 demonstrated:1 phil:1 starting:1 m2:1 array:2 variation:1 kauffmann:2 limiting:2 target:4 commercial:4 play:1 nominal:1 us:3 agreement:1 element:1 electrical:2 mume:2 cycle:1 morphological:1 consumes:1 environment:2 battery:1 lcd:1 trained:4 segment:1 efficiency:1 chip:17 train:3 describe:1 effective:1 detected:1 corresponded:1 quite:2 larger:3 distortion:1 tested:1 fatal:1 transplantation:1 statistic:1 online:1 advantage:1 transistor:2 remainder:1 cancelling:1 loop:2 oz:1 requirement:5 cmos:1 sydney:4 implemented:1 australian:1 direction:1 safe:2 waveform:1 correct:4 modifying:1 australia:2 transient:1 implementing:1 suffices:1 pacing:3 coggins:8 adjusted:1 clarify:1 hold:1 bypassing:1 considered:2 nw:2 mo:2 label:1 successfully:1 weighted:3 interfacing:1 i3:1 varying:2 voltage:3 publication:1 derived:1 improvement:1 multinet:1 slowest:1 detect:2 helpful:1 dependent:1 typically:1 transferring:1 integrated:1 initially:1 hidden:1 vlsi:9 classification:16 denoted:1 plan:2 integration:1 special:1 summed:2 equal:1 once:1 mm2:1 cancel:1 future:1 richard:4 inherent:1 serious:1 implantable:5 m4:1 phase:2 maintain:1 attempt:3 detection:8 possibility:1 pc:8 divide:1 increased:2 classify:1 column:2 bbd:6 injury:1 mhz:1 deviation:1 hundred:2 dod:1 successful:2 too:1 ohm:1 j03:1 conduction:3 referring:1 defibrillator:2 st:6 spasm:1 off:1 vm:1 holler:1 thesis:1 recorded:1 possibly:1 nano:1 dr:2 corner:1 tam:1 leading:1 automation:1 stream:1 cardioverter:2 performed:2 try:1 lab:2 option:1 cio:1 formed:1 largely:2 percept:1 pickard:5 vp:1 identification:1 researcher:1 straight:1 classified:4 converged:4 detector:1 synapsis:3 suffers:1 email:1 energy:5 pp:4 frequency:1 gain:8 proved:1 subsection:1 electrophysiological:1 actually:1 back:1 appears:1 feed:2 ta:1 synapse:7 though:1 furthermore:1 stage:1 correlation:1 clock:4 hand:1 su:1 o:1 overlapping:1 mode:10 disabled:1 usage:1 effect:2 phillip:1 facilitate:1 true:2 building:1 facility:1 hence:3 laboratory:1 rpt:1 illustrated:1 during:4 width:1 rhythm:7 m5:1 occasion:1 demonstrate:1 performs:2 funding:1 predominantly:1 common:10 functional:1 brigade:7 winner:3 discussed:1 kluwer:1 sedal:2 buffered:1 freeze:1 cancellation:4 had:1 clll:1 base:2 showed:1 store:2 buffer:2 success:1 vt:6 meeting:2 muscle:1 seen:1 morgan:2 greater:1 additional:1 floor:2 barry:5 signal:10 stephen:5 reduces:1 technical:1 polysilicon:1 academic:1 clinical:2 lin:3 serial:1 controlled:2 impact:1 multilayer:1 patient:23 intracardiac:2 sinus:1 iteration:2 achieved:2 cell:5 whereas:1 separately:1 diagram:3 source:4 publisher:2 extra:1 strict:2 hz:1 capacitor:1 mw:1 unused:1 leong:8 split:1 m6:1 switch:1 architecture:8 converter:1 reduce:2 intensive:1 shift:1 minimise:1 icd:5 duty:2 ltd:1 penalty:1 peter:1 resistance:2 cause:2 repeatedly:1 
useful:1 ten:1 sign:1 per:1 mega:1 correctly:1 vol:1 four:2 blood:1 photomicrograph:2 retrograde:4 ram:1 fraction:1 year:1 convert:1 sum:2 run:10 package:1 letter:1 jitter:2 logged:1 place:2 separation:1 missed:1 acceptable:1 appendix:1 bit:2 layer:3 distinguish:1 played:1 convergent:1 fan:6 syna:1 quadratic:1 encountered:1 dangerous:3 occur:1 software:1 ventricular:6 relatively:2 department:2 according:1 combination:1 describes:2 cardiac:2 qrs:10 wta:1 castro:2 bucket:7 heart:7 computationally:1 previously:2 bus:1 discus:1 fail:1 needed:1 jenkins:1 eight:1 generic:1 distinguished:4 subtracted:1 robustness:1 ho:1 gate:1 top:1 remaining:1 running:1 transconductances:1 summarises:3 added:1 occurs:1 digest:1 costly:1 cycling:2 september:2 deficit:1 card:2 separate:1 thank:1 consumption:5 seven:7 whom:1 considers:1 collected:1 richardc:1 length:1 dicarlo:1 providing:1 ratio:1 difficult:2 mostly:1 potentially:1 negative:2 design:4 implementation:2 perform:1 conversion:2 neuron:16 acknowledge:1 november:1 beat:2 january:1 variability:1 interacting:1 perturbation:3 drift:1 introduced:1 pair:2 required:5 connection:1 learned:1 flower:8 pattern:1 atrial:1 optimise:1 analogue:22 power:13 settling:1 dissipates:1 electrogram:1 scheme:4 technology:1 leerink:1 ne:1 acknowledgement:1 drain:1 multiplication:2 mixed:1 suggestion:1 digital:6 switched:1 thresholding:1 placed:1 bias:6 perceptron:1 template:1 nickolls:1 wattle:1 sparse:1 feedback:2 world:1 forward:1 author:1 ml:2 incoming:1 marwan:5 matic:1 continuous:1 table:5 transfer:1 robust:1 complex:8 interpolating:1 jabri:15 domain:2 tachycardia:4 linearly:1 noise:2 suffering:1 advice:1 slow:1 nonvolatile:1 winning:1 exercise:1 minute:2 offset:2 false:2 importance:1 mirror:4 phd:1 margin:5 easier:1 fanin:1 vigorous:1 electrophysiology:2 cardiologist:2 presentation:1 telectronics:1 iceg:9 change:4 included:1 characterised:1 generalisation:7 operates:1 reducing:1 called:1 m3:1 arises:1 meant:1 trainable:1 |
8 | 1,005 | Real-Time Control of a Tokamak Plasma
Using Neural Networks
Chris M Bishop
Neural Computing Research Group
Department of Computer Science
Aston University
Birmingham, B4 7ET, U.K.
c.m .bishop@aston .ac .uk
Paul S Haynes, Mike E U Smith, Tom N Todd,
David L Trotman and Colin G Windsor
AEA Technology, Culham Laboratory,
Oxfordshire OX14 3DB
(Euratom/UKAEA Fusion Association)
Abstract
This paper presents results from the first use of neural networks
for the real-time feedback control of high temperature plasmas in
a tokamak fusion experiment. The tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the tokamak, hydrogen
plasmas, at temperatures of up to 100 Million K, are confined
by strong magnetic fields. Accurate control of the position and
shape of the plasma boundary requires real-time feedback control
of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural
network approach can give significantly better performance than
the linear technique currently used on most tokamak experiments.
The practical application of the neural network approach requires
high-speed hardware, for which a fully parallel implementation of
the multilayer perceptron, using a hybrid of digital and analogue
technology, has been developed.
1
INTRODUCTION
Fusion of the nuclei of hydrogen provides the energy source which powers the sun.
It also offers the possibility of a practically limitless terrestrial source of energy.
However, the harnessing of this power has proved to be a highly challenging problem. One of the most promising approaches is based on magnetic confinement of a
high temperature (10 7 - 108 Kelvin) plasma in a device called a tokamak (from the
Russian for 'toroidal magnetic chamber') as illustrated schematically in Figure 1.
At these temperatures the highly ionized plasma is an excellent electrical conductor, and can be confined and shaped by strong magnetic fields. Early tokamaks
had plasmas with circular cross-sections, for which feedback control of the plasma
position and shape is relatively straightforward. However, recent tokamaks, such as
the COMPASS experiment at Culham Laboratory, as well as most next-generation
tokamaks, are designed to produce plasmas whose cross-sections are strongly noncircular. Figure 2 illustrates some of the plasma shapes which COMPASS is designed to explore. These novel cross-sections provide substantially improved energy
confinement properties and thereby significantly enhance the performance of the
tokamak.
Figure 1: Schematic cross-section of a tokamak experiment showing the toroidal vacuum vessel (outer D-shaped curve) and plasma
(shown shaded). Also shown are the radial (R) and vertical (Z) coordinates. To a good approximation, the tokamak can be regarded
as axisymmetric about the Z-axis, and so the plasma boundary can
be described by its cross-sectional shape at one particular toroidal
location.
Unlike circular cross-section plasmas, highly non-circular shapes are more difficult to
produce and to control accurately, since currents through several control coils must
be adjusted simultaneously. Furthermore, during a typical plasma pulse, the shape
must evolve, usually from some initial near-circular shape. Due to uncertainties
in the current and pressure distributions within the plasma, the desired accuracy
for plasma control can only be achieved by making real-time measurements of the
position and shape of the boundary, and using error feedback to adjust the currents
in the control coils.
The physics of the plasma equilibrium is determined by force balance between the
(Figure 2 panel labels: circle, ellipse, D-shape, bean)
Figure 2: Cross-sections of the COMPASS vacuum vessel showing
some examples of potential plasma shapes. The solid curve is the
boundary of the vacuum vessel, and the plasma is shown by the
shaded regions.
thermal pressure of the plasma and the pressure of the magnetic field, and is relatively well understood. Particular plasma configurations are described in terms
of solutions of a non-linear partial differential equation called the Grad-Shafranov
(GS) equation. Due to the non-linear nature of this equation, a general analytic
solution is not possible. However, the GS equation can be solved by iterative numerical methods, with boundary conditions determined by currents flowing in the
external control coils which surround the vacuum vessel. On the tokamak itself it
is changes in these currents which are used to alter the position and cross-sectional
shape of the plasma. Numerical solution of the GS equation represents the standard technique for post-shot analysis of the plasma, and is also the method used
to generate the training dataset for the neural network, as described in the next
section. However , this approach is computationally very intensive and is therefore
unsuitable for feedback control purposes.
For real-time control it is necessary to have a fast (typically ≤ 50 µs) determination of the plasma boundary shape. This information can be extracted from a
variety of diagnostic systems , the most important being local magnetic measurements taken at a number of points around the perimeter of the vacuum vessel.
Most tokamaks have several tens or hundreds of small pick up coils located at carefully optimized points around the torus for this purpose. We shall represent these
magnetic signals collectively as a vector m .
For a large class of equilibria, the plasma boundary can be reasonably well represented in terms of a simple parameterization, governed by an angle-like variable θ, given by

R(θ) = R₀ + a cos(θ + δ sin θ)
Z(θ) = Z₀ + a κ sin θ                                  (1)

where we have defined the following parameters:
R₀   radial distance of the plasma center from the major axis of the torus,
Z₀   vertical distance of the plasma center from the torus midplane,
a    minor radius measured in the plane Z = Z₀,
κ    elongation,
δ    triangularity.
We denote these parameters collectively by Yk. The basic problem which has to be
addressed, therefore, is to find a representation for the (non-linear) mapping from
the magnetic signals m to the values of the geometrical parameters Yk, which can
be implemented in suitable hardware for real-time control.
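As a sanity check on what equation (1) describes, the following short sketch evaluates R(θ) and Z(θ) for a given parameter set; the example values of R₀, Z₀, a, κ and δ are arbitrary illustrations, not COMPASS equilibria.

```python
import numpy as np

def boundary(R0, Z0, a, kappa, delta, n=200):
    """Plasma boundary (R(theta), Z(theta)) from the parameterization (1)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    R = R0 + a * np.cos(theta + delta * np.sin(theta))
    Z = Z0 + a * kappa * np.sin(theta)
    return R, Z

# Illustrative parameter values only (not a COMPASS equilibrium):
R, Z = boundary(R0=0.56, Z0=0.0, a=0.21, kappa=1.6, delta=0.3)
print(R.min(), R.max(), Z.min(), Z.max())
```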
The conventional approach presently in use on many tokamaks involves approximating the mapping between the measured magnetic signals and the geometrical
parameters by a single linear transformation. However, the intrinsic non-linearity
of the mappings suggests that a representation in terms of feedforward neural networks should give significantly improved results (Lister and Schnurrenberger, 1991;
Bishop et a/., 1992; Lagin et at., 1993). Figure 3 shows a block diagram of the
control loop for the neural network approach to tokamak equilibrium control.
Figure 3: Block diagram of the control loop used for real-time
feedback control of plasma position and shape.
2
SOFTWARE SIMULATION RESULTS
The dataset for training and testing the network was generated by numerical solution of the GS equation using a free-boundary equilibrium code. The data base
currently consists of over 2,000 equilibria spanning the wide range of plasma positions and shapes available in COMPASS. Each equilibrium configuration takes
several minutes to generate on a fast workstation. The boundary of each configuration is then fitted using the form in equation 1, so that the equilibria are labelled
with the appropriate values of the shape parameters. Of the 120 magnetic signals
available on COMPASS which could be used to provide inputs to the network, a
subset of 16 has been chosen using sequential forward selection based on a linear
representation for the mapping (discussed below) .
It is important to note that the transformation from magnetic signals to flux surface
parameters involves an exact linear invariance. This follows from the fact that, if all
of the currents are scaled by a constant factor, then the magnetic fields will be scaled
by this factor, and the geometry of the plasma boundary will be unchanged . It is
important to take advantage of this prior knowledge and to build it into the network
structure, rather than force the network to learn it by example. We therefore
normalize the vector m of input signals to the network by dividing by a quantity
proportional to the total plasma current. Note that this normalization has to be
incorporated into the hardware implementation of the network, as will be discussed
in Section 3.
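In software terms this normalization is just an element-wise division of the measurement vector by a current-proportional scale before the network sees it. The sketch below illustrates the resulting invariance; how the plasma-current proxy is actually derived on the hardware is not stated here, so treat that argument as a placeholder.

```python
import numpy as np

def normalize_signals(m, plasma_current_proxy):
    """Divide the magnetic measurements by a quantity proportional to the
    total plasma current, removing the overall current scale."""
    return np.asarray(m, dtype=float) / plasma_current_proxy

# If all coil signals scale with the plasma current, the normalized inputs
# (and hence the inferred boundary shape) are unchanged.
m = np.array([1.0, -0.5, 0.25])
print(normalize_signals(m, 2.0))
print(normalize_signals(2.0 * m, 4.0))   # identical to the line above
```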
Figure 4: Plots of the values from the test set versus the values
predicted by the linear mapping for the 3 equilibrium parameters,
together with the corresponding plots for a neural network with 4
hidden units.
The results presented in this paper are based on a multilayer perceptron architecture
having a single layer of hidden units with 'tanh' activation functions , and linear
output units. Networks are trained by minimization of a sum-of-squares error using
a standard conjugate gradients optimization algorithm, and the number of hidden
units is optimized by measuring performance with respect to an independent test
set. Results from the neural network mapping are compared with those from the
optimal linear mapping, that is the single linear transformation which minimizes
the same sum-of-squares error as is used in the neural network training algorithm,
as this represents the method currently used on a number of present day tokamaks .
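In outline, the software experiments amount to fitting a tanh-hidden-layer network to the equilibrium database under a sum-of-squares error and selecting the hidden-layer size on an independent test set. The sketch below follows that recipe but uses plain gradient descent instead of the conjugate-gradients optimizer mentioned in the text, and random numbers stand in for the equilibrium-code data, so it is a structural illustration only.

```python
import numpy as np

def train_mlp(X, Y, n_hidden, steps=2000, lr=1e-3, seed=0):
    """One-hidden-layer tanh network with linear outputs, fitted by plain
    gradient descent on a sum-of-squares error (the paper uses conjugate
    gradients; this is only a structural stand-in)."""
    rng = np.random.default_rng(seed)
    W1 = 0.1 * rng.standard_normal((n_hidden, X.shape[1]))
    W2 = 0.1 * rng.standard_normal((Y.shape[1], n_hidden))
    for _ in range(steps):
        H = np.tanh(X @ W1.T)                      # hidden activations
        E = H @ W2.T - Y                           # output errors
        gW2 = E.T @ H                              # grad of 0.5*sum(E**2) wrt W2
        gW1 = ((E @ W2) * (1.0 - H ** 2)).T @ X    # grad wrt W1
        W2 -= lr * gW2
        W1 -= lr * gW1
    return W1, W2

def test_error(W1, W2, X, Y):
    H = np.tanh(X @ W1.T)
    return 0.5 * float(np.sum((H @ W2.T - Y) ** 2))

# Model selection in the spirit of the text: train several hidden-layer sizes
# and keep the one with the smallest error on an independent test set.
rng = np.random.default_rng(1)
X, Y = rng.standard_normal((200, 16)), rng.standard_normal((200, 3))
Xt, Yt = rng.standard_normal((50, 16)), rng.standard_normal((50, 3))
scores = {h: test_error(*train_mlp(X, Y, h), Xt, Yt) for h in (4, 8, 16)}
print(min(scores, key=scores.get), scores)
```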
Initial results were obtained on networks having 3 output units, corresponding to
the values of vertical position Z₀, major radius R₀, and elongation κ; these being
parameters which are of interest for real-time feedback control. The smallest normalized test set error of 11.7 is obtained from the network having 16 hidden units.
By comparison, the optimal linear mapping gave a normalized test set error of 18.3.
This represents a reduction in error of about 30% in going from the linear mapping
to the neural network. Such an improvement, in the context of this application , is
very significant.
For the experiments on real-time feedback control described in Section 4 the currently available hardware only permitted networks having 4 hidden units, and so we
consider the results from this network in more detail. Figure 4 shows plots of the
network predictions for various parameters versus the corresponding values from
the test set portion of the database. Analogous plots for the optimal linear map
predictions versus the database values are also shown. Comparison of the corresponding figures shows the improved predictive capability of the neural network,
even for this sub-optimal network topology.
3
HARDWARE IMPLEMENTATION
The hardware implementation of the neural network must have a bandwidth of 2:
20 kHz in order to cope with the fast timescales of the plasma evolution. It must
also have an output precision of at least (the the analogue equivalent of) 8 bits in
order to ensure that the final accuracy which is attainable will not be limited by the
hardware system. We have chosen to develop a fully parallel custom implementation
of the multilayer perceptron, based on analogue signal paths with digitally stored
synaptic weights (Bishop et al., 1993). A VME-based modular construction has
been chosen as this allows flexibility in changing the network architecture, ease of
loading network weights, and simplicity of data acquisition. Three separate types
of card have been developed as follows:
? Combined 16-input buffer and signal normalizer.
This provides an analogue hardware implementation of the input normalization described earlier.
? 16 x 4 matrix multiplier
The synaptic weights are produced using 12 bit frequency-compensated
multiplying DACs (digital to analogue converters) which can be configured
to allow 4-quadrant multiplication of analogue signals by a digitally stored
number.
? 4-channel sigmoid module
There are many ways to produce a sigmoidal non-linearity, and we have
opted for a solution using two transistors configured as a long-tailed pair,
to generate a 'tanh' sigmoidal transfer characteristic. The principal drawback of such an approach is the strong temperature sensitivity due to the
appearance of temperature in the denominator of the exponential transistor
transfer characteristic. An elegant solution to this problem has been found
by exploiting a chip containing 5 transistors in close thermal contact. Two
of the transistors form the long-tailed pair, one of the transistors is used
as a heat source, and the remaining two transistors are used to measure
temperature. External circuitry provides active thermal feedback control,
and stability to changes in ambient temperature over the range 0°C to 50°C
is found to be well within the acceptable range.
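The tanh shape and its temperature sensitivity both follow from the ideal long-tailed-pair relation, in which the differential output current goes as tanh(ΔV/2V_T) with thermal voltage V_T = kT/q. The snippet below, with an arbitrary illustrative tail current, evaluates that relation at 0 °C and 50 °C to show the drift that the on-chip thermal feedback is there to suppress.

```python
import math

K_B, Q = 1.380649e-23, 1.602176634e-19     # Boltzmann constant, electron charge

def long_tailed_pair(delta_v, temp_k, tail_current=1e-6):
    """Ideal bipolar long-tailed pair: differential output current follows
    tanh(dV / 2*VT), where VT = kT/q is the thermal voltage."""
    v_t = K_B * temp_k / Q
    return tail_current * math.tanh(delta_v / (2.0 * v_t))

# The same 20 mV input produces a noticeably different output at 0 C and
# 50 C, which is the drift the on-chip thermal control suppresses.
print(long_tailed_pair(0.02, 273.15), long_tailed_pair(0.02, 323.15))
```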
The complete network is constructed by mounting the appropriate combination
of cards in a VME rack and configuring the network topology using front panel
interconnections. The system includes extensive diagnostics, allowing voltages at
all key points within the network to be monitored as a function of time via a series
of multiplexed output channels.
4
RESULTS FROM REAL-TIME FEEDBACK CONTROL
Figure 5 shows the first results obtained from real-time control of the plasma in
the COMPASS tokamak using neural networks. The evolution of the plasma elongation, under the control of the neural network, is plotted as a function of time
during a plasma pulse. Here the desired elongation has been preprogrammed to
follow a series of steps as a function of time. The remaining 2 network outputs
(radial position Ro and vertical position Zo) were digitized for post-shot diagnosis ,
but were not used for real-time control. The solid curve shows the value of elongation given by the corresponding network output, and the dashed curve shows the
post-shot reconstruction of the elongation obtained from a simple 'filament' code,
which gives relatively rapid post-shot plasma shape reconstruction but with limited
accuracy. The circles denote the elongation values given by the much more accurate
reconstructions obtained from the full equilibrium code. The graph clearly shows
the network generating the required elongation signal in close agreement with the
reconstructed values. The typical residual error is of order 0.07 on elongation values
up to around 1.5. Part of this error is attributable to residual offset in the integrators used to extract magnetic field information from the pick-up coils, and this is
currently being corrected through modifications to the integrator design. An additional contribution to the error arises from the restricted number of hidden units
available with the initial hardware configuration. While these results represent the
first obtained using closed loop control, it is clear from earlier software modelling of
larger network architectures (such as 32- 16-4) that residual errors of order a few %
should be attainable. The implementation of such larger networks is being pursued,
following the successes with the smaller system.
Acknowledgements
We would like to thank Peter Cox, Jo Lister and Colin Roach for many useful
discussions and technical contributions. This work was partially supported by the
UK Department of Trade and Industry.
Figure 5: Plot of the plasma elongation κ as a function of time
during shot no. 9576 on the COMPASS tokamak, during which the
elongation was being controlled in real-time by the neural network.
References
Bishop C M, Cox P, Haynes P S, Roach C M, Smith M E U, Todd T N and Trotman
D L, 1992. A neural network approach to tokamak equilibrium control. In Neural
Network Applications, Ed. J G Taylor, Springer Verlag, 114-128.
Bishop C M, Haynes P S, Roach C M, Smith ME U, Todd T N, and Trotman D L.
1993. Hardware implementation of a neural network for plasma position control in
COMPASS-D. In Proceedings of the 17th. Symposium on Fusion Technology, Rome,
Italy. 2 997-1001.
Lagin L, Bell R, Davis S, Eck T, Jardin S, Kessel C, Mcenerney J, Okabayashi
M, Popyack J and Sauthoff N. 1993. Application of neural networks for real-time
calculations of plasma equilibrium parameters for PBX-M, In Proceedings of the
17th. Symposium on Fusion Technology, Rome, Italy. 2 1057-1061.
Lister J B and Schnurrenberger H. 1991. Fast non-linear extraction of plasma
parameters using a neural network mapping. Nuclear Fusion. 31, 1291-1300.
| 1005 |@word cox:2 loading:1 pulse:2 simulation:2 attainable:2 pressure:3 pick:2 thereby:1 solid:2 shot:6 reduction:1 initial:3 configuration:4 series:2 pbx:1 current:7 activation:1 must:4 numerical:3 shape:16 analytic:1 designed:2 plot:5 mounting:1 device:2 parameterization:1 plane:1 smith:7 provides:3 location:1 sigmoidal:2 along:1 constructed:1 differential:1 symposium:2 consists:1 rapid:1 integrator:2 eck:1 linearity:2 panel:1 substantially:1 minimizes:1 developed:2 transformation:3 ro:3 toroidal:3 scaled:2 uk:2 control:30 unit:8 configuring:1 kelvin:1 understood:1 local:1 todd:7 path:1 suggests:1 challenging:1 shaded:2 co:1 ease:1 limited:2 range:3 practical:1 filament:1 testing:1 block:2 bell:1 significantly:3 okabayashi:1 radial:3 quadrant:1 close:2 selection:1 context:1 equivalent:1 map:1 conventional:1 demonstrated:1 center:2 compensated:1 straightforward:1 go:1 simplicity:1 regarded:1 nuclear:1 stability:1 coordinate:1 analogous:1 construction:1 exact:1 agreement:1 located:1 database:8 mike:1 module:1 electrical:1 solved:1 region:1 sun:1 trade:1 yk:2 rq:1 digitally:2 preprogrammed:1 trained:1 predictive:1 chip:1 represented:1 various:1 zo:4 heat:1 fast:4 harnessing:1 whose:1 modular:1 larger:2 interconnection:1 ionized:1 itself:1 final:1 advantage:1 transistor:6 reconstruction:3 loop:3 flexibility:1 normalize:1 exploiting:1 produce:3 generating:1 develop:1 ac:1 measured:2 minor:1 strong:3 dividing:1 implemented:1 predicted:1 involves:2 radius:2 drawback:1 bean:1 adjusted:1 practically:1 around:3 equilibrium:11 mapping:10 circuitry:1 major:2 early:1 smallest:1 purpose:2 birmingham:1 currently:6 tanh:2 minimization:1 clearly:1 rather:1 voltage:1 lister:3 improvement:1 modelling:1 opted:1 normalizer:1 typically:1 hidden:6 going:1 field:6 shaped:2 having:4 elongation:11 extraction:1 haynes:7 represents:3 alter:1 few:2 simultaneously:1 geometry:1 limitless:1 interest:1 possibility:1 highly:3 circular:4 custom:1 adjust:1 diagnostics:1 perimeter:1 accurate:2 ambient:1 partial:1 necessary:1 taylor:1 desired:2 circle:2 plotted:1 fitted:1 industry:1 earlier:2 compass:8 measuring:1 subset:1 hundred:1 front:1 stored:2 combined:1 sensitivity:1 physic:1 enhance:1 together:1 jo:1 containing:1 external:2 potential:1 sec:1 includes:1 sinb:2 configured:2 closed:1 portion:1 parallel:2 capability:1 contribution:2 square:2 accuracy:3 characteristic:2 accurately:1 produced:1 vme:2 multiplying:1 synaptic:2 ed:1 energy:3 acquisition:1 frequency:1 monitored:1 workstation:1 proved:1 dataset:2 knowledge:1 carefully:1 day:1 follow:1 tom:1 flowing:1 improved:3 permitted:1 strongly:1 furthermore:1 rack:1 russian:1 normalized:2 multiplier:1 evolution:2 laboratory:2 illustrated:1 during:4 davis:1 complete:1 midplane:1 temperature:8 geometrical:2 novel:1 sigmoid:1 b4:1 khz:1 million:1 discussed:2 association:1 measurement:2 significant:1 surround:1 had:1 surface:1 base:1 recent:1 italy:2 buffer:1 verlag:1 success:1 additional:1 colin:2 signal:10 ii:3 dashed:1 full:1 technical:1 determination:1 calculation:1 offer:1 cross:8 long:1 post:4 controlled:2 schematic:1 prediction:2 basic:1 multilayer:3 denominator:1 represent:2 normalization:2 confined:2 achieved:1 schematically:1 windsor:5 addressed:1 diagram:2 source:3 unlike:1 elegant:1 db:1 near:1 feedforward:1 variety:1 gave:1 architecture:3 topology:2 bandwidth:1 confinement:3 converter:1 intensive:1 grad:1 peter:1 aea:1 culham:2 useful:1 clear:1 ten:2 band:1 hardware:10 generate:3 diagnostic:1 diagnosis:1 shall:1 group:1 key:1 changing:1 dacs:1 graph:1 sum:2 
angle:1 uncertainty:1 acceptable:1 bit:2 layer:1 g:4 software:3 lsec:1 speed:1 relatively:3 department:2 combination:1 vacuum:5 conjugate:1 smaller:1 making:1 modification:1 presently:1 restricted:1 taken:1 computationally:1 equation:7 available:4 appropriate:2 magnetic:15 chamber:1 remaining:2 ensure:1 unsuitable:1 build:1 ellipse:1 approximating:1 unchanged:1 contact:1 quantity:1 gradient:1 distance:2 separate:1 card:2 thank:1 outer:1 chris:1 me:1 terrestrial:1 spanning:1 code:3 balance:1 difficult:1 implementation:8 design:1 allowing:1 vertical:4 roach:3 thermal:3 incorporated:1 digitized:1 rome:2 david:1 lagin:2 pair:2 required:1 extensive:1 optimized:2 usually:1 below:1 analogue:6 power:2 suitable:1 hybrid:1 force:2 residual:3 aston:2 technology:4 axis:2 extract:1 prior:1 acknowledgement:1 evolve:1 multiplication:1 fully:2 generation:1 proportional:1 versus:3 digital:2 nucleus:1 cd:6 supported:1 free:1 allow:1 perceptron:3 wide:1 feedback:10 boundary:10 curve:4 forward:1 tokamak:23 flux:1 cope:1 reconstructed:1 active:1 hydrogen:2 iterative:1 zq:1 tailed:2 promising:1 nature:1 reasonably:1 learn:1 channel:2 transfer:2 vessel:5 excellent:1 timescales:1 paul:1 schnurrenberger:2 attributable:1 precision:1 sub:1 position:10 torus:3 exponential:1 governed:1 minute:1 bishop:10 showing:2 offset:1 fusion:7 intrinsic:1 sequential:1 ci:2 illustrates:1 trotman:7 explore:1 appearance:1 sectional:2 partially:1 collectively:2 springer:1 extracted:1 coil:5 microsecond:1 labelled:1 change:2 typical:2 determined:2 corrected:1 conductor:1 principal:2 called:2 total:1 invariance:1 experimental:1 plasma:43 arises:1 multiplexed:1 |
9 | 1,006 | Real-Time Control of a Tokamak Plasma
Using Neural Networks
Chris M Bishop
Neural Computing Research Group
Department of Computer Science
Aston University
Birmingham, B4 7ET, U.K.
c.m .bishop@aston .ac .uk
Paul S Haynes, Mike E U Smith, Tom N Todd,
David L Trotman and Colin G Windsor
AEA Technology, Culham Laboratory,
Oxfordshire OX14 3DB
(Euratom/UKAEA Fusion Association)
Abstract
This paper presents results from the first use of neural networks
for the real-time feedback control of high temperature plasmas in
a tokamak fusion experiment. The tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the tokamak, hydrogen
plasmas, at temperatures of up to 100 Million K, are confined
by strong magnetic fields. Accurate control of the position and
shape of the plasma boundary requires real-time feedback control
of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural
network approach can give significantly better performance than
the linear technique currently used on most tokamak experiments.
The practical application of the neural network approach requires
high-speed hardware, for which a fully parallel implementation of
the multilayer perceptron, using a hybrid of digital and analogue
technology, has been developed.
1008
1
C. Bishop, P. Haynes, M. Smith, T. Todd, D. Trotman, C. Windsor
INTRODUCTION
Fusion of the nuclei of hydrogen provides the energy source which powers the sun.
It also offers the possibility of a practically limitless terrestrial source of energy.
However, the harnessing of this power has proved to be a highly challenging problem. One of the most promising approaches is based on magnetic confinement of a
high temperature (10 7 - 108 Kelvin) plasma in a device called a tokamak (from the
Russian for 'toroidal magnetic chamber') as illustrated schematically in Figure 1.
At these temperatures the highly ionized plasma is an excellent electrical conductor, and can be confined and shaped by strong magnetic fields. Early tokamaks
had plasmas with circular cross-sections, for which feedback control of the plasma
position and shape is relatively straightforward. However, recent tokamaks, such as
the COMPASS experiment at Culham Laboratory, as well as most next-generation
tokamaks, are designed to produce plasmas whose cross-sections are strongly noncircular. Figure 2 illustrates some of the plasma shapes which COMPASS is designed to explore. These novel cross-sections provide substantially improved energy
confinement properties and thereby significantly enhance the performance of the
tokamak.
z
R
Figure 1: Schematic cross-section of a tokamak experiment showing the toroidal vacuum vessel (outer D-shaped curve) and plasma
(shown shaded). Also shown are the radial (R) and vertical (Z) coordinates. To a good approximation, the tokamak can be regarded
as axisymmetric about the Z-axis, and so the plasma boundary can
be described by its cross-sectional shape at one particular toroidal
location.
Unlike circular cross-section plasmas, highly non-circular shapes are more difficult to
produce and to control accurately, since currents through several control coils must
be adjusted simultaneously. Furthermore, during a typical plasma pulse, the shape
must evolve, usually from some initial near-circular shape. Due to uncertainties
in the current and pressure distributions within the plasma, the desired accuracy
for plasma control can only be achieved by making real-time measurements of the
position and shape of the boundary, and using error feedback to adjust the currents
in the control coils.
The physics of the plasma equilibrium is determined by force balance between the
1009
Real-Time Control of Tokamak Plasma Using Neural Networks
circle
ellipse
O-shape
bean
Figure 2: Cross-sections of the COMPASS vacuum vessel showing
some examples of potential plasma shapes. The solid curve is the
boundary of the vacuum vessel, and the plasma is shown by the
shaded regions.
thermal pressure of the plasma and the pressure of the magnetic field, and is relatively well understood. Particular plasma configurations are described in terms
of solutions of a non-linear partial differential equation called the Grad-Shafranov
(GS) equation. Due to the non-linear nature of this equation, a general analytic
solution is not possible. However, the GS equation can be solved by iterative numerical methods, with boundary conditions determined by currents flowing in the
external control coils which surround the vacuum vessel. On the tokamak itself it
is changes in these currents which are used to alter the position and cross-sectional
shape of the plasma. Numerical solution of the GS equation represents the standard technique for post-shot analysis of the plasma, and is also the method used
to generate the training dataset for the neural network, as described in the next
section. However , this approach is computationally very intensive and is therefore
unsuitable for feedback control purposes.
For real-time control it is necessary to have a fast (typically:::; 50J.lsec.) determination of the plasma boundary shape. This information can be extracted from a
variety of diagnostic systems , the most important being local magnetic measurements taken at a number of points around the perimeter of the vacuum vessel.
Most tokamaks have several tens or hundreds of small pick up coils located at carefully optimized points around the torus for this purpose. We shall represent these
magnetic signals collectively as a vector m .
For a large class of equilibria, the plasma boundary can be reasonably well represented in terms of a simple parameterization, governed by an angle-like variable B,
given by
R(B)
Z(B)
Ro + a cos(B + 8 sinB)
Zo + a/\,sinB
where we have defined the following parameters
(1)
1010
Ro
Zo
a
K
6
C. Bishop, P. Haynes, M. Smith, T. Todd, D. Trotman, C. Windsor
radial distance of the plasma center from the major axis of the torus,
vertical distance of the plasma center from the torus midplane,
minor radius measured in the plane Z = Zo,
elongation,
triangularity.
We denote these parameters collectively by Yk. The basic problem which has to be
addressed, therefore, is to find a representation for the (non-linear) mapping from
the magnetic signals m to the values of the geometrical parameters Yk, which can
be implemented in suitable hardware for real-time control.
The conventional approach presently in use on many tokamaks involves approximating the mapping between the measured magnetic signals and the geometrical
parameters by a single linear transformation. However, the intrinsic non-linearity
of the mappings suggests that a representation in terms of feedforward neural networks should give significantly improved results (Lister and Schnurrenberger, 1991;
Bishop et a/., 1992; Lagin et at., 1993). Figure 3 shows a block diagram of the
control loop for the neural network approach to tokamak equilibrium control.
Neural
Network
Figure 3: Block diagram of the control loop used for real-time
feedback control of plasma position and shape.
2
SOFTWARE SIMULATION RESULTS
The dataset for training and testing the network was generated by numerical solution of the GS equation using a free-boundary equilibrium code. The data base
currently consists of over 2,000 equilibria spanning the wide range of plasma positions and shapes available in COMPASS. Each equilibrium configuration takes
several minutes to generate on a fast workstation. The boundary of each configuration is then fitted using the form in equation 1, so that the equilibria are labelled
with the appropriate values of the shape parameters. Of the 120 magnetic signals
available on COMPASS which could be used to provide inputs to the network, a
1011
Real-Time Control o/Tokamak PLasma Using Neural Networks
subset of 16 has been chosen using sequential forward selection based on a linear
representation for the mapping (discussed below) .
It is important to note that the transformation from magnetic signals to flux surface
parameters involves an exact linear invariance. This follows from the fact that, if all
of the currents are scaled by a constant factor, then the magnetic fields will be scaled
by this factor, and the geometry of the plasma boundary will be unchanged . It is
important to take advantage of this prior knowledge and to build it into the network
structure, rather than force the network to learn it by example. We therefore
normalize the vector m of input signals to the network by dividing by a quantity
proportional to the total plasma current. Note that this normalization has to be
incorporated into the hardware implementation of the network, as will be discussed
in Section 3.
1.2
4
01
2
2
01
c
c
.5.
0-
~
:E
.5.
CIS
:E
1iI
~
::J
?
1iI
CD
-2
go.8
.5.
0-
?
CIS
:E
1iI 0 .4
CD
c
::J
c
::J
-2
-4
Database
?
Database
1.2
4
~
~CD
Z
~
:::I
CD
z
.2
Database
2
~O.8
~
?
CD
z
~O.4
-2
:::I
CD
Z
?
-4
Database
Database
.2
Database
Figure 4: Plots of the values from the test set versus the values
predicted by the linear mapping for the 3 equilibrium parameters,
together with the corresponding plots for a neural network with 4
hidden units.
The results presented in this paper are based on a multilayer perceptron architecture
having a single layer of hidden units with 'tanh' activation functions , and linear
output units. Networks are trained by minimization of a sum-of-squares error using
a standard conjugate gradients optimization algorithm, and the number of hidden
J012
C. Bishop, P. Haynes, M. Smith, T. Todd, D. Trotman, C. Windsor
units is optimized by measuring performance with respect to an independent test
set. Results from the neural network mapping are compared with those from the
optimal linear mapping, that is the single linear transformation which minimizes
the same sum-of-squares error as is used in the neural network training algorithm,
as this represents the method currently used on a number of present day tokamaks .
Initial results were obtained on networks having 3 output units, corresponding to
the values of vertical position ZQ, major radius RQ, and elongation K; these being
parameters which are of interest for real-time feedback control. The smallest normalized test set error of 11.7 is obtained from the network having 16 hidden units.
By comparison, the optimal linear mapping gave a normalized test set error of 18.3.
This represents a reduction in error of about 30% in going from the linear mapping
to the neural network. Such an improvement, in the context of this application , is
very significant.
For the experiments on real-time feedback control described in Section 4 the currently available hardware only permitted networks having 4 hidden units, and so we
consider the results from this network in more detail. Figure 4 shows plots of the
network predictions for various parameters versus the corresponding values from
the test set portion of the database. Analogous plots for the optimal linear map
predictions versus the database values are also shown. Comparison of the corresponding figures shows the improved predictive capability of the neural network,
even for this sub-optimal network topology.
3
HARDWARE IMPLEMENTATION
The hardware implementation of the neural network must have a bandwidth of 2:
20 kHz in order to cope with the fast timescales of the plasma evolution. It must
also have an output precision of at least (the the analogue equivalent of) 8 bits in
order to ensure that the final accuracy which is attainable will not be limited by the
hardware system. We have chosen to develop a fully parallel custom implementation
of the multilayer perceptron, based on analogue signal paths with digitally stored
synaptic weights (Bishop et al., 1993). A VME-based modular construction has
been chosen as this allows flexibility in changing the network architecture, ease of
loading network weights, and simplicity of data acquisition. Three separate types
of card have been developed as follows:
? Combined 16-input buffer and signal normalizer.
This provides an analogue hardware implementation of the input normalization described earlier.
? 16 x 4 matrix multiplier
The synaptic weights are produced using 12 bit frequency-compensated
multiplying DACs (digital to analogue converters) which can be configured
to allow 4-quadrant multiplication of analogue signals by a digitally stored
number.
? 4-channel sigmoid module
There are many ways to produce a sigmoidal non-linearity, and we have
opted for a solution using two transistors configured as along-tailed-pair,
Real-Time Control of Tokamak Plasma Using Neural Networks
1013
to generate a 'tanh ' sigmoidal transfer characteristic. The principal drawback of such an approach is the strong temperature sensitivity due to the
appearance of temperature in the denominator of the exponential transistor
transfer characteristic. An elegant solution to this problem has been found
by exploiting a chip containing 5 transistors in close thermal contact. Two
of the transistors form the long-tailed pair, one of the transistors is used
as a heat source, and the remaining two transistors are used to measure
temperature. External circuitry provides active thermal feedback control,
and stability to changes in ambient temperature over the range O?C to 50?C
is found to be well within the acceptable range.
The complete network is constructed by mounting the appropriate combination
of cards in a VME rack and configuring the network topology using front panel
interconnections. The system includes extensive diagnostics, allowing voltages at
all key points within the network to be monitored as a function of time via a series
of multiplexed output channels.
4
RESULTS FROM REAL-TIME FEEDBACK CONTROL
Figure 5 shows the first results obtained from real-time control of the plasma in
the COMPASS tokamak using neural networks. The evolution of the plasma elongation, under the control of the neural network, is plotted as a function of time
during a plasma pulse. Here the desired elongation has been preprogrammed to
follow a series of steps as a function of time. The remaining 2 network outputs
(radial position Ro and vertical position Zo) were digitized for post-shot diagnosis,
but were not used for real-time control. The solid curve shows the value of elongation given by the corresponding network output, and the dashed curve shows the
post-shot reconstruction of the elongation obtained from a simple 'filament' code,
which gives relatively rapid post-shot plasma shape reconstruction but with limited
accuracy. The circles denote the elongation values given by the much more accurate
reconstructions obtained from the full equilibrium code. The graph clearly shows
the network generating the required elongation signal in close agreement with the
reconstructed values. The typical residual error is of order 0.07 on elongation values
up to around 1.5. Part of this error is attributable to residual offset in the integrators used to extract magnetic field information from the pick-up coils, and this is
currently being corrected through modifications to the integrator design. An additional contribution to the error arises from the restricted number of hidden units
available with the initial hardware configuration. While these results represent the
first obtained using closed loop control, it is clear from earlier software modelling of
larger network architectures (such as 32-16-4) that residual errors of order a few %
should be attainable. The implementation of such larger networks is being pursued,
following the successes with the smaller system.
Acknowledgements
We would like to thank Peter Cox, Jo Lister and Colin Roach for many useful
discussions and technical contributions. This work was partially supported by the
UK Department of Trade and Industry.
[Figure 5 plot: plasma elongation (approximately 1.0 to 1.8) versus time (0.0 to 0.2 sec) for shot 9576]
Figure 5: Plot of the plasma elongation κ as a function of time
during shot no. 9576 on the COMPASS tokamak, during which the
elongation was being controlled in real-time by the neural network.
References
Bishop C M, Cox P, Haynes P S, Roach C M, Smith M E U, Todd T N and Trotman
D L, 1992. A neural network approach to tokamak equilibrium control. In Neural
Network Applications, Ed. J G Taylor, Springer Verlag, 114-128.
Bishop C M, Haynes P S, Roach C M, Smith M E U, Todd T N and Trotman D L.
1993. Hardware implementation of a neural network for plasma position control in
COMPASS-D. In Proceedings of the 17th Symposium on Fusion Technology, Rome,
Italy, 2:997-1001.
Lagin L, Bell R, Davis S, Eck T, Jardin S, Kessel C, Mcenerney J, Okabayashi
M, Popyack J and Sauthoff N. 1993. Application of neural networks for real-time
calculations of plasma equilibrium parameters for PBX-M. In Proceedings of the
17th Symposium on Fusion Technology, Rome, Italy, 2:1057-1061.
Lister J B and Schnurrenberger H. 1991. Fast non-linear extraction of plasma
parameters using a neural network mapping. Nuclear Fusion, 31:1291-1300.
Pulsestream Synapses with Non-Volatile
Analogue Amorphous-Silicon Memories.
A.J. Holmes, A.F. Murray, S. Churcher and J. Hajto
Department of Electrical Engineering
University of Edinburgh
Edinburgh, EH9 3JL
M. J. Rose
Dept. of Applied Physics and Electronics,
Dundee University
Dundee DD14HN
Abstract
A novel two-terminal device, consisting of a thin 1000 Å layer of p+
a-Si:H sandwiched between Vanadium and Chromium electrodes,
exhibits a non-volatile, analogue memory action. This device stores
synaptic weights in an ANN chip, replacing the capacitor previously
used for dynamic weight storage. Two different synapse designs are
discussed and results are presented.
1
INTRODUCTION
Analogue hardware implementations of neural networks have hitherto been hampered by the lack of a straightforward (local) analogue memory capability. The
ideal storage mechanism would be compact, non-volatile, easily reprogrammable,
and would not interfere with the normal silicon chip fabrication process.
Techniques which have been used to date include resistors (these are not generally
reprogrammable, and suffer from being large and difficult to fabricate with any accuracy), dynamic capacitive storage [4] (this is compact, reprogrammable and simple,
but implies an increase in system complexity, arising from off-chip refresh circuitry),
EEPROM ("floating gate") memory [5] (which is compact, reprogrammable, and
non-volatile, but is slow, and cannot be reprogrammed in situ), and local digital
storage (which is non-volatile, easily programmable and simple, but consumes area
horribly).
Amorphous silicon has been used for synaptic weight storage [1, 2], but only as
either a high-resistance fixed weight medium or a binary memory.
In this paper, we demonstrate that novel amorphous silicon memory devices can be
incorporated into standard CMOS synapse circuits, to provide an analogue weight
storage mechanism which is compact, non-volatile, easily reprogrammable, and simple to implement.
2
a-Si:H MEMORY DEVICES
The a-Si:H analogue memory device [3] comprises a 1000 Å thick layer of amorphous
silicon (p+ a-Si:H) sandwiched between Vanadium and Chromium electrodes.
The a-Si device takes the form of a two-terminal, programmable resistor. It is an
"add-on" to a conventional CMOS process, and does not demand that the normal
CMOS fabrication cycle be disrupted. The a-Si device sits on top of the completed
chip circuitry, making contact with the CMOS arithmetic elements via holes cut in
the protective passivation layer, as shown in Figure 1.
Figure 1: The construction of a-Si:H Devices on a CMOS chip
After fabrication a number of electronic procedures must be performed in order to
program the device to a given resistance state.
Programming and Pre-Programming Procedures
Before the a-Si device is usable, the following steps must be carried out:
• Forming: This is a once-only process, applied to the a-Si device in its
"virgin" state, where it has a resistance of several MΩ. A series of 300ns
pulses, increasing in amplitude from 5v to 14v, is applied to the device
electrodes. This creates a vertical conducting channel or filament whose
approximate resistance is 1 kΩ. This filament can then be programmed to
a value in the range 1 kΩ to 1 MΩ. The details of the physical mechanisms
are not yet fully established, but it is clear that conduction occurs through
a narrow (sub-micron) conducting channel.
• Write: To decrease the device's resistance, negative "Write" pulses are
applied.
• Erase: To increase the device's resistance, positive "Erase" pulses are applied.
• Usage: Pulses below 0.5v do not change the device resistance. The resistance can therefore be utilised as a weight storage medium using a voltage
of less than 0.5v without causing reprogramming.
Programming pulses, which range between 2v and 5v, are typically 120ns in duration. Programming is therefore much faster than for other EEPROM (floating
gate) devices used in the same context, which use a series of 100 µs pulses to set the
threshold voltage [5].
The following sections describe synapse circuits using the a-Si:H devices. These
synapses use the reprogrammable a-Si:H resistor in the place of a storage capacitor
or EEPROM cell. These new synapses were implemented on a chip referred to as
ASiTEST2, consisting of five main test blocks, each comprising four synapses
connected to a single neuron.
3
The EPSILON based synapse
The first synapse to be designed used the a-Si:H resistor as a direct replacement for
the storage capacitor used in the EPSILON [4] synapse.
Figure 2: The EPSILON Synapse with a-Si:H weight storage
In the original EPSILON chip the weight voltage was stored as a voltage on a
capacitor. In this new synapse design, shown in Figure 2, the a-Si:H resistance is
set such that the voltage drop produced by Iset is equivalent to the original weight
voltage, Vw, that was stored dynamically on the capacitor.
A new, simpler synapse, which can be operated from a single +5v supply, was also
included on the ASiTEST2 chip.
4
The MkII synapse
The circuit is shown in Figure 3. The a-Si:H memory is used to store a current,
Iasi. This current is subtracted from a zero current, Isy_z, to give a weight current,
+/-Iw, which adds or subtracts charge from the activity capacitor, Cact, thus
implementing excitation or inhibition respectively.
For the circuit to function correctly we must limit the voltage on the activity capacitor to the range [1.5v,3.5v], to ensure that the transistors mirroring Isy_z and
Iasi remain in saturation. As Figure 3 shows, there are few reference signals and
the circuit operates from a single +5v power supply rail, in sharp contrast to many
earlier analogue neural circuits, including our own.
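A minimal behavioural sketch of this weighting scheme is given below (Python, illustrative only). The zero current and integration range follow the values quoted in the text and figure annotations (Isy_z = 5uA, range [1.5v, 3.5v], reset near 2.5v); the capacitor value is an assumption made purely so that the numbers come out on a readable scale.

```python
def integrate_activity(i_asi_amps, pw_in_seconds,
                       i_zero_amps=5e-6, c_act_farads=10e-12,
                       v_reset=2.5, v_min=1.5, v_max=3.5):
    """Idealised MkII synapse: the weight current Iw = Isy_z - Iasi is
    integrated on Cact for the duration of the input pulse.

    A positive Iw (Iasi below the zero current) adds charge, i.e.
    excitation; a negative Iw removes charge, i.e. inhibition. The
    result is clipped to the range in which the current mirrors stay
    in saturation.
    """
    i_w = i_zero_amps - i_asi_amps
    dv = i_w * pw_in_seconds / c_act_farads
    return min(max(v_reset + dv, v_min), v_max)

# A weakly conducting a-Si:H device (low Iasi) excites, a strongly
# conducting one inhibits, here for a 3 us input pulse:
print(integrate_activity(2e-6, 3e-6))   # -> 3.4 (excitatory)
print(integrate_activity(8e-6, 3e-6))   # -> 1.6 (inhibitory)
```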
[Figure 3 schematic annotations: synapse power supply V5_0 = 5.0v; references Vrst = 2.5v, Isy_z = 5uA; neuron tail current Ineu = 4uA; signals Vsel, PWout, Vramp, Cact, comparator and mirror set]
Figure 3: The MkII synapse
On first inspection the main drawback of this design would appear to be a reliance
on the accuracy with which the zero current Isy_z is mirrored across an entire chip.
The variation in this current means that two cells with the same synapse resistance
could produce widely differing values of Iw. However, during programming we
do not use the resistance of the a-Si:H device as a target value. We monitor the
voltage on Cact for a given PWin signal, increasing or decreasing the resistance
of the a-Si:H device until the desired voltage level is achieved.
Example: To set a weight to be the maximum positive value, we adjust the a-Si
resistance until a PWin signal of 5us, the maximum input signal, gives a voltage of
3.5v on the integration capacitor.
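The weight-setting procedure just described is essentially a program-and-verify loop. The sketch below writes it out explicitly; apply_pulse and measure_cact_voltage are hypothetical stand-ins for the test-rig operations, and the tolerance and step limit are assumptions.

```python
def program_weight(target_voltage, apply_pulse, measure_cact_voltage,
                   pw_in=5e-6, tolerance=0.05, max_steps=200):
    """Adjust the a-Si:H resistance until the Cact voltage produced by
    a fixed PWin matches the target weight voltage.

    apply_pulse(kind) is assumed to issue a single 120 ns programming
    pulse, kind being 'erase' (positive, raises the resistance) or
    'write' (negative, lowers it). measure_cact_voltage(pw_in) is
    assumed to apply the input pulse-width and return the integrated
    capacitor voltage.
    """
    for _ in range(max_steps):
        v = measure_cact_voltage(pw_in)
        if abs(v - target_voltage) <= tolerance:
            return v
        # Raising the resistance lowers Iasi, which raises Iw and hence
        # the integrated voltage (and vice versa) -- so erase when the
        # measured voltage is too low, write when it is too high.
        apply_pulse('erase' if v < target_voltage else 'write')
    raise RuntimeError("weight did not converge within max_steps")
```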
We are able to set the synapse weight using the whole integration range of [1.5v,3.5v]
by only closing Vsel for the desired synapse during programming. In normal operating mode all four Vsel switches will be closed so that the integration charge is
summed over all four local capacitors.
4.1
Example - Stability Test
As an example of the use of integration voltage as means of monitoring the resistance
of a particular synapse we have included a stability test. This was carried out on
one of the test chips which contained the MkII synapse.
The four synapses on the test chip were programmed to give different levels of
activation. The chip was then powered up for 30mins each day during a 7-day
period, and the activation levels for each synapse were measured three times.
[Figure 4 plot: stability test with PWin = 3us; activation levels of synapses s1-s4 (roughly 2v to 3.5v) plotted against measurement index across the seven daily tests]
Figure 4: ASiTEST2- Stability Test
As Figure 4 shows, the memories remain in the same resistance state (i.e. retain their
programmed weight value) over the whole 7-day period. Separate experiments on
isolated devices indicate much longer hold times - of the order of months at least.
5
ASiTEST3
Recently we have received our latest, overtly neural, a-Si:H based test chip. This
contains an 8x8 array of the MkII synapses.
The circuit board for this device has been constructed and partially tested while
the ASiTEST3 chips are awaiting the deposition of the a-Si:H layers. We have been
able to use an ASiTEST2 chip containing two of the MkII synapse test blocks i.e.
8 synapses and 2 neurons to exercise much of the board's functionality.
The test board contains a simple state machine which has four different states (a minimal software model of this cycle is sketched after the list):
• State 0: Load Input Pulsewidths into SRAM from the PC.
• State 1: Apply Input Pulsewidth signals to chip1.
• State 2: Use Vramp to generate the threshold function for chip1. The resulting Pulsewidth outputs are used as the inputs to chip2, as well as being stored in SRAM.
• State 3: Use Vramp to generate the threshold function for chip2. Read the resulting Pulsewidth outputs into SRAM.
• State 0: Read Output Pulsewidths from SRAM into the PC.
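The following Python fragment is the software model referred to above; it only mirrors the data flow of the four states, with the chip interfaces reduced to placeholder callables.

```python
def run_test_cycle(input_pulsewidths, chip1_forward, chip2_forward):
    """Model of the ASiTEST3 board's test cycle.

    chip1_forward / chip2_forward stand in for 'apply the pulse-widths,
    sweep Vramp to apply the threshold function, read back the output
    pulse-widths'; each is assumed to map a list of inputs to a list of
    outputs.
    """
    sram = list(input_pulsewidths)     # State 0: load from the PC
    hidden = chip1_forward(sram)       # States 1 and 2: drive chip 1
    sram = list(hidden)                # intermediate outputs stored in SRAM
    outputs = chip2_forward(sram)      # State 3: drive chip 2
    return list(outputs)               # State 0: read back to the PC

# Example with trivial stand-in "chips":
print(run_test_cycle([1.0, 2.0, 3.0],
                     lambda xs: [0.5 * x for x in xs],
                     lambda xs: [x + 0.1 for x in xs]))
```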
The results obtained during a typical test cycle are shown in Figure 5.
[Figure 5 oscilloscope traces: PWin_0 input pulse and the sigmoid and linear Vramp waveforms during States 1 to 3 of the test cycle]
Figure 5: ASiTEST3 Board Scope Waveforms
As this figure shows, different ramp signals, corresponding to different threshold
functions, can be applied to the chip1 and chip2 neurons.
[Figure 6 plot: single-buffer pulse-width sweeps; output pulse width (0 to 10) versus pulse-width input (0 to 3 us) for the individually programmed neuron-synapse combinations]
Figure 6: ASiTEST3 Board - MkII Synapse Characteristic
While the signals shown in Figure 5 appear noisy, the multiplier characteristic that
the chip produces is still admirably linear, as shown in Figure 6. In this experiment
all eight synapses on a test chip were programmed into different resistance states
and PWin was swept from 0 to 3us.
6
Conclusions
We have demonstrated the use of novel a-Si:H analogue memory devices as a means
of storing synaptic weights in a Pulsewidth ANN. We have also demonstrated the
operation of an interface board which allows two 8x8 ANN chips, operating as a
two layer network, to be controlled by a simple PC interface card.
This technology is most suitable for small networks in, for example, remote control and other embedded-system applications where cost and power considerations
favour a single all-inclusive ANN chip with non-volatile, but programmable weights.
Another possible application of this technology is in large networks constructed
using Thin Film Technology (TFT). If TFTs were used in place of the CMOS transistors then the area constraint imposed by crystalline silicon would be removed,
allowing truly massively parallel networks to be integrated.
In summary - the a-Si:H analogue memory devices described in this paper provide a
route to an analogue, non-volatile and fast synaptic weight storage medium. At the
present time neither the programming nor storage mechanisms are fully understood
making it difficult to compare this new device with more established technologies
such as the ubiquitous Floating-Gate EEPROM technique. Current research is
focused on firstly, improving the yield on the a-Si:H device which is unacceptably
low at present, a demerit that we attribute to imperfections in the a-Si fabrication
process and secondly, improving understanding of the device physics and hence the
programming and storage mechanisms.
Acknowledgements
This research has been jointly funded by BT, and EPSRC (formerly SERC), the
Engineering and Physical Sciences Research Council.
References
[1] W. Hubbard et al. (1986) Electronic Neural Networks. AIP Conference Proceedings - Snowbird 1986: 227-234.
[2] H.P. Graf (1986) VLSI Implementation of a NN memory with several hundreds
of neurons. AIP Conference Proceedings - Snowbird 1986: 182-187.
[3] M.J. Rose et al. (1989) Amorphous Silicon Analogue Memory Devices. Journal
of Non-Crystalline Solids 1(115): 168-170.
[4] A. Hamilton et al. (1992) Integrated Pulse-Stream Neural Networks - Results,
Issues and Pointers. IEEE Transactions on Neural Networks 3(3): 385-393.
[5] M. Holler, S. Tam, H. Castro and R. Benson (1989) An Electrically Trainable ANN
with 10240 Floating Gate Synapses. Int. Conf. on Neural Networks Proc.: 191-196.
[6] A.F. Murray and A.V.W. Smith (1987) Asynchronous Arithmetic for VLSI Neural Systems. Electronics Letters 23(12): 642-643.
[7] A.J. Holmes et al. (1993) Use of a-Si:H Memory Devices for Non-volatile Weight
Storage in ANNs. Proc. ICAS 15: 817-820.
10 | 1,007 | Learning To Play the Game of Chess
Sebastian Thrun
University of Bonn
Department of Computer Science III
Römerstr. 164, D-53117 Bonn, Germany
E-mail: thrun@carbon.informatik.uni-bonn.de
Abstract
This paper presents NeuroChess, a program which learns to play chess from the final
outcome of games. NeuroChess learns chess board evaluation functions, represented
by artificial neural networks. It integrates inductive neural network learning, temporal
differencing, and a variant of explanation-based learning. Performance results illustrate
some of the strengths and weaknesses of this approach.
1 Introduction
Throughout the last decades, the game of chess has been a major testbed for research on
artificial intelligence and computer science. Most of today's chess programs rely on intensive
search to generate moves. To evaluate boards, fast evaluation functions are employed which
are usually carefully designed by hand, sometimes augmented by automatic parameter tuning
methods [1]. Building a chess machine that learns to play solely from the final outcome of
games (win/loss/draw) is a challenging open problem in AI.
In this paper, we are interested in learning to play chess from the final outcome of games.
One of the earliest approaches, which learned solely by playing itself, is Samuel's famous
checker player program [10]. His approach employed temporal difference learning (in short:
TD) [14], which is a technique for recursively learning an evaluation function. Recently,
Tesauro reported the successful application of TD to the game of Backgammon, using
artificial neural network representations [16]. While his TD-Gammon approach plays grandmaster-level backgammon, recent attempts to reproduce these results in the context of Go
[12] and chess have been less successful. For example, Schafer [11] reports a system just
like Tesauro's TD-Gammon, applied to learning to play certain chess endgames. Gherrity [6]
presented a similar system which he applied to entire chess games. Both approaches learn
purely inductively from the final outcome of games. Tadepalli [15] applied a lazy version
of explanation-based learning [5, 7] to endgames in chess. His approach learns from the
final outcome, too, but unlike the inductive neural network approaches listed above it learns
analytically, by analyzing and generalizing experiences in terms of chess-specific knowledge.
The level of play reported for all these approaches is still below the level of GNU-Chess, a
publicly available chess tool which has frequently been used as a benchmark. This illustrates
the hardness of the problem of learning to play chess from the final outcome of games.
This paper presents NeuroChess, a program that learns to play chess from the final outcome
of games. The central learning mechanism is the explanation-based neural network (EBNN)
algorithm [9, 8]. Like Tesauro's TD-Gammon approach, NeuroChess constructs a neural
network evaluation function for chess boards using TD. In addition, a neural network version
of explanation-based learning is employed, which analyzes games in terms of a previously
learned neural network chess model. This paper describes the NeuroChess approach, discusses several training issues in the domain of chess, and presents results which elucidate
some of its strengths and weaknesses.
2
Temporal Difference Learning in the Domain of Chess
Temporal difference learning (TD) [14] comprises a family of approaches to prediction in
cases where the event to be predicted may be delayed by an unknown number of time steps.
In the context of game playing, TD methods have frequently been applied to learn functions
which predict the final outcome of games. Such functions are used as board evaluation
functions.
The goal of TD(0), a basic variant of TD which is currently employed in the NeuroChess
approach, is to find an evaluation function, V, which ranks chess boards according to their
goodness: If the board s is more likely to be a winning board than the board s', then
V(s) > V(s'). To learn such a function, TD transforms entire chess games, denoted by
a sequence of chess boards s_0, s_1, s_2, ..., s_{t_final}, into training patterns for V. The TD(0)
learning rule works in the following way. Assume without loss of generality we are learning
white's evaluation function. Then the target value for the final board is given by

    V^target(s_{t_final}) =   1,   if s_{t_final} is a win for white
                              0,   if s_{t_final} is a draw
                             -1,   if s_{t_final} is a loss for white          (1)

and the targets for the intermediate chess boards s_0, s_1, s_2, ..., s_{t_final-2} are given by

    V^target(s_t) = γ · V(s_{t+2}).                                            (2)
This update rule constructs V recursively. At the end of the game, V evaluates the final
outcome of the game (Eq. (1)). In between, when the assignment of V-values is less obvious,
V is trained based on the evaluation two half-moves later (Eq. (2)). The constant γ (with
0 ≤ γ ≤ 1) is a so-called discount factor. It decays V exponentially in time and hence
favors early over late success. Notice that in NeuroChess V is represented by an artificial
neural network, which is trained to fit the target values V^target obtained via Eqs. (1) and (2)
(cf. [6, 11, 12, 16]).
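For illustration, Eqs. (1) and (2) can be written out as a few lines of Python that turn a finished game into training targets; the evaluation V is passed in as an arbitrary callable, standing in for the network.

```python
def td0_targets(boards, outcome_for_white, value_fn, gamma=0.98):
    """TD(0) targets for white's evaluation function (Eqs. 1 and 2).

    boards            : s_0, ..., s_final (alternating half-moves)
    outcome_for_white : +1 win, 0 draw, -1 loss
    value_fn          : the current evaluation V(s)
    gamma             : discount factor (0.98 is the value used in NeuroChess)
    """
    targets = [None] * len(boards)
    targets[-1] = float(outcome_for_white)            # Eq. (1)
    for t in range(len(boards) - 2):                  # s_0 ... s_{final-2}
        targets[t] = gamma * value_fn(boards[t + 2])  # Eq. (2)
    # The board one half-move before the end receives no target from
    # Eqs. (1)-(2), since V is updated every two half-moves.
    return targets
```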
3
Explanation-Based Neural Network Learning
In a domain as complex as chess, pure inductive learning techniques. such as neural network Back-Propagation, suffer from enormous training times. To illustrate why, consider
the situation of a knight fork, in which the opponent's knight attacks our queen and king
simultaneously. Suppose in order to save our king we have to move it, and hence sacrifice
our queen. To learn the badness of a knight fork, NeuroChess has to discover that certain
board features (like the position of the queen relative to the knight) are important, whereas
Figure 1: Fitting values and slopes in EBNN: Let V be the target function for which three
examples (s_1, V(s_1)), (s_2, V(s_2)), and (s_3, V(s_3)) are known. Based on these points the
learner might generate the hypothesis V'. If the slopes ∂V(s_1)/∂s_1, ∂V(s_2)/∂s_2, and ∂V(s_3)/∂s_3 are
also known, the learner can do much better: V''.
others (like the number of weak pawns) are not. Purely inductive learning algorithms such
as Back-propagation figure out the relevance of individual features by observing statistical
correlations in the training data. Hence, quite a few versions of a knight fork have to be
experienced in order to generalize accurately. In a domain as complex as chess, such an
approach might require unreasonably large amounts of training data.
Explanation-based methods (EBL) [5, 7, 15] generalize more accurately from less training
data. They rely instead on the availability of domain knowledge, which they use for explaining
and generalizing training examples. For example, in the explanation of a knight fork, EBL
methods employ knowledge about the game of chess to figure out that the position of the
queen is relevant, whereas the number of weak pawns is not. Most current approaches to
EBL require that the domain knowledge be represented by a set of symbolic rules. Since
NeuroChess relies on neural network representations, it employs a neural network version
of EBL, called explanation-based neural network learning (EBNN) [9]. In the context of
chess, EBNN works in the following way: The domain-specific knowledge is represented
by a separate neural network, called the chess model M. M maps arbitrary chess boards St
to the corresponding expected board St+2 two half-moves later. It is trained prior to learning
V, using a large database of grand-master chess games. Once trained, M captures important
knowledge about temporal dependencies of chess board features in high-quality chess play.
EBNN exploits M to bias the board evaluation function V. It does this by extracting slope
constraints for the evaluation function V at all non-final boards, i.e., all boards for which V
is updated by Eq. (2). Let

    ∂V^target(s_t)/∂s_t     with  t ∈ {0, 1, 2, ..., t_final − 2}              (3)

denote the target slope of V at s_t, which, because V^target(s_t) is set to γ·V(s_{t+2}) according to
Eq. (2), can be rewritten as

    ∂V^target(s_t)/∂s_t  =  γ · ∂V(s_{t+2})/∂s_{t+2} · ∂s_{t+2}/∂s_t            (4)

using the chain rule of differentiation. The rightmost term in Eq. (4) measures how infinitesimally small changes of the chess board s_t influence the chess board s_{t+2}. It can be
approximated by the chess model M:

    ∂V^target(s_t)/∂s_t  ≈  γ · ∂V(s_{t+2})/∂s_{t+2} · ∂M(s_t)/∂s_t             (5)
The right expression is only an approximation to the left side, because M is a trained neural
[Figure 2 diagram: chess boards at times t, t+1 (black to move) and t+2 (white to move); their feature vectors feed the evaluation network V and the predictive model network M (165 hidden units), which yields the predicted board used to compute V(t+2)]
Figure 2: Learning an evaluation function in NeuroChess. Boards are mapped into a
high-dimensional feature vector, which forms the input for both the evaluation network V
and the chess model M. The evaluation network is trained by Back-propagation and the
TD(0) procedure. Both networks are employed for analyzing training examples in order to
derive target slopes for V.
network and thus its first derivative might be erroneous. Notice that both expressions on
the right hand side of Eq. (5) are derivatives of neural network functions, which are easy to
compute since neural networks are differentiable.
The result of Eq. (5) is an estimate of the slope of the target function V at s_t. This slope
adds important shape information to the target values constructed via Eq. (2). As depicted in
Fig. 1, functions can be fit more accurately if in addition to target values the slopes of these
values are known. Hence, instead of just fitting the target values V^target(s_t), NeuroChess also
fits these target slopes. This is done using the Tangent-Prop algorithm [13].
The complete NeuroChess learning architecture is depicted in Fig. 2. The target slopes
provide a first-order approximation to the relevance of each chess board feature in the
goodness of a board position. They can be interpreted as biasing the network V based on
chess-specific domain knowledge, embodied in M . For the relation ofEBNN and EBL and
the accommodation of inaccurate slopes in EBNN see [8].
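With modern automatic differentiation the slope targets of Eq. (5) are a one-liner; the finite-difference sketch below avoids that dependency and only assumes that V and M are black-box callables on feature vectors. It is meant to make the chain-rule construction concrete, not to reproduce the original implementation.

```python
import numpy as np

def target_slope(s_t, value_fn, model_fn, gamma=0.98, eps=1e-4):
    """Approximate Eq. (5): dV_target/ds_t = gamma * dV/ds_{t+2} * dM/ds_t.

    value_fn : V, mapping a feature vector to a scalar evaluation
    model_fn : M, mapping features at time t to predicted features at t+2
    """
    s_t = np.asarray(s_t, dtype=float)
    s_next = np.asarray(model_fn(s_t), dtype=float)

    # Numerical gradient of V at the predicted board s_{t+2}
    grad_v = np.array([
        (value_fn(s_next + eps * e) - value_fn(s_next - eps * e)) / (2 * eps)
        for e in np.eye(len(s_next))])

    # Numerical Jacobian of the model network M at s_t (one column per input)
    jac_m = np.column_stack([
        (np.asarray(model_fn(s_t + eps * e)) -
         np.asarray(model_fn(s_t - eps * e))) / (2 * eps)
        for e in np.eye(len(s_t))])

    return gamma * grad_v @ jac_m   # target slope, one entry per feature of s_t
```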
4
Training Issues
In this section we will briefly discuss some training issues that are essential for learning good
evaluation functions in the domain of chess. This list of points has mainly been produced
through practical experience with the NeuroChess and related TD approaches. It illustrates
the importance of a careful design of the input representation, the sampling rule and the
parameter setting in a domain as complex as chess.
Sampling. The vast majority of chess boards are, loosely speaking, not interesting. If, for
example, the opponent leads by more than a queen and a rook, one is most likely to lose.
Without an appropriate sampling method there is the danger that the learner spends most
of its time learning from uninteresting examples. Therefore, NeuroChess interleaves self-play and expert play for guiding the sampling process. More specifically, after presenting
a random number of expert moves generated from a large database of grand-master games,
NeuroChess completes the game by playing itself. This sampling mechanism has been found
to be of major importance to learn a good evaluation function in a reasonable amount of time.
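A rough sketch of this interleaved sampling scheme is shown below; the expert database and the self-play move generator are placeholders for the corresponding NeuroChess components.

```python
import random

def sample_game(expert_games, self_play_move, max_length=300):
    """Replay a random prefix of a random grand-master game, then let
    the engine finish the game by playing itself.

    expert_games    : list of games, each a list of boards
    self_play_move  : assumed to take the board sequence so far and
                      return the next board, or None when the game ends
    """
    game = random.choice(expert_games)
    cut = random.randint(1, len(game))   # random number of expert moves
    boards = list(game[:cut])
    while len(boards) < max_length:
        nxt = self_play_move(boards)
        if nxt is None:
            break
        boards.append(nxt)
    return boards
```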
Quiescence. In the domain of chess certain boards are harder to evaluate than others. For
example, in the middle of an ongoing material exchange, evaluation functions often fail to
produce a good assessment. Thus, most chess programs search selectively. A common
criterion for determining the depth of search is called quiescence. This criterion basically
detects material threats and deepens the search correspondingly. NeuroChess' search engine
does the same. Consequently, the evaluation function V is only trained using quiescent
boards.
Smoothness. Obviously, using the raw, canonical board description as input representation is
a poor choice. This is because small changes on the board can cause a huge difference in value,
contrasting the smooth nature of neural network representations. Therefore, NeuroChess
maps chess board descriptions into a set of board features . These features were carefully
designed by hand.
Discounting. The variable γ in Eq. (2) allows values to be discounted in time. Discounting has
frequently been used to bound otherwise infinite sums of pay-off. One might be inclined to
think that in the game of chess no discounting is needed, as values are bounded by definition.
Indeed, without discounting the evaluation function predicts the probability for winning-in
the ideal case. In practice, however, random disturbations of the evaluation function can
seriously hurt learning, for reasons given in [4, 17]. Empirically we found that learning
failed completely when no discount factor was used. Currently, NeuroChess uses γ = 0.98.
Learning rate. TD approaches minimize a Bellman equation [2]. In the NeuroChess
domain, a close-to-optimal approximation of the Bellman equation is the constant function
V(s) ≡ 0. This function violates the Bellman equation only at the end of games (Eq. (1)),
which is rare if complete games are considered. To prevent this, we amplified the learning
rate for final values by a factor of 20, which was experimentally found to produce sufficiently
non-constant evaluation functions.
Software architecture. Training is performed completely asynchronously on up to 20
workstations simultaneously. One of the workstations acts as a weight server, keeping track
of the most recent weights and biases of the evaluation network. The other workstations
can dynamically establish links to the weight server and contribute to the process of weight
refinement. The main process also monitors the state of all other workstations and restarts
processes when necessary. Training examples are stored in local ring buffers (1000 items
per workstation).
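The distributed setup can be pictured with the minimal sketch below. The networking layer is omitted (the server is reduced to a shared object), so this only illustrates the ring-buffer bookkeeping and the pull/push pattern, not the actual asynchronous implementation.

```python
from collections import deque

class RingBuffer:
    """Per-workstation store of the most recent training examples."""
    def __init__(self, capacity=1000):
        self.items = deque(maxlen=capacity)   # oldest examples are dropped

    def add(self, example):
        self.items.append(example)

    def all(self):
        return list(self.items)

class WeightServer:
    """Keeps track of the most recent evaluation-network weights."""
    def __init__(self, weights):
        self.weights = weights

    def pull(self):
        return self.weights

    def push(self, new_weights):
        self.weights = new_weights
```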
5
Results
In this section we will present results obtained with the NeuroChess architecture. Prior to
learning an evaluation function, the model M (175 input, 165 hidden, and 175 output units)
is trained using a database of 120,000 expert games. NeuroChess then learns an evaluation
 1. e2e3 b8c6    2. d1f3 c6e5    3. f3d5 d7d6    4. f1b5 c7c6
 5. b5a4 g8f6    6. d5d4 c8f5    7. f2f4 e5d7    8. e1e2 d8a5
 9. a4b3 d7c5   10. b1a3 c5b3   11. a2b3 e7e5   12. f4e5 f6e4
13. e5d6 e8c8   14. b3b4 a5a6   15. b4b5 a6a5   16. b2b4 a5a4
17. b5c6 a4c6   18. g1f3 d8d6   19. d4a7 f5g4   20. c2c4 c8d7
21. b4b5 c6c7   22. d2d3 d6d3   23. b5b6 c7c6   24. e2d3 e4f2
25. d3c3 g4f3   26. g2f3 f2h1   27. c1b2 c6f3   28. a7a4 d7e7
29. a3c2 h1f2   30. b2a3 e7f6   31. a3f8 f2e4   32. c3b2 h8f8
33. a4d7 f3f5   34. d7b7 f5e5   35. b2c1 f8e8   36. b7d5 e5h2
37. a1a7 e8e6   38. d5d8 f6g6   39. b6b7 e6d6   40. d8a5 d6c6
41. a5b4 h2b8   42. a7a8 e4c3   43. c2d4 c6f6   44. b4e7 c3a2
45. c1d1 a2c3   46. d1c2 b8h2   47. c2c3 f6b6   48. e7e4 g6h6
49. d4f5 h6g5   50. e4e7 g5g4   51. f5h6 g7h6   52. e7d7 g4h5
53. d7d1 h5h4   54. d1d4 h4h3   55. d4b6 h2e5   56. b6d4 e5e6
57. c3d2 e6f5   58. e3e4 f5g5   59. d4e3 g5e3   60. d2e3 f7f5
61. e4f5 h3g4   62. f5f6 h6h5   63. b7b8q g4f5  64. b8f4 f5e6
65. a8e8 e6d7   66. e8e7 d7d8   67. f4c7
final board
Figure 3: NeuroChess against GNU-Chess. NeuroChess plays white. Parameters: Both
players searched to depth 3, which could be extended by quiescence search to at most 11.
The evaluation network had no hidden units. Approximately 90% of the training boards
were sampled from expert play.
network V (175 input units, 0 to 80 hidden units, and one output unit). To evaluate the level
of play, NeuroChess plays against GNU-Chess at regular time intervals. Both players employ
the same search mechanism which is adopted from GNU-Chess. Thus far, experiments lasted
for 2 days to 2 weeks on 1 to 20 SUN Sparc Stations.
A typical game is depicted in Fig. 3. This game has been chosen because it illustrates both
the strengths and the shortcomings of the NeuroChess approach. The opening of NeuroChess
is rather weak. In the first three moves NeuroChess moves its queen to the center of the
board.¹ NeuroChess then escapes an attack on its queen in move 4, gets an early pawn
advantage in move 12, attacks black's queen pertinaciously through moves 15 to 23, and
successfully exchanges a rook. In move 33, it captures a strategically important pawn, which,
after chasing black's king for a while and sacrificing a knight for no apparent reason, finally
leads to a new queen (move 63). Four moves later black is mate. This game is prototypical.
As can be seen from this and various other games, NeuroChess has learned successfully to
protect its material, to trade material, and to protect its king. It has not learned, however, to
open a game in a coordinated way, and it also frequently fails to play short endgames even
if it has a material advantage (this is due to the short planning horizon). Most importantly, it
still plays incredibly poor openings, which are often responsible for a draw or a loss. Poor
openings do not surprise, however, as TD propagates values from the end of a game to the
beginning.
Table I shows a performance comparison of NeuroChess versus GNU-Chess, with and
without the explanation-based learning strategy. This table illustrates that NeuroChess wins
approximately 13% of all games against GNU-Chess, if both use the same search engine. It
¹This is because in the current version NeuroChess still heavily uses expert games for sampling.
Whenever a grand-master moves its queen to the center of the board, the queen is usually safe, and there
is indeed a positive correlation between having the queen in the center and winning in the database.
NeuroChess falsely deduces that having the queen in the center is good. This effect disappears when
the level of self-play is increased, but this comes at the expense of drastically increased training time,
since self-play requires search.
# of games    GNU depth 2, NeuroChess depth 2      GNU depth 4, NeuroChess depth 2
              Back-propagation      EBNN           Back-propagation      EBNN
   100               1                0                   0                0
   200               6                2                   0                0
   500              35               13                   1                0
  1000              73               85                   2                1
  1500             130              135                   3                3
  2000             190              215                   3                8
  2400             239              316                  11                3
Table 1: Performance of NeuroChess vs. GNU-Chess during training. The numbers show the
total number of games won against GNU-Chess using the same number of games for testing
as for training. This table also shows the importance of the explanation-based learning
strategy in EBNN. Parameters: both learners used the original GNU-Chess features, the
evaluation network had 80 hidden units and search was cut at depth 2, or 4, respectively (no
quiescence extensions).
also illustrates the utility of explanation-based learning in chess.
6 Discussion
This paper presents NeuroChess, an approach for learning to play chess from the final
outcomes of games. NeuroChess integrates TD, inductive neural network learning and
a neural network version of explanation-based learning. The latter component analyzes
games using knowledge that was previously learned from expert play. Particular care has
been taken in the design of an appropriate feature representation, sampling methods, and
parameter settings. Thus far, NeuroChess has successfully managed to beat GNU-Chess in
several hundreds of games. However, the level of play still compares poorly to GNU-Chess
and human chess players.
Despite the initial success, NeuroChess faces two fundamental problems which both might
well be in the way of excellent chess play. Firstly, training time is limited, and it is to
be expected that excellent chess skills develop only with excessive training time. This is
particularly the case if only the final outcomes are considered. Secondly, with each step of
TO-learning NeuroChess loses information. This is partially because the features used for
describing chess boards are incomplete, i.e., knowledge about the feature values alone does
not suffice to determine the actual board exactly. But, more importantly, neural networks have
not the discriminative power to assign arbitrary values to all possible feature combinations.
It is therefore unclear that a TD-like approach will ever, for example, develop good chess
openings.
Another problem of the present implementation is related to the trade-off between knowledge
and search. It has been well recognized that the ultimate cost in chess is determined by the time
it takes to generate a move. Chess programs can generally invest their time in search, or in the
evaluation of chess boards (search-knowledge trade-off) [3]. Currently, NeuroChess does a
poor job, because it spends most of its time computing board evaluations. Computing a large
neural network function takes two orders of magnitude longer than evaluating an optimized
linear evaluation function (like that of GNU-Chess). VLSI neural network technology offers
a promising perspective to overcome this critical shortcoming of sequential neural network
simulations.
Acknowledgment
The author gratefully acknowledges the guidance and advice by Hans Berliner, who provided
the features for representing chess boards, and without whom the current level of play would
be much worse. He also thanks Tom Mitchell for his suggestion on the learning methods,
and Horst Aurisch for his help with GNU-Chess and the database.
References
[I] Thomas S. Anantharaman. A Statistical Study of Selective Min-Max Search in Computer Chess.
PhD thesis, Carnegie Mellon University, School of Computer Science, Pittsburgh, PA, 1990.
Technical Report CMU-CS-90-173.
[2] R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
[3] Hans J. Berliner, Gordon Goetsch, Murray S. Campbell, and Carl Ebeling. Measuring the
performance potential of chess programs. Artificial Intelligence, 43:7-20, 1990.
[4] Justin A. Boyan. Generalization in reinforcement learning: Safely approximating the value
function. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information
Processing Systems 7, San Mateo, CA, 1995. Morgan Kaufmann. (to appear).
[5] Gerald Dejong and Raymond Mooney. Explanation-based learning: An alternative view. Machine Learning, 1(2): 145-176, 1986.
[6] Michael Gherrity. A Game-Learning Machine. PhD thesis, University of California, San Diego,
1993.
[7] Tom M. Mitchell, Rich Keller, and Smadar Kedar-Cabelli. Explanation-based generalization: A
unifying view. Machine Learning, 1(1):47-80, 1986.
[8] Tom M. Mitchell and Sebastian Thrun. Explanation based learning: A comparison of symbolic
and neural network approaches. In Paul E. Utgoff, editor, Proceedings of the Tenth International
Conference on Machine Learning, pages 197-204, San Mateo, CA, 1993. Morgan Kaufmann.
[9] Tom M. Mitchell and Sebastian Thrun. Explanation-based neural network learning for robot
control. In S. J. Hanson, J. Cowan, and C. L. Giles, editors, Advances in Neural Information
Processing Systems 5, pages 287-294, San Mateo, CA, 1993. Morgan Kaufmann.
[10] A. L. Samuel. Some studies in machine learning using the game of checkers. IBM Journal on
research and development, 3:210-229, 1959.
[11] Johannes Schafer. Erfolgsorientiertes Lernen mit Tiefensuche in Bauernendspielen. Technical
report, Universität Karlsruhe, 1993. (in German).
[12] Nikolaus Schraudolph, Peter Dayan, and Terrence J. Sejnowski. Using the TD(lambda) algorithm
to learn an evaluation function for the game of go. In Advances in Neural Information Processing
Systems 6, San Mateo, CA, 1994. Morgan Kaufmann.
[13] Patrice Simard, Bernard Victorri, Yann LeCun, and John Denker. Tangent prop -a formalism for
specifying selected invariances in an adaptive network. In J. E. Moody, S. J. Hanson, and R. P.
Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 895-903, San
Mateo, CA, 1992. Morgan Kaufmann.
[14] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning,
3,1988.
[15] Prasad Tadepalli. Planning in games using approximately learned macros. In Proceedings of the
Sixth International Workshop on Machine Learning, pages 221-223, Ithaca, NY, 1989. Morgan
Kaufmann.
[16] Gerald J. Tesauro. Practical issues in temporal difference learning. Machine Learning, 8, 1992.
[17] Sebastian Thrun and Anton Schwartz. Issues in using function approximation for reinforcement learning. In M. Mozer, P. Smolensky, D. Touretzky, J. Elman, and A. Weigend, editors,
Proceedings of the 1993 Connectionist Models Summer School, Hillsdale, NJ, 1993. Erlbaum
Associates.
11 | 1,008 | Multidimensional Scaling and Data Clustering
Thomas Hofmann & Joachim Buhmann
Rheinische Friedrich-Wilhelms-Universität
Institut für Informatik III, Römerstraße 164
D-53117 Bonn, Germany
email:{th.jb}@cs.uni-bonn.de
Abstract
Visualizing and structuring pairwise dissimilarity data are difficult combinatorial optimization problems known as multidimensional scaling or pairwise data clustering.
Algorithms for embedding a dissimilarity data set in a Euclidian space, for clustering
these data and for actively selecting data to support the clustering process are discussed
in the maximum entropy framework. Active data selection provides a strategy to discover
structure in a data set efficiently with partially unknown data.
1 Introduction
Grouping experimental data into compact clusters arises as a data analysis problem in psychology, linguistics, genetics and other experimental sciences. The data which are supposed
to be clustered are either given by an explicit coordinate representation (central clustering)
or, in the non-metric case, they are characterized by dissimilarity values for pairs of data
points (pairwise clustering). In this paper we study algorithms (i) for embedding non-metric
data in a D-dimensional Euclidian space, (ii) for simultaneous clustering and embedding of
non-metric data, and (iii) for active data selection to determine a particular cluster structure
with minimal number of data queries. All algorithms are derived from the maximum entropy
principle (Hertz et al., 1991) which guarantees robust statistics (Tikochinsky et al., 1984).
The data are given by a real-valued, symmetric proximity matrix $\mathcal{D} \in \mathbb{R}^{N \times N}$, $\mathcal{D}_{kl}$ being
the pairwise dissimilarity between the data points $k, l$. Apart from the symmetry constraint
we make no further assumptions about the dissimilarities, i.e., we do not require $\mathcal{D}$ being a
metric. The numbers $\mathcal{D}_{kl}$ quite often violate the triangular inequality and the dissimilarity of
a datum to itself could be finite.
2 Statistical Mechanics of Multidimensional Scaling
Embedding dissimilarity data in a D-dimensional Euclidian space is a non-convex optimization problem which typically exhibits a large number of local minima. Stochastic search
methods like simulated annealing or its deterministic variants have been very successfully
applied to such problems. The question in multidimensional scaling is to find coordinates
$\{x_i\}_{i=1}^{N}$ in a $D$-dimensional Euclidian space with minimal embedding costs
$$ \mathcal{H}^{\mathrm{MDS}} = \frac{1}{2N} \sum_{i,k=1}^{N} \left( \|x_i - x_k\|^2 - \mathcal{D}_{ik} \right)^2 . \qquad (1) $$
Without loss of generality we shift the center of mass into the origin ($\sum_{k=1}^{N} x_k = 0$).
In the maximum entropy framework the coordinates $\{x_i\}$ are regarded as random variables
which are distributed according to the Gibbs distribution $P(\{x_i\}) = \exp(-\beta(\mathcal{H}^{\mathrm{MDS}} - \mathcal{F}))$. The
inverse temperature $\beta = 1/T$ controls the expected embedding costs $\langle \mathcal{H}^{\mathrm{MDS}} \rangle$ (expectation values are denoted by $\langle\cdot\rangle$). To calculate the free energy $\mathcal{F}$ for $\mathcal{H}^{\mathrm{MDS}}$ we approximate the coupling
term $\frac{2}{N}\sum_{i,k=1}^{N} \mathcal{D}_{ik}\, x_i^{T} x_k \approx \sum_{i=1}^{N} x_i^{T} h_i$ with the mean fields $h_i = \frac{4}{N}\sum_{k=1}^{N} \mathcal{D}_{ik} \langle x_k \rangle$.
Standard techniques to evaluate the free energy $\mathcal{F}$ yield the equations
$$ \mathcal{Z}(\mathcal{H}^{\mathrm{MDS}}) \sim \int_{-\infty}^{\infty} dy \int_{-\infty}^{\infty} \prod_{d,d'=1}^{D} d\mathcal{R}_{dd'} \, \exp(-\beta N \mathcal{F}), \qquad (2) $$
$$ \mathcal{F}(\mathcal{H}^{\mathrm{MDS}}) = \sum_{d,d'=1}^{D} \mathcal{R}_{dd'}^{2} - \frac{1}{\beta N} \sum_{i=1}^{N} \ln \int_{-\infty}^{\infty} dx_i \, \exp(-\beta f(x_i)), \qquad (3) $$
$$ f(x_i) = |x_i|^4 - \frac{2}{N}|x_i|^2 \sum_{k=1}^{N} \mathcal{D}_{ik} + 4\, x_i^{T} \mathcal{R}\, x_i + x_i^{T}(h_i - 4y). \qquad (4) $$
The integral in Eq. (2) is dominated by the absolute minimum of $\mathcal{F}$ in the limit $N \to \infty$.
Therefore, we calculate the saddle point equations
$$ \mathcal{R} = \frac{1}{N}\sum_{i=1}^{N} \left( \langle x_i x_i^{T} \rangle + \tfrac{1}{2}\langle |x_i|^2 \rangle\, \mathbf{I} \right), \qquad (5) $$
$$ \langle x_i \rangle = \frac{\int x_i \exp(-\beta f(x_i))\, dx_i}{\int \exp(-\beta f(x_i))\, dx_i}. \qquad (6) $$
Equation (6) has been derived by differentiating $\mathcal{F}$ with respect to $h_i$. $\mathbf{I}$ denotes the $D \times D$
unit matrix. In the low temperature limit $\beta \to \infty$ the integral in (3) is dominated by the
minimum of $f(x_i)$. Therefore, a new estimate of $\langle x_i \rangle$ is calculated by minimizing $f$ with respect
to $x_i$. Since all explicit dependencies between the $x_i$ have been eliminated, this minimization
can be performed independently for all $i$, $1 \le i \le N$.
In the spirit of the EM algorithm for Gaussian mixture models we suggest the following
algorithm to calculate a meanfield approximation for the multidimensional scaling problem.
initialize $\langle x_i \rangle^{(0)}$ randomly; $t = 0$.
while $\sum_{i=1}^{N} |\langle x_i \rangle^{(t)} - \langle x_i \rangle^{(t-1)}| > \epsilon$:
    E-step: estimate $\langle x_i \rangle^{(t+1)}$ as a function of $\langle x_i \rangle^{(t)}$, $y^{(t)}$, $h_i^{(t)}$;
    M-step: calculate $h_i^{(t)}$ as a function of $\langle x_i \rangle^{(t)}$ and determine $\mathcal{R}^{(t)}$, $y^{(t)}$ such
            that the centroid condition is satisfied.
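For readers who want to experiment numerically, a minimal sketch is given below. It minimizes the embedding cost of Eq. (1) by plain gradient descent on the coordinates rather than by the mean-field E/M scheme above, so it is a simplified stand-in and not the authors' algorithm; the learning rate, iteration count, and initialization scale are assumptions.

```python
import numpy as np

def mds_embed(D, dim=2, lr=0.01, n_iter=2000, seed=0):
    """Gradient descent on H = 1/(2N) * sum_{i,k} (||x_i - x_k||^2 - D_ik)^2."""
    rng = np.random.default_rng(seed)
    N = D.shape[0]
    X = 0.1 * rng.standard_normal((N, dim))      # random initial coordinates
    for _ in range(n_iter):
        diff = X[:, None, :] - X[None, :, :]     # (N, N, dim) pairwise differences
        sq = (diff ** 2).sum(-1)                 # squared Euclidean distances
        err = sq - D                             # mismatch to the dissimilarities
        # dH/dx_i = (4/N) * sum_k err_ik * (x_i - x_k), using the symmetry of D
        grad = 4.0 / N * (err[:, :, None] * diff).sum(axis=1)
        X -= lr * grad
        X -= X.mean(axis=0)                      # keep the center of mass at the origin
    return X
```

Calling `mds_embed` on the full symmetric dissimilarity matrix returns $N$ two-dimensional coordinates that can be plotted directly.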
This algorithm was used to determine the embedding of protein dissimilarity data as shown in
Fig. 1d. The phenomenon that the data clusters are arranged in a circular fashion is explained
by the lack of small dissimilarity values. The solution in Fig. 1d is about a factor of two
better than the embedding found by a classical MDS program (Gower, 1966). This program
determines an (N-1)-dimensional space where the ranking of the dissimilarities is preserved and uses
principal component analysis to project this tentative embedding down to two dimensions.
Extensions to other MDS cost functions are currently under investigation.
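The classical scaling procedure of Gower (1966) mentioned above can be summarized in a few lines. The sketch below is a generic textbook implementation, not the program used for the comparison in the text, and it feeds the dissimilarities directly into the double-centering step, which is an assumption about how they are treated as distances.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Torgerson/Gower classical scaling: double-center, then take top eigenvectors."""
    N = D.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]              # keep the largest eigenvalues
    scale = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * scale                     # N x dim coordinates
```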
3 Multidimensional Scaling and Pairwise Clustering
Embedding data in a Euclidian space precedes quite often a visual inspection by the data
analyst to discover structure and to group data into clusters. The question arises how both
problems, the embedding problem and the clustering problem, can be solved simultaneously.
The second algorithm addresses the problem to embed a data set in a Euclidian space such
that the clustering structure is approximated as faithfully as possible in the maximum entropy
sense by the clustering solution in this embedding space. The coordinates in the embedding
space are the free parameters for this optimization problem.
Clustering of non-metric dissimilarity data, also called pairwise clustering (Buhmann, Hofmann, 1994a), is a combinatorial optimization problem which depends on Boolean assignments $M_{i\nu} \in \{0,1\}$ of datum $i$ to cluster $\nu$. The cost function for pairwise clustering with
$K$ clusters is
$$ \mathcal{E}^{pc}_{K}(\mathbf{M}) = \sum_{\nu=1}^{K} \frac{1}{2 N p_\nu} \sum_{k=1}^{N}\sum_{l=1}^{N} M_{k\nu} M_{l\nu}\, \mathcal{D}_{kl}, \quad\text{with}\quad p_\nu = \frac{1}{N}\sum_{k=1}^{N} M_{k\nu}. \qquad (7) $$
In the meanfield approach we approximate the Gibbs distribution $P(\mathcal{E}^{pc}_{K})$ corresponding
to the original cost function by a family of approximating distributions. The distribution
which represents most accurately the statistics of the original problem is determined by
the minimum of the Kullback-Leibler divergence to the original Gibbs distribution. In the
pairwise clustering case we introduce potentials $\{\mathcal{E}_{k\nu}\}$ for the effective interactions, which
define a set of cost functions with non-interacting assignments,
$$ \mathcal{E}^{0}_{K}(\mathbf{M},\{\mathcal{E}_{k\nu}\}) = \sum_{\nu=1}^{K}\sum_{k=1}^{N} M_{k\nu}\, \mathcal{E}_{k\nu}. \qquad (8) $$
The optimal potentials derived from this minimization procedure are
$$ \{\mathcal{E}_{k\nu}\} = \arg\min_{\{\mathcal{E}_{k\nu}\}} \mathcal{D}^{KL}\!\left( P^{0}(\mathcal{E}^{0}_{K}) \,\|\, P(\mathcal{E}^{pc}_{K}) \right), \qquad (9) $$
where $P^{0}(\mathcal{E}^{0}_{K})$ is the Gibbs distribution corresponding to $\mathcal{E}^{0}_{K}$, and $\mathcal{D}^{KL}(\cdot\|\cdot)$ is the KL-divergence. This method is equivalent to minimizing an upper bound on the free energy
(Buhmann, Hofmann, 1994b),
$$ \mathcal{F}(\mathcal{E}^{pc}_{K}) \le \mathcal{F}_0(\mathcal{E}^{0}_{K}) + \langle \mathcal{V}_K \rangle_0, \quad\text{with}\quad \mathcal{V}_K = \mathcal{E}^{pc}_{K} - \mathcal{E}^{0}_{K}, \qquad (10) $$
$\langle\cdot\rangle_0$ denoting the average over all configurations of the cost function without interactions.
Correlations between assignment variables are statistically independent for $P^{0}(\mathcal{E}^{0}_{K})$, i.e.,
$\langle M_{k\nu} M_{l\nu}\rangle_0 = \langle M_{k\nu}\rangle_0 \langle M_{l\nu}\rangle_0$. The averaged potential $\mathcal{V}_K$, therefore, amounts to
$$ \langle \mathcal{V}_K \rangle = \sum_{\nu=1}^{K}\sum_{k,l=1}^{N} \frac{\langle M_{k\nu}\rangle \langle M_{l\nu}\rangle}{2 N p_\nu}\, \mathcal{D}_{kl} \;-\; \sum_{\nu=1}^{K}\sum_{k=1}^{N} \langle M_{k\nu}\rangle\, \mathcal{E}_{k\nu}, \qquad (11) $$
the subscript of averages being omitted for conciseness. The expected assignment variables
are
$$ \langle M_{i\nu} \rangle = \frac{\exp(-\beta\, \mathcal{E}_{i\nu})}{\sum_{\mu=1}^{K} \exp(-\beta\, \mathcal{E}_{i\mu})}. \qquad (12) $$
Minimizing the upper bound yields the stationarity conditions (13). The "optimal" potentials
$$ \hat{\mathcal{E}}_{i\nu} = \frac{1}{N p_\nu} \left( \sum_{k=1}^{N} \langle M_{k\nu}\rangle\, \mathcal{D}_{ik} - \frac{1}{2 N p_\nu} \sum_{k,l=1}^{N} \langle M_{k\nu}\rangle \langle M_{l\nu}\rangle\, \mathcal{D}_{kl} \right) \qquad (14) $$
depend on the given distance matrix, the averaged assignment variables and the cluster
probabilities. They are optimal in the sense that if we set
$$ \mathcal{E}_{i\nu} = \hat{\mathcal{E}}_{i\nu}, \qquad (15) $$
the $N \cdot K$ stationarity conditions (13) are fulfilled for every $i \in \{1,\dots,N\}$, $\nu \in \{1,\dots,K\}$. A
simultaneous solution of Eq. (15) with (12) constitutes a necessary condition for a minimum
of the upper bound for the free energy $\mathcal{F}$.
The connection between the clustering and the multidimensional scaling problem is established if we restrict the potentials $\mathcal{E}_{i\nu}$ to be of the form $|x_i - y_\nu|^2$ with the centroids
$y_\nu = \sum_{k=1}^{N} M_{k\nu} x_k \,/\, \sum_{k=1}^{N} M_{k\nu}$. We consider the coordinates $x_i$ as the variational parameters. The additional constraints restrict the family of approximating distributions, defined
by $\mathcal{E}^{0}_{K}$, to a subset. Using the chain rule we can calculate the derivatives of the upper bound
(10), resulting in the exact stationary conditions for $x_i$,
$$ \sum_{\alpha,\nu=1}^{K} \langle M_{i\alpha}\rangle \langle M_{i\nu}\rangle \left( \Delta\mathcal{E}_{i\alpha} - \Delta\mathcal{E}_{i\nu} \right) y_\alpha
= \sum_{\alpha,\nu=1}^{K} \sum_{j=1}^{N} \frac{\langle M_{j\alpha}\rangle \langle M_{j\nu}\rangle}{p_\alpha} \left( \Delta\mathcal{E}_{j\alpha} - \Delta\mathcal{E}_{j\nu} \right)
\left[ \langle M_{i\alpha}\rangle\, \mathbf{I} + \sum_{k=1}^{N} (x_k - y_\alpha)\, \frac{\partial \langle M_{k\alpha}\rangle}{\partial x_i}^{T} \right] (x_j - y_\alpha), \qquad (16) $$
where $\Delta\mathcal{E}_{i\alpha} = \mathcal{E}_{i\alpha} - \tilde{\mathcal{E}}_{i\alpha}$. The derivatives $\partial\langle M_{k\alpha}\rangle / \partial x_i$ can be exactly calculated, since they
are given as the solutions of a linear equation system with $N \times K$ unknowns for every $x_i$. To
reduce the computational complexity an approximation can be derived under the assumption
$\partial y_\alpha / \partial x_j \approx 0$. In this case the right hand side of (16) can be set to zero in a first order
approximation, yielding an explicit formula for $x_i$,
$$ \mathcal{K}_i\, x_i \approx \frac{1}{2} \sum_{\nu=1}^{K} \langle M_{i\nu}\rangle \left( \|y_\nu\|^2 - \tilde{\mathcal{E}}_{i\nu} \right) \left( y_\nu - \sum_{\alpha=1}^{K} \langle M_{i\alpha}\rangle\, y_\alpha \right), \qquad (17) $$
with the covariance matrix $\mathcal{K}_i = \langle y\, y^{T}\rangle_i - \langle y\rangle_i \langle y\rangle_i^{T}$ and $\langle y\rangle_i = \sum_{\nu=1}^{K} \langle M_{i\nu}\rangle\, y_\nu$.
The derived system of transcendental equations given by (12), (17) and the centroid condition explicitly reflects the dependencies between the clustering procedure and the Euclidian
representation. Solving these equations simultaneously leads to an efficient algorithm which
interleaves the multidimensional scaling process and the clustering process and which avoids
an artificial separation into two uncorrelated processes. The described algorithm for simultaneous Euclidian embedding and data clustering can be used for dimensionality reduction,
e.g., high-dimensional data can be projected to a low-dimensional subspace in a nonlinear
fashion which resembles local principal component analysis (Buhmann, Hofmann, 1994b).

Figure 1: Similarity matrix of 145 protein sequences of the globin family (a): dark gray levels
correspond to high similarity values; (b): clustering with embedding in two dimensions; (c):
multidimensional scaling solution for 2-dimensional embedding; (d): quality of clustering
solution with random and active data selection of $\mathcal{D}_{ik}$ values. $\mathcal{E}^{pc}_{K}$ has been calculated on the
basis of the complete set of $\mathcal{D}_{ik}$ values.

Figure 1 shows the clustering result for a real-world data set of 145 protein sequences. The
similarity values between pairs of sequences are determined by a sequence alignment program
which takes biochemical and structural information into account. The sequences belong to
different protein families like hemoglobin, myoglobin and other globins; they are abbreviated
with the displayed capital letters. The gray level visualization of the dissimilarity matrix with
dark values for similar protein sequences shows the formation of distinct "squares" along the
main diagonal. These squares correspond to the discovered partition after clustering. The
embedding in two dimensions shows inter-cluster distances which are in consistent agreement
with the similarity values of the data. In three and four dimensions the error between the
given dissimilarities and the constructed distances is further reduced. The results are in good
agreement with the biological classification.
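A runnable sketch of the mean-field clustering loop is given below. The potential update follows the reconstructed form of Eq. (14) above, so it inherits any uncertainty in that reconstruction, and the inverse temperature and iteration count are illustrative choices rather than values from the experiments.

```python
import numpy as np

def meanfield_pairwise_clustering(D, K, beta=5.0, n_iter=100, seed=0):
    """Alternate the potentials of Eq. (14) with the Gibbs assignments of Eq. (12)."""
    rng = np.random.default_rng(seed)
    N = D.shape[0]
    M = rng.dirichlet(np.ones(K), size=N)            # soft assignments <M_{i,nu}>
    for _ in range(n_iter):
        p = M.mean(axis=0)                           # cluster probabilities p_nu
        first = D @ M                                # (N, K): sum_k D[i,k] M[k,nu]
        second = np.einsum('kv,lv,kl->v', M, M, D)   # (K,): sum_{k,l} M[k,nu] M[l,nu] D[k,l]
        E_hat = (first - second / (2 * N * p)) / (N * p)
        logits = -beta * E_hat
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        M = np.exp(logits)
        M /= M.sum(axis=1, keepdims=True)            # Gibbs assignments, Eq. (12)
    return M
```

Hard cluster labels can be read off with `M.argmax(axis=1)`.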
4 Active Data Selection for Data Clustering
Active data selection is an important issue for the analysis of data which are characterized
by pairwise dissimilarity values. The size of the distance matrix grows like the square of
the number of data 'points'. Such an $O(N^2)$ scaling renders the data acquisition process
expensive. It is, therefore, desirable to couple the data analysis process to the data acquisition
process, i.e., to actively query the supposedly most relevant dissimilarity values. Before
addressing active data selection questions for data clustering we have to discuss the problem
how to modify the algorithm in the case of incomplete data.
If we want to avoid any assumptions about statistical dependencies, it is impossible to infer
unknown values and we have to work directly with the partial dissimilarity matrix. Since the
data enters only in the (re-)ca1culation of the potentials in (14), it is straightforward to appropriately modify these equations. All sums are restricted to terms with known dissimilarities
and the normalization factors are adjusted accordingly.
Alternatively we can try to explicitly estimate the unknown dissimilarity values based on
a statistical model. For this purpose we propose two models, relying on a known group
structure of the data. The first model (I) assumes that all dissimilarities between a point
$i$ and points $j$ belonging to a group $G_\mu$ are i.i.d. random variables with the probability
density $p_{i\mu}$ parameterized by $\theta_{i\mu}$. In this scheme a subset of the known dissimilarities of
$i$ and $j$ to other points $k$ are used as samples for the estimation of $\mathcal{D}_{ij}$. The selection
of the specific subset is determined by the clustering structure. In the second model (II)
we assume that the dissimilarities between groups $G_\nu, G_\mu$ are i.i.d. random variables with
density $p_{\nu\mu}$ parameterized by $\theta_{\nu\mu}$. The parameters $\theta_{\nu\mu}$ are estimated on the basis of all
known dissimilarities $\{\mathcal{D}_{ij} \in \mathcal{D}\}$ between points from $G_\nu$ and $G_\mu$.
The assignments of points to clusters are not known a priori and have to be determined in the
light of the (given and estimated) data. The data selection strategy becomes self-consistent
if we interpret the mean fields $\langle M_{i\nu}\rangle$ of the clustering solution as posterior probabilities for
the binary assignment variables. Combined with a maximum likelihood estimation for the
unknown parameters given the posteriors, we arrive at an EM-like iteration scheme with the
E-step replaced by the clustering algorithm.
The precise form of the M-step depends on the parametric form of the densities $p_{i\mu}$ or $p_{\nu\mu}$,
respectively. In the case of Gaussian distributions the M-step is described by estimation
equations for the location parameters $\hat{m}_{i\mu}$ (model I) and $\hat{m}_{\nu\mu}$ (model II), with
$$ \pi^{\nu\mu}_{ij} = \frac{1}{1+\delta_{\nu\mu}} \left( \langle M_{i\nu}\rangle \langle M_{j\mu}\rangle + \langle M_{i\mu}\rangle \langle M_{j\nu}\rangle \right). \qquad (18) $$
Corresponding expressions are derived
for the standard deviations $\hat{\sigma}_{i\mu}$ or $\hat{\sigma}_{\nu\mu}$, respectively. In the case of non-normal distributions
the empirical mean might still be a good estimator of the location parameter, though not
necessarily a maximum likelihood estimator. The missing dissimilarities are estimated by
the following statistics, derived from the empirical means,
$$ \hat{\mathcal{D}}^{(I)}_{ij} = \sum_{\nu,\mu=1}^{K} \langle M_{i\nu}\rangle \langle M_{j\mu}\rangle\,
\frac{N_{i\mu}\, \hat{m}^{(I)}_{i\mu} + N_{j\nu}\, \hat{m}^{(I)}_{j\nu}}{N_{i\mu} + N_{j\nu}} \;\;\text{(I)},
\qquad
\hat{\mathcal{D}}^{(II)}_{ij} = \sum_{\nu \le \mu} \pi^{\nu\mu}_{ij}\, \hat{m}^{(II)}_{\nu\mu} \;\;\text{(II)}, \qquad (19) $$
[Plot in Figure 2: clustering costs (vertical axis, roughly 2000-2600) versus the number of selected dissimilarities (horizontal axis, 0-1200) for active and random data selection.]
Figure 2: Similarity matrix of 54 word fragments generated by a dynamic programming
algorithm. The clustering costs in the experiment with active data selection require only half
as much data as a random selection strategy.
with $N_{i\mu}$ denoting the number of known dissimilarities $\mathcal{D}_{ik} \in \mathcal{D}$ with $k \in G_\mu$. For model (I) we have used a pooled estimator to exploit the
data symmetry. The iteration scheme finally leads to estimates $\hat{\theta}_{i\mu}$ or $\hat{\theta}_{\nu\mu}$, respectively, for the
parameters and $\hat{\mathcal{D}}_{ij}$ for all unknown dissimilarities.
Criterion for Active Data Selection: We will use the expected reduction in the variance of
the free energy Fo as a score, which should be maximized by the selection criterion. Fo is
given by $\mathcal{F}_0(\hat{\mathcal{D}}) = -\frac{1}{\beta} \sum_{i=1}^{N} \log \sum_{\nu=1}^{K} \exp(-\beta\, \mathcal{E}_{i\nu}(\hat{\mathcal{D}}))$. If we query a new dissimilarity
$\mathcal{D}_{ij}$, the expected reduction of the variance of the free energy is approximated by
$$ \Delta_{ij} = 2 \left[ \frac{\partial \mathcal{F}_0}{\partial \hat{\mathcal{D}}_{ij}} \right]^2 V\!\left[ \mathcal{D}_{ij} - \hat{\mathcal{D}}_{ij} \right]. \qquad (20) $$
The partial derivatives can be calculated exactly by solving a system of linear equations with
$N \times K$ unknowns. Alternatively a first order approximation in $\epsilon_\nu = O(1/(N p_\nu))$ yields
(21)
This expression defines a relevance measure of $\mathcal{D}_{ij}$ for the clustering problem, since a $\mathcal{D}_{ij}$
value contributes to the clustering costs only if the data $i$ and $j$ belong to the same cluster.
Equation (21) summarizes the mean-field contributions $\partial \mathcal{F}_0 / \partial \hat{\mathcal{D}}_{ij} \approx \partial \langle \mathcal{H} \rangle_0 / \partial \hat{\mathcal{D}}_{ij}$.
To derive the final form of our scoring function we have to calculate an approximation of
the variance in Eq. (20), which measures the expected squared error for replacing the true
value $\mathcal{D}_{ij}$ with our estimate $\hat{\mathcal{D}}_{ij}$. Since we assumed statistical independence the variances
are additive, $V[\mathcal{D}_{ij} - \hat{\mathcal{D}}_{ij}] = V[\mathcal{D}_{ij}] + V[\hat{\mathcal{D}}_{ij}]$. The total population variance is a sum
of inner- and inter-cluster variances that can be approximated by the empirical means and
by the empirical variances instead of the unknown parameters of $p_{i\mu}$ or $p_{\nu\mu}$. The sampling
variance of the statistics $\hat{\mathcal{D}}_{ij}$ is estimated under the assumption that the empirical means $\hat{m}_{i\mu}$
or $\hat{m}_{\nu\mu}$, respectively, are uncorrelated. This holds in the hard clustering limit. We arrive at
the following final expression for the variances of model (II),
$$ V\!\left[ \mathcal{D}_{ij} - \hat{\mathcal{D}}_{ij} \right] \approx
\sum_{\nu \le \mu} \pi^{\nu\mu}_{ij}
\left[ \left( \hat{\mathcal{D}}_{ij} - \hat{m}_{\nu\mu} \right)^2
+ \Big( 1 + \big( \textstyle\sum_{\mathcal{D}_{kl} \in \mathcal{D}} \pi^{\nu\mu}_{kl} \big)^{-1} \Big) \hat{\sigma}^2_{\nu\mu} \right]. \qquad (22) $$
For model (I) a slightly more complicated formula can be derived. Inserting the estimated
variances into Eq. (20) leads to the final expression for our scoring function.
To demonstrate the efficiency of the proposed selection strategy, we have compared the
clustering costs achieved by active data selection with the clustering costs resulting from
randomly queried data. Assignments in the case of active selection are calculated with
statistical model (I). Figure 1d demonstrates that the clustering costs decrease significantly
faster when the selection criterion (20) is implemented. The structure of the clustering
solution has been completely inferred with about 3300 selected $\mathcal{D}_{ik}$ values. The random
strategy requires about 6500 queries for the same quality. Analogous comparison results for
linguistic data are summarized in Fig. 2. Note the inconsistencies in this data set reflected by
small $\mathcal{D}_{ik}$ values outside the cluster blocks (dark pixels) or by the large $\mathcal{D}_{ik}$ values (white
pixels) inside a block.
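The overall query loop can be sketched at a high level as follows. The score used here is a simplified proxy (co-assignment probability times estimated variance) rather than the exact criterion of Eqs. (20)-(22), and the helpers `cluster`, `estimate_missing`, and `oracle` are hypothetical placeholders for the components described in the text.

```python
import numpy as np

def active_selection(D_known, mask, oracle, cluster, estimate_missing, n_queries=100):
    """Greedy active data selection: repeatedly query the highest-scoring missing D_ij."""
    D = D_known.copy()
    for _ in range(n_queries):
        M = cluster(D)                                  # soft assignments from current data
        D_hat, var_hat = estimate_missing(D, mask, M)   # model-based estimates and variances
        # proxy score: co-assignment probability (cost sensitivity) times estimated variance
        score = (M @ M.T) * var_hat
        score[mask] = -np.inf                           # never re-query known entries
        i, j = np.unravel_index(np.argmax(score), score.shape)
        D[i, j] = D[j, i] = oracle(i, j)                # measure the selected dissimilarity
        mask[i, j] = mask[j, i] = True
    return D, mask
```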
Conclusion: Data analysis of dissimilarity data is a challenging problem in molecular biology, linguistics, psychology and, in general, in pattern recognition. We have presented
three strategies to visualize data structures and to inquire the data structure by an efficient
data selection procedure. The respective algorithms are derived in the maximum entropy
framework for maximal robustness of cluster estimation and data embedding. Active data
selection has been shown to require only half as much data for estimating a clustering solution
of fixed quality compared to a random selection strategy. We expect the proposed selection
strategy to facilitate maintenance of genome and protein data bases and to yield more robust
data prototypes for efficient search and data base mining.
Acknowledgement: It is a pleasure to thank M. Vingron and D. Bavelier for providing the
protein data and the linguistic data, respectively. We are also grateful to A. Polzer and H.J.
Warneboldt for implementing the MDS algorithm. This work was partially supported by the
Ministry of Science and Research of the state Nordrhein-Westfalen.
References
Buhmann, J., Hofmann, T. (1994a). Central and Pairwise Data Clustering by Competitive
Neural Networks. Pages 104-111 of: Advances in Neural Information Processing
Systems 6. Morgan Kaufmann Publishers.
Buhmann, J., Hofmann, T. (1994b). A Maximum Entropy Approach to Pairwise Data
Clustering. Pages 207-212 of: Proceedings of the International Conference on Pattern
Recognition, Hebrew University, Jerusalem, vol. II. IEEE Computer Society Press.
Gower, J. C. (1966). Some distance properties of latent root and vector methods used in
multivariate analysis. Biometrika, 53, 325-328.
Hertz, J., Krogh, A., Palmer, R. G. (1991). Introduction to the Theory of Neural Computation.
New York: Addison Wesley.
Tikochinsky, Y., Tishby, N. Z., Levine, R. D. (1984). Alternative Approach to Maximum-Entropy Inference. Physical Review A, 30, 2638-2644.
| 1008 |@word covariance:1 euclidian:8 reduction:3 configuration:1 fragment:1 selecting:1 score:1 denoting:1 dx:1 transcendental:1 additive:1 partition:1 hofmann:10 civ:1 stationary:1 half:2 selected:3 accordingly:1 inspection:1 xk:4 provides:1 location:2 afo:2 along:1 constructed:1 ik:2 eiw:1 inside:1 adij:2 introduce:1 pairwise:11 inter:2 expected:5 mechanic:1 ry:1 relying:1 eke:1 becomes:1 project:1 discover:2 dk1:1 estimating:1 mass:1 kldivergence:1 guarantee:1 every:2 multidimensional:12 f3f:2 exactly:2 biometrika:1 demonstrates:1 control:1 unit:1 before:1 local:2 modify:2 xv:1 limit:3 id:1 subscript:1 might:1 resembles:1 challenging:1 co:2 palmer:1 statistically:1 averaged:2 block:2 procedure:3 empirical:5 significantly:1 word:1 suggest:1 protein:7 selection:21 impossible:1 equivalent:1 deterministic:1 center:1 missing:1 maximumentropy:1 straightforward:1 jerusalem:1 independently:1 convex:1 ke:1 rule:1 estimator:3 regarded:1 embedding:18 population:1 coordinate:5 analogous:1 exact:1 programming:1 us:1 ixi:1 origin:1 agreement:2 pa:1 approximated:3 expensive:1 recognition:2 levine:1 solved:1 enters:1 inquire:1 calculate:6 tikochinsky:2 ifl:1 decrease:1 supposedly:1 complexity:1 bavelier:1 dynamic:1 depend:1 solving:2 grateful:1 efficiency:1 basis:2 completely:1 pill:1 po:3 distinct:1 effective:1 mka:2 query:4 precedes:1 artificial:1 formation:1 outside:1 quite:2 valued:1 triangular:1 statistic:4 gp:3 itself:1 final:3 sequence:6 propose:1 interaction:2 maximal:1 inserting:1 yii:1 relevant:1 supposed:1 kv:3 cluster:13 coupling:1 derive:1 ac:1 ggi:3 ij:4 eq:3 krogh:1 implemented:1 c:1 stochastic:1 implementing:1 mkv:2 require:2 clustered:1 investigation:1 biological:1 adjusted:1 extension:1 hold:1 proximity:1 normal:1 exp:5 visualize:1 omitted:1 purpose:1 estimation:4 combinatorial:2 currently:1 iw:1 infonnation:1 faithfully:1 reflects:1 minimization:2 xtr:1 gaussian:2 avoid:1 ej:2 axj:1 mil:1 linguistic:2 structuring:1 derived:9 joachim:5 vk:1 rheinische:1 fur:1 likelihood:2 centroid:3 sense:2 inference:1 biochemical:1 vl:1 typically:1 lj:1 germany:1 tao:1 pixel:2 arg:1 classification:1 ill:2 issue:1 denoted:1 priori:1 initialize:1 tkl:1 field:3 f3:5 eliminated:1 sampling:1 biology:1 represents:1 constitutes:1 jb:1 randomly:2 simultaneously:2 divergence:1 ve:1 hemoglobin:1 replaced:1 stationarity:1 circular:1 mining:1 alignment:1 mixture:1 yielding:1 light:1 tj:2 hg:3 chain:1 integral:2 niversitat:1 necessary:1 partial:2 respective:1 institut:1 iv:1 incomplete:1 re:2 minimal:2 mk:1 boolean:1 assignment:8 cost:12 addressing:1 subset:3 deviation:1 dij:12 tishby:1 dependency:3 my:2 combined:1 density:3 international:1 squared:1 central:2 satisfied:1 iip:1 e9:2 dr:1 derivative:3 actively:2 account:1 potential:6 de:1 pooled:1 summarized:1 int:1 explicitly:2 ranking:1 depends:2 vi:1 performed:1 try:1 root:1 yv:1 hf:3 competitive:1 complicated:1 ofeq:1 contribution:1 square:3 ir:1 kaufmann:1 variance:10 efficiently:1 wilhelms:1 yield:4 correspond:2 maximized:1 accurately:1 informatik:1 mia:3 simultaneous:3 llt:1 fo:4 email:1 energy:6 acquisition:2 dxi:1 conciseness:1 di:1 couple:1 mi:1 dimensionality:1 yyt:1 wesley:1 reflected:1 arranged:1 though:1 generality:1 correlation:1 hand:1 replacing:1 nonlinear:1 lack:1 mkl:2 defines:1 quality:3 gray:2 grows:1 facilitate:1 true:1 symmetric:1 leibler:1 mlv:1 white:1 visualizing:1 ll:2 self:1 criterion:3 ay:1 complete:1 demonstrate:1 temperature:2 mja:1 variational:1 physical:1 myoglobin:1 discussed:1 he:3 belong:2 jl:2 interpret:1 gibbs:4 queried:1 
interleaf:1 similarity:5 base:2 posterior:2 multivariate:1 apart:1 inequality:1 binary:1 inconsistency:1 scoring:2 morgan:1 minimum:5 additional:1 ministry:1 determine:3 ii:9 rv:1 violate:1 desirable:1 infer:1 faster:1 characterized:2 lin:1 vjl:1 molecular:1 dkl:5 jy:1 va:1 variant:1 maintenance:1 metric:5 expectation:1 iteration:2 normalization:1 globin:2 achieved:1 ion:1 preserved:1 want:1 annealing:1 publisher:1 appropriately:1 spirit:1 tfi:1 structural:1 iii:1 boo:1 hb:2 xj:4 independence:1 psychology:2 restrict:2 reduce:1 inner:1 prototype:1 vik:1 shift:1 expression:4 dik:4 render:1 york:1 jj:1 se:1 amount:1 dark:3 reduced:1 fulfilled:1 estimated:5 vol:1 group:4 four:1 capital:1 jv:1 sum:2 inverse:1 letter:1 parameterized:2 arrive:2 family:4 vn:1 separation:1 dy:1 scaling:13 summarizes:1 bound:4 hi:3 ki:1 datum:2 constraint:2 dominated:2 bonn:2 min:1 mvj:1 tv:1 according:1 belonging:1 hertz:2 vingron:1 slightly:1 em:2 explained:1 restricted:1 equation:10 visualization:1 abbreviated:1 discus:1 addison:1 oxi:2 cia:1 alternative:1 robustness:1 thomas:5 original:3 denotes:1 clustering:42 linguistics:2 assumes:1 gower:2 exploit:1 approximating:2 classical:1 society:1 question:3 strategy:8 parametric:1 md:7 diagonal:1 exhibit:1 subspace:1 distance:5 pleasure:1 thank:1 simulated:1 ofneural:1 analyst:1 providing:1 minimizing:3 hebrew:1 difficult:1 unknown:8 upper:4 finite:1 displayed:1 precise:1 interacting:1 discovered:1 inferred:1 pair:2 kl:1 friedrich:1 connection:1 tentative:1 established:1 address:1 pattern:2 ev:1 program:3 ia:2 meanfield:2 buhmann:10 scheme:3 miv:3 jill:1 review:1 acknowledgement:1 nxn:1 loss:1 expect:1 ekv:2 consistent:2 principle:3 ggg:1 vij:2 uncorrelated:2 pi:3 genetics:1 supported:1 free:7 side:1 differentiating:1 absolute:1 distributed:1 calculated:5 dimension:4 world:1 avoids:1 genome:1 projected:1 ec:1 approximate:2 compact:1 uni:1 kullback:1 active:11 assumed:1 xi:20 alternatively:2 search:2 latent:1 mj:1 robust:2 symmetry:2 contributes:1 necessarily:1 da:1 main:1 n2:1 fig:3 fashion:2 pv:3 explicit:3 iot:1 down:1 formula:2 embed:1 xt:1 specific:1 grouping:1 dissimilarity:27 entropy:6 lt:2 saddle:1 visual:1 partially:2 determines:1 hard:1 determined:4 called:1 nil:1 total:1 experimental:2 ya:4 support:1 arises:2 relevance:1 evaluate:1 phenomenon:1 |
12 | 1,009 | An experimental comparison
of recurrent neural networks
Bill G. Horne and C. Lee Giles?
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
{horne,giles}@research.nj.nec.com
Abstract
Many different discrete-time recurrent neural network architectures have been proposed. However, there has been virtually no
effort to compare these architectures experimentally. In this paper
we review and categorize many of these architectures and compare
how they perform on various classes of simple problems including
grammatical inference and nonlinear system identification.
1 Introduction
In the past few years several recurrent neural network architectures have emerged.
In this paper we categorize various discrete-time recurrent neural network architectures, and perform a quantitative comparison of these architectures on two problems: grammatical inference and nonlinear system identification.
2 RNN Architectures
We broadly divide these networks into two groups depending on whether or not the
states of the network are guaranteed to be observable. A network with observable
states has the property that the states of the system can always be determined from
observations of the input and output alone. The archetypical model in this class
.. Also with UMIACS, University of Maryland, College Park, MD 20742
Table 1: Terms that are weighted in various single layer network architectures. u_i
represents the ith input at the current time step, z_i represents the value of the ith
node at the previous time step.

    Architecture   bias   u_i   z_i   u_i*u_j   z_i*u_j   z_i*z_j
    First order     x      x     x
    High order                                     x
    Bilinear        x      x     x                 x
    Quadratic       x      x     x        x        x         x
was proposed by Narendra and Parthasarathy [9]. In their most general model, the
output of the network is computed by a multilayer perceptron (MLP) whose inputs
are a window of past inputs and outputs, as shown in Figure 1a. A special case of
this network is the Time Delay Neural Network (TDNN), which is simply a tapped
delay line (TDL) followed by an MLP [7]. This network is not recurrent since there
is no feedback; however, the TDL does provide a simple form of dynamics that
gives the network the ability to model a limited class of nonlinear dynamic systems.
A variation on the TDNN, called the Gamma network, has been proposed in which
the TDL is replaced by a set of cascaded filters [2]. Specifically, if the output of
one of the filters is denoted xj(k), and the output of filter i connects to the input
of filter j, the output of filter j is given by,
$x_j(k + 1) = \mu\, x_i(k) + (1-\mu)\, x_j(k)$.
In this paper we only consider the case where $\mu$ is fixed, although better results can
be obtained if it is adaptive.
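A minimal sketch of such a cascade of first-order filters is shown below; the depth and the fixed value of mu are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def gamma_memory(u, depth=4, mu=0.7):
    """Cascade x_j(k+1) = mu*x_{j-1}(k) + (1-mu)*x_j(k), with x_0 carrying the input."""
    x = np.zeros(depth + 1)
    history = []
    for u_k in u:
        x_new = x.copy()
        x_new[0] = u_k                        # tap 0 is the current input
        for j in range(1, depth + 1):
            x_new[j] = mu * x[j - 1] + (1 - mu) * x[j]
        x = x_new
        history.append(x[1:].copy())          # the taps that feed the MLP
    return np.array(history)
```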
Networks that have hidden dynamics have states which are not directly accessible
to observation. In fact, it may be impossible to determine the states of a system
from observations of it's inputs and outputs alone. We divide networks with hidden dynamics into three classes: single layer networks, multilayer networks, and
networks with local feedback.
Single layer networks are perhaps the most popular of the recurrent neural network
models. In a single layer network, every node depends on the previous output of
all of the other nodes. The function performed by each node distinguishes the
types of recurrent networks in this class. In each of the networks, nodes can be
characterized as a nonlinear function of a weighted sum of inputs, previous node
outputs, or products of these values. A bias term may also be included. In this
paper we consider first-order networks, high-order networks [5], bilinear networks,
and Quadratic networks[12]. The terms that are weighted in each of these networks
are summarized in Table 1.
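To make the distinction concrete, the sketch below writes out one state update for a first-order and for a high-order single layer network, corresponding to rows of Table 1; the dimensions and the tanh nonlinearity are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_in = 4, 2
W = rng.normal(size=(n_state, n_state))          # first-order state weights
V = rng.normal(size=(n_state, n_in))             # first-order input weights
b = rng.normal(size=n_state)                     # bias terms
W2 = rng.normal(size=(n_state, n_state, n_in))   # high-order weights W2[j, i, k]

def first_order_step(z, u):
    # weights bias, u_i and z_i terms ("First order" row in Table 1)
    return np.tanh(W @ z + V @ u + b)

def high_order_step(z, u):
    # weights only the products z_i * u_k ("High order" row in Table 1)
    return np.tanh(np.einsum('jik,i,k->j', W2, z, u))

z = np.zeros(n_state)
for u in ([0.0, 1.0], [1.0, 0.0]):
    z = first_order_step(z, np.asarray(u))
```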
Multilayer networks consist of a feedforward network coupled with a finite set of
delays as shown in Figure lb. One network in this class is an architecture proposed
by Robinson and Fallside [11], in which the feedforward network is an MLP. Another
popular networks that fits into this class is Elman's Simple Recurrent Network
(SRN) [3]. An Elman network can be thought of as a single layer network with an
extra layer of nodes that compute the output function, as shown in Figure lc.
In locally recurrent networks the feedback is provided locally within each individual
Figure 1: Network architectures: (a) Narendra and Parthasarathy's Recurrent Neural Network, (b) Multilayer network and (c) an Elman network.
node, but the nodes are connected together in a feed forward architecture. Specifically, we consider nodes that have local output feedback in which each node weights
a window of its own past outputs and windows of node outputs from previous layers.
Networks with local recurrence have been proposed in [1, 4, 10].
3 Experimental Results
3.1 Experimental methodology
In order to make the comparison as fair as possible we have adopted the following
methodology.
Resources. We shall perform two fundamental comparisons. One in which the
number of weights is roughly the same for all networks, another in which the
number of states is equivalent. In either case, we shall make these numbers large
enough that most of the networks can achieve interesting performance levels.
Number of weights. For static networks it is well known that the generalization
performance is related to the number of weights in the network. Although this
theory has never been extended to recurrent neural networks, it seems reasonable
that a similar result might apply. Therefore, in some experiments we shall try
to keep the number of weights approximately equal across all networks.
Number of states. It can be argued that for dynamic problems the size of the
state space is a more relevant measure for comparison than the number of
weights. Therefore, in some experiments we shall keep the number of states
equal across all networks.
Vanilla learning. Several heuristics have been proposed to help speed learning
and improve generalization of gradient descent learning algorithms. However,
such heuristics may favor certain architectures. In order to avoid these issues,
we have chosen simple gradient descent learning algorithms.
Number of simulations. Due to random initial conditions, the recurrent
neural network solutions can vary widely. Thus, to try to achieve a statistically
significant estimation of the generalization of these networks, a large number of
experiments were run.
Figure 2: A randomly generated six state finite state machine.
3.2 Finite state machines
We chose two finite state machine (FSM) problems for a comparison of the ability of
the various recurrent networks to perform grammatical inference. The first problem
is to learn the minimal, randomly generated six state machine shown in Figure 2.
The second problem is to infer a sixty-four state finite memory machine [6] described
by the logic function
y(k) = u(k - 3)u(k) + u(k - 3)y(k - 3) + u(k)u(k - 3)Y(k - 3)
where u(k) and y(k) represent the input and output respectively at time k and x
represents the complement of x.
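Training data for this kind of grammatical inference task can be produced with a short generator such as the one below. The six-state transition and output tables here are arbitrary placeholders, since the randomly generated machine of Figure 2 is not reproduced in the text.

```python
from itertools import product

# Hypothetical 6-state Mealy machine: next_state[s][u] and output[s][u] for input u in {0, 1}.
next_state = [[1, 3], [2, 0], [4, 5], [0, 2], [5, 1], [3, 4]]
output     = [[0, 1], [1, 0], [0, 0], [1, 1], [0, 1], [1, 0]]

def run_fsm(bits, start=0):
    s, out = start, 0
    for u in bits:
        out = output[s][u]        # output associated with the last transition
        s = next_state[s][u]
    return out

# All 2 + 4 + ... + 128 = 254 binary strings of length one through seven.
dataset = [(bits, run_fsm(bits))
           for n in range(1, 8)
           for bits in product((0, 1), repeat=n)]
print(len(dataset))               # 254
```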
Two experiments were run. In the first experiment all of the networks were designed
such that the number of weights was less than, but as close to 60 as possible. In the
second experiment, each network was restricted to six state variables, and if possible,
the networks were designed to have approximately 75 weights. Several alternative
architectures were tried when it was possible to configure the architecture differently
and yield the same number of weights, but those used gave the best results.
A complete set of 254 strings consisting of all strings of length one through seven is
sufficient to uniquely identify both of these FSMs. For each simulation, we randomly
partitioned the data into a training and testing set consisting of 127 strings each.
The strings were ordered lexicographically in the training set.
For each architecture 100 runs were performed on each problem. The on-line Back
Propagation Through Time (BPTT) algorithm was used to train the networks.
Vanilla learning was used with a learning rate of 0.5. Training was stopped at 1000
epochs. The weights of all networks were initialized to random values uniformly
distributed in the range [-0.1,0.1]. All states were initialize to zeros at the beginning of each string except for the High Order net in which one state was arbitrarily
initialized to a value of 1.
Table 2 summarizes the statistics for each experiment. From these results we draw
the following conclusions.
The bilinear and high-order networks do best on the small randomly generated
machine, but poorly on the finite memory machine. Thus, it would appear that
there is benefit to having second order terms in the network, at least for small
finite state machine problems.
Narendra and Parthasarathy's model and the network with local recurrence do
far better than the other networks on the problem of inferring the finite memory
Table 2: Percentage classification error on the FSM experiment for (a) networks with
approximately the same number of weights, (b) networks with the same number of
state variables. %P = The percentage of trials in which the training set was learned
perfectly, #W = the number of weights, and #S = the number of states.
(a)

FSM   Architecture†   train mean   (std)    test mean   (std)     %P   #W   #S
RND   N&P                 2.8      (—)        16.9      (8.6)     22   56    8
      TDNN               12.5      (2.1)      33.8      (—)        0   56    8
      Gamma              19.6      (—)        24.8      (3.2)      0   56    8
      First Order        12.9      (6.9)      26.5      (9.0)      0   48    6
      High Order          0.8      (1.5)       6.2      (6.1)     60   50    5
      Bilinear            1.3      (2.7)       5.7      (6.1)     46   55    5
      Quadratic          12.9     (13.4)      17.7     (14.1)     12   45    3
      Multilayer         19.4     (13.6)      23.4     (13.5)      6   54    4
      Elman               3.5      (—)        12.7      (9.7)     27   55    6
      Local               2.8      (1.5)      26.7      (—)        4   60   20
FMM   N&P                 0.0      (0.2)       0.1      (—)       99   56    8
      TDNN                6.9      (2.1)      15.8      (3.2)      0   56    8
      Gamma               7.7      (2.2)      15.7      (3.3)      0   56    8
      First Order         4.8      (3.0)      16.0      (6.5)      1   48    6
      High Order          5.3      (4.0)      26.0      (5.1)      1   50    5
      Bilinear            9.5     (10.4)      25.8      (7.0)      0   55    5
      Quadratic          32.5     (10.8)      40.5      (7.3)      0   45    3
      Multilayer         36.7     (11.9)      43.5      (8.5)      0   54    4
      Elman              12.0     (12.5)      24.9      (7.9)      5   55    6
      Local               0.1      (0.3)       1.0      (3.0)     97   60   20

(b)

FSM   Architecture††  train mean   (std)    test mean   (std)     %P   #W   #S
RND   N&P                 4.6      (—)        14.1     (11.3)     38   73    6
      TDNN               11.7      (2.0)      34.3      (3.9)      0   73    6
      Gamma              19.0      (—)        25.2      (3.1)      0   73    6
      First Order        12.9      (6.9)      26.5      (9.0)      0   48    6
      High Order          0.3      (0.5)       4.6      (5.1)     79   (—)   6
      Bilinear            0.6      (0.9)       4.4      (—)       55   78    6
      Quadratic           0.2      (0.5)       3.2      (2.6)     83  216    6
      Multilayer         15.4     (14.1)      19.9      (—)       16   76    6
      Elman               3.5      (5.5)      12.7      (9.1)     27   55    6
      Local              13.9      (—)        20.2      (5.7)      0   26    6
FMM   N&P                 0.1      (0.8)       0.3      (1.4)     97   73    6
      TDNN                6.8      (1.7)      16.2      (2.9)      0   73    6
      Gamma               9.0      (2.9)      14.9      (2.8)      0   73    6
      First Order         4.8      (3.0)      16.0      (6.5)      1   48    6
      High Order          1.2      (1.7)      25.1      (5.1)     31   (—)   6
      Bilinear            2.6      (—)        20.3      (7.2)     21   78    6
      Quadratic          12.6     (17.3)      26.1     (12.8)     13  216    6
      Multilayer         38.1     (12.6)      42.8      (9.2)      0   76    6
      Elman              12.8      (—)        27.6     (10.7)      8   55    6
      Local              15.3      (3.8)      22.2      (—)        0   26    6

(Entries marked (—) were illegible in the source.)
† The TDNN and Gamma network both had 8 input taps and 4 hidden layer nodes. For
the Gamma network, μ = 0.3 (RND) and μ = 0.7 (FMM). Narendra and Parthasarathy's
network had 4 input and output taps and 5 hidden layer nodes. The High-order network
used a "one-hot" encoding of the input values [5]. The multilayer network had 4 hidden
and output layer nodes. The locally recurrent net had 4 hidden layer nodes with 5 input
and 3 output taps, and one output node with 3 input and output taps.
†† The TDNN, Gamma network, and Narendra and Parthasarathy's network all had 8
hidden layer nodes. For the Gamma network, μ = 0.3 (RND) and μ = 0.7 (FMM). The
High-order network again used a "one-hot" encoding of the input values. The multilayer
network had 5 hidden and 6 output layer nodes. The locally recurrent net had 3 hidden
layer nodes and one output layer node, all with only one input and output tap.
machine when the number of states is not constrained. It is not surprising that
the former network did so well since the sequential machine implementation of
a finite memory machine is similar to this architecture [6]. However, the result
for the locally recurrent network was unexpected.
All of the recurrent networks do better than the TDNN on the small random
machine. However, on the finite memory machine the TDNN does surprisingly
well, perhaps because its structure is similiar to Narendra and Parthasarathy's
network which was well suited for this problem.
Gradient-based learning algorithms are not adequate for many of these architectures. In many cases a network is capable of representing a solution to a
problem that the algorithm was not able to find. This seems particularly true
for the Multilayer network.
Not surprisingly, an increase in the number of weights typically leads to overtraining. Although, the quadratic network, which has 216 weights, can consistently find solutions for the random machine that generalize well even though
there are only 127 training samples.
Although the performance on the training set is not always a good indicator of
generalization performance on the testing set, we find that if a network is able
to frequently find perfect solutions for the training data, then it also does well
on the testing data.
3.3 Nonlinear system identification
In this problem, we train the network to learn the dynamics of the following set of
equations proposed in [8]
$$ z_1(k+1) = \frac{z_1(k) + 2\, z_2(k)}{1 + z_2^2(k)} + u(k), \qquad
z_2(k+1) = \frac{z_1(k)\, z_2(k)}{1 + z_2^2(k)} + u(k), \qquad
y(k) = z_1(k) + z_2(k), $$
based on observations of u( k) and y( k) alone.
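Training sequences can be generated by simulating the plant directly. The sketch below follows the equations as reconstructed above, so it inherits any uncertainty in that reconstruction; the input range and sequence length match the values quoted in the following paragraphs.

```python
import numpy as np

def simulate_plant(u):
    """Roll the two-state plant forward from z1 = z2 = 0 for an input sequence u."""
    z1, z2 = 0.0, 0.0
    y = np.empty(len(u))
    for k, u_k in enumerate(u):
        y[k] = z1 + z2
        denom = 1.0 + z2 ** 2
        z1, z2 = (z1 + 2.0 * z2) / denom + u_k, (z1 * z2) / denom + u_k
    return y

rng = np.random.default_rng(0)
train = [(u, simulate_plant(u))
         for u in rng.uniform(-2.0, 2.0, size=(100, 50))]   # 100 sequences of length 50
```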
The same networks that were used for the finite state machine problems were used
here, except that the output node was changed to be linear instead of sigmoidal
to allow the network to have an appropriate dynamic range. We found that this
caused some stability problems in the quadratic and locally recurrent networks. For
the fixed number of weights comparison, we added an extra node to the quadratic
network, and dropped any second order terms involving the fed back output. This
gave a network with 64 weights and 4 states. For the fixed state comparison,
dropping the second order terms gave a network with 174 weights. The locally
recurrent network presented stability problems only for the fixed number of weights
comparison. Here, we used a network that had 6 hidden layer nodes and one output
node with 2 taps on the inputs and outputs each, giving a network with 57 weights
and 16 states. In the Gamma network a value of l' 0.8 gave the best results.
=
The networks were trained with 100 uniform random noise sequences of length 50.
Each experiment used a different randomly generated training set. The noise was
Table 3: Normalized mean squared error on a sinusoidal test signal for the nonlinear
system identification experiment.
    Architecture    Fixed # weights    Fixed # states
    N&P                  0.101              0.067
    TDNN                 0.160              0.165
    Gamma                0.157              0.151
    First Order          0.105              0.105
    High Order           1.034              1.050
    Bilinear             0.118              0.111
    Quadratic            0.108              0.096
    Multilayer           0.096              0.084
    Elman                0.115              0.115
    Local                0.117              0.123
uniformly distributed in the range [-2.0,2.0], and each sequence started with an
initial value of Xl(O) = X2(0) = O. The networks were tested on the response to
a sine wave of frequency 0.04 radians/second. This is an interesting test signal
because it is fundamentally different than the training data.
Fifty runs were performed for each network. BPTT was used for 500 epochs with a
learning rate of 0.002. The weights of all networks were initialized to random values
uniformly distributed in the range [-0.1,0.1].
Table 3 shows the normalized mean squared error averaged over the 50 runs on the
testing set. From these results we draw the following conclusions.
The high order network could not seem to match the dynamic range of its output
to the target, as a result it performed much worse than the other networks. It is
clear that there is benefit to adding first order terms since the bilinear network
performed so much better.
Aside from the high order network, all of the other recurrent networks performed
better than the TDNN, although in most cases not significantly better.
The multilayer network performed exceptionally well on this problem, unlike the
finite state machine experiments. We speculate that the existence of target output at every point along the sequence (unlike the finite state machine problems)
is important for the multilayer network to be successful.
Narendra and Parthasarathy's architecture did exceptionally well, even though
it is not clear that its structure is well matched to the problem.
4 Conclusions
We have reviewed many discrete-time recurrent neural network architectures and
compared them on two different problem domains, although we make no claim that
any of these results will necessarily extend to other problems.
Narendra and Parthasarathy's model performed exceptionally well on the problems
we explored. In general, single layer networks did fairly well, however it is important
to include terms besides simple state/input products for nonlinear system identification. All of the recurrent networks usually did better than the TDNN except
704
Bill G. Home, C. Lee Giles
on the finite memory machine problem. In these experiments, the use of averaging
filters as a substitute for taps in the TDNN did not seem to offer any distinct advantages in performance, although better results might be obtained if the value of
J.I. is adapted.
We found that the relative comparison of the networks did not significantly change
whether or not the number of weights or states were held constant. In fact, holding
one of these values constant meant that in some networks the other value varied
wildly, yet there appeared to be little correlation with generalization.
Finally, it is interesting to note that though some are much better than others,
many of these networks are capable of providing adequate solutions to two seemingly
disparate problems.
Acknowledgements
We would like to thank Leon Personnaz and Isabelle Rivals for suggesting we perform the experiments with a fixed number of states.
References
[1] A.D. Back and A.C. Tsoi. FIR and IIR synapses, a new neural network architecture for time series modeling. Neural Computation, 3(3):375-385, 1991.
[2] B. de Vries and J .C. Principe. The gamma model: A new neural model for
temporal processing. Neural Networks, 5:565-576, 1992.
[3] J .L. Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.
[4] P. Frasconi, M. Gori, and G. Soda. Local feedback multilayered networks.
Neural Computation, 4:120-130, 1992.
[5] C.L. Giles, C .B. Miller, et al. Learning and extracting finite state automata
with second-order recurrent neural networks. Neural Computation, 4:393-405,
1992.
[6] Z. Kohavi. Switching and finite automata theory. McGraw-Hill, NY, 1978.
[7] K.J. Lang, A.H. Waibel, and G.E . Hinton. A time-delay neural network architecture for isolated word recognition. Neural Networks, 3:23-44, 1990.
[8] K.S. Narendra. Adaptive control of dynamical systems using neural networks.
In Handbook of Intelligent Control, pages 141-183. Van Nostrand Reinhold,
NY, 1992.
[9] K.S. Narendra and K. Parthasarathy. Identification and control of dynamical
systems using neural networks. IEEE Trans. on Neural Networks, 1:4-27, 1990.
[10] P. Poddar and K.P. Unnikrishnan. Non-linear prediction of speech signals
using memory neuron networks. In Proc. 1991 IEEE Work. Neural Networks
for Sig. Proc., pages 1-10. IEEE Press, 1991.
[11] A.J. Robinson and F. Fallside. Static and dynamic error propagation networks
with application to speech coding. In NIPS, pages 632-641, NY, 1988. AlP.
[12] R.L . Watrous and G.M. Kuhn . Induction of finite-state automata using
second-order recurrent networks. In NIPS4, pages 309-316, 1992.
| 1009 |@word trial:1 seems:2 bptt:2 simulation:2 tried:1 initial:2 series:1 past:3 current:1 com:1 z2:4 surprising:1 lang:1 yet:1 designed:2 aside:1 alone:3 beginning:1 ith:1 node:25 sigmoidal:1 along:1 tdl:3 roughly:1 elman:9 fmm:4 frequently:1 little:1 window:3 provided:1 horne:5 matched:1 watrous:1 string:5 finding:1 nj:2 temporal:1 quantitative:1 every:2 zl:4 control:3 appear:1 dropped:1 local:10 switching:1 bilinear:9 encoding:2 ure:1 approximately:3 might:2 chose:1 limited:1 range:5 statistically:1 averaged:1 tsoi:1 testing:6 rnn:1 thought:1 significantly:2 word:1 close:1 mea:3 impossible:1 bill:5 equivalent:1 automaton:3 stability:2 variation:1 target:2 sig:1 tapped:1 recognition:1 particularly:1 std:4 connected:1 ui:2 dynamic:9 trained:1 differently:1 various:4 train:2 distinct:1 whose:1 emerged:1 heuristic:2 widely:1 ability:2 favor:1 statistic:1 seemingly:1 sequence:3 advantage:1 net:3 product:2 relevant:1 poorly:1 achieve:2 perfect:1 help:1 depending:1 recurrent:27 kuhn:1 filter:6 alp:1 argued:1 tthe:1 generalization:5 claim:1 narendra:9 vary:1 estimation:1 proc:2 weighted:3 always:2 avoid:1 unnikrishnan:1 consistently:1 inference:3 typically:1 hidden:10 issue:1 classification:1 denoted:1 constrained:1 special:1 initialize:1 fairly:1 equal:2 never:1 having:1 frasconi:1 represents:3 park:1 others:1 fundamentally:1 intelligent:1 few:1 distinguishes:1 randomly:5 gamma:12 individual:1 replaced:1 connects:1 consisting:2 mlp:4 sixty:1 configure:1 held:1 fsm:2 capable:2 lh:1 divide:2 srn:1 initialized:3 isolated:1 minimal:1 stopped:1 modeling:1 giles:7 uniform:1 delay:4 successful:1 iir:1 fundamental:1 accessible:1 lee:5 together:1 again:1 squared:2 fir:2 worse:1 cognitive:1 suggesting:1 sinusoidal:1 de:1 speculate:1 summarized:1 coding:1 tra:1 caused:1 depends:1 performed:8 try:2 sine:1 wave:1 miller:1 yield:1 identify:1 generalize:1 identification:6 lu:1 overtraining:1 synapsis:1 frequency:1 static:2 radian:1 popular:2 back:3 feed:1 methodology:2 response:1 though:3 wildly:1 arch:1 correlation:1 nonlinear:7 propagation:2 perhaps:2 normalized:2 true:1 former:1 ll:1 recurrence:2 uniquely:1 hill:1 complete:1 tt:1 extend:1 significant:1 isabelle:1 vanilla:2 had:8 zizj:1 own:1 certain:1 nostrand:1 arbitrarily:1 nips4:1 determine:1 signal:3 infer:1 match:1 characterized:1 offer:1 prediction:1 involving:1 fsms:1 multilayer:11 represent:1 kohavi:1 extra:2 fifty:1 umiacs:1 unlike:2 virtually:1 seem:2 extracting:1 feedforward:2 enough:1 independence:1 xj:3 zi:2 fit:1 architecture:23 gave:4 perfectly:1 whether:2 six:3 effort:1 speech:2 adequate:2 clear:2 rival:1 locally:7 percentage:2 broadly:1 discrete:3 shall:4 dropping:1 group:1 four:1 year:1 sum:1 run:5 soda:1 reasonable:1 home:1 draw:2 summarizes:1 layer:17 guaranteed:1 followed:1 quadratic:10 adapted:1 x2:1 archi:1 speed:1 leon:1 waibel:1 across:2 partitioned:1 ofthese:1 restricted:1 resource:1 equation:1 fed:1 adopted:1 apply:1 appropriate:1 alternative:1 existence:1 substitute:1 gori:1 include:1 giving:1 personnaz:1 added:1 md:1 fallside:2 gradient:3 thank:1 maryland:1 seven:1 induction:1 length:2 besides:1 providing:1 holding:1 disparate:1 implementation:1 perform:5 observation:4 neuron:1 finite:17 descent:2 similiar:1 extended:1 hinton:1 varied:1 lb:1 reinhold:1 complement:1 tap:7 learned:1 nee:1 nip:1 robinson:2 trans:1 able:2 usually:1 dynamical:2 appeared:1 including:1 memory:7 hot:2 cascaded:1 indicator:1 representing:1 improve:1 stan:1 started:1 tdnn:14 coupled:1 parthasarathy:9 review:1 epoch:2 acknowledgement:1 relative:1 
interesting:3 sufficient:1 changed:1 surprisingly:2 bias:2 allow:1 perceptron:1 institute:1 distributed:3 grammatical:3 feedback:5 benefit:2 van:1 forward:1 adaptive:2 far:1 observable:2 mcgraw:1 keep:2 logic:1 handbook:1 xi:1 table:6 reviewed:1 learn:2 necessarily:1 domain:1 did:6 multilayered:1 noise:2 fair:1 ny:3 lc:1 inferring:1 archetypical:1 xl:1 explored:1 consist:1 sequential:1 adding:1 nec:1 vries:1 suited:1 simply:1 unexpected:1 ordered:1 rnd:4 exceptionally:3 experimentally:1 change:1 included:1 determined:1 specifically:2 uniformly:3 except:3 averaging:1 called:1 experimental:6 la:1 college:1 principe:1 meant:1 categorize:2 princeton:1 tested:1 |
13 | 101 |
TRAINING MULTILAYER PERCEPTRONS WITH THE
EXTENDED KALMAN ALGORITHM
Sharad Singhal and Lance Wu
Bell Communications Research, Inc.
Morristown, NJ 07960
ABSTRACT
A large fraction of recent work in artificial neural nets uses
multilayer perceptrons trained with the back-propagation
algorithm described by Rumelhart et al. This algorithm
converges slowly for large or complex problems such as
speech recognition, where thousands of iterations may be
needed for convergence even with small data sets. In this
paper, we show that training multilayer perceptrons is an
identification problem for a nonlinear dynamic system which
can be solved using the Extended Kalman Algorithm.
Although computationally complex, the Kalman algorithm
usually converges in a few iterations. We describe the
algorithm and compare it with back-propagation using two-dimensional examples.
INTRODUCTION
Multilayer perceptrons are one of the most popular artificial neural net
structures being used today. In most applications, the "back propagation"
algorithm [Rumelhart et al., 1986] is used to train these networks. Although
this algorithm works well for small nets or simple problems, convergence is
poor if the problem becomes complex or the number of nodes in the network
becomes large [Waibel et al., 1987]. In problems such as speech recognition,
tens of thousands of iterations may be required for convergence even with
relatively small data-sets. Thus there is much interest [Prager and Fallside,
1988; Irie and Miyake, 1988] in other "training algorithms" which can
compute the parameters faster than back-propagation and/or can handle much
more complex problems.
In this paper, we show that training multilayer perceptrons can be viewed as
an identification problem for a nonlinear dynamic system. For linear dynamic
Copyright 1989. Bell Communications Research. Inc.
systems with white input and observation noise, the Kalman algorithm
[Kalman, 1960] is known to be an optimum algorithm. Extended versions of
the Kalman algorithm can be applied to nonlinear dynamic systems by
linearizing the system around the current estimate of the parameters.
Although computationally complex, this algorithm updates parameters
consistent with all previously seen data and usually converges in a few
iterations. In the following sections, we describe how this algorithm can be
applied to multilayer perceptrons and compare its performance with backpropagation using some two-dimensional examples.
THE EXTENDED KALMAN FILTER
In this section we briefly outline the Extended Kalman filter. Mathematical
derivations for the Extended Kalman filter are widely available in the
literature [Anderson and Moore, 1979; Gelb, 1974] and are beyond the scope
of this paper.
Consider a nonlinear finite dimensional discrete time system of the form:
$$ x(n+1) = f_n(x(n)) + g_n(x(n))\, w(n), \qquad d(n) = h_n(x(n)) + v(n). \qquad (1) $$
Here the vector $x(n)$ is the state of the system at time $n$, $w(n)$ is the input,
$d(n)$ is the observation, $v(n)$ is observation noise and $f_n(\cdot)$, $g_n(\cdot)$, and $h_n(\cdot)$
are nonlinear vector functions of the state with the subscript denoting possible
dependence on time. We assume that the initial state, $x(0)$, and the
sequences $\{v(n)\}$ and $\{w(n)\}$ are independent and gaussian with
$$ E[x(0)] = \bar{x}(0), \qquad E\{[x(0)-\bar{x}(0)][x(0)-\bar{x}(0)]^{t}\} = P(0), $$
$$ E[w(n)] = 0, \quad E[w(n)\, w^{t}(l)] = Q(n)\, \delta_{nl}, \qquad
E[v(n)] = 0, \quad E[v(n)\, v^{t}(l)] = R(n)\, \delta_{nl}, \qquad (2) $$
where $\delta_{nl}$ is the Kronecker delta. Our problem is to find an estimate $\hat{x}(n+1)$
of $x(n+1)$ given $d(j)$, $0 \le j \le n$. We denote this estimate by $\hat{x}(n+1\,|\,n)$.
If the nonlinearities in (1) are sufficiently smooth, we can expand them using
Taylor series about the state estimates $\hat{x}(n|n)$ and $\hat{x}(n|n-1)$ to obtain
$$ f_n(x(n)) = f_n(\hat{x}(n|n)) + F(n)[x(n)-\hat{x}(n|n)] + \cdots $$
$$ g_n(x(n)) = g_n(\hat{x}(n|n)) + \cdots = G(n) + \cdots $$
$$ h_n(x(n)) = h_n(\hat{x}(n|n-1)) + H^{t}(n)[x(n)-\hat{x}(n|n-1)] + \cdots \qquad (3) $$
where
$$ G(n) = g_n(\hat{x}(n|n)), \qquad
F(n) = \left.\frac{\partial f_n(x)}{\partial x}\right|_{x=\hat{x}(n|n)}, \qquad
H^{t}(n) = \left.\frac{\partial h_n(x)}{\partial x}\right|_{x=\hat{x}(n|n-1)}, $$
i.e. $G(n)$ is the value of the function $g_n(\cdot)$ at $\hat{x}(n|n)$ and the $ij$th
components of $F(n)$ and $H^{t}(n)$ are the partial derivatives of the $i$th
components of $f_n(\cdot)$ and $h_n(\cdot)$ respectively with respect to the $j$th component
of $x(n)$ at the points indicated. Neglecting higher order terms and assuming
knowledge of $\hat{x}(n|n)$ and $\hat{x}(n|n-1)$, the system in (3) can be approximated
as
$$ x(n+1) = F(n)\, x(n) + G(n)\, w(n) + u(n), \qquad
z(n) = H^{t}(n)\, x(n) + v(n) + y(n), \qquad n > 0, \qquad (4) $$
where
$$ u(n) = f_n(\hat{x}(n|n)) - F(n)\, \hat{x}(n|n), \qquad
y(n) = h_n(\hat{x}(n|n-1)) - H^{t}(n)\, \hat{x}(n|n-1). \qquad (5) $$
It can be shown [Anderson and Moore, 1979] that the desired estimate
$\hat{x}(n+1|n)$ can be obtained by the recursion
$$ \hat{x}(n+1|n) = f_n(\hat{x}(n|n)) \qquad (6) $$
$$ \hat{x}(n|n) = \hat{x}(n|n-1) + K(n)\,[\, d(n) - h_n(\hat{x}(n|n-1)) \,] \qquad (7) $$
$$ K(n) = P(n|n-1)\, H(n)\,[\, R(n) + H^{t}(n)\, P(n|n-1)\, H(n) \,]^{-1} \qquad (8) $$
$$ P(n+1|n) = F(n)\, P(n|n)\, F^{t}(n) + G(n)\, Q(n)\, G^{t}(n) \qquad (9) $$
$$ P(n|n) = P(n|n-1) - K(n)\, H^{t}(n)\, P(n|n-1) \qquad (10) $$
with $P(1|0) = P(0)$. $K(n)$ is known as the Kalman gain. In case of a linear
system, it can be shown that P(n) is the conditional error covariance matrix
associated with the state and the estimate $\hat{x}(n+1|n)$ is optimal in the sense
that it approaches the conditional mean $E[x(n+1) \,|\, d(0) \dots d(n)]$ for large
n . However, for nonlinear systems, the filter is not optimal and the estimates
can only loosely be termed conditional means.
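For reference, one measurement-update/time-update cycle of the recursion (6)-(10) can be written directly as code. The sketch below assumes the caller supplies the model functions and their Jacobians; the convention that `H(x)` returns an M x l Jacobian follows the notation above, and the shapes are otherwise illustrative.

```python
import numpy as np

def ekf_step(x_hat, P, d, f, h, F, H, Q, R, G=None):
    """One cycle of Eqs. (6)-(10): update with observation d, then predict."""
    Hn = H(x_hat)                                   # observation Jacobian at x_hat(n|n-1)
    K = P @ Hn @ np.linalg.inv(R + Hn.T @ P @ Hn)   # Kalman gain, Eq. (8)
    x_upd = x_hat + K @ (d - h(x_hat))              # measurement update, Eq. (7)
    P_upd = P - K @ Hn.T @ P                        # covariance update, Eq. (10)
    Fn = F(x_upd)                                   # state Jacobian at x_hat(n|n)
    Gn = np.eye(len(x_upd)) if G is None else G(x_upd)
    x_pred = f(x_upd)                               # time update, Eq. (6)
    P_pred = Fn @ P_upd @ Fn.T + Gn @ Q @ Gn.T      # Eq. (9)
    return x_pred, P_pred
```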
TRAINING MULTILAYER PERCEPTRONS
The network under consideration is an $L$ layer perceptron¹ with the $i$th input
of the $k$th weight layer labeled as $z_i^{k-1}(n)$, the $j$th output being $z_j^{k}(n)$ and the
weight connecting the $i$th input to the $j$th output being $\theta_{ij}^{k}$. We assume that
the net has $m$ inputs and $l$ outputs. Thresholds are implemented as weights
connected from input nodes² with fixed unit strength inputs. Thus, if there
are $N(k)$ nodes in the $k$th node layer, the total number of weights in the
system is
$$ M = \sum_{k=1}^{L} N(k-1)\,[\, N(k)-1 \,]. \qquad (11) $$
Although the inputs and outputs are dependent on time $n$, for notational
brevity, we will not show this dependence unless explicitly needed.
1. We use the convention that the number of layers is equal to the number of weight layers. Thus
we have $L$ layers of weights labeled $1 \dots L$ and $L+1$ layers of nodes (including the input and
output nodes) labeled $0 \dots L$. We will refer to the $k$th weight layer or the $k$th node layer
unless the context is clear.
2. We adopt the convention that the 1st input node is the threshold, i.e. $\theta_{1j}^{k}$ is the threshold for
the $j$th output node from the $k$th weight layer.
In order to cast the problem in a form for recursive estimation, we let the
weights in the network constitute the state x of the nonlinear system, i.e.
x = [θ^1_{12}, θ^1_{13}, ..., θ^L_{N(L-1),N(L)}]^t.     (12)
The vector x thus consists of all weights arranged in a linear array with
dimension equal to the total number of weights M in the system. The system
model thus is
x(n+1) = x(n),   n > 0,     (13)
d(n) = z^L(n) + v(n) = h_n(x(n), z^0(n)) + v(n),     (14)
where at time n, z^0(n) is the input vector from the training set, d(n) is the
corresponding desired output vector, and z^L(n) is the output vector
produced by the net. The components of h_n(·) define the nonlinear
relationships between the inputs, weights and outputs of the net. If Γ(·) is the
nonlinearity used, then z^L(n) = h_n(x(n), z^0(n)) is given by
z^L(n) = Γ{(Θ^L)^t Γ{(Θ^{L-1})^t ... Γ{(Θ^1)^t z^0(n)} ... }},     (15)
where Γ applies componentwise to vector arguments. Note that the input
vectors appear only implicitly through the observation function h_n(·) in (14).
The initial state (before training) x(0) of the network is defined by populating
the net with gaussian random variables with a N(x̄(0), P(0)) distribution where
x̄(0) and P(0) reflect any a priori knowledge about the weights. In the absence
of any such knowledge, a N(0, (1/ε)I) distribution can be used, where ε is a
small number and I is the identity matrix. For the system in (13) and (14),
the extended Kalman filter recursion simplifies to
x̂(n+1) = x̂(n) + K(n)[d(n) - h_n(x̂(n), z^0(n))]     (16)
K(n) = P(n)H(n)[R(n) + H^t(n)P(n)H(n)]^{-1}     (17)
P(n+1) = P(n) - K(n)H^t(n)P(n)     (18)
where P(n) is the (approximate) conditional error covariance matrix.
Note that (16) is similar to the weight update equation in back-propagation,
with the last term [d(n) - h_n(x̂(n), z^0(n))] being the error at the output layer.
However, unlike the delta rule used in back-propagation, this error is
propagated to the weights through the Kalman gain K(n), which updates each
weight through the entire gradient matrix H(n) and the conditional error
covariance matrix P(n). In this sense, the Kalman algorithm is not a local
training algorithm. However, the inversion required in (17) has dimension
equal to the number of outputs l, not the number of weights M, and thus
does not grow as weights are added to the problem.
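To make Eqs. (16)-(18) concrete, here is a minimal NumPy sketch of one training update for a two-weight-layer tanh perceptron with the threshold convention used above. It is our illustrative reading of the text, not the authors' code: the function names, the flat weight layout and the explicit Jacobian routine are assumptions made here.

import numpy as np

def forward(w, x, n_in, n_hid, n_out):
    """Forward pass, Eq. (15), for a 2-weight-layer tanh perceptron.
    w is the flat weight vector (the state); W1 has shape (n_in+1, n_hid) and
    W2 has shape (n_hid+1, n_out), the extra row holding the threshold weights."""
    W1 = w[:(n_in + 1) * n_hid].reshape(n_in + 1, n_hid)
    W2 = w[(n_in + 1) * n_hid:].reshape(n_hid + 1, n_out)
    z0 = np.concatenate(([1.0], x))            # fixed unit-strength threshold input
    z1 = np.tanh(W1.T @ z0)
    z1a = np.concatenate(([1.0], z1))
    z2 = np.tanh(W2.T @ z1a)
    return z2, (z0, z1, z1a, W1, W2)

def jacobian(w, x, n_in, n_hid, n_out):
    """H(n): M x l matrix of partial derivatives of each output w.r.t. each weight."""
    z2, (z0, z1, z1a, W1, W2) = forward(w, x, n_in, n_hid, n_out)
    H = np.zeros((w.size, n_out))
    d2 = 1.0 - z2 ** 2                         # tanh' at the output layer
    for j in range(n_out):
        gW2 = np.zeros_like(W2)
        gW2[:, j] = d2[j] * z1a                # d z2_j / d W2[:, j]
        gW1 = np.zeros_like(W1)
        for u in range(n_hid):                 # chain rule through hidden unit u
            gW1[:, u] = d2[j] * W2[u + 1, j] * (1.0 - z1[u] ** 2) * z0
        H[:, j] = np.concatenate([gW1.ravel(), gW2.ravel()])
    return z2, H

def ekf_weight_update(w, P, x, d, R, n_in, n_hid, n_out):
    """One extended Kalman update of the weights, Eqs. (16)-(18)."""
    z2, H = jacobian(w, x, n_in, n_hid, n_out)
    K = P @ H @ np.linalg.inv(R + H.T @ P @ H)   # Eq. (17): an l x l inversion
    w = w + K @ (d - z2)                         # Eq. (16)
    P = P - K @ H.T @ P                          # Eq. (18)
    return w, P

Consistent with the text, one would initialize P as (1/ε)I, draw the initial weights from N(0, P(0)), and, for the experiments below, anneal R = I·e^{-k/50} across sweeps k.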
EXAMPLES AND RESULTS
To evaluate the output and the convergence properties of the extended
Kalman algorithm, we constructed mappings using two-dimensional inputs
with two or four outputs as shown in Fig. 1. Limiting the input vector to 2
dimensions allows us to visualize the decision regions obtained by the net and
to examine the outputs of any node in the net in a meaningful way. The x- and
y-axes in Fig. 1 represent the two inputs, with the origin located at the
center of the figures. The numbers in the figures represent the different
output classes.
Figure 1. Output decision regions for two problems: (a) REGIONS, (b) XOR.
The training set for each example consisted of 1000 random vectors uniformly
filling the region . The hyperbolic tangent nonlinearity was used as the
nonlinear element in the networks. The output corresponding to a class was
set to 0.9 when the input vector belonged to that class, and to -0.9 otherwise.
During training, the weights were adjusted after each data vector was
presented. Up to 2000 sweeps through the input data were used with the
stopping criteria described below to examine the convergence properties. The
order in which data vectors were presented was randomized for each sweep
through the data. In case of back-propagation, a convergence constant of 0.1
was used with no "momentum" factor. In the Kalman algorithm R was set to
I · e^{-k/50}, where k was the iteration number through the data. Within each
iteration, R was held constant.
The Stopping Criteria
Training was considered complete if any one of the following conditions was satisfied:
a. 2000 sweeps through the input data were used,
b. the RMS (root mean squared) error at the output averaged over all
training data during a sweep fell below a threshold ε_1, or
c. the error reduction δ after the i th sweep through the data fell below a
threshold ε_2, where δ_i = βδ_{i-1} + (1 - β)|e_i - e_{i-1}|. Here β is some
positive constant less than unity, and e_i is the error defined in b.
In our simulations we set β = 0.97, ε_1 = 10^{-2} and ε_2 = 10^{-5}.
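A small Python sketch of these stopping tests (the helper name and the handling of the first sweep are our assumptions; the thresholds are the ones quoted above):

def training_complete(errors, delta_prev, beta=0.97, eps1=1e-2, eps2=1e-5,
                      max_sweeps=2000):
    """Evaluate stopping conditions (a)-(c) after each sweep.

    errors     : per-sweep RMS errors e_1 ... e_i observed so far
    delta_prev : smoothed error reduction delta_{i-1} from the previous sweep
    Returns (stop, delta_i).
    """
    i = len(errors)
    if i >= max_sweeps:                      # condition (a)
        return True, delta_prev
    if errors[-1] < eps1:                    # condition (b)
        return True, delta_prev
    if i < 2:                                # need two sweeps before testing (c)
        return False, delta_prev
    delta = beta * delta_prev + (1.0 - beta) * abs(errors[-1] - errors[-2])
    return delta < eps2, delta               # condition (c)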
Example 1 - Meshed, Disconnected Regions:
Figure l(a) shows the mapping with 2 disconnected, meshed regions
surrounded by two regions that fill up the space. We used 3-layer perceptrons
with 10 nodes in each hidden layer for this problem. Figure 2 shows the RMS error
obtained during training for the Kalman algorithm and back-propagation
averaged over 10 different initial conditions. The number of sweeps through
the data (x-axis) are plotted on a logarithmic scale to highlight the initial
reduction for the Kalman algorithm. Typical solutions obtained by the
algorithms at termination are shown in Fig. 3. It can be seen that the Kalman
algorithm converges in fewer iterations than back-propagation and obtains
better solutions.
Figure 2. Average output error during training for the Regions problem using the Kalman algorithm and backprop (average RMS error vs. number of iterations, plotted on a logarithmic scale).
Figure 3. Typical solutions for the Regions problem using (a) the Kalman algorithm and (b) backprop.
Example 2 - 2 Input XOR:
Figure 1(b) shows a generalized 2-input XOR with the first and third
quadrants forming region 1 and the second and fourth quadrants forming
region 2. We attempted the problem with two layer networks containing 2-4
nodes in the hidden layer. Figure 4 shows the results of training averaged
over 10 different randomly chosen initial conditions. As the number of nodes
in the hidden layer is increased, the net converges to smaller error values.
When we examined the output decision regions, we found that none of the nets
attempted with back-propagation reached the desired solution. The Kalman
algorithm was also unable to find the desired solution with 2 hidden nodes in
the network. However, it reached the desired solution with 6 out of 10 initial
conditions with 3 hidden nodes in the network and 9 out of 10 initial
conditions with 4 hidden nodes. Typical solutions reached by the two
algorithms are shown in Fig. 5. In all cases, the Kalman algorithm converged
in fewer iterations and in all but one case, the final average output error was
smaller with the Kalman algorithm.
Figure 4. Average output error during training for the XOR problem using the Kalman algorithm and backprop (average RMS error vs. number of iterations; Kalman curves shown for 3 and 4 hidden nodes).
CONCLUSIONS
In this paper, we showed that training feed-forward nets can be viewed as a
system identification problem for a nonlinear dynamic system. For linear
dynamic systems, the Kalman filter is known to produce an optimal estimator.
Extended versions of the Kalman algorithm can be used to train feed-forward
networks. We examined the performance of the Kalman algorithm using
artificially constructed examples with two inputs and found that the algorithm
typically converges in a few iterations. We also used back-propagation on the
same examples and found that invariably, the Kalman algorithm converged in
fewer iterations. For the XOR problem, back-propagation failed to converge on any of the cases considered, while the Kalman algorithm was able to find solutions with the same network configurations.
Figure 5. Typical solutions for the XOR problem using (a) the Kalman algorithm and (b) backprop.
References
[1] B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice Hall, 1979.
[2] A. Gelb, Ed., Applied Optimal Estimation, MIT Press, 1974.
[3] B. Irie and S. Miyake, "Capabilities of Three-layered Perceptrons," Proceedings of the IEEE International Conference on Neural Networks, San Diego, June 1988, Vol. I, pp. 641-648.
[4] R. E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," J. Basic Eng., Trans. ASME, Series D, Vol. 82, No. 1, 1960, pp. 35-45.
[5] R. W. Prager and F. Fallside, "The Modified Kanerva Model for Automatic Speech Recognition," in 1988 IEEE Workshop on Speech Recognition, Arden House, Harriman NY, May 31-June 3, 1988.
[6] D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning Internal Representations by Error Propagation," in D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press, 1986.
[7] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano and K. Lang, "Phoneme Recognition Using Time-Delay Neural Networks," ATR Internal Report TR-I-0006, October 30, 1987.
14 | 1,010 | Interference in Learning Internal
Models of Inverse Dynamics in Humans
Reza Shadmehr*, Tom Brashers-Krug, and Ferdinando Mussa-Ivaldi†
Dept. of Brain and Cognitive Sciences
M. I. T., Cambridge, MA 02139
reza@bme.jhu.edu, tbk@ai.mit.edu, sandro@parker.physio.nwu.edu
Abstract
Experiments were performed to reveal some of the computational
properties of the human motor memory system. We show that
as humans practice reaching movements while interacting with a
novel mechanical environment, they learn an internal model of the
inverse dynamics of that environment. Subjects show recall of this
model at testing sessions 24 hours after the initial practice. The
representation of the internal model in memory is such that there
is interference when there is an attempt to learn a new inverse
dynamics map immediately after an anticorrelated mapping was
learned. We suggest that this interference is an indication that
the same computational elements used to encode the first inverse
dynamics map are being used to learn the second mapping. We
predict that this leads to a forgetting of the initially learned skill.
1
Introduction
In tasks where we use our hands to interact with a tool, our motor system develops
a model of the dynamics of that tool and uses this model to control the coupled
dynamics of our arm and the tool (Shadmehr and Mussa-Ivaldi 1994). In physical
systems theory, the tool is a mechanical analogue of an admittance, mapping a force
as input onto a change in state as output (Hogan 1985). In this framework, the
*Currently at Dept. Biomedical Eng, Johns Hopkins Univ, Baltimore, MD 21205
†Currently at Dept. Physiology, Northwestern Univ Med Sch (M211), Chicago, IL 60611
Figure 1: The experimental setup. The robot is
a very low friction planar mechanism powered by
two torque motors that act on the shoulder and
elbow joints. Subject grips the end-point of the
robot which houses a force transducer and moves
the hand to a series of targets displayed on a monitor facing the subject (not shown) . The function of
the robot is to produce novel force fields that the
subject learns to compensate for during reaching
movements.
model developed by the motor control system during the learning process needs to
approximate an inverse of this mapping . This inverse dynamics map is called an
internal model of the tool.
We have been interested in understanding the representations that the nervous
system uses in learning and storing such internal models. In a previous work we
measured the way a learned internal model extrapolated beyond the training data
(Shadmehr and Mussa-Ivaldi 1994). The results suggested that the coordinate system of the learned map was in intrinsic (e.g., joint or muscles based) rather than in
extrinsic (e.g., hand based) coordinates. Here we present a mathematical technique
to estimate the input-output properties of the learned map. We then explore the
issue of how the motor memory might store two maps which have similar inputs
but different outputs.
2
Quantifying the internal model
In our paradigm, subjects learn to control an artificial tool: the tool is a robot
manipulandum which has torque motors that can be programmed to produce a
variety of dynamical environments (Fig. 1). The task for the subject is to grasp
the end-effector and make point to point reaching movements to a series of targets.
The environments are represented as force fields acting on the subject's hand, and a
typical case is shown in Fig. 2A. A typical experiment begins with the robot motors
turned off. In this "null" environment subjects move their hand to the targets in a
smooth, straight line fashion. When the force field is introduced, the dynamics of the
task change and the hand trajectory is significantly altered (Shadmehr and MussaIvaldi 1994). With practice (typically hundreds of movements), hand trajectories
return to their straight line path. We have suggested that practice leads to formation
of an internal model which functions as an inverse dynamics mapping, i.e., from a
desired trajectory (presumably in terms of hand position and velocity, Wolpert et
al. 1995) to a prediction of forces that will be encountered along the trajectory. We
designed a method to quantify these forces and estimate the output properties of
the internal model.
If we position a force transducer at the interaction point between the robot and the
subject, we can write the dynamics of the four link system in Fig. 1 in terms of the
following coupled vector differential equations:
I_r(p) p̈ + G_r(p, ṗ) ṗ = E(p, ṗ) + J_r^T F     (1)
I_s(q) q̈ + G_s(q, q̇) q̇ = C(q, q̇, q*(t)) - J_s^T F     (2)
where I and G are inertial and Coriolis/centripetal matrix functions, E is the
torque field produced by the robot's motors, i.e., the environment, F is the force
measured at the handle of the robot, C is the controller implemented by the motor
system of the subject, q*(t) is the reference trajectory planned by the motor system
of the subject, J is the Jacobian matrix describing the differential transformation
of coordinates from endpoint to joints, q and p are joint positions of the subject
and the robot, and the subscripts s and r denote subject or robot matrices.
In the null environment, i.e., E = 0 in Eq. (1), a solution to this coupled system
is q = q*(t) and the arm follows the reference trajectory (typically a straight hand
path with a Gaussian tangential velocity profile). Let us name the controller which
accomplishes this task C = C_0 in Eq. (2). When the robot motors are producing a
force field E ≠ 0, it can be shown that the solution is q = q*(t) if and only if the
new controller in Eq. (2) is C = C_1 = C_0 + J_s^T J_r^{-T} E. The internal model composed
by the subject is C_1 - C_0, i.e., the change in the controller after some training
period. We can estimate this quantity by measuring the change in the interaction
force along a given trajectory before and after training. If we call these functions
F_0 and F_1, then we have:
F_0(q, q̇, q̈, q*(t)) = J_s^{-T}(C_0 - I_s q̈ - G_s q̇)     (3)
F_1(q, q̇, q̈, q*(t)) = J_s^{-T}(C_0 + J_s^T J_r^{-T} E - I_s q̈ - G_s q̇)     (4)
The functions F_0 and F_1 are impedances of the subject's arm as viewed from the
interaction port. Therefore, by approximating the difference F_1 - F_0, we have an
estimate of the change in the controller. The crucial assumption is that the reference
trajectory q*(t) does not change during the training process.
In order to measure F_0, we had the subjects make movements in a series of environments. The environments were unpredictable (no opportunity to learn) and
their purpose was to perturb the controller about the reference trajectory so we
could measure F_0 at neighboring states. Next, the environment in Fig. 2A was
presented and the subject given a practice period to adapt. After training, F_1 was
estimated in a similar fashion as F_0. The difference between these two functions was
calculated along all measured arm trajectories and the results were projected onto
the hand velocity space. Due to computer limitations, only 9 trajectories for each
target direction were used for this approximation. The resulting pattern of forces
were interpolated via a sum of Gaussian radial basis functions, and are shown in
Fig. 2B. This is the change in the impedance of the arm and estimates the input-output property of the internal model that was learned by this subject. We found
that this subject, who provided some of the best results in the test group, learned to
change the effective impedance of his arm in a way that approximated the imposed
force field. This would be a sufficient condition for the arm to compensate for the
force field and allow the hand to follow the desired trajectory. An alternate strategy
might have been to simply co-contract arm muscles: this would lead to an increased
stiffness and an ability to resist arbitrary environmental forces. Figure 2B suggests
that practice led to formation of an internal model specific to the dynamics of the
imposed force field.
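The quantification step described above can be sketched in a few lines of NumPy. This is only an illustrative reconstruction under stated assumptions: the paper does not give the fitting details, so the least-squares solution, the isotropic kernel width and all function names below are ours. The idea is to take the difference between interaction forces measured along matched trajectories before (F_0) and after (F_1) training, project the samples onto hand-velocity space, and interpolate the difference with a sum of Gaussian radial basis functions.

import numpy as np

def fit_rbf_force_field(velocities, dF, centers, width):
    """Fit the force-difference samples dF = F1 - F0 (n x 2, in hand space)
    with a sum of Gaussian RBFs placed at `centers` in velocity space."""
    sq_dist = np.sum((velocities[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    Phi = np.exp(-sq_dist / (2.0 * width ** 2))          # (n_samples, n_centers)
    Wx, *_ = np.linalg.lstsq(Phi, dF[:, 0], rcond=None)  # x-component weights
    Wy, *_ = np.linalg.lstsq(Phi, dF[:, 1], rcond=None)  # y-component weights
    return Wx, Wy

def internal_model(v, centers, width, Wx, Wy):
    """Interpolated estimate of the change in arm impedance at hand velocity v."""
    phi = np.exp(-np.sum((v - centers) ** 2, axis=1) / (2.0 * width ** 2))
    return np.array([phi @ Wx, phi @ Wy])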
Figure 2: Quantification of the change in impedance of a subject's arm after learning a
force field. A: The force field produced by the robot during the training period. B: The
change in the subject's arm impedance after the training period, i.e., the internal model.
2.1 Formation of the internal model in long-term memory
Here we wished to determine whether subjects retained the internal model in longterm motor memory. We tested 16 naive subjects. They were instructed to move
the handle of the robot to a sequence of targets in the null environment. Each
movement was to last 500 ? 50 msec . They were given visual feedback on the
timing of each movement. After 600 movements, subjects were able to consistently
reach the targets in proper time. These trajectories constituted a baseline set.
Subjects returned the next day and were re-familiarized with the timing of the
task. At this point a force field was introduced and subjects attempted to perform the exact task as before: get to the target in proper time. A sequence of 600
targets was given. When first introduced, the forces perturbed the subject's trajectories, causing them to deviate from the straight line path. As noted in previous
work (Shadmehr and Mussa-Ivaldi 1994), these deviations decreased with practice.
Eventually, subject's trajectories in the presence of the force field came to resemble
those of the baseline, when no forces were present. The convergence of the trajectories to those performed at baseline is shown for all 16 subjects in Fig. 3A. The
timing performance of the subjects while moving in the field is shown in Fig. 3B.
In order to determine whether subjects retained the internal model of the force
field in long-term memory, we had them return the next day (24 to 30 hours later)
and once again be tested on a force field. In half of the subjects, the force field
presented was one that they had trained on in the previous day (call this field 1).
In the other half, it was a force field which was novel to the subjects, field 2. Field
2 had a correlation value of -1 with respect to field 1 (i.e., each force vector in
field 2 was a 180 degree rotation of the respective vector in field 1). Subjects who
were tested on a field that they had trained on before performed significantly better
(p < 0.01) than their initial performance (Fig. 4A), signifying retention. However,
those who were given a field that was novel performed at naive levels (Fig. 4B).
This result suggested that the internal model formed after practice in a given field
was (1) specific to that field: performance on the untrained field was no better than
Figure 3: Measures of performance during the training period (600 movements) for 16
naive subjects. Short breaks (2 minutes) were given at intervals of 200 movements. A:
Mean ± standard error (SE) of the correlation coefficient between hand trajectory in a
null environment (called baseline trajectories, measured before exposure to the field), and
trajectory in the force field. Hand trajectories in the field converge to that in the null field
(i.e., become straight, with a bell shaped velocity profile). B: Mean ± SE of the movement
period to reach a target. The goal was to reach the target in 0.5 ± 0.05 seconds.
Figure 4: Subjects learned an internal model specific to the field and retained it in long-term memory. A: Mean ± standard error (SE) of the movement period in the force field
(called field 1) during initial practice session (upper trace) and during a second session
24-30 hours after the initial practice (lower trace). B: Movement period in a different
group of subjects during initial training (dark line) in field 1 and test in an anti-correlated
field (called field 2) 24-30 hours later (gray line).
performance recorded in a separate set of naive subjects who were given than field
in their initial training day; and (2) could be retained, as evidenced by performance
in the following day.
2.2 Interference effects of the motor memory
In our experiment the "tool" that subjects learn to control is rather unusual , nevertheless, subjects learn its inverse dynamics and the memory is used to enhance
performance 24 hours after its initial acquisition. We next asked how formation
of this memory affected formation of subsequent internal models. In the previous
section we showed that when a subject returns a day after the initial training, although the memory of the learned internal model is present , there is no interference
(or decrement in performance) in learning a new, anti-correlated field . Here we
show that when this temporal distance is significantly reduced, the just learned
Figure 5: Interference in sequential learning of two uncorrelated force fields: The lower
trace is the mean and standard error of the movement periods of a naive group of subjects
during initial practice in a force field (called field 1). The upper trace is the movement period of another group of naive subjects in field 1, 5 minutes after practicing 400 movements
in field 2, which was anti-correlated with field 1.
model interferes with learning of a new field.
Seven new subjects were recruited. They learned the timing of the task in a null
environment and in the following day were given 400 targets in a force field (called
field 1). They showed improvement in performance as before. After a short break
(5-10 minutes in which they walked about the lab or read a magazine), they were
given a new field: this field was called field 2 and was anti-correlated with respect
to field 1. We found a significant reduction (p < 0.01) in their ability to learn field
2 (Fig. 5) when compared to a subject group which had not initially trained in field
1. In other words, performance in field 2 shortly after having learned field 1 was
significantly worse than that of naives. Subjects seemed surprised by their inability
to master the task in field 2. In order to demonstrate that field 2 in isolation was
no more difficult to learn than field 1, we had a new set of subjects (n = 5) initially
learn field 2, then field 1. Now we found a very large decrement in the ability to learn
field 1.
One way to explain the decrement in performance shown in Fig. 5 is to assume that
the same "computational elements" that represented the internal model of the first
field were being used to learn the second field.! In other words, when the second field
was given, because the forces were opposite to the first field, the internal model was
badly biased against representing this second field: muscle torque patterns predicted
for movement to a given target were in the wrong direction.
In the connectionist literature this is a phenomenon called temporal interference
(Sutton 1986). As a network is trained, some of its elements acquire large weights
and begin to dominate the input-output transformation. When a second task is
presented with a new and conflicting map (mapping similar inputs to different outputs), there are large errors and the network performs more poorly than a "naive"
network. As the network attempts to learn the new task, the errors are fed to each
element (i.e., pre-synaptic input). This causes most activity in those elements that
1 Examples of computational elements used by the nervous system to model inverse
dynamics of a mechanical system were found by Shidara et al. (1993), where it was shown
that the firing patterns of a set of Purkinje cells in the cerebellum could be reconstructed
by an inverse dynamic representation of the eye.
Interference in Learning Internal Models of Inverse Dynamics in Humans
1123
had the largest synaptic weight. If the learning algorithm is Hebbian , i.e., weights
change in proportion to co-activation of the pre- and the post-synaptic element,
then the largest weights are changed the most , effectively causing a loss of what
was learned in the first task . Therefore, from a computational stand point, we
would expect that the internal model of field 1 as learned by our subjects should be
destroyed by learning of field 2. Evidence for "catastrophic interference" in these
subjects is presented elsewhere in this volume (Brashers-Krug et al. 1995).
The phenomenon of interference in sequential learning of two stimulus-response
maps has been termed proactive interference or negative transfer in the psychological
literature. In humans, interference has been observed extensively in verbal tasks
involving short-term declarative memory (e.g., tasks involving recognition of words
in a list or pairing of non-sense syllables, Bruce 1933, Melton and Irwin 1940,
Sears and Hovland 1941). It has been found that interference is a function of the
similarity of the stimulus-response maps in the two tasks: if the stimulus in the new
learning task requires a response very different than what was recently learned, then
there is significant interference. Interestingly, it has been shown that the amount of
interference decreases with increased learning (or practice) on the first map (Siipola
and Israel 1933).
In tasks involving procedural memory (which includes motor learning, Squire 1986),
the question of interference has been controversial: Although Lewis et al. (1949)
reported interference in sequential learning of two motor tasks which involved moving levers in response to a set of lights, it has been suggested that the interference
that they observed might have been due to cognitive confusion (Schmidt 1988).
In another study, Ross (1974) reported little interference in subjects learning her
motor tasks.
We designed a task that had little or no cognitive components. We found that
shortly after the acquisition of a motor memory, that memory strongly interfered
with learning of a new, anti-correlated input-output mapping. However, this interference was not significant 24 hours after the memory was initially acquired . One
possible explanation is that the initial learning has taken place in a temporary and
vulnerable memory system. With time and/or practice, the information in this
memory had transferred to long-term storage (Brashers-Krug et al. 1995) .
Brain imaging studies during motor learning suggest that as subjects become more
proficient in a motor task, neural fields in the motor cortex display increases in
activity (Grafton et al. 1992) and new fields are recruited (Kawashima et al. 1994) .
It has been reported that when a subject attempts to learn two new motor tasks
successively (in this case the tasks consisted of two sequences of finger movements),
the neural activity in the motor cortex is lower for the second task , even when the
order ofthe tasks is reversed (Jezzard et al. 1994). It remains to be seen whether this
decrement in neural activity in the motor cortex is correlated with the interference
observed when subjects attempt to learn two different input-output mappings in
succession (Gandolfo et al. 1994) .
References
Brashers-Krug T , Shadmehr R, Todorov E (1995) Catastrophic interference in human
motor learning. Adv Neural Inform Proc Syst, vol 7, in press.
Bruce RW (1933) Conditions of transfer of training. J Exp Psychol 16:343-361.
French, R. (1992) Semi-distributed Representations and Catastrophic Forgetting in Connectionist Networks, Connection Science 4:365-377.
Grafton ST et al. (1992) Functional anatomy of human procedural learning determined
with regional cerebral blood flow and PET. J Neurosci 12:2542-2548.
Gandolfo F, Shadmehr R, Benda B, Bizzi E (1994) Adaptive behavior ofthe monkey motor
system to virtual environments. Soc Neurosci Abs 20(2):1411.
Hogan N (1985) Impedance control: An approach for manipulation: Theory. J Dynam
Sys Meas Cont 107:1-7.
Jezzard P et al. (1994) Practice makes perfect: A functional MRI study oflong term motor
cortex plasticity. 2nd Ann Soc. Magnetic Res., p. 330.
Kawashima R, Roland PE, O'Sullivan BT (1994) Fields in human motor areas involved
in preparation for reaching, actual reaching, and visuomotor learning: A PET study. J
Neurosci 14:3462-3474.
Lewis D, Shephard AH, Adams JA (1949) Evidences of associative interference in psychomotor performance. Science 110:271-273.
Melton AW, Irwin JM (1940) The influence of degree of interpolated learning on retroactive
inhibition and the overt transfer of specific responses. Amer J Psychol 53:173-203.
Ross D (1974) Interference in discrete motor tasks: A test of the theory. PhD dissertation,
Dept. Psychology, Univ. Michigan, Ann Arbor.
Schmidt RA (1988) Motor Control and Learning: A Behavioral Emphasis. Human Kinetics
Books, Champaign IL, pp. 409-411.
Sears RR, Hovland CI (1941) Experiments on motor conflict. J Exp Psychol 28:280-286.
Shadmehr R, Mussa-Ivaldi FA (1994) Adaptive representation of dynamics during learning
of a motor task. J Neuroscience, 14(5):3208- 3224.
Shidara M, Kawano K, Gomi H, Kawato M (1993) Inverse dynamics model eye movement
control by Purkinje cells in the cerebellum. Nature 365:50-52.
Siipola EM, Israel HE (1933) Habit interference as dependent upon stage of training. Amer J Psychol 45:205-227.
Squire LR (1986) Mechanisms of memory. Science 232:1612-1619.
Sutton RS (1986) Two problems with backpropagation and other steepest-descent learning
procedures for networks. Proc 8th Cognitive Sci Soc, pp. 823-831.
Wolpert DM, Ghahramani Z, Jordan MI (1995) Are arm trajectories planned in kinematic or dynamic coordinates? An adaptation study. Exp Brain Res, in press.
15 | 1,011 | Active Learning with Statistical Models
David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan
cohn@psyche.mit.edu, zoubin@psyche.mit.edu, jordan@psyche.mit.edu
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have
been used with feedforward neural networks [MacKay, 1992; Cohn,
1994] . We then show how the same principles may be used to select
data for two alternative, statistically-based learning architectures:
mixtures of Gaussians and locally weighted regression. While the
techniques for neural networks are expensive and approximate, the
techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate.
1 ACTIVE LEARNING - BACKGROUND
An active learning problem is one where the learner has the ability or need to
influence or select its own training data. Many problems of great practical interest
allow active learning, and many even require it.
We consider the problem of actively learning a mapping X → Y based on a set of
training examples {(x_i, y_i)}_{i=1}^m, where x_i ∈ X and y_i ∈ Y. The learner is allowed
to iteratively select new inputs x (possibly from a constrained set), observe the
resulting output y, and incorporate the new examples (x, y) into its training set.
The primary question of active learning is how to choose which x to try next.
There are many heuristics for choosing x based on intuition, including choosing
places where we don't have data, where we perform poorly [Linden and Weber,
1993], where we have low confidence [Thrun and Moller, 1992], where we expect it
to change our model [Cohn et aI, 1990], and where we previously found data that
resulted in learning [Schmidhuber and Storck, 1993].
In this paper we consider how one may select x "optimally" from a statistical
viewpoint. We first review how the statistical approach can be applied to neural
networks, as described in MacKay [1992] and Cohn [1994]. We then consider two
alternative, statistically-based learning architectures: mixtures of Gaussians and
locally weighted regression. While optimal data selection for a neural network is
computationally expensive and approximate, we find that optimal data selection for
the two statistical models is efficient and accurate.
2 ACTIVE LEARNING - A STATISTICAL APPROACH
We denote the learner's output given input x as ŷ(x). The mean squared error of
this output can be expressed as the sum of the learner's bias and variance. The
variance σ_ŷ²(x) indicates the learner's uncertainty in its estimate at x.¹ Our goal
will be to select a new example x̃ such that when the resulting example (x̃, ỹ) is
added to the training set, the integrated variance IV is minimized:
IV = ∫ σ_ŷ² P(x) dx.     (1)
Here, P(x) is the (known) distribution over X. In practice, we will compute a
Monte Carlo approximation of this integral, evaluating σ_ŷ² at a number of random
points drawn according to P(x).
Selecting x̃ so as to minimize IV requires computing σ̃_ŷ², the new variance at x given
(x̃, ỹ). Until we actually commit to an x̃, we do not know what corresponding ỹ we
will see, so the minimization cannot be performed deterministically.² Many learning
architectures, however, provide an estimate of P(ỹ|x̃) based on current data, so we
can use this estimate to compute the expectation of σ̃_ŷ². Selecting x̃ to minimize
the expected integrated variance provides a solid statistical basis for choosing new
examples.
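In code, this criterion reduces to a very small selection loop. The sketch below is generic and hedged: `expected_new_variance` stands for whatever model-specific estimate of the expected new variance at x given a candidate x̃ is available (the neural network, mixture and regression versions are derived in the following sections), and the function names are ours.

import numpy as np

def choose_query(candidates, reference_xs, expected_new_variance):
    """Pick the candidate x_tilde whose expected post-query variance, averaged
    over reference points drawn from P(x), is smallest (a Monte Carlo version
    of minimizing the integrated variance of Eq. (1))."""
    scores = [np.mean([expected_new_variance(x, x_cand) for x in reference_xs])
              for x_cand in candidates]
    return candidates[int(np.argmin(scores))]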
2.1 EXAMPLE: ACTIVE LEARNING WITH A NEURAL NETWORK
In this section we review the use of techniques from Optimal Experiment Design
(OED) to minimize the estimated variance of a neural network [Fedorov, 1972;
MacKay, 1992; Cohn, 1994]. We will assume we have been given a learner ŷ = f_ŵ(·),
a training set {(x_i, y_i)}_{i=1}^m and a parameter vector ŵ that maximizes a likelihood
measure. One such measure is the minimum sum squared residual
S² = (1/m) Σ_{i=1}^m (y_i - ŷ(x_i))².
1. Unless explicitly denoted, ŷ and σ_ŷ² are functions of x. For simplicity, we present our
results in the univariate setting. All results in the paper extend easily to the multivariate case.
2. This contrasts with related work by Plutowski and White [1993], which is concerned
with filtering an existing data set.
The estimated output variance of the network is
σ_ŷ² ≈ S² (∂ŷ(x)/∂w)^T (∂²S²/∂w²)^{-1} (∂ŷ(x)/∂w).
The standard OED approach assumes normality and local linearity. These assumptions allow replacing the distribution P(y|x) by its estimated mean ŷ(x) and
variance S². The expected value of the new variance, σ̃_ŷ², is then:
⟨σ̃_ŷ²⟩ ≈ σ_ŷ² - σ_ŷ²(x, x̃) / (S² + σ_ŷ²(x̃))     (2)
[MacKay, 1992], where we define
σ_ŷ(x, x̃) = S² (∂ŷ(x)/∂w)^T (∂²S²/∂w²)^{-1} (∂ŷ(x̃)/∂w).
For empirical results on the predictive power of Equation 2, see Cohn [1994] .
The advantages of minimizing this criterion are that it is grounded in statistics,
and is optimal given the assumptions. Furthermore, the criterion is continuous
and differentiable. As such, it is applicable in continuous domains with continuous
action spaces, and allows hillclimbing to find the "best" x.
For neural networks, however, this approach has many disadvantages. The criterion
relies on simplifications and strong assumptions which hold only approximately.
Computing the variance estimate requires inversion of a |w| × |w| matrix for each
new example, and incorporating new examples into the network requires expensive
retraining. Paass and Kindermann [1995] discuss an approach which addresses some
of these problems.
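For concreteness, here is one way the expected variance reduction of Equation 2 could be computed for a network; it is a hedged sketch, not the papers' implementation. In particular we approximate the Hessian ∂²S²/∂w² by a Gauss-Newton outer-product sum over the training gradients, which is our assumption rather than something stated in the text.

import numpy as np

def expected_iv_reduction(grads_train, grads_ref, grad_cand, S2):
    """Expected reduction in integrated variance from querying one candidate,
    following Eq. (2), averaged over a Monte Carlo reference sample.

    grads_train : (m, |w|) array of d yhat / d w at the training inputs
    grads_ref   : (r, |w|) array of d yhat / d w at reference points from P(x)
    grad_cand   : (|w|,)  gradient at the candidate query x_tilde
    S2          : current mean squared residual of the fitted network
    """
    m = grads_train.shape[0]
    # Gauss-Newton approximation of the Hessian of S^2 (an assumption here)
    A = (2.0 / m) * grads_train.T @ grads_train
    A_inv = np.linalg.pinv(A)
    var_cand = S2 * grad_cand @ A_inv @ grad_cand      # sigma^2_yhat(x_tilde)
    cov = S2 * grads_ref @ A_inv @ grad_cand           # sigma_yhat(x, x_tilde) per reference x
    # Eq. (2): each reference point's variance drops by cov^2 / (S2 + var_cand)
    return float(np.mean(cov ** 2 / (S2 + var_cand)))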
3 MIXTURES OF GAUSSIANS
The mixture of Gaussians model is gaining popularity among machine learning practitioners [Nowlan, 1991; Specht, 1991; Ghahramani and Jordan, 1994]. It assumes
that the data is produced by a mixture of N Gaussians gi, for i = 1, ... , N. We
can use the EM algorithm [Dempster et aI, 1977] to find the best fit to the data,
after which the conditional expectations of the mixture can be used for function
approximation.
For each Gaussian g_i we will denote the estimated input/output means as μ_{x,i} and
μ_{y,i} and estimated covariances as σ²_{x,i}, σ²_{y,i} and σ_{xy,i}. The conditional variance of
y given x may then be written
σ²_{y|x,i} = σ²_{y,i} - σ²_{xy,i} / σ²_{x,i}.
We will denote as n_i the (possibly fractional) number of training examples for which
g_i takes responsibility.
For an input x, each g_i has conditional expectation ŷ_i and variance σ²_{ŷ,i}:
ŷ_i = μ_{y,i} + (σ_{xy,i} / σ²_{x,i})(x - μ_{x,i}),
σ²_{ŷ,i} = (σ²_{y|x,i} / n_i)(1 + (x - μ_{x,i})² / σ²_{x,i}).
These expectations and variances are mixed according to the prior probability that
g_i has of being responsible for x:
h_i ≡ h_i(x) = P(x|i) / Σ_{j=1}^N P(x|j).
For input x then, the conditional expectation ŷ of the resulting mixture and its
variance may be written:
ŷ = Σ_{i=1}^N h_i ŷ_i,     σ_ŷ² = Σ_{i=1}^N h_i² σ²_{ŷ,i}.
In contrast to the variance estimate computed for a neural network, here σ_ŷ² can be
computed efficiently with no approximations.
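A short sketch of the mixture predictions above. This is our own illustrative code: the dictionary layout and function names are assumptions, and the h_i²-weighted combination of the component variances follows the mixing rule as reconstructed here.

import numpy as np

def gaussian_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def mog_predict(x, comps):
    """Conditional expectation and variance of a fitted univariate mixture.

    Each component dict holds the fitted quantities used in the text:
    mu_x, mu_y, var_x, var_y, cov_xy and n (its fractional sample count).
    """
    px = np.array([gaussian_pdf(x, c["mu_x"], c["var_x"]) for c in comps])
    h = px / px.sum()                        # h_i(x) = P(x|i) / sum_j P(x|j)
    y_i, var_i = [], []
    for c in comps:
        var_y_given_x = c["var_y"] - c["cov_xy"] ** 2 / c["var_x"]
        y_i.append(c["mu_y"] + (c["cov_xy"] / c["var_x"]) * (x - c["mu_x"]))
        var_i.append(var_y_given_x / c["n"]
                     * (1.0 + (x - c["mu_x"]) ** 2 / c["var_x"]))
    y_i, var_i = np.array(y_i), np.array(var_i)
    return float(h @ y_i), float(h ** 2 @ var_i)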
3.1 ACTIVE LEARNING WITH A MIXTURE OF GAUSSIANS
We want to select x̃ to minimize ⟨σ̃_ŷ²⟩. With a mixture of Gaussians, the model's
estimated distribution of ỹ given x̃ is explicit:
P(ỹ|x̃) = Σ_{i=1}^N h_i P(ỹ|x̃, i) = Σ_{i=1}^N h_i N(ŷ_i(x̃), σ²_{y|x,i}(x̃)),
where h_i = h_i(x̃). Given this, calculation of ⟨σ̃_ŷ²⟩ is straightforward: we model the
change in each g_i separately, calculating its expected variance given a new point
sampled from P(ỹ|x̃, i) and weight this change by h_i. The new expectations combine
to form the learner's new expected variance
⟨σ̃_ŷ²⟩ = Σ_{i=1}^N h_i² ⟨σ̃²_{ŷ,i}⟩,     (3)
where the expectation can be computed exactly in closed form.
4 LOCALLY WEIGHTED REGRESSION
We consider here two forms of locally weighted regression (LWR): kernel regression
and the LOESS model [Cleveland et al., 1988]. Kernel regression computes ŷ as an
average of the y_i in the data set, weighted by a kernel centered at x. The LOESS
model performs a linear regression on points in the data set, weighted by a kernel
centered at x. The kernel shape is a design parameter: the original LOESS model
uses a "tricubic" kernel; in our experiments we use the more common Gaussian
h_i(x) ≡ h(x - x_i) = exp(-k(x - x_i)²),
where k is a smoothing constant. For brevity, we will drop the argument x for h_i(x),
and define n = Σ_i h_i. We can then write the estimated means and covariances as:
μ_x = Σ_i h_i x_i / n,   σ_x² = Σ_i h_i (x_i - μ_x)² / n,   σ_xy = Σ_i h_i (x_i - μ_x)(y_i - μ_y) / n,
μ_y = Σ_i h_i y_i / n,   σ_y² = Σ_i h_i (y_i - μ_y)² / n,   σ²_{y|x} = σ_y² - σ_xy² / σ_x².
We use them to express the conditional expectations and their estimated variances:
kernel:   ŷ = μ_y,   σ_ŷ² = σ_y² / n     (4)
LOESS:   ŷ = μ_y + (σ_xy / σ_x²)(x - μ_x),   σ_ŷ² = (σ²_{y|x} / n)(1 + (x - μ_x)² / σ_x²)     (5)
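The estimators in Eqs. (4) and (5) translate directly into code. The following sketch (function names and return convention are ours) computes the kernel-weighted local statistics and the LOESS prediction with its variance; the kernel-regression case of Eq. (4) is noted in a comment.

import numpy as np

def local_stats(x, X, Y, k):
    """Kernel-weighted local statistics at query point x (Gaussian kernel)."""
    h = np.exp(-k * (X - x) ** 2)
    n = h.sum()
    mu_x, mu_y = (h @ X) / n, (h @ Y) / n
    var_x = (h @ (X - mu_x) ** 2) / n
    var_y = (h @ (Y - mu_y) ** 2) / n
    cov_xy = (h @ ((X - mu_x) * (Y - mu_y))) / n
    return n, mu_x, mu_y, var_x, var_y, cov_xy

def loess_predict(x, X, Y, k):
    """LOESS estimate, its variance (Eq. 5) and the local noise variance at x.
    Kernel regression (Eq. 4) would instead return (mu_y, var_y / n)."""
    n, mu_x, mu_y, var_x, var_y, cov_xy = local_stats(x, X, Y, k)
    var_y_given_x = var_y - cov_xy ** 2 / var_x
    y_hat = mu_y + (cov_xy / var_x) * (x - mu_x)
    var_hat = var_y_given_x / n * (1.0 + (x - mu_x) ** 2 / var_x)
    return y_hat, var_hat, var_y_given_x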
4.1 ACTIVE LEARNING WITH LOCALLY WEIGHTED REGRESSION
Again we want to select x̃ to minimize ⟨σ̃_ŷ²⟩. With LWR, the model's estimated
distribution of ỹ given x̃ is explicit:
P(ỹ|x̃) = N(ŷ(x̃), σ²_{y|x}(x̃)).
The estimate of ⟨σ̃_ŷ²⟩ is also explicit. Defining h̃ as the weight assigned to x̃ by the
kernel, the learner's expected new variance is
⟨σ̃_ŷ²⟩ = ⟨σ̃_y²⟩ / (n + h̃),     (6)
where the expectation can be computed exactly in closed form.
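Since the closed form of the expectation is not reproduced here, the sketch below substitutes a Monte Carlo estimate: it samples plausible outcomes ỹ from the model's own predictive distribution, temporarily adds the simulated pair, and averages the resulting variance over reference points. It reuses loess_predict from the sketch above, and everything about it (names, sample count, the Monte Carlo substitution itself) is our assumption rather than the paper's procedure.

import numpy as np

def expected_new_variance(x_cand, X, Y, k, X_ref, n_samples=25, rng=None):
    """Monte Carlo stand-in for the closed-form expectation of the new variance."""
    rng = np.random.default_rng() if rng is None else rng
    y_hat, _, noise_var = loess_predict(x_cand, X, Y, k)   # from the sketch above
    totals = []
    for y_sim in rng.normal(y_hat, np.sqrt(max(noise_var, 0.0)), size=n_samples):
        Xn, Yn = np.append(X, x_cand), np.append(Y, y_sim)
        totals.append(np.mean([loess_predict(xr, Xn, Yn, k)[1] for xr in X_ref]))
    return float(np.mean(totals))

def select_query(candidates, X, Y, k, X_ref):
    """Choose the candidate query that minimizes the expected averaged variance."""
    scores = [expected_new_variance(xc, X, Y, k, X_ref) for xc in candidates]
    return candidates[int(np.argmin(scores))]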
5 EXPERIMENTAL RESULTS
Below we describe two sets of experiments demonstrating the predictive power of
the query selection criteria in this paper. In the first set, learners were trained on
data from a noisy sine wave. The criteria described in this paper were applied to
predict how a new training example selected at point x would decrease the learner's
variance. These predictions, along with the actual changes in variance when the
training points were queried and added, are plotted in Figure 1.
Figure 1: The upper portion of each plot indicates each learner's fit to noisy sinusoidal data. The lower portion of each plot indicates predicted and actual changes in the learner's average estimated variance when x̃ is queried and added to the training set, for x̃ ∈ [0, 1]. Changes are not plotted to scale with learners' fits.
In the second set of experiments, we applied the techniques of this paper to learning
the kinematics of a two-joint planar arm (Figure 2; see Cohn [1994] for details).
Below, we illustrate the problem using the LOESS algorithm.
An example of the correlation between predicted and actual changes in variance
on this problem is plotted in Figure 2. Figure 3 demonstrates that this correlation may be exploited to guide sequential query selection. We compared a
LOESS learner which selected each new query so as to minimize expected variance
with LOESS learners which selected queries according to various heuristics. The
variance-minimizing learner significantly outperforms the heuristics in terms of both
variance and MSE.
Figure 2: (left) The arm kinematics problem. (right) Predicted vs. actual changes
in model variance for LOESS on the arm kinematics problem. 100 candidate points
are shown for a model trained with 50 initial random examples. Note that most
of the potential queries produce very little improvement , and that the algorithm
successfully identifies those few that will help most.
Figure 3: Variance and MSE for a LOESS learner selecting queries according to
the variance-minimizing criterion discussed in this paper and according to several
heuristics . "Sensitivity" queries where output is most sensitive to new data, "Bias"
queries according to a bias-minimizing criterion, ?Support" queries where the model
has the least data support. The variance of "Random" and "Sensitivity" are off the
scale. Curves are medians over 15 runs with non-Gaussian noise.
6 SUMMARY
Mixtures of Gaussians and locally weighted regression are two statistical models
that offer elegant representations and efficient learning algorithms. In this paper
we have shown that they also offer the opportunity to perform active learning in an
efficient and statistically correct manner. The criteria derived here can be computed
cheaply and, for problems tested, demonstrate good predictive power.
Acknowledgements
This work was funded by NSF grant CDA-9309300, the McDonnell-Pew Foundation,
ATR Human Information Processing Laboratories and Siemens Corporate Research.
We thank Stefan Schaal for helpful discussions about locally weighted regression .
References
W. Cleveland, S. Devlin, and E. Grosse. (1988) Regression by local fitting. Journal of
Econometrics 37:87-114.
D. Cohn, L. Atlas and R. Ladner. (1990) Training Connectionist Networks with Queries
and Selective Sampling. In D. Touretzky, ed., Advances in Neural Information Processing
Systems 2, Morgan Kaufmann.
D. Cohn. (1994) Neural network exploration using optimal experiment design. In J . Cowan
et al., eds., Advances in Neural Information Processing Systems 6. Morgan Kaufmann.
A. Dempster, N. Laird and D. Rubin. (1977) Maximum likelihood from incomplete data
via the EM algorithm. J. Royal Statistical Society Series B, 39:1-38.
V. Fedorov. (1972) Theory of Optimal Experiments. Academic Press, New York.
Z. Ghahramani and M. Jordan. (1994) Supervised learning from incomplete data via an
EM approach. In J. Cowan et al., eds., Advances in Neural Information Processing Systems
6. Morgan Kaufmann.
A. Linden and F. Weber. (1993) Implementing inner drive by competence reflection. In
H. Roitblat et al., eds., Proc. 2nd Int. Conf. on Simulation of Adaptive Behavior, MIT
Press, Cambridge.
D. MacKay. (1992) Information-based objective functions for active data selection, Neural
Computation 4( 4): 590-604.
S. Nowlan. (1991) Soft Competitive Adaptation: Neural Network Learning Algorithms
based on Fitting Statistical Mixtures. CMU-CS-91-126, School of Computer Science,
Carnegie Mellon University, Pittsburgh, PA.
Paass, G., and Kindermann, J . (1995) . Bayesian Query Construction for Neural Network
Models. In this volume.
M. Plutowski and H. White (1993). Selecting concise training sets from clean data. IEEE
Transactions on Neural Networks, 4, 305-318.
S. Schaal and C. Atkeson. (1994) Robot Juggling: An Implementation of Memory-based
Learning. Control Systems Magazine, 14(1):57-71.
J. Schmidhuber and J. Storck. (1993) Reinforcement driven information acquisition in
nondeterministic environments. Tech. Report, Fakultät für Informatik, Technische Universität München.
D. Specht. (1991) A general regression neural network. IEEE Trans. Neural Networks,
2(6):568-576.
S. Thrun and K. Moller. (1992) Active exploration in dynamic environments. In J. Moody
et aI., editors, Advances in Neural Information Processing Systems 4. Morgan Kaufmann.
16 | 1,012 | A Rapid Graph-based Method for
Arbitrary Transformation-Invariant
Pattern Classification
Alessandro Sperduti
Dipartimento di Informatica
Universita di Pisa
Corso Italia 40
56125 Pisa, ITALY
David G. Stork
Machine Learning and Perception Group
Ricoh California Research Center
2882 Sand Hill Road # 115
Menlo Park, CA USA 94025-7022
perso@di.unipi.it
stork@crc.ricoh.com
Abstract
We present a graph-based method for rapid, accurate search
through prototypes for transformation-invariant pattern classification. Our method has in theory the same recognition accuracy as
other recent methods based on "tangent distance" [Simard et al.,
1994], since it uses the same categorization rule. Nevertheless ours
is significantly faster during classification because far fewer tangent distances need be computed. Crucial to the success of our
system are 1) a novel graph architecture in which transformation
constraints and geometric relationships among prototypes are encoded during learning, and 2) an improved graph search criterion,
used during classification. These architectural insights are applicable to a wide range of problem domains. Here we demonstrate that
on a handwriting recognition task, a basic implementation of our
system requires less than half the computation of the Euclidean
sorting method.
1 INTRODUCTION
In recent years, the crucial issue of incorporating invariances into networks for pattern recognition has received increased attention, most especially due to the work of
Simard and his colleagues. To a regular hierarchical backpropagation network Simard
et al. [1992] added a Jacobian network, which insured that directional derivatives
were also learned. Such derivatives represented directions in feature space corresponding to the invariances of interest, such as rotation, translation, scaling and
even line thinning. On small training sets for a function approximation problem,
this hybrid network showed performance superior to that of a highly tuned backpropagation network taken alone; however there was negligible improvement on
large sets. In order to find a simpler method applicable to real-world problems,
Simard, Le Cun & Denker [1993] later used a variation of the nearest neighbor
algorithm, one incorporating "tangent distance" (T-distance or DT) as the classification metric - the smallest Euclidean distance between patterns after the optimal
transformation. In this way, state-of-the-art accuracy was achieved on an isolated
handwritten character task, though at quite high computational complexity, owing
to the inefficient search and large number of Euclidean and tangent distances that
had to be calculated.
Whereas Simard, Hastie & Saeckinger [1994] have recently sought to reduce this
complexity by means of pre-clustering stored prototypes, we here take a different
approach, one in which a (graph) data structure formed during learning contains
information about transformations and geometrical relations among prototypes.
Nevertheless, it should be noted that our method can be applied to a reduced
(clustered) training set such as they formed, yielding yet faster recognition. Simard
[1994] recently introduced a hierarchical structure of successively lower resolution
patterns, which speeds search only if a minority of patterns are classified more
accurately by using the tangent metric than by other metrics. In contrast, our
method shows significant improvement even if the majority or all of the patterns
are most accurately classified using the tangent distance.
Other methods seeking fast invariant classification include Wilensky and
Manukian's scheme [1994]. While quite rapid during recall, it is more properly
considered distortion (rather than coherent transformation) invariant. Moreover,
some transformations such as line thinning cannot be naturally incorporated into
their scheme. Finally, it appears as if their scheme scales poorly (compared to
tangent metric methods) as the number of invariances is increased.
It seems somewhat futile to try to improve significantly upon the recognition accuracy of the tangent metric approach - for databases such as NIST isolated
handwritten characters, Simard et al. [1993] reported accuracies matching that
of humans! Nevertheless, there remains much that can be done to increase the
computational efficiency during recall. This is the problem we address.
2
TRANSFORMATION INVARIANCE
In broad overview, during learning our method constructs a labelled graph data
structure in which each node represents a stored prototype (labelled by its category)
as given by a training set, linked by arcs representing the T-distance between them.
Search through this graph (for classification) takes advantage of the graph structure
and an improved search criterion. To understand the underlying computations, we
must first consider tangent space.
Figure 1: Geometry of tangent space. Here, a three-dimensional feature space
contains the "current" prototype, Pc, and the subspace consisting of all patterns
obtainable by performing continuous transformations of it (shaded). Two candidate
prototypes and a test pattern, T, as well as their projections onto the T-space of
Pc are shown. The insert (above) shows the progression of search through the
corresponding portion of the recognition graph. The goal is to rapidly find the
prototype closest to T (in the T-distance sense), and our algorithm (guided by the
minimum angle θj in the tangent space) finds that P2 is closer to T than are
either P1 or Pc (see text).
Figure 1 illustrates geometry of tangent space and the relationships among the fundamental entities in our trained system. A labelled ("current") trained pattern is
represented by Pc, and the (shaded) surface corresponds to patterns arising under
continuous transformations of Pc. Such transformations might include rotation,
translation, scaling, line thinning, etc. Following Simard et al. [1993], we approximate this surface in the vicinity of Pc by a subspace - the tangent space or T -space
of Pc - which is spanned by "tangent" vectors, whose directions are determined by
infinitesimally transforming the prototype Pc. The figure shows an ortho-normal
basis {TV_a, TV_b}, which helps to speed search during classification, as we shall see.
A test pattern T and two other (candidate) prototypes as well as their projections
onto the T-space of Pc are shown.
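As a concrete illustration, the one-sided T-distance can be computed directly once an orthonormal tangent basis is available. The sketch below is ours, not the authors' code; the function name and the assumption that the t tangent vectors have already been orthonormalized are ours.

import numpy as np

def tangent_distance_one_sided(test, proto, basis):
    # basis: (t, m) array whose rows are orthonormal tangent vectors of the prototype `proto`
    # Returns the Euclidean distance from `test` to the tangent plane of `proto`,
    # plus the coordinates of the projection expressed in the tangent frame.
    diff = test - proto
    coeffs = basis @ diff                 # tangent-frame coordinates of the projection
    residual = diff - basis.T @ coeffs    # component of diff orthogonal to the T-space
    return np.linalg.norm(residual), coeffs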
3 THE ALGORITHMS
Our overall approach includes constructing a graph (during learning), and searching
it (for classification). The graph is constructed by the following algorithm:
Graph construction
Initialize N = # patterns; k = # nearest neighbors; t = # invariant transformations
For each prototype Pi (i = 1 to N):
  - Compute a t-dimensional orthonormal basis for the T-space of Pi
  - Compute the ("one-sided") T-distance of each of the N - 1 prototypes Pj (j ≠ i) using Pi's T-space
  - Represent Pj⊥ (the projection of Pj onto the T-space of Pi) in the tangent orthonormal frame of Pi
  - Connect Pi to each of its k T-nearest neighbors, storing their associated normalized projections Pj⊥
End Loop
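A direct O(N^2) rendering of this construction step might look as follows. This is only a sketch under our own data layout (a dictionary of neighbor lists), and it reuses the hypothetical tangent_distance_one_sided function from the sketch above.

import numpy as np

def build_graph(prototypes, bases, k):
    # prototypes[i]: (m,) pattern vector; bases[i]: (t, m) orthonormal tangent basis of prototype i
    N = len(prototypes)
    graph = {}
    for i in range(N):
        entries = []
        for j in range(N):
            if j == i:
                continue
            d, coeffs = tangent_distance_one_sided(prototypes[j], prototypes[i], bases[i])
            proj = coeffs / (np.linalg.norm(coeffs) + 1e-12)    # normalized projection of Pj
            entries.append((d, j, proj))
        entries.sort(key=lambda e: e[0])                        # keep the k T-nearest neighbors
        graph[i] = [(j, proj) for _, j, proj in entries[:k]]
    return graph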
During classification, our algorithm permits rapid search through prototypes. Thus
in Figure 1, starting at Pc we seek to find another prototype (here, P2) that is
closer to the test point T . After P2 is so chosen, it becomes the current pattern,
and the search is extended using its T-space. Graph search ends when the closest
prototype to T is found (i.e., closest in a T-distance sense).
We let D* denote the current minimum tangent distance. Our search algorithm is:
Graph search
Input: test pattern T
Initialize:
  - Choose an initial candidate prototype Po
  - Set Pc ← Po
  - Set D* ← DT(Pc, T), i.e., the T-distance of T from Pc
Do
  - For each prototype Pj connected to Pc, compute cos(θj) = (T⊥ · Pj⊥) / (|T⊥| |Pj⊥|)
  - Sort these prototypes by increasing values of θj and put them into a candidate list
  - Pick Pj from the top of the candidate list
  - In the T-space of Pj, compute DT(Pj, T)
  - If DT(Pj, T) < D* then Pc ← Pj and D* ← DT(Pj, T);
    otherwise mark Pj as a "failure" (F), and pick the next prototype from the candidate list
Until the candidate list is empty
Return D* or the category label of the optimum prototype found
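The search loop can be sketched as below. This is a simplified reading of the procedure above (the candidate list is rebuilt whenever the current prototype changes, and visited and failed prototypes are pooled in one set); the names and the exact bookkeeping are ours, and it again reuses the tangent_distance_one_sided sketch.

import numpy as np

def graph_search(test, start, prototypes, bases, graph):
    current = start
    d_star, _ = tangent_distance_one_sided(test, prototypes[current], bases[current])
    tried = {current}
    moved = True
    while moved:
        moved = False
        # project the test pattern into the T-space of the current prototype
        _, t_coeffs = tangent_distance_one_sided(test, prototypes[current], bases[current])
        t_norm = np.linalg.norm(t_coeffs) + 1e-12
        # neighbors sorted by increasing angle theta_j, i.e. decreasing cos(theta_j)
        candidates = sorted(graph[current], key=lambda e: -(e[1] @ t_coeffs) / t_norm)
        for j, _ in candidates:
            if j in tried:
                continue
            tried.add(j)
            d_j, _ = tangent_distance_one_sided(test, prototypes[j], bases[j])
            if d_j < d_star:
                current, d_star = j, d_j    # move to the better prototype and restart from it
                moved = True
                break
            # otherwise j counts as a "failure" and the next candidate is tried
    return current, d_star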
(Figure 2 panels: DT = 4.91, 3.70, 3.61, 3.03, 2.94)
Figure 2: The search through the "2" category graph for the T-nearest stored
prototype to the test pattern is shown (N = 720 and k = 15 nearest neighbors).
The number of T-distance calculations is equal to the number of nodes visited plus
the number of failures (marked F); i.e., in the case shown 5 + 26 = 31. The backward
search step attempt is thwarted because the middle node has already been visited
(marked M). Notice in the prototypes how the search is first a downward shift, then
a counter-clockwise rotation - a mere four steps through the graph.
Figure 2 illustrates search through a network of "2" prototypes. Note how the T-distance of the test pattern decreases, and that with only four steps through the
graph the optimal prototype is found.
There are several ways in which our search technique can be incorporated into a
classifier. One is to store all prototypes, regardless of class, in a single large graph
and perform the search; the test pattern is classified by the label of the optimal
prototype found. Another is to employ separate graphs, one for each category, and
search through them (possibly in parallel); the test is classified by the minimum
T-distance prototype found. The choice of method depends upon the hardware
limitations, performance speed requirements, etc. Figure 3 illustrates such a search
through a "2" category graph for the closest prototype to a test pattern "5." We
report below results using a single graph per category, however.
3.1 Computational complexity
If a graph contains N prototypes with k pointers (arcs) each, and if the patterns are
of dimension m, then the storage requirement is O(N((t + 1) · m^2 + kt)). The time
complexity of training depends upon details of ortho-normalization, sorting, etc.,
and is of little interest anyway. Construction is more than an order of magnitude
faster than neural network training on similar problems; for instance construction
of a graph for N = 720 prototypes and k = 100 nearest neighbors takes less than
(Figure 3 panels: DT = 5.10, 5.09, 5.01, 4.93, 4.90)
Figure 3: The search through a "2" category graph given a "5" test pattern. Note
how the search first tries to find a prototype that matches the upper arc of the
"5," and then one possessing skew or rotation. For this test pattern, the minimum
T-distance found for the "5" category (3.62) is smaller than the one found for the
"2" category shown here (4.22), and indeed for any other category. Thus the test
pattern is correctly classified as a "5."
20 minutes on a Sparc 10.
The crucial quantity of interest is the time complexity for search. This is, of course,
problem related, and depends upon the number of categories, transformation and
prototypes and their statistical properties (see next Section). Worst case analyses
(e.g., it is theoretically conceivable that nearly all prototypes must be visited) are
irrelevant to practice.
We used a slightly non-obvious search criterion at each step, the function cos(θj),
as shown in Figure 1. Not only could this criterion be calculated very efficiently
in our orthonormal basis (by using simple inner products), but it actually led to
a slightly more accurate search than Euclidean distance in the T-space - perhaps
the most natural choice of criterion. The angle θj seems to guide the "flow" of the
search along transformation directions toward the test point.
4 Simulations and results
We explored the search capabilities of our system on the binary handwritten digit
database of Guyon, et al. [1991]. We needed to scale all patterns by a linear factor
(0.833) to ensure that rotated versions did not go outside the 16 x 16 pixel grid. As
required in all T-space methods, the patterns must be continuous valued (i.e., here
grayscale); this was achieved by convolution with a spatially symmetric Gaussian
having σ = 0.55 pixels. We had 720 training examples in each of ten digit categories;
the test set consisted of 1320 test patterns formed by transforming independent
prototypes in all meaningful combinations of the t = 6 transformations (four spatial
directions and two rotation senses).
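For concreteness, the preprocessing and the individual transformations can be sketched as follows; the shift and rotation magnitudes are not stated in the text, so the values below are placeholders, and the paper's test set uses combinations of these transformations rather than single ones.

from scipy.ndimage import gaussian_filter, shift, rotate

def preprocess(binary_pattern):
    # blur the binary 16 x 16 pattern into a continuous-valued (grayscale) image
    return gaussian_filter(binary_pattern.astype(float), sigma=0.55)

def transformed_variants(image, pixels=1.0, degrees=10.0):
    # the t = 6 transformations: four spatial shifts and two rotation senses
    yield shift(image, (+pixels, 0.0))
    yield shift(image, (-pixels, 0.0))
    yield shift(image, (0.0, +pixels))
    yield shift(image, (0.0, -pixels))
    yield rotate(image, +degrees, reshape=False)
    yield rotate(image, -degrees, reshape=False)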
We compared the Euclidean sorting method of Simard et al. [1993] to our graph
(Figure 4 plot: search accuracy and average search error versus computational complexity, measured as the equivalent number of T-distance calculations, 0-400.)
Figure 4: Comparison of graph-based (heavy lines) and standard Euclidean sorting
searches (thin lines). Search accuracy is the percentage of optimal prototypes found
on the full test set of 1320 patterns in a single category (solid lines). The average
search error is the per pattern difference between the global optimum T -distance and
the one actually found, averaged over the non-optimal prototypes found through the
search (dashed lines). Note especially that for the same computational complexity,
our method has the same average error, but that this average is taken over a much
smaller number of (non-optimal) prototypes. For a given criterion search accuracy,
our method requires significantly less computation. For instance, if 90% of the
prototypes must be found for a requisite categorization accuracy (a typical value
for asymptotically high recognition accuracy), our graph-based method requires less
than half the computation of the Euclidean sorting method.
based method using the same data and transformations, over the full range of
relevant computational complexities. Figure 4 summarizes our results. For our
method, the computational complexity is adjusted by the number of neighbors
inspected, k. For their Euclidean sorting method, it is adjusted by the percentage
of Euclidean nearest neighbors that were then inspected for T -distance. We were
quite careful to employ as many computational tricks and shortcuts on both methods
we could think of. Our results reflect fairly on the full computational complexity,
which was dominated by tangent and Euclidean distance calculations.
We note parenthetically that many of the recognition errors for both methods could
be explained by the fact that we did not include the transformation of line thinning
(solely because we lacked the preprocessing capabilities); the overall accuracy of
both methods will increase when this invariance is also included.
5 CONCLUSIONS AND FUTURE WORK
We have demonstrated a graph-based method using tangent distance that permits search through prototypes significantly faster than the most popular current
approach. Although not shown above, ours is also superior to other tree-based
methods, such as k-d-trees, which are less accurate. Since our primary concern was
reducing the computational complexity of search (while matching Simard et al.'s
accuracy), we have not optimized over preprocessing steps, such as the Gaussian
kernel width or transformation set. We note again that our method can be applied
to reduced training sets, for instance ones pruned by the method of Simard, Hastie
& Saeckinger [1994]. Simard's [1994] recent method - in which low-resolution
versions of training patterns are organized into a hierarchical data structure so
as to reduce the number of multiply-accumulates required during search - is in
some sense "orthogonal" to ours. Our graph-based method will work with his lowresolution images too, and thus these two methods can be unified into a hybrid
system.
Perhaps most importantly, our work suggests a number of research avenues. We
used just a single ("central") prototype Po to start search; presumably having
several candidate starting points would be faster. Our general method may admit
gradient descent learning of parameters of the search criterion. For instance, we can
imagine scaling the different tangent basis vectors according to their relevance in
guiding correct searches as determined using a validation set. Finally, our approach
may admit elegant parallel implementations for real-world applications.
Acknowledgements
This work was begun during a visit by Dr. Sperduti to Ricoh CRC. We thank I.
Guyon for the use of her database of handwritten digits and Dr. K. V. Prasad for
assistance in image processing.
References
I. Guyon, P. Albrecht, Y. Le Cun, J. Denker & W. Hubbard. (1991) "Comparing
different neural network architectures for classifying handwritten digits," Proc. of
the Inter. Joint Conference on Neural Networks, vol. II, pp. 127-132, IEEE Press.
P. Simard. (1994) "Efficient computation of complex distance metrics using hierarchical filtering," in J. D. Cowan, G. Tesauro and J. Alspector (eds.) Advances in
Neural Information Processing Systems-6 Morgan Kaufmann pp. 168-175.
P. Simard, B. Victorri, Y. Le Cun & J. Denker. (1992) "Tangent Prop - A formalism for specifying selected invariances in an adaptive network," in J. E. Moody, S.
J . Hanson and R. P. Lippmann (eds.) Advances in Neural Information Processing
Systems-4 Morgan Kaufmann pp. 895-903.
P. Y. Simard, Y. Le Cun & J. Denker. (1993) "Efficient Pattern Recognition Using
a New Transformation Distance," in S. J. Hanson, J. D. Cowan and C. L. Giles
(eds.) Advances in Neural Information Processing Systems-5 Morgan Kaufmann
pp.50-58.
P. Y. Simard, T. Hastie & E. Saeckinger. (1994) "Learning Prototype Models for
Tangent Distance," Neural Networks for Computing Snowbird, UT (April, 1994).
G. D. Wilensky & N. Manukian. (1994) "Nearest Neighbor Networks: New Neural
Architectures for Distortion-Insensitive Image Recognition," Neural Networks for
Computing Snowbird, UT (April, 1994).
17 | 1,013 | Ocular Dominance and Patterned Lateral
Connections in a Self-Organizing Model of the
Primary Visual Cortex
Joseph Sirosh and Risto Miikkulainen
Department of Computer Sciences
University of Texas at Austin, Austin, TX 78712
email:
sirosh.risto@cs.utexas.edu
Abstract
A neural network model for the self-organization of ocular dominance and
lateral connections from binocular input is presented. The self-organizing
process results in a network where (1) afferent weights of each neuron organize into smooth hill-shaped receptive fields primarily on one of the retinas, (2) neurons with common eye preference form connected, intertwined
patches, and (3) lateral connections primarily link regions of the same eye
preference. Similar self-organization of cortical structures has been observed experimentally in strabismic kittens. The model shows how patterned lateral connections in the cortex may develop based on correlated
activity and explains why lateral connection patterns follow receptive field
properties such as ocular dominance.
1 Introduction
Lateral connections in the primary visual cortex have a patterned structure that closely
matches the response properties of cortical cells (Gilbert and Wiesel 1989; Malach et al.1993).
For example, in the normal visual cortex, long-range lateral connections link areas with similar orientation preference (Gilbert and Wiesel 1989). Like cortical response properties, the
connectivity pattern is highly plastic in early development and can be altered by experience
(Katz and Callaway 1992). In a cat that is brought up squint-eyed from birth, the lateral connections link areas with the same ocular dominance instead of orientation (Lowel and Singer
1992). Such patterned lateral connections develop at the same time as the orientation selectivity and ocular dominance itself (Burkhalter et al.1993; Katz and Callaway 1992). Together,
these observations suggest that the same experience-dependent process drives the development of both cortical response properties and lateral connectivity.
Several computational models have been built to demonstrate how orientation preference,
ocular dominance, and retinotopy can emerge from simple self-organizing processes (e.g.
Goodhill1993; Miller 1994; Obermayer et al.1992; von der Malsburg 1973). These models
assume that the neuronal response properties are primarily determined by the afferent connections, and concentrate only on the self-organization of the afferent synapses to the cortex. Lateral interactions between neurons are abstracted into simple mathematical functions
(e.g. Gaussians) and assumed to be uniform throughout the network; lateral connectivity is not
explicitly taken into account. Such models do not explicitly replicate the activity dynamics
of the visual cortex, and therefore can make only limited predictions about cortical function.
We have previously shown how Kohonen's self-organizing feature maps (Kohonen 1982)
can be generalized to include self-organizing lateral connections and recurrent activity dynamics (the Laterally Interconnected Synergetically Self-Organizing Map (LISSOM); Sirosh
and Miikkulainen 1993, 1994a), and how the algorithm can model the development of ocular dominance columns and patterned lateral connectivity with abstractions of visual input.
LISSOM is a low-dimensional abstraction of cortical self-organizing processes and models a
small region of the cortex where all neurons receive the same input vector. This paper shows
how realistic, high-dimensional receptive fields develop as part of the self-organization, and
scales up the LISSOM approach to large areas of the cortex where different parts of the cortical network receive inputs from different parts of the receptor surface. The new model shows
how (1) afferent receptive fields and ocular dominance columns develop from simple retinal images, (2) input correlations affect the wavelength of the ocular dominance columns and
(3) lateral connections self-organize cooperatively and simultaneously with ocular dominance
properties. The model suggests new computational roles for lateral connections in the cortex,
and suggests that the visual cortex maybe maintained in a continuously adapting equilibrium
with the visual input by co adapting lateral and afferent connections.
2 The LISSOM Model of Receptive Fields and Ocular Dominance
The LISSOM network is a sheet of interconnected neurons (figure 1). Through afferent connections, each neuron receives input from two "retinas". In addition, each neuron has reciprocal excitatory and inhibitory lateral connections with other neurons. Lateral excitatory connections are short-range, connecting only close neighbors. Lateral inhibitory connections run
for long distances, and may even implement full connectivity between neurons in the network.
Neurons receive afferent connections from broad overlapping patches on the retina called
anatomical receptive fields, or RFs. The N x N network is projected on to each retina of
R x R receptors, and each neuron is connected to receptors in a square area of side s around
the projections. Thus, neurons receive afferents from corresponding regions of each retina.
Depending on the location of the projection, the number of afferents to a neuron from each
retina could vary from (s/2) x (s/2) (at the corners) to s x s (at the center).
The external and lateral weights are organized through an unsupervised learning process. At
each training step, neurons start out with zero activity. The initial response ηij of neuron (i, j)
Figure 1: The Receptive-Field LISSOM architecture. The afferent and lateral connections of a single
neuron in the LISSOM network are shown. All connection weights are positive.
is based on the scalar product

    ηij = σ( Σa,b ξab μij,ab + Σc,d ξcd μij,cd ),    (1)
where ξab and ξcd are the activations of retinal receptors (a, b) and (c, d) within the receptive
fields of the neuron in each retina, μij,ab and μij,cd are the corresponding afferent weights,
and σ is a piecewise linear approximation of the familiar sigmoid activation function. The
response evolves over time through lateral interaction. At each time step, the neuron combines the above afferent activation Σ ξμ with lateral excitation and inhibition:

    ηij(t) = σ( Σ ξμ + γe Σk,l Eij,kl ηkl(t - 1) - γi Σk,l Iij,kl ηkl(t - 1) ),    (2)
where Eij,kl is the excitatory lateral connection weight on the connection from neuron (k, l)
to neuron (i, j), Iij,kl is the inhibitory connection weight, and ηkl(t - 1) is the activity of
neuron (k, l) during the previous time step. The constants γe and γi determine the relative
strengths of excitatory and inhibitory lateral interactions. The activity pattern starts out diffuse and spread over a substantial part of the map, and converges iteratively into stable focused
patches of activity, or activity bubbles. After the activity has settled, typically in a few iterations of equation 2, the connection weights of each neuron are modified. Both afferent and
lateral weights adapt according to the same mechanism: the Hebb rule, normalized so that the
sum of the weights is constant:
    wij,mn(t + δt) = [ wij,mn(t) + α ηij Xmn ] / Σmn [ wij,mn(t) + α ηij Xmn ],    (3)
where ηij stands for the activity of neuron (i, j) in the final activity bubble, wij,mn is the afferent or lateral connection weight (μ, E or I), α is the learning rate for each type of connection
(αA for afferent weights, αE for excitatory, and αI for inhibitory) and Xmn is the presynaptic
activity (ξ for afferent, η for lateral).
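A compact sketch of one training step, with the cortical sheet and both retinas flattened into vectors, is given below. This is not the authors' code: the parameter values, the clipping nonlinearity standing in for the piecewise linear sigmoid, and the flattened data layout are illustrative assumptions.

import numpy as np

def lissom_step(xi, mu, E, I, gamma_e=0.9, gamma_i=0.9,
                alpha_a=0.01, alpha_e=0.002, alpha_i=0.0005, settle_iters=9):
    # xi: (m,) receptor activations from both retinas; mu: (n, m) afferent weights
    # E, I: (n, n) excitatory and inhibitory lateral weights
    def sigma(x):
        return np.clip(x, 0.0, 1.0)                 # stand-in for the piecewise linear sigmoid
    s = mu @ xi                                     # afferent term inside sigma in eq. (1)
    eta = sigma(s)
    for _ in range(settle_iters):                   # recurrent settling, eq. (2)
        eta = sigma(s + gamma_e * (E @ eta) - gamma_i * (I @ eta))
    def hebb(w, pre, alpha):                        # normalized Hebbian update, eq. (3)
        w_new = w + alpha * np.outer(eta, pre)
        return w_new / w_new.sum(axis=1, keepdims=True)
    return eta, hebb(mu, xi, alpha_a), hebb(E, eta, alpha_e), hebb(I, eta, alpha_i)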
(a) Random Initial Weights
(b) Monocular RF
(c) Binocular RF
Figure 2: Self-organization of the afferent input weights into receptive fields. The afferent weights
of a neuron at position (42,39) in a 60 x 60 network are shown before (a) and after self-organization
(b). This particular neuron becomes monocular with strong connections to the right eye, and weak connections to the left. A neuron at position (38, 23) becomes binocular with approximately equal weights
to both eyes (c).
Both excitatory and inhibitory lateral connections follow the same Hebbian learning process and strengthen by correlated activity. The short-range excitation keeps the activity of
neighboring neurons correlated, and as self-organization progresses, excitation and inhibition strengthen in the vicinity of each neuron. At longer distances, very few neurons have
correlated activity and therefore most long-range connections become weak. Such weak connections are eliminated, and through weight normalization, inhibition concentrates in a closer
neighborhood of each neuron. As a result, activity bubbles become more focused and local,
weights change in smaller neighborhoods, and receptive fields become better tuned to local
areas of each retina.
The input to the model consists of gaussian spots of "light" on each retina:
    ξx,y = exp( -((x - x_i)^2 + (y - y_i)^2) / u^2 ),    (4)
where ξx,y is the activation of receptor (x, y), u^2 is a constant determining the width of the
spot, and (x_i, y_i): 0 ≤ x_i, y_i < R its center. At each input presentation, one spot is randomly
placed at (x_i, y_i) in the left retina, and a second spot within a radius of p x RN of (x_i, y_i)
in the right retina. The parameter p ∈ [0, 1] specifies the spatial correlations between spots
in the two retinas, and can be adjusted to simulate different degrees of correlations between
images in the two eyes.
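The input generator can be sketched as follows. The way the right-eye offset radius is drawn from p is our reading of the radius constraint above and is an assumption, as are the function name and the use of a numpy random generator.

import numpy as np

def gaussian_spot_pair(R, u2, p, rng):
    # One training input: a Gaussian spot (eq. 4) at a random position in the left retina
    # and a second spot in the right retina whose center lies within a p-dependent radius.
    def spot(cx, cy):
        X, Y = np.meshgrid(np.arange(R), np.arange(R), indexing='ij')
        return np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / u2)
    xl, yl = rng.uniform(0, R, size=2)
    angle = rng.uniform(0.0, 2.0 * np.pi)
    radius = rng.uniform(0.0, p * R)        # assumed form of the p-dependent radius bound
    xr, yr = xl + radius * np.cos(angle), yl + radius * np.sin(angle)
    return spot(xl, yl), spot(xr, yr)

With p = 1 the right-eye spot is essentially unconstrained (the strabismic case), while smaller p keeps it near the left-eye spot, which is how the between-eye correlation is controlled in the simulations below.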
3 Simulation results
To see how correlation between the input from the two eyes affects the columnar structures
that develop, several simulations were run with different values of p. The afferent weights of
all neurons were initially random (as shown in figure 2a), with the total strength to both eyes
being equal.
Figures 2b,c show the final afferent receptive fields of two typical neurons in a simulation
with p = 1. In this case, the inputs were uncorrelated, simulating perfect strabismus. In
the early stages of such simulation, some of the neurons randomly develop a preference for
one eye or the other. Nearby neurons will tend to share the same preference because lateral
(a) Connections of a Monocular Neuron
(b) Connections of a Binocular Neuron
Figure 3: Ocular dominance and lateral connection patterns. The ocular dominance of a neuron is
measured as the difference in total afferent synaptic weight from each eye to the neuron. Each neuron
is labeled with a grey-scale value (black ~ white) that represents continuously changing eye preference from exclusive left through binocular to exclusive right. Small white dots indicate the lateral input
connections to the neuron marked with a big white dot. (a) The surviving lateral connections of a left
monocular neuron predominantly link areas of the same ocular dominance. (b) The lateral connections
of a binocular neuron come from both eye regions.
excitation keeps neural activity partially correlated over short distances. As self-organization
progresses, such preferences are amplified, and groups of neurons develop strong weights to
one eye. Figure 2b shows the afferent weights of a typical monocular neuron.
The extent of activity correlations on the network detennines the size of the monocular neuronal groups. Farther on the map, where the activations are anticorrelated due to lateral inhibition, neurons will develop eye preferences to the opposite eye. As a result, alternating
ocular dominance patches develop over the map, as shown in figure 3. 1 In areas between ocular dominance patches, neurons will develop approximately equal strengths to both eyes and
become binocular, like the one shown in figure 2e.
The width and number of ocular dominance columns in the network (and therefore, the wavelength of ocular dominance) depends on the input correlations (figure 4). When inputs in the
two eyes become more correlated (p < 1), the activations produced by the two inputs in the
network overlap closely and activity correlations become shorter range. By Hebbian adaptation, lateral inhibition concentrates in the neighborhood of each neuron, and the distance at
which activations becomes anticorrelated decreases. Therefore, smaller monocular patches
develop, and the ocular dominance wavelength decreases. Similar dependence was very recently observed in the cat primary visual cortex (LoweI1994). The LISSOM model demonstrates that the adapting lateral interactions and recurrent activity dynamics regulate the wavelength, and suggests how these processes help the cortex develop feature detectors at a scale
1 For a thorough treatment of the mathematical principles underlying the development of ocular dominance columns, see (Goodhill 1993; Miller et al. 1989; von der Malsburg and Singer 1988).
(a) Strabismic case
(b) Normal case
Figure 4: Ocular dominance wavelength in strabismic and normal models. In the strabismic case,
there are no between-eye correlations (p = 1), and broad ocular dominance columns are produced (a) .
With normal, partial between-eye correlations (p = 0.45 in this example), narrower stripes are formed
(b). As a result, there are more ocular dominance columns in the normal case and the ocular dominance
wavelength is smaller.
that matches the input correlations.
As eye preferences develop, left or right eye input tends to cause activity only in the left or
right ocular dominance patches. Activity patterns in areas of the network with the same ocular dominance tend to be highly correlated because they are caused by the same input spot.
Therefore, the long-range lateral connections between similar eye preference areas become
stronger, and those between opposite areas weaker. After the weak lateral connections are
eliminated, the initially wide-ranging connections are pruned, and eventually only connect
areas of similar ocular dominance as shown in figure 3. Binocular neurons between ocular
dominance patches will see some correlated activity in both the neigbboring areas, and maintain connections to both ocular dominance columns (figure 3b).
The lateral connection patterns shown above closely match observations in the primary visual cortex. Lowel and Singer (1992) observed that when between-eye correlations are abolished in kittens by surgically induced strabismus, long-range lateral connections primarily
link areas of the same ocular dominance. However, binocular neurons, located between ocular dominance columns, retained connections to both eye regions. The receptive field model
confirms that such patterned lateral connections develop based on correlated neuronal activity,
and demonstrates that they can self-organize simultaneously with ocular dominance columns.
The model also predicts that the long-range connections have an inhibitory function.
4 Discussion
In LISSOM, evolving lateral interactions and dynamic activity patterns are explicitly modeled. Therefore, LISSOM has several novel properties that set it apart from other selforganizing models of the cortex.
Previous models (e.g. Goodhill 1993; Miller et al. 1989; Obermayer et al. 1992; von der Malsburg 1973) have concentrated only on forming ordered topographic maps where clusters of
adjacent neurons assume similar response properties such as ocular dominance or orientation
preference. The lateral connections in LISSOM, in addition, adapt to encode correlations be-
tween the responses. 2 This property can be potentially very useful in models of cortical function. While afferent connections learn to detect the significant features in the input space (such
as ocularity or orientation), the lateral connections can learn correlations between these features (such as Gestalt principles), and thereby form a basis for feature grouping.
As an illustration, consider a single spot of light presented to the left eye. The spot causes disjoint activity patterns in the left-eye-dominant patches. How can these multiple activity patterns be recognized as representing the same spatially coherent entity? As proposed by Singer
et al. (1990), the long-range lateral connections between similar ocular dominance columns
could synchronize cortical activity, and form a coherently firing assembly of neurons. The
spatial coherence of the spot will then be represented by temporal coherence of neural activity. LISSOM can be potentially extended to model such feature binding.
Even after the network has self-organized, the lateral and afferent connections remain plastic
and in a continuously-adapting dynamic equilibrium with the input. Therefore, the receptive
field properties of neurons can dynamically readapt when the activity correlations in the network are forced to change. For example, when a small area of the cortex is set inactive (or
lesioned), the sharply-tuned afferent weight profiles of the neurons surrounding that region
expand in size, and neurons begin to respond to the stimuli that previously activated only the
lesioned area (Sirosh and Miikkulainen 1994b, 1994c). This expansion of receptive fields is
reversible, and when the lesion is repaired, neurons return to their original tuning. Similar
changes occur in response to retinal lesions as well. Such dynamic expansions of receptive
fields have been observed in the visual cortex (Pettet and Gilbert 1992). The LISSOM model
demonstrates that such plasticity is a consequence of the same self-organizing mechanisms
that drive the development of cortical maps.
5 Conclusion
The LISSOM model shows how a single local and unsupervised self-organizing process can
be responsible for the development of both afferent and lateral connection structures in the primary visual cortex. It suggests that this same developmental mechanism also encodes higher-order visual information such as feature correlations into the lateral connections. The model
forms a framework for future computational study of cortical reorganization and plasticity, as
well as dynamic perceptual processes such as feature grouping and binding.
Acknowledgments
This research was supported in part by National Science Foundation under grant #IRI9309273. Computer time for the simulations was provided by the Pittsburgh Supercomputing
Center under grants IRI930005P and TRA940029P.
References
Burkhalter, A., Bernardo, K. L., and Charles, V. (1993). Development of local circuits in
human visual cortex. Journal of Neuroscience, 13:1916-1931.
Gilbert, C. D., and Wiesel, T. N. (1989). Columnar specificity of intrinsic horizontal and
corticocortical connections in cat visual cortex. Journal of Neuroscience, 9:2432-2442.
2 The idea was conceived by von der Malsburg and Singer (1988), but not modeled.
Goodhill, G. (1993). Topography and ocular dominance: a model exploring positive correlations. Biological Cybernetics, 69:109-118.
Katz, L. C., and Callaway, E. M. (1992). Development of local circuits in mammalian visual
cortex. Annual Review of Neuroscience, 15:31-56.
Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biolog-
ical Cybernetics, 43:59-69.
Lowel, S. (1994). Ocular dominance column development: Strabismus changes the spacing
of adjacent columns in cat visual cortex. Journal of Neuroscience, 14(12):7451-7468.
Lowel, S., and Singer, W. (1992). Selection of intrinsic horizontal connections in the visual
cortex by correlated neuronal activity. Science, 255:209-212.
Malach, R., Amir, Y., Harel, M., and Grinvald, A (1993). Relationship between intrinsic
connections and functional architecture revealed by optical imaging and in vivo targeted
biocytin injections in the primate striate cortex. Proceedings of the National Academy
of Sciences, USA, 90:10469-10473.
Miller, K. D. (1994). A model for the development of simple cell receptive fields and the
ordered arrangement of orientation columns through activity-dependent competition between on- and off-center inputs. Journal of Neuroscience, 14:409-441.
Miller, K. D., Keller, J. B., and Stryker, M. P. (1989). Ocular dominance column development:
Analysis and simulation. Science, 245:605-615.
Obermayer, K., Blasdel, G. G., and Schulten, K. J. (1992). Statistical-mechanical analysis of
self-organization and pattern formation during the development of visual maps. Physical
Review A, 45:7568-7589.
Pettet, M. W., and Gilbert, C. D. (1992). Dynamic changes in receptive-field size in cat primary visual cortex. Proceedings of the National Academy of Sciences, USA, 89:8366-8370.
Singer, W., Gray, C., Engel, A, Konig, P., Artola, A, and Bracher, S. (1990). Formation of
cortical cell assemblies. In Cold Spring Harbor Symposia on Quantitative Biology, Vol.
LV, 939-952. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory.
Sirosh, J., and Miikkulainen, R. (1993). How lateral interaction develops in a self-organizing
feature map. In Proceedings of the IEEE International Conference on Neural Networks
(San Francisco, CA), 1360-1365. Piscataway, NJ: IEEE.
Sirosh, J., and Miikkulainen, R. (1994a). Cooperative self-organization of afferent and lateral
connections in cortical maps. Biological Cybernetics, 71(1):66--78.
Sirosh, J., and Miikkulainen, R. (1994b). Modeling cortical plasticity based on adapting lateral interaction. In The Neurobiology of Computation: Proceedings of the Annual Computational Neuroscience Meeting. Dordrecht; Boston: Kluwer. In Press.
Sirosh, J., and Miikkulainen, R. (1994c). A neural network model of topographic reorganization following cortical lesions. In Proceedings of the World Congress on Computational
Medicine, Public Health and Biotechnology (Austin, TX). World Scientific. In Press.
von der Malsburg, C. (1973). Self-organization of orientation-sensitive cells in the striate
cortex. Kybernetik, 15:85-100.
von der Malsburg, C., and Singer, W. (1988). Principles of cortical network organization. In
Rakic, P., and Singer, W., editors, Neurobiology of Neocortex, 69-99. New York: Wiley.
18 | 1,014 | Associative Decorrelation Dynamics:
A Theory of Self-Organization and
Optimization in Feedback Networks
Dawei W. Dong*
Lawrence Berkeley Laboratory
University of California
Berkeley, CA 94720
Abstract
This paper outlines a dynamic theory of development and adaptation in neural networks with feedback connections. Given input ensemble, the connections change in strength according to an
associative learning rule and approach a stable state where the
neuronal outputs are decorrelated . We apply this theory to primary visual cortex and examine the implications of the dynamical
decorrelation of the activities of orientation selective cells by the
intracortical connections. The theory gives a unified and quantitative explanation of the psychophysical experiments on orientation
contrast and orientation adaptation. Using only one parameter , we
achieve good agreements between the theoretical predictions and
the experimental data.
1 Introduction
The mammalian visual system is very effective in detecting the orientations of lines
and most neurons in primary visual cortex selectively respond to oriented lines and
form orientation columns [1]. Why is the visual system organized as such? We
*Present address: Rockefeller University, B272, 1230 York Avenue, NY, NY 10021-6399.
believe that the visual system is self-organized, in both long term development and
short term adaptation, to ensure the optimal information processing.
Linsker applied Hebbian learning to model the development of orientation selectivity and later proposed a principle of maximum information preservation in early
visual pathways [2]. The focus of his work has been on the feedforward connections
and in his model the feedback connections are isotropic and unchanged during the
development of orientation columns; but the actual circuitry of visual cortex involves extensive, columnar specified feedback connections which exist even before
functional columns appear in cat striate cortex [3].
Our earlier research emphasized the important role of the feedback connections in
the development of the columnar structure in visual cortex. We developed a theoretical framework to help understand the dynamics of Hebbian learning in feedback networks and showed how the columnar structure originates from symmetry
breaking in the development of the feedback connections (intracortical, or lateral
connections within visual cortex) [4].
Figure 1 illustrates our theoretical predictions. The intracortical connections break
symmetry and develop strip-like patterns with a characteristic wave length which
is comparable to the developed intracortical inhibitory range and the LGN-cortex
afferent range (left). The feedforward (LGN-cortex) connections develop under the
influence of the symmetry breaking development of the intracortical connections.
The developed feedforward connections for each cell form a receptive field which
is orientation selective and nearby cells have similar orientation preference (right) .
Their orientations change in about the same period as the strip-like pattern of the
intracortical connections.
Figure 1: The results of the development of visual cortex with feedback connections. The
simulated cortex consists of 48 X 48 neurons, each of which connects to 5 X 5 other cortical
neurons (left) and receives inputs from 7 X 7 LGN neurons (right). In this figure, white
indicates positive connections and black indicates negative connections. One can see that
the change of receptive field's orientation (right) is highly correlated with the strip-like
pattern of intracortical connections (left).
Many aspects of our theoretical predictions agree qualitatively with neurobiological observations in primary visual cortex. Another way to test the idea of optimal
information processing or any self-organization theory is through quantitative psychophysical studies. The idea is to look for changes in perception following changes
in input environments. The psychophysical experiments on orientation illusions
offer some opportunities to test our theory on orientation selectivity.
Orientation illusions are the effects that the perceived orientations of lines are affected by the neighboring (in time or space) oriented stimuli, which have been
observed in many psychophysical experiments and were attributed to the inhibitory
interactions between channels tuned to different orientations [5]. But there is no unified and quantitative explanation. Neurophysiological evidence supports our earlier
computational model in which intracortical inhibition plays the role of gain-control
in orientation selectivity [6]. But in order for the gain-control mechanism to be
effective to signals of different statistics, the system has to develop and adapt in
different environments.
In this paper we examine the implication of the hypothesis that the intracortical
connections dynamically decorrelate the activities of orientation selective cells, i.e.,
the intracortical connections are actively adapted to the visual environment, such
that the output activities of orientation selective cells are decorrelated. The dynamics which ensures such decorrelation through associative learning is outlined in the
next section as the theoretical framework for the development and the adaptation
of intracortical connections. We only emphasize the feedback connections in the
following sections and assume that the feedforward connections developed orientation selectivities based on our earlier works. The quantitative comparisons of the
theory and the experiments are presented in section 3.
2 Associative Decorrelation Dynamics
There are two different kinds of variables in neural networks. One class of variables
represents the activity of the nerve cells, or neurons. The other class of variables
describes the synapses, or connections, between the nerve cells. A complete model
of an adaptive neural system requires two sets of dynamical equations, one for each
class of variables, to specify the evolution and behavior of the neural system.
The set of equations describing the change of the state of activity of the neurons is
$$ a\,\frac{dV_i}{dt} = -V_i + \sum_j T_{ij} V_j + I_i \qquad (1) $$
in which a is a time constant, Tij is the strength of the synaptic connection from
neuron j to neuron i, and Ii is the additional feedforward input to the neuron besides
those described by the feedback connection matrix $T_{ij}$. A second set of equations
describes the way the synapses change with time due to neuronal activity. The
learning rule proposed here is
$$ B\,\frac{dT_{ij}}{dt} = (V_i - V_i')\,I_j \qquad (2) $$
in which B is a time constant and $V_i'$ is the feedback learning signal as described in the following.
The feedback learning signal $V_i'$ is generated by a Hopfield type associative memory network: $V_i' = \sum_j T'_{ij} V_j$, in which $T'_{ij}$ is the strength of the associative connection
from neuron j to neuron i, which is the recent correlation between the neuronal
activities Vi and Vj determined by Hebbian learning with a decay term [4]
$$ B'\,\frac{dT'_{ij}}{dt} = -T'_{ij} + V_i V_j \qquad (3) $$
in which B' is a time constant. The $V_i'$ and $T'_{ij}$ are only involved in learning and
do not directly affect the network outputs.
It is straightforward to show that when the time constants $B \gg B' \gg a$, the dynamics reduces to
$$ B\,\frac{d\mathbf{T}}{dt} = (1 - \langle \mathbf{V}\mathbf{V}^T\rangle)\,\langle \mathbf{V}\mathbf{I}^T\rangle \qquad (4) $$
where bold-faced quantities are matrices and vectors and <> denotes ensemble
average. It is not difficult to show that this equation has a Lyapunov or "energy"
function
$$ L = \mathrm{Tr}\!\left[(1 - \langle \mathbf{V}\mathbf{V}^T\rangle)(1 - \langle \mathbf{V}\mathbf{V}^T\rangle)^T\right] \qquad (5) $$
which is lower bounded and satisfies
$$ \frac{dL}{dt} \le 0 \quad\text{and}\quad \frac{dL}{dt} = 0 \;\Leftrightarrow\; \frac{dT_{ij}}{dt} = 0 \ \text{for all } i, j \qquad (6) $$
Thus the dynamics is stable. When it is stable, the output activities are decorrelated,
$$ \langle \mathbf{V}\mathbf{V}^T\rangle = 1 \qquad (7) $$
The above equation shows that this dynamics always leads to a stable state where
the neuronal activities are decorrelated and their correlation matrix is orthonormal.
Yet the connections change in an associative fashion - equations (2) and (3) are almost Hebbian. That is why we call it associative decorrelation dynamics. From an information processing point of view, a network, self-organized to satisfy equation (7),
is optimized for Gaussian input ensembles and white output noises [7].
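A minimal numerical sketch of this averaged dynamics is given below, assuming the fast neuron dynamics of equation (1) is solved at its steady state $V = (1 - T)^{-1} I$ for each input batch; the network size, learning rate and the correlated Gaussian input ensemble are illustrative choices, not values from the paper. It shows the output correlation matrix approaching the identity, as in equation (7).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                              # number of neurons (illustrative)
C = 0.5 * np.ones((n, n)) + 0.5 * np.eye(n)        # correlated Gaussian input ensemble
Lc = np.linalg.cholesky(C)
T = np.zeros((n, n))                               # feedback connections, start at zero
eta = 0.05                                         # step size, plays the role of dt/B

for step in range(2000):
    I = Lc @ rng.standard_normal((n, 256))         # batch of input vectors
    V = np.linalg.solve(np.eye(n) - T, I)          # steady state of eq. (1): V = T V + I
    VVt = V @ V.T / I.shape[1]                     # ensemble average <V V^T>
    VIt = V @ I.T / I.shape[1]                     # ensemble average <V I^T>
    T += eta * (np.eye(n) - VVt) @ VIt             # eq. (4)

I = Lc @ rng.standard_normal((n, 20000))
V = np.linalg.solve(np.eye(n) - T, I)
print(np.round(V @ V.T / I.shape[1], 2))           # close to the identity, eq. (7)
```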
Linear First Order Analysis
In applying our theory of associative decorrelation dynamics to visual cortex to
compare with the psychophysical experiments on orientation illusions, the linear
first-order approximation is used, which is
$$ \mathbf{T} = \mathbf{T}^0 + \delta\mathbf{T}, \qquad \mathbf{V} = \mathbf{V}^0 + \delta\mathbf{V}, \qquad \mathbf{T}^0 = 0, \quad \delta\mathbf{T} \propto -\langle \mathbf{I}\mathbf{I}^T\rangle, \quad \mathbf{V}^0 = \mathbf{I}, \quad \delta\mathbf{V} = \mathbf{T}\mathbf{I} \qquad (8) $$
where it is assumed that the input correlations are small. It is interesting to notice that the linear first-order approximation leads to anti-Hebbian feedback connections: $T_{ij} \propto -\langle I_i I_j \rangle$, which is guaranteed to be stable around $\mathbf{T} = 0$ [8].
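The sketch below illustrates the first-order approximation of equation (8) on synthetic data; the input covariance and the proportionality constant are assumptions chosen only to make the effect visible. The anti-Hebbian feedback reduces the off-diagonal correlations of the outputs relative to the inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, samples = 4, 10000
C = 0.8 * np.eye(n) + 0.2 * np.ones((n, n))     # weakly correlated inputs (assumed)
I = np.linalg.cholesky(C) @ rng.standard_normal((n, samples))

eps = 0.2                                        # small proportionality constant (assumed)
T = -eps * (I @ I.T / samples)                   # anti-Hebbian feedback, the delta-T of eq. (8)
V = I + T @ I                                    # first-order output, V = V0 + delta-V

off_diag = lambda M: np.abs(M - np.diag(np.diag(M))).mean()
print(off_diag(np.corrcoef(I)), off_diag(np.corrcoef(V)))   # output correlations shrink
```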
3 Quantitative Predictions of Orientation Illusions
The basic phenomena of orientation illusions are demonstrated in figure 2 (left).
On the top, is the effect of orientation contrast (also called tilt illusion): within the
two surrounding circles there are tilted lines; the orientation of a center rectangle
appears rotated to the opposite side of its surrounding tilt. The two rectangles and the one without surround (at the left-center of this figure) are, in fact, exactly the same. On the bottom is the effect of orientation adaptation (also called
tilt aftereffect): if one fixates at the small circle in one of the two big circles with
tilted lines for 20 seconds or so and then look at the rectangle without surround,
the orientation of the lines of the rectangle appears tilted to the opposite side.
These two effects of orientation illusions are both in the direction of repulsion: the
apparent orientation of a line is changed to increase its difference from the inducing
line. Careful experimental measurements also revealed that the angle with the
inducing line is ~10° for the maximum orientation adaptation effect [9] but ~20° for orientation contrast [10].
[Tuning-curve panel: horizontal axis is stimulus orientation θ (degrees), running from -90 to 90.]
Figure 2: The effects of orientation contrast (upper-left) and orientation adaptation (lower-left) are attributed to feedback connections between cells tuned to different orientations (upper-right, network; lower-right, tuning curve).
Orientation illusions are attributed to the feedback connections between orientation selective cells. This is illustrated in figure 2 (right). On the top is the network
of orientation selective cells with feedback connections. Only four cells are shown.
From the left, they receive orientation selective feedforward inputs optimal at -45°, 0°, 45°, and 90°, respectively. The dotted lines represent the feedback connections (only the connections from the second cell are drawn). On the bottom is the orientation tuning curve of the feedforward input for the second cell, optimally tuned to a stimulus of 0° (vertical), which is assumed to be Gaussian of width σ = 20°. Because of the feedback connections, the output of the second cell will have different
tuning curves from its feedforward input, depending on the activities of other cells.
For primary visual cortex, we suppose that there are orientation selective neurons
tuned to all orientations. It is more convenient to use the continuous variable θ instead of the index i to represent the neuron which is optimally tuned to the orientation of angle θ. The neuronal activity is represented by V(θ) and the feedforward input to each neuron is represented by I(θ). The feedforward input itself is orientation
selective: given a visual stimulus of orientation $\theta_0$, the input is
$$ I(\theta) = e^{-(\theta-\theta_0)^2/\sigma^2} \qquad (9) $$
This kind of orientation tuning has been measured by experiments (for references, see [6]). Various experiments give a reasonable tuning width around 20° (σ = 20° is used for all the predictions).
Predicted Orientation Adaptation
For the orientation adaptation to a stimulus of angle $\theta_0$, substituting equation (9) into equation (8), it is not difficult to derive that the network response to a stimulus of angle 0 (vertical) is changed to
$$ V(\theta) = e^{-\theta^2/\sigma^2} - a\,e^{-(\theta-\theta_0)^2/\sigma^2}\,e^{-\theta_0^2/2\sigma^2} \qquad (10) $$
in which σ is the feedforward tuning width chosen to be 20° and a is the parameter of the strength of decorrelation feedback.
The theoretical curve of perceived orientation $\phi(\theta_0)$ is derived by assuming maximum likelihood of the neural population, i.e., the perceived angle $\phi$ is the angle at which $V(\theta)$ is maximized. It is shown in figure 3 (right). The solid line is the theoretical curve and the experimental data come from [9] (they did not give the errors; the error bars are our estimate, ~0.2°). The parameter obtained through the $\chi^2$ fit is the strength of decorrelation feedback: a = 0.42.
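A short numerical sketch of this readout is given below, using the reconstructed equation (10) with σ = 20° and a = 0.42; the grid resolution is an implementation choice. Negative outputs mean the vertical test appears rotated away from the adapting angle, the repulsion described in the text.

```python
import numpy as np

sigma, a = 20.0, 0.42                      # tuning width and fitted feedback strength
theta = np.linspace(-90.0, 90.0, 18001)    # orientation grid in degrees

def perceived_after_adaptation(theta0):
    # Equation (10): population response to a vertical (0 deg) test after adapting at theta0
    V = (np.exp(-theta**2 / sigma**2)
         - a * np.exp(-(theta - theta0)**2 / sigma**2) * np.exp(-theta0**2 / (2 * sigma**2)))
    return theta[np.argmax(V)]             # maximum-likelihood readout: angle of peak activity

for theta0 in (5, 10, 15, 20, 30):
    print(theta0, round(float(perceived_after_adaptation(theta0)), 2))
```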
[Figure 3 plots: perceived angle (degrees) as a function of surround angle θ0 (degrees, 0-50) for orientation contrast (left) and of adaptation angle θ0 (degrees, 0-50) for orientation adaptation (right).]
Figure 3: Quantitative comparison of the theoretical predictions with the experimental
data of orientation contrast (left) and orientation adaptation (right).
It is very interesting that we can derive a relationship which is independent of the
parameter of the strength of decorrelation feedback a,
$$ (\theta_0 - \phi_m)(3\theta_0 - 2\phi_m) = \sigma^2 \qquad (11) $$
in which $\theta_0$ is the adaptation angle at which the tilt aftereffect is most significant and $\phi_m$ is the perceived angle.
Predicted Orientation Contrast
For orientation contrast, there is no specific adaptation angle, i.e., the network has
developed in an environment of all possible angles. In this case, when the surround
is of angle $\theta_0$, the network response to a stimulus of angle $\theta_1$ is
$$ V(\theta) = e^{-(\theta-\theta_1)^2/\sigma^2} - a\,e^{-(\theta-\theta_0)^2/3\sigma^2} \qquad (12) $$
in which σ and a have the same meaning as for orientation adaptation. Again assuming maximum likelihood, $\phi(\theta_0)$, the stimulus angle $\theta_1$ at which it is perceived as angle 0, is derived and shown in figure 3 (left). The solid line is the theoretical curve and the experimental data come from [10]; their estimated error is ~0.2°. The parameter obtained through the $\chi^2$ fit is the strength of decorrelation feedback: a = 0.32.
We can derive the peak position $\theta_0$, i.e., the surrounding angle at which the orientation contrast is most significant,
$$ \theta_0^2 = \tfrac{3}{2}\,\sigma^2 \qquad (13) $$
For σ = 20°, one immediately gets $\theta_0 \approx 24°$. This is in good agreement with
experiments, most people experience the maximum effect of orientation contrast
around this angle.
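The rough numerical check below evaluates the reconstructed equation (12) for a vertical test inside surrounds of different angles and reports the surround angle giving the largest perceived shift; the grid and the use of a = 0.32 are implementation assumptions. Because equation (13) is a first-order result and the curve is very flat near its peak, the printed value can differ by a degree or two from $\sqrt{3/2}\,\sigma \approx 24.5°$.

```python
import numpy as np

sigma, a = 20.0, 0.32                      # tuning width and fitted feedback strength
theta = np.linspace(-90.0, 90.0, 18001)

def perceived_shift(theta0):
    # Equation (12): vertical test (theta1 = 0) presented inside a surround of angle theta0
    V = np.exp(-theta**2 / sigma**2) - a * np.exp(-(theta - theta0)**2 / (3 * sigma**2))
    return theta[np.argmax(V)]

surrounds = np.arange(1, 51)
shifts = np.array([perceived_shift(t0) for t0 in surrounds])
print(surrounds[np.argmax(np.abs(shifts))])   # peak surround angle, in the low-to-mid twenties
```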
Our theory predicts that the peak position of the surround angle for orientation contrast should be constant, since the orientation tuning width σ is roughly the same for different human observers and is not going to change much for different experimental setups. But the peak value of the perceived angle is not constant, since the decorrelation feedback parameter a is not necessarily the same; indeed, it could be quite different for different human observers and different experimental setups.
4 Discussion
First, we want to emphasize that in all the comparisons the same tuning width σ is used and the strength of decorrelation feedback a is the only fit parameter. It does not take much imagination to see that the quantitative agreements between the theory and the experiments are good. Furthermore, we derived the relationships
for the maximum effects, which are independent of the parameter a and have been
partially confirmed by the experiments.
Recent neurophysiological experiments revealed that the surrounding lines did influence the orientation selectivity of cells in primary visual cortex of the cat [11].
Those single cell experiments lend further support to our theory. But one should
be cautioned that the cells in our theory should be considered as the average over
a large population of cells in cortex.
The theory not only explains the first order effects, which are dominant in the angle range of 0° to 50° as shown here, but also accounts for the second order effects which can be seen in the 50° to 90° range, where the sign of the effects is reversed.
The theory also makes some predictions for which not much experiment has been
done yet, for example, the prediction about how orientation contrast depends on
the distance of surrounding stimuli from the test stimulus [7].
Finally, this is not merely a theory for the development and the adaptation of
orientation selective cells; it can account for effects such as human vision adaptation to colors as well [7]. We can derive the same equation as Atick et al. [12], which agrees
with the experiment on the appearance of color hue after adaptation. We believe
that future psychophysical experiments could give us more quantitative results to
further test our theory and help our understanding of neural systems in general.
Acknowledgements
This work was supported in part by the Director, Office of Energy Research, Division of Nuclear Physics of the Office of High Energy and Nuclear Physics of the
U.S. Department of Energy under Contract No. DE-AC03-76SF00098.
References
[1] Hubel DH, Wiesel TN, 1962 Receptive fields, binocular interactions, and functional
architecture in the cat's visual cortex J Physiol (London) 160, 106- 54. - 1963
Shape and arrangement of columns in cat's striate cortex J Physiol (London) 165,
559-68.
[2] Linsker R, 1986 From basic network principles to neural architecture ... Proc Natl
Acad Sci USA 83, 7508 8390 8779. - , 1989 An application of the principle of maximum information preservation to linear systems Advances in Neural Information
Processing Systems 1, Touretzky DS, ed, Morgan Kaufman, San Mateo, CA 186-94.
[3] Gilbert C, Wiesel T, 1989 Columnar Specificity of intrinsic horizontal and corticocortical connections in cat visual cortex J Neurosci 9(7), 2432-42. Luhmann HJ,
Martinez L, Singer W, 1986 Development of horizontal intrinsic connections in cat
striate cortex Exp Brain Res 63, 443-8.
[4] Dong DW, 1991 Dynamic properties of neural network with adapting synapses Proc
International Joint Conference on Neural Networks, Seattle, 2, 255- 260. - , 1991
Dynamic Properties of Neural Networks Ph D thesis, University Microfilms International, Ann Arbor, ML Dong DW, Hopfield JJ, 1992 Dynamic properties of neural
networks with adapting synapses Network: Computation in Neural Systems, 3(3),
267- 83.
[5] Gibson J J, Radner M, 1937 Adaptation, after-effect and contrast in the perception
of tilted lines J of Exp Psy 20, 453-67. Carpenter RHS, Blakemore C, 1973 Interactions between orientations in human vision Exp Brain Res 18, 287-303. Tolhurst
DJ, Thompson PG, 1975 Orientation illusions and after-effects: Inhibition between
channels Vis Res 15,967-72. Barlow HB, Foldiak P, 1989 Adaptation and decorrelation in the cortex The Computing Neuron, Durbin R, Miall C, Mitchison G, eds,
Addison- Wesley, New York, NY.
[6] Wehmeier U, Dong DW, Koch C, Van Essen DC, 1989 Modeling the mammalian
visual system Methods in Neuronal Modeling: From Synapses to Networks, Koch C,
Segev I, eds, MIT Press, Cambridge, MA 335-60.
[7] Dong DW, 1993 Associative Decorrelation Dynamics in Visual Cortex Lawrence
Berkeley Laboratory Technical Report LBL-34491.
[8] Dong DW, 1993 Anti-Hebbian dynamics and total recall of associative memory Proc
World Congress on Neural Networks, Portland, 2, 275-9.
[9] Campbell FW, Maffei L, 1971 The tilt after-effect: a fresh look Vis Res 11, 833-40.
[10] Westheimer G, 1990 Simultaneous orientation contrast for lines in the human fovea
Vis Res 30, 1913-21.
[11] Gilbert CD, Wiesel TN, 1990 The influence of contextual stimuli on the orientation
selectivity of cells in primary visual cortex of the cat Vis Res 30,1689-701.
[12] Atick JJ, Li Z, Redlich AN, 1993 What does post-adaptation color appearance reveal
about cortical color representation Vis Res 33, 123-9.
19 | 1,015 | A Connectionist Technique for Accelerated
Textual Input: Letting a Network Do the Typing
Dean A. Pomerleau
pomerlea@cs.cmu.edu
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
Each year people spend a huge amount of time typing. The text people type
typically contains a tremendous amount of redundancy due to predictable
word usage patterns and the text's structure. This paper describes a
neural network system called AutoTypist that monitors a person's typing and
predicts what will be entered next. AutoTypist displays the most likely
subsequent word to the typist, who can accept it with a single keystroke,
instead of typing it in its entirety. The multi-layer perceptron at the heart
of AutoTypist adapts its predictions of likely subsequent text to the user's
word usage pattern, and to the characteristics of the text currently being
typed. Increases in typing speed of 2-3% when typing English prose and
10-20% when typing C code have been demonstrated using the system,
suggesting a potential time savings of more than 20 hours per user per year.
In addition to increasing typing speed, AutoTypist reduces the number of
keystrokes a user must type by a similar amount (2-3% for English, 10-20% for computer programs). This keystroke savings has the potential to
significantly reduce the frequency and severity of repeated stress injuries
caused by typing, which are the most common injury suffered in today's
office environment.
1 Introduction
People in general, and computer professionals in particular, spend a huge amount of time
typing. Most of this typing is done sitting in front of a computer display using a keyboard as
the primary input device. There are a number of efforts using artificial neural networks and
other techniques to improve the comfort and efficiency of human-computer communication
using alternative modalities. Speech recognition [Waibel et al., 1988], handwritten character
recognition [LeCun et al., 1989], and even gaze tracking [Baluja & Pomerleau, 1993] have
1040
Dean Pomerleau
the potential to facilitate this communication. But these technologies are still in their infancy,
and at this point cannot approach the speed and accuracy of even a moderately skilled typist
for textual input.
Is there some way to improve the efficiency of standard keyboard-based human-computer
communication? The answer is yes, there are several ways to make typing more efficient.
The first, called the Dvorak keyboard, has been around for over 60 years. The Dvorak
keyboard has a different arrangement of keys, in which the most common letters, E, T, S,
etc., are on the home row right under the typist's fingers. This improved layout requires the
typist's fingers to travel 1/16th as far, resulting in an average 20% increase in typing speed.
Unfortunately, the de facto standard in keyboards is the inefficient QWERTY configuration,
and people are reluctant to learn a new layout.
This paper describes another approach to improving typing efficiency, which can be used
with either the QWERTY or DVORAK keyboards. It takes advantage of the hundreds of
thousands of computer cycles between the typist's keystrokes which are typically wasted
while the computer idly waits for additional input. By spending those cycles trying to predict
what the user will type next, and allowing the typist to accept the prediction with a single
keystroke, substantial time and effort can be saved over typing the entire text manually.
There are actually several such systems available today, including a package called "Autocompletion" developed for gnu-emacs by the author, and an application called "Magic
Typist" developed for the Apple Macintosh by Olduvai Software. Each of these maintains
a database of previously typed words, and suggests completions for the word the user is
currently in the middle of typing, which can be accepted with a single keystroke. While reasonably useful, both have substantial drawbacks. These systems use a very naive technique
for calculating the best completion, simply the one that was typed most recently. In fact,
experiments conducted for this paper indicated that this "most recently used" heuristic is
correct only about 40% of the time. In addition, these two systems are annoyingly verbose,
always suggesting a completion if a word has been typed previously which matches the
prefix typed so far. They interrupt the user's typing to suggest a completion even if the
word they suggest hasn't been typed in many days, and there are many other alternative
completions for the prefix, making it unlikely that the suggestion will be correct. These
drawbacks are so severe that these systems frequently decrease the user's typing speed,
rather than increase it.
The AutoTypist system described in this paper employs an artificial neural network during the
spare cycles between keystrokes to make more intelligent decisions about which completions
to display, and when to display them.
2 The Prediction Task
To operationalize the goal of making more intelligent decisions about which completions
to display, we have defined the neural network's task to be the following: Given a list of
candidate completions for the word currently being typed, estimate the likelihood that the
user is actually typing each of them. For example, if the user has already typed the prefix
"aut", the word he is trying to typing could anyone of a large number of possibilities,
including "autonomous", "automatic", "automobile" etc. Given a list of these possibilities
taken from a dictionary, the neural network's task is to estimate the probability that each of
these is the word the user will type.
A neural network cannot be expected to accurately estimate the probability for a particular
completion based on a unique representation for each word, since there are so many words
ATTRIBUTE                 DESCRIPTION
absolute age              time since word was last typed
relative age              ratio of the word's age to the age of the most recently typed alternative
absolute frequency        number of times word has been typed in the past
relative frequency        ratio of the word's frequency to that of the most often typed alternative
typed previous            1 if user has typed word previously, 0 otherwise
total length              the word's length, in characters
remaining length          the number of characters left after the prefix to be typed for this word
special character match   the percentage of "special characters" (i.e. not a-z) in this word relative to the percentage of special characters typed recently
capitalization match      1 if the capitalization of the prefix the user has already typed matches the word's usual capitalization, 0 otherwise
Table 1: Word attributes used as input to the neural network for predicting word probabilities.
in the English language, and there is only very sparse data available to characterize an
individual's usage pattern for any single word. Instead, we have chosen to use an input
representation that contains only those characteristics of a word that could conceivably have
an impact on its probability of being typed. The attributes we employed to characterize each
completion are listed in Table 1.
These are not the only possible attributes that could be used to estimate the probability of
the user typing a particular word. An additional characteristic that could be helpful is the
word's part of speech (i.e. noun, verb, adjective, etc.). However this attribute is not typically
available or even meaningful in many typing situations, for instance when typing computer
programs. Also, to effectively exploit information regarding a word's part of speech would
require the network to have knowledge about the context of the current text. In effect, it
would require at least an approximate parse tree of the current sentence. While there are
techniques, including connectionist methods [Jain, 1991], for generating parse trees, they
are prone to errors and computationally expensive. Since word probability predictions in
our system must occur many times between each key the user types, we have chosen to
utilize only the easy to compute attributes shown in Table 1 to characterize each completion.
3 Network Processing
The network architecture employed for this system is a feedforward multi-layer perceptron.
Each of the networks investigated has nine input units, one for each of the attributes listed
in Table 1, and a single output unit. As the user is typing a word, the prefix he has typed so
far is used to find candidate completions from a dictionary, which contains 20,000 English
words plus all words the user has typed previously. For each of these candidate completions,
the nine attributes in Table 1 are calculated, and scaled to the range of 0.0 to 1.0. These
values become the activations of the nine units in the input layer. Activation is propagated
through the network to produce an activation for the single output unit, representing the
1042
Dean Pomerleau
probability that this particular candidate completion is the one the user is actually typing.
These candidate probabilities are then used to determine which (if any) of the candidates
should be displayed to the typist, using a technique described in a later section.
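A minimal sketch of this scoring pass is shown below. The paper does not specify activation functions or weight values, so sigmoid units, random placeholder weights, and placeholder feature vectors are assumptions; in practice the nine inputs would be the scaled Table 1 attributes of each candidate completion.

```python
import numpy as np

def mlp_probability(features, W1, b1, W2, b2):
    """Score one candidate completion from its 9 attribute values (scaled to 0..1)."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ features + b1)))   # hidden layer, sigmoid units (assumed)
    out = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))        # single output unit
    return float(out)                                  # used as P(completion is correct)

rng = np.random.default_rng(0)
n_in, n_hidden = 9, 6                                  # 9 attributes from Table 1, 6 hidden units
W1, b1 = rng.normal(0, 0.1, (n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 0.1, n_hidden), 0.0

candidates = {"autonomous": rng.random(9), "automatic": rng.random(9)}  # placeholder features
scores = {w: mlp_probability(f, W1, b1, W2, b2) for w, f in candidates.items()}
print(max(scores, key=scores.get), scores)
```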
To train the network, the user's typing is again monitored. After the user finishes typing a
word, for each prefix of the word a list of candidate completions, and their corresponding
attributes, is calculated. These form the input training patterns. The target activation for
the single output unit on a pattern is set to 1.0 if the candidate completion represented by
that pattern is the word the user was actually typing, and 0.0 if the candidate is incorrect.
Note that the target output activation is binary. As will be seen below, the actual output the
network learns to produce is an accurate estimate of the completion's probability. Currently,
training of the network is conducted off-line, using a fixed training set collected while a
user types normally. Training is performed using the standard backpropagation learning
algorithm.
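The sketch below illustrates how the training patterns described above could be assembled from one finished word; the dictionary, the attribute function, and its dummy feature values are hypothetical stand-ins, not the paper's actual implementation.

```python
def make_training_patterns(typed_word, dictionary, attribute_fn):
    """For each prefix of a finished word, pair every candidate completion's
    attribute vector with a binary target: 1 if it is the word actually typed."""
    patterns = []
    for i in range(1, len(typed_word)):
        prefix = typed_word[:i]
        for candidate in (w for w in dictionary if w.startswith(prefix)):
            features = attribute_fn(candidate, prefix)   # the 9 Table 1 attributes
            target = 1.0 if candidate == typed_word else 0.0
            patterns.append((features, target))
    return patterns

dictionary = ["autonomous", "automatic", "autumn", "auto"]
dummy_attrs = lambda word, prefix: [len(word), len(word) - len(prefix)] + [0.0] * 7
print(len(make_training_patterns("automatic", dictionary, dummy_attrs)))
```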
4 Experiments
Several tests were conducted to determine the ability of multi-layer perceptrons to perform
the mapping from completion attributes to completion probability. In each of the tests,
networks were trained on a set of input/output exemplars collected over one week of a single
subject's typing. During the training data collection phase, the subject's primary text editing
activities involved writing technical papers and composing email, so the training patterns
represent the word choice and frequency distributions associated with these activities. This
training set contained of 14,302 patterns of the form described above.
The first experiment was designed to determine the most appropriate network architecture
for the prediction task. Four architectures were trained on a 10,000 pattern subset of the
training data, and the remaining 4,302 patterns were used for cross validation. The first of
the four architectures was a perceptron, with the input units connected directly to the single
output unit. The remaining three architectures had a single hidden layer, with three, six
or twelve hidden units. The networks with hidden units were fully connected without skip
connections from inputs to output. Networks of three and six hidden units which included
skip connections were tested, but did not exhibit improved performance over the networks
without skip connections, so they are not reported.
Each of the network architectures were trained four times, with different initial random
weights. The results reported are those produced by the best set of weights from these
trials. Note that the variations between trials with a single architecture were small relative
to the variations between architectures. The trained networks were tested on a disjoint set
of 10,040 patterns collected while the same subject was typing another technical paper.
Three different performance metrics were employed to evaluate the performance of these
architectures on the test set. The first was the standard mean squared error (MSE) metric,
depicted in Figure 1. The MSE results indicate that the architectures with six and twelve
hidden units were better able to learn the task than either the perceptron, or the network with
only three hidden units. However the difference appears to be relatively small, on the order
of about 10%.
MSE is not a very informative error metric, since the target output is binary (1 if the
completion is the one the user was typing, 0 otherwise), but the real goal is to predict
the probability that the completion is correct. A more useful measure of performance is
shown in Figure 2. For each of the four architectures, it depicts the predicted probability
that a completion is correct, as measured by the network's output activation value, vs. the
[Figure 1 bar chart: mean squared error (roughly 0.070-0.095) for the Perceptron and the networks with 3, 6, and 12 hidden units.]
Figure 1: Mean squared error for four networks on the task of predicting completion
probability.
actual probability that a completion is correct. The lines for each of the four networks
were generated in the following manner. The network's output response on each of the
10,040 test patterns was used to group the test patterns into 10 categories. All the patterns
which represented completions that the network predicted to have a probability of between
0 and 10% of being correct (output activations of 0.0-0.1) were placed in one category.
Completions that the network predicted to have a 10-20% chance of being right were placed
in the second category, etc. For each of these 10 categories, the actual likelihood that
a completion classified within the category is correct was calculated by determining the
percent of the completions within that category that were actually correct.
As a concrete example, the network with 6 hidden units produced an output activation
between 0.2 and 0.3 on 861 of the 10,040 test patterns, indicating that on these patterns
it considered there to be a 20-30% chance that the completion each pattern represented
was the word the user was typing. On 209 of these 861 patterns in this category, the
completion was actually the one the user was typing, for a probability of 24.2%. Ideally, the
actual probability should be 25%, half way between the minimum and maximum predicted
probability thresholds for this category. This ideal classification performance is depicted as
the solid 45° line labeled "Target" in Figure 2. The closer the line for a given network matches
this 45° line, the more the network's predicted probability matches the actual probability
for a completion. Again, the networks with six and twelve hidden units outperformed the
networks with zero and three hidden units, as illustrated by their much smaller deviations
from the 45° line in Figure 2.
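A small sketch of the binning procedure just described is given below; the predictions and outcomes are synthetic stand-ins (generated to be perfectly calibrated), used only to show how the per-decile actual accuracy in Figure 2 would be computed from the 10,040 test patterns.

```python
import numpy as np

def calibration_by_decile(predicted, correct):
    """Group patterns by predicted probability (0-0.1, 0.1-0.2, ...) and report
    the count and the fraction actually correct in each bin."""
    predicted, correct = np.asarray(predicted), np.asarray(correct)
    rows = []
    for lo in np.arange(0.0, 1.0, 0.1):
        mask = (predicted >= lo) & (predicted < lo + 0.1)
        if mask.any():
            rows.append((lo, lo + 0.1, int(mask.sum()), float(correct[mask].mean())))
    return rows   # e.g. (0.2, 0.3, 861, 0.242) for the 6-hidden-unit example in the text

rng = np.random.default_rng(0)
p = rng.random(10040)                       # stand-in predicted probabilities
y = (rng.random(10040) < p).astype(float)   # stand-in outcomes
for lo, hi, n, acc in calibration_by_decile(p, y):
    print(f"{lo:.1f}-{hi:.1f}: n={n:5d} actual={acc:.2f}")
```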
The output activations produced by the networks with six and twelve hidden units reflect
the actual probability that the completion is correct quite accurately. However prediction
accuracy is only half of what is required to perform the final system goal, which recall was
to identify as many high probability completions as possible, so they can be suggested to
the user without requiring him to manually type them. If overall accuracy of the probability
predictions were the only requirement, a network could score quite highly by classifying
[Figure 2 plot: actual probability of a completion being correct (vertical axis, 0.0-1.0) vs. the network's predicted probability, with curves for the Perceptron and the 3, 6, and 12 hidden unit networks and the 45° Target line.]
Figure 2: Predicted vs. actual probability of a completion being correct for the four architectures tested.
every pattern into the 10-20% category, since about 15% of the 10,040 completions in the
test set represent the word the user was typing at the time. But a constant prediction of
10-20% probability on every alternative completion would not allow the system to identify
and suggest to the user those individual completions that are much more likely than the other
alternatives.
To achieve the overall system goal, the network must be able to accurately identify as many
high probability completions as possible. The ability of each of the four networks to achieve
this goal is shown in Figure 3. This figure shows the percent of the 10,040 test patterns each
of the four networks classified as having more than a 60% probability of being correct. The
60% probability threshold was selected because it represents a level of support for a single
completion that is significantly higher than the support for all the others. As can be seen in
Figure 3, the networks with hidden units again significantly outperformed the perceptron,
which was able to correctly identify fewer than half as many completions as highly likely.
5 AutoTypist System Architecture and Performance
The networks with six and twelve hidden units are able to accurately identify individual
completions that have a high probability of being the word the user is typing. In order
to exploit this prediction ability and speed up typing, we have built an X-window based
application called AutoTypist around the smaller of the two networks. The application
serves as the front end for the network, monitoring the user's typing and identifying likely
completions for the current word between each keystroke. If the network at the core of
AutoTypist identifies a single completion that is both significantly more probable than all
the rest, and also longer than a couple characters, it will momentarily display the completion
after the current cursor location in whatever application the user is currently typing.¹ If the
displayed completion is the word the user is typing, he can accept it with a single keystroke
(¹ The criterion for displaying a completion, and the human interface for AutoTypist, are somewhat more sophisticated than this description. However, for the purposes of this paper, a high level description is sufficient.)
[Figure 3 bar chart: percent of patterns classified as over 60% probable (roughly 1-6%) for the Perceptron and the networks with 3, 6, and 12 hidden units.]
Figure 3: Percent of candidate completions classified as having more than a 60% chance of
being correct for the four architectures tested.
and move on to typing the next word. If the displayed completion is incorrect, he can
continue typing and the completion will disappear.
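A possible form of that display decision is sketched below. The paper only says the best candidate must be significantly more probable than the rest and longer than a couple of characters, so the specific thresholds, the margin test, and the function name here are assumptions.

```python
def should_display(prefix, scores, min_probability=0.6, margin=2.0, min_remaining=3):
    """Return the completion to suggest, or None. `scores` maps candidate words to
    the network's estimated probability that each is the word being typed."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_word, best_p = ranked[0]
    runner_up_p = ranked[1][1] if len(ranked) > 1 else 0.0
    if (best_p >= min_probability                      # clearly likely on its own
            and best_p >= margin * runner_up_p         # clearly better than the runner-up
            and len(best_word) - len(prefix) >= min_remaining):  # worth the keystroke
        return best_word
    return None

print(should_display("aut", {"automatic": 0.7, "autonomous": 0.2, "autumn": 0.05}))
```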
Quantitative results with the fully integrated AutoTypist system, while still preliminary, are
very encouraging. In a two week trial with two subjects, who could type at 40 and 60 wpm
without AutoTypist, their typing speeds were improved by 2.37% and 2.21% respectively
when typing English text. Accuracy improvements during these trials were even larger,
since spelling mistakes become rare when AutoTypist is doing a significant part of the
typing automatically. When writing computer programs, speed improvements of 12.93%
and 18.47% were achieved by the two test subjects. This larger speedup was due to the
frequent repetition of variable and function names in computer programs, which AutoTypist
was able to expedite. Not only is computer code faster to produce with AutoTypist, it is
also easier to understand. AutoTypist encourages the programmer to use long, descriptive
variable and function names, by making him type them in their entirety only once. On
subsequent instances of the same name, the user need only type the first few characters and
then exploit AutoTypist's completion mechanism to type the rest. These speed improvements
were achieved by subjects who are already relatively proficient typists. Larger gains can
be expected for less skilled typists, since typing an entire word with a single keystroke will
save more time when each keystroke takes longer.
Perhaps an even more significant benefit results from the reduced number of keystrokes
AutoTypist requires the user to type. During the test trials described above, the two test
subjects had to strike an average of 2.89% fewer keys on the English text, and 16.42% fewer
keys on the computer code than would have been required to type the text out in its entirety.
Clearly this keystroke savings has the potential to benefit typists who suffer from repeated
stress injuries brought on by typing.
Unfortunately it is impossible to quantitatively compare these results with those of the other
completion-based typing aids described in the introduction, since the other systems have
not been quantitatively evaluated. Subjectively, AutoTypist is far less disturbing than the
alternatives, since it only displays a completion when there is a very good chance it is the
correct one.
6 Future Work
Further experiments are required to verify the typing speed improvements possible with
AutoTypist, and to compare it with alternative typing improvement systems. Preliminary
experiments suggest a network trained on the word usage patterns of one user can generalize
to that of other users, but it may be necessary to train a new network for each individual
typist. Also, the experiments conducted for this paper indicate that a network trained on
one type of text, English prose, can generalize to text with quite different word frequency
patterns, C language computer programs. However substantial prediction improvements,
and therefore typing speedup, may be possible by training separate networks for different
types of text. The question of how to rapidly adapt a single network, or perhaps a mixture
of expert networks, to new text types is one which should be investigated.
Even without these extensions, AutoTypist has the potential to greatly improve the comfort
and efficiency of the typing tasks. For people who type English text two hours per workday,
even the conservative estimate of a 2% speedup translates into 10 hours of savings per
year. The potential time savings for computer programming is even more dramatic. A
programmer who types code two hours per workday could potentially save between 52
and 104 hours in a single year by using AutoTypist. With such large potential benefits,
commercial development of the AutoTypist system is also being investigated.
Acknowledgements
I would like to thank David Simon and Martial Hebert for their helpful suggestions, and for
acting as willing test subjects during the development of this system.
References
[Baluja & Pomerleau, 1993] Baluja, S. and Pomerleau, D.A. (1993) Non-Intrusive Gaze
Tracking Using Artificial Neural Networks. In Advances in Neural Information Processing Systems 6, San Mateo, CA: Morgan Kaufmann Publishers.
[Jain,1991] Jain, A.N. (1991) PARSEC: A connectionist learning architecture for parsing
spoken language. Carnegie Mellon University School of Computer Science Technical
Report CMU-CS-91-208.
[LeCun et al., 1989] LeCun, Y., Boser, B., Denker, 1.S., Henderson, D., Howard, R.E.,
Hubbard, W., and Jackel, L.D. (1989) Backpropagation applied to handwritten zip
code recognition. Neural Computation 1(4).
[Waibel et al., 1988] Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., Lang, K. (1988)
Phoneme recognition: Neural Networks vs. Hidden Markov Models. Proceedings from
Int. Conf on Acoustics, Speech and Signal Processing, New York, New York.
20 | 1,016 | Connectionist Speaker Normalization
with Generalized
Resource Allocating Networks
Cesare Furlanello
Istituto per La Ricerca
Scientifica e Tecnologica
Povo (Trento), Italy
furlan?lirst. it
Diego Giuliani
Istituto per La Ricerca
Scientifica e Tecnologica
Povo (Trento), Italy
giuliani?lirst.it
Edmondo Trentin
Istituto per La Ricerca
Scientifica e Tecnologica
Povo (Trento), Italy
trentin?lirst.it
Abstract
The paper presents a rapid speaker-normalization technique based
on neural network spectral mapping. The neural network is used
as a front-end of a continuous speech recognition system (speaker-dependent, HMM-based) to normalize the input acoustic data from
a new speaker. The spectral difference between speakers can be
reduced using a limited amount of new acoustic data (40 phonetically rich sentences). Recognition error of phone units from the
acoustic-phonetic continuous speech corpus APASCI is decreased
with an adaptability ratio of 25%. We used local basis networks of
elliptical Gaussian kernels, with recursive allocation of units and
on-line optimization of parameters (GRAN model). For this application, the model included a linear term. The results compare
favorably with multivariate linear mapping based on constrained
orthonormal transformations.
1
INTRODUCTION
Speaker normalization methods are designed to minimize inter-speaker variations,
one of the principal error sources in automatic speech recognition. Training a speech
recognition system on a particular speaker (speaker-dependent or SD mode) generally gives better performance than using a speaker-independent system, which is
trained to recognize speech from a generic user by averaging over individual differences. On the other hand, performance may be dramatically worse when a SD
system "tailored" on the acoustic characteristics of a speaker (the reference speaker)
is used by another one (the new or target speaker). Training a SD system for any
new speaker may be unfeasible: collecting a large amount of new training data
is time consuming for the speaker and unacceptable in some applications. Given
a pre-trained SD speech recognition system, the goal of normalization methods is
then to reduce to a few sentences the amount of training data required from a new
speaker to achieve acceptable recognition performance. The inter-speaker variation
of the acoustic data is reduced by estimating a feature vector transformation between the acoustic parameter space of the new speaker and that of the reference
speaker (Montacie et al., 1989; Class et al., 1990; Nakamura and Shikano, 1990;
Huang, 1992; Matsukoto and Inoue, 1992). This multivariate transformation, also
called spectral mapping given the type of features considered in the parameterization of speech data, provides an acoustic front-end to the recognition system.
Supervised speaker normalization methods require that the text of the training utterances required from the new speaker is known, while arbitrary utterances can
be used by unsupervised methods (Furui and Sondhi, 1991). Good performance
have been achieved with spectral mapping techniques based on MSE optimization
(Class et al., 1990; Matsukoto and Inoue, 1992). Alternative approaches presented
estimation of the spectral normalization mapping with Multi-Layer Perceptron neural networks (Montacie et al., 1989; Nakamura and Shikano, 1990; Huang, 1992;
Watrous, 1994).
This paper introduces a supervised speaker normalization method based on neural
network regression with a generalized local basis model of elliptical kernels (Generalized Resource Allocating Network: GRAN model). Kernels are recursively allocated
by introducing the heuristic procedure of (Platt, 1991) within the generalized RBF
schema proposed in (Poggio and Girosi, 1989). The model includes a linear term
and efficient on-line optimization of parameters is achieved by an automatic differentiation technique. Our results compare favorably with normalization by affine
linear transformations based on orthonormal constrained pseudoinverse. In this paper, the normalization module was integrated and tested as an acoustic front-end for
speaker-dependent continuous speech recognition systems. Experiments regarded
phone units recognition with Hidden Markov Model (HMM) recognition systems.
The diagram in Figure 1 outlines the general structure of the experiment with
GRAN normalization modules. The architecture is independent from the specific
speech recognition system and allows comparisons between different normalization
techniques. The GRAN model and a general procedure for data standardization are
described in Section 2 and 3. After a discussion of the spectral mapping problem
in Section 4, the APASCI corpus used in the experiments and the characteristics
of the acoustic data are described in Section 5. The recognition system and the
experiment set-up are detailed in Sections 6-8. Results are presented and discussed
in Section 9.
[Figure 1 diagram: the speech signal for a phrase S uttered by a new speaker and the reference speaker's phrases from the database go through feature extraction; Dynamic Time Warping aligns the feature sequences into training pairs (x_i(t), y_j(t)) used for supervised training of the neural network; at test time the GRAN normalization module maps the new speaker's features to produce the output.]
Figure 1: System overview
2 THE GRAN MODEL
Feedforward artificial neural networks can be regarded as a convenient realization
of general functional superpositions in terms of simpler kernel functions (Barron
and Barron, 1988). With one hidden layer we can implement a multivariate superposition $f(z) = \sum_{j=0}^{n} \alpha_j K_j(z, w_j)$, where $K_j$ is a function depending on an input vector $z$ and a parameter vector $w_j$, a general structure which allows one to realize flexible models for multivariate regression. We are interested in the schema: $y = H K(x) + Ax + b$ with input vector $x \in R^{d_1}$ and estimated output vector $y \in R^{d_2}$. $K = (K_j)$ is an $n$-dimensional vector of local kernels, $H$ is the $d_2 \times n$ real matrix of kernel coefficients, $b \in R^{d_2}$ is an offset term and $A$ is a $d_2 \times d_1$ linear term. Implemented kernels are Gaussian, Hardy multiquadrics, inverse Hardy multiquadrics and Epanechnikov kernels, also in the Nadaraya-Watson normalized form (Härdle, 1990). The kernel allocation is based on a recursive procedure: if appropriate novelty conditions are satisfied for the example $(x', y')$, a new kernel $K_{n+1}$ is allocated and the new estimate $\hat{y}_{n+1}$ becomes $\hat{y}_{n+1}(x) = \hat{y}_n(x) + K_{n+1}(\|x - x'\|_W)\,(y' - \hat{y}_n(x))$ (Härdle, 1990). Global properties and rates of convergence for recursive kernel regression estimates are given in (Krzyzak, 1992). The heuristic mechanism suggested by (Platt, 1991) has been extended to include the optimization of the weighted metrics as requested in the generalized versions of RBF networks of (Poggio and Girosi, 1989). Optimization regards kernel coefficients, locations and bandwidths, the offset term, the coefficient matrix $A$ if considered, and the $W$ matrix defining the weighted metric in the input space: $\|x\|_W^2 = x^t W^t W x$. Automatic differentiation is used for an efficient on-line gradient-descent procedure w.r.t. different error functions (L2, L1, entropy fit), with different learning rates for each type of parameter.
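A minimal sketch of such a model is given below, restricted to Gaussian kernels plus the linear term and a simple novelty-based allocation step; the novelty thresholds, the fixed initial bandwidth, and the omission of the on-line gradient updates are all assumptions made to keep the sketch short, not details of the paper's implementation.

```python
import numpy as np

class GRANSketch:
    """Toy resource-allocating kernel network: y = sum_j h_j K_j(x) + A x + b."""

    def __init__(self, d_in, d_out, dist_thresh=1.0, err_thresh=0.5):
        self.centers, self.widths, self.coeffs = [], [], []
        self.A = np.zeros((d_out, d_in))     # linear term (left untrained here)
        self.b = np.zeros(d_out)             # offset term
        self.W = np.eye(d_in)                # weighted input metric
        self.dist_thresh, self.err_thresh = dist_thresh, err_thresh

    def predict(self, x):
        y = self.A @ x + self.b
        for c, s, h in zip(self.centers, self.widths, self.coeffs):
            d2 = np.sum((self.W @ (x - c)) ** 2)
            y = y + h * np.exp(-d2 / (2 * s ** 2))   # Gaussian kernel contribution
        return y

    def observe(self, x, y):
        err = y - self.predict(x)
        dists = [np.linalg.norm(self.W @ (x - c)) for c in self.centers] or [np.inf]
        if min(dists) > self.dist_thresh and np.linalg.norm(err) > self.err_thresh:
            self.centers.append(x.copy())    # allocate a new kernel on a novel, poorly fit example
            self.widths.append(1.0)          # fixed initial bandwidth (an assumption)
            self.coeffs.append(err)          # new estimate absorbs the current error
        # (gradient updates of centers, widths, coeffs, A, b and W would go here)

net = GRANSketch(d_in=8, d_out=8)
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.standard_normal(8)
    net.observe(x, 0.5 * x + 0.1)            # toy target mapping
print(len(net.centers), np.round(net.predict(np.zeros(8)), 2)[:3])
```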
[Commutative diagram: φ maps X to Y; η_x and η_y are the PCA transformations of X and Y; the neural mapping φ̂ maps the PCA space of X to the PCA space of Y.]
Figure 2: Commutative diagram for the speaker normalization problem. The spectral mapping φ between the original spaces X and Y is estimated by ψ = η_y^{-1} ∘ φ̂ ∘ η_x, obtained by composition of the neural GRAN mapping φ̂ between the PCA spaces of X and Y with the two invertible PCA transformations η_x and η_y.
3 NETWORKS AND PCA TRANSFORMATIONS
The normalization module is designed to estimate a spectral mapping between the
acoustic spaces of two different speakers. Inter-speaker variability is reflected by
significant differences in data distribution in these multidimensional spaces (we considered 8 dimensions); in particular it is important to take into account global data
anisotropy. More generally, it is also crucial to decorrelate the features describing
the data. A general recipe is to apply the well-known Principal Component Analysis (PCA) to the data, in this case implemented from standard numerical routines
based on Singular Value Decomposition of the data covariance matrices. The network was applied to perform a mapping between the new feature spaces obtained
from the PCA transformations, mean translation included (Figure 2).
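As a sketch of this preprocessing step (assuming the eight retained dimensions mentioned above; the function names are ours), the PCA transform with mean translation can be obtained from an SVD of the centered data:

import numpy as np

def pca_transform(data, n_components=8):
    """Fit a PCA transform (mean translation included) via SVD of the data.

    data: (n_samples, n_features) array of acoustic feature vectors.
    Returns (encode, decode) callables mapping to/from the PCA space;
    decode inverts encode exactly when all dimensions are retained.
    """
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]

    def encode(x):
        return (np.asarray(x) - mean) @ components.T

    def decode(z):
        return np.asarray(z) @ components + mean

    return encode, decode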
4 THE SPECTRAL MAPPING PROBLEM
A sound uttered by a speaker is generally described by a sequence of feature vectors obtained from the speech signal via short-time spectral analysis (Sec. 5). The spectral representations of the same sequence of sounds uttered by two speakers are subject to significant variations (e.g. differences between male and female speakers, regional accents, ...). To deal with acoustic differences, a suitable transformation (the spectral mapping) is sought which performs the "best" mapping between the corresponding spectra of two speakers. Let Y = (y_1, y_2, ..., y_J) and X = (x_1, x_2, ..., x_I) be the spectral feature vector sequences of the same sentence uttered by two speakers, called respectively the reference and the new speaker. The desired mapping is performed by a function φ(x_i) such that the transformed vector sequence obtained from X = (x_i) approximates as closely as possible the spectral vector sequence Y = (y_i). To eliminate time differences between the two acoustic realizations, a time warping function has to be determined, yielding pairs C(k) = (i(k), j(k)), k = 1, ..., K, of corresponding indexes of feature vectors in X and Y, respectively. The desired spectral mapping φ(x_i) is the one which minimizes Σ_{k=1}^{K} d(y_{j(k)}, φ(x_{i(k)})), where d(·,·) is a distortion measure in the acoustic feature space. To estimate the transformation, a set of supervised pairs (x_{i(k)}, y_{j(k)}) is considered. In summary, the training material considered in the experiments consisted of a set of vector pairs obtained by applying the Dynamic Time Warping (DTW) algorithm (Sakoe and Chiba, 1978) to a set of phrases uttered by the reference and the new speaker.
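A minimal sketch of how such aligned pairs can be produced is shown below; it uses a plain dynamic-programming DTW with a Euclidean local distance, which is an assumption rather than the exact local constraints and distance used in the experiments.

import numpy as np

def dtw_pairs(X, Y):
    """Align two feature-vector sequences and return index pairs (i(k), j(k)).

    X: (I, d) new-speaker vectors, Y: (J, d) reference-speaker vectors.
    """
    I, J = len(X), len(Y)
    dist = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)   # (I, J) local distances
    cost = np.full((I + 1, J + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, I + 1):
        for j in range(1, J + 1):
            cost[i, j] = dist[i - 1, j - 1] + min(cost[i - 1, j],       # insertion
                                                  cost[i, j - 1],       # deletion
                                                  cost[i - 1, j - 1])   # match
    # Backtrack the optimal warping path.
    i, j, path = I, J, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]   # supervised pairs (x_i(k), y_j(k))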
5 THE APASCI CORPUS
The experiments reported in this paper were performed on a portion of APASCI, an Italian acoustic-phonetic continuous speech corpus. For each utterance, text and phonetic transcriptions were automatically generated (Angelini et al., 1994).
The corpus consists of two portions. The first part, for the training and validation of speaker independent recognition systems, consists of a training set (2140
utterances), a development set (900 utterances) and a test set (860 utterances).
The sets contain, respectively, speech material from 100 speakers (50 males and 50
females), 36 speakers (18 males and 18 females) and 40 speakers (20 males and 20
females). The second portion of the corpus is for training and validation of speaker
dependent recognition systems. It consists of speech material from 6 speakers (3
males and 3 females). Each speaker uttered 520 phrases, 400 for training and 120
for test. Speech material in the test set was acquired in different days with respect
to the training set. A subset of 40 utterances from the training material forms the
adaptation training set, to be used for speaker adaptation/normalization purposes.
For this application, each signal in the corpus was processed to obtain its parametric
representation. The signal was preemphasized using a filter with transfer function H(z) = 1 - 0.95 z^{-1}, and a 20 ms Hamming window is then applied every 10 ms. For each frame, the normalized log-energy as well as 8 Mel Scaled Cepstral
Coefficients (MSCC) based on a 24-channel filter-bank were computed. Normalization of log-energy was performed by subtracting the maximum log-energy value in
the sentence; for each Mel coefficient, normalization was performed by subtracting
the mean value of the whole utterance. For both MSCC and the log-energy, the
first order derivatives as well as the second order derivatives were computed. For
each frame, all the computed acoustic parameters were combined in a single feature
vector with 27 components.
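For illustration, the first stages of this front-end (pre-emphasis, 20 ms Hamming windows every 10 ms, sentence-normalized log-energy) can be sketched as follows; the Mel filter-bank, cepstral and derivative computations are omitted, and numerical details such as the energy floor are assumptions.

import numpy as np

def preemphasize_and_frame(signal, sample_rate, win_ms=20, hop_ms=10, alpha=0.95):
    """Pre-emphasis H(z) = 1 - alpha*z^-1, then 20 ms Hamming frames every 10 ms."""
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    win = int(sample_rate * win_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    n_frames = 1 + max(0, (len(emphasized) - win) // hop)
    frames = np.stack([emphasized[i * hop:i * hop + win] for i in range(n_frames)])
    frames = frames * np.hamming(win)
    log_energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    # Sentence-level normalization as in the text: subtract the maximum log-energy.
    log_energy -= log_energy.max()
    return frames, log_energy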
6 THE RECOGNITION SYSTEM
For each of the 6 speakers, a SD HMM recognition system was trained with the 400
utterances available in the APASCI corpus; the systems were bootstrapped with
gender dependent models trained on the gender dependent speech material (1000
utterances for male and 1140 utterances for female). A set of 38 context independent
acoustic-phonetic units was considered. Left-to-right HMMs with three and four
states were adopted for short (i.e. p,t,k,b,d,g) and long (e.g. a,i,u,Q,e) sounds
respectively. Silence, pause and breath were modeled with a single state ergodic
model. The output distribution probabilities were modeled with mixtures of 16
Gaussian probability densities with diagonal covariance matrices. Transitions leaving
the same state shared the same output distribution probabilities.
Table 1: Phone Recognition Rate (Unit Accuracy %) without normalization
7 TRAINING THE NORMALIZATION MODULES
A set of 40 phrases was considered for each pair (new, reference) of speakers to train
the normalization modules. In order to take into account alternative pronunciation,
insertion or deletion of phonemes, pauses between words and other phenomena, the
automatic phonetic transcription and segmentation available in APASCI was used
for each utterance. Given two utterances corresponding to the same phrase, we considered only their segments having the same phonetic transcription. To determine
these segments the DTW algorithm was applied to the phonetic transcription of the
two utterances. The DTW algorithm was applied a second time to the obtained
segments and the resulting optimal alignment paths gave the desired set of vector
pairs. The DTW algorithm was applied only to the 8 MSCC and the other acoustic
parameters were left unmodified.
We trained networks with 8 inputs and 8 outputs. The model included a linear
term: first the linear term was fit to the data, and then the rest of the expansion
was estimated by fitting the residuals of the linear regression. The networks grew
up to 50 elliptical gaussian kernels using dynamic allocation. Kernel coefficients,
locations and bandwidths were optimized using different learning rates for 10 epochs
w.r.t. the L1 norm, which proved to be more efficient than the usual L2 norm.
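The two-stage procedure (linear term first, then kernels fitted to the residuals) can be sketched as follows; the regressor interface and function names are our assumptions, e.g. the GRANSketch class from the earlier sketch could play the role of the kernel expansion.

import numpy as np

def fit_normalization(X, Y, kernel_model):
    """Two-stage fit sketch: affine term first, then kernels on the residuals.

    X, Y: (K, 8) arrays of DTW-aligned new-speaker / reference-speaker vectors.
    kernel_model: any regressor with .observe(x, y) and .predict(x).
    """
    # Stage 1: least-squares affine map  y ~ A x + b.
    X1 = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    A, b = W[:-1].T, W[-1]
    # Stage 2: fit the kernel expansion to the residuals of the linear regression.
    residuals = Y - (X @ A.T + b)
    for x, r in zip(X, residuals):
        kernel_model.observe(x, r)

    def normalize(x):
        return A @ np.asarray(x) + b + kernel_model.predict(x)

    return normalize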
8 THE RECOGNITION EXPERIMENTS
Experiments concerned continuous phone recognition without any lexical or phonetic constraints (no phone statistics were used). For all the pairs
(new, reference) of speakers in the database, a recognition experiment was performed using 90 (of the 120 available) test utterances from the new speaker with
the SD recognition system previously trained for the reference speaker. On average the test sets consisted of 4770 phone units. The experiments were repeated
transforming the test data with different normalization modules and performance
compared. Results are expressed in terms of insertions (Ins), deletions (Del) and
substitutions (Sub) of phone units made by the recognizer. Unit Accuracy (UA) and Percent Correct (PC) performance indicators are respectively defined w.r.t. the total number of units n_units as UA = 100 (1 - (Ins + Del + Sub)/n_units) and PC = 100 (1 - (Del + Sub)/n_units). In Table 1 the baseline speaker-dependent performance for the 6 speaker-dependent systems is reported. Row labels indicate the speaker reference model while column labels identify whose target acoustic data are used.

Table 2: Phone Recognition Rate (Unit Accuracy %) with NN normalization

Thus UA and PC entries in the main diagonal are for the
same speaker who trained the system while the remaining entries relate to performance obtained with new speakers. We also considered the adaptability ratios ρ_a for UA and ρ_p for PC (Montacie et al., 1989): ρ_a = (a^n_RT - a_RT)/(a_RR - a_RT) and ρ_p = (p^n_RT - p_RT)/(p_RR - p_RT), where a_RT indicates the accuracy for reference speaker R and target T without normalization, a_RR is the speaker-dependent baseline accuracy, and the superscript n indicates normalization. The same notation applies to the percent-correct adaptability ratio ρ_p.
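For concreteness, the scoring formulas above translate directly into small helpers (a sketch; the argument names are ours):

def unit_accuracy(ins, dele, sub, n_units):
    """UA = 100 * (1 - (Ins + Del + Sub) / n_units)."""
    return 100.0 * (1.0 - (ins + dele + sub) / n_units)

def percent_correct(dele, sub, n_units):
    """PC = 100 * (1 - (Del + Sub) / n_units)."""
    return 100.0 * (1.0 - (dele + sub) / n_units)

def adaptability_ratio(score_norm, score_no_norm, score_baseline):
    """rho = (score with normalization - score without) / (baseline - score without)."""
    return (score_norm - score_no_norm) / (score_baseline - score_no_norm)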
9 RESULTS AND CONCLUSIONS
Normalization experiments have been performed with the set-up described in the
previous Section. The phone recognition rates obtained with normalization modules
based on the GRAN model are reported in Table 2 in terms of Unit Accuracy (see Table 1 for the baseline performance). In Table 3 the performance of the GRAN
model (NN) and constrained orthonormal linear mapping (LIN) are compared with
the baseline performance (SD: no adaptation) in terms of both Unit Accuracy and
Percent Correct. The network shows an improvement, as evidenced by the variation
in the Pa and Pp values. Results are reported averaging performance over all the
pairs (new,reference) of speakers (Total column), and considering pairs of speakers
of the same gender and of different genders (Female: only female subjects, Male: only males, Diff: different genders). An analysis of the adaptability ratios shows that the effect of the network normalization is higher than with the linear network for all the 3 subgroups of pairs: ρ_a^NN = 0.20 vs ρ_a^LIN = 0.16 for the Female couples and ρ_a^NN = 0.16 vs ρ_a^LIN = 0.15 for the Male couples. The improvement is higher (ρ_a^NN = 0.28, ρ_a^LIN = 0.24) for speakers of different genders. Although these
preliminary experiments show only a minor improvement of performance achieved
by the network with respect to linear mappings, we expect that the selectivity of
the network could be exploited using acoustic contexts and code dependent neural
networks.
Acknowledgements
This work has been developed within a grant of the "Programma Nazionale di
Ricerca per la Bioelettronica" assigned by the Italian Ministry of University and
Technological Research to Elsag Bailey. The authors would like to thank B. Angelini,
F. Brugnara, B. Caprile, R. De Mori, D. Falavigna, G. Lazzari and P. Svaizer.
Table 3: Phone Recognition Rate (%) in terms of both Unit Accuracy, Percent
Correct, and adaptability ratio p.
References
Angelini, B., Brugnara, F., Falavigna, D., Giuliani, D., Gretter, R., and Omologo,
M. (September 1994). Speaker Independent Continuous Speech Recognition Using
an Acoustic-Phonetic Italian Corpus. In Proc. of ICSLP, pages 1391-1394.
Barron, A. R. and Barron, R. L. (1988). Statistical learning networks: a unifying
view. In Symp. on the Interface: Statistics and Computing Science, Reston, VI.
Class, F., Kaltenmeier, A., Regel, P., and Troller, K. (1990). Fast speaker adaptation for speech recognition system. In Proc. of ICASSP 90, pages 1-133-136.
Furui, S. and Sondhi, M. M., editors (1991). Advances in Speech Signal Processing.
Marcel Dekker and Inc.
Härdle, W. (1990). Applied nonparametric regression, volume 19 of Econometric
Society Monographs. Cambridge University Press, New York.
Huang, X. D. (1992). Speaker normalization for speech recognition. In Proc. of
ICASSP 92, pages 1-465-468.
Krzyzak, A. (1992). Global convergence of the recursive kernel regression estimates
with applications in classification and nonlinear system estimation. IEEE Transactions on Information Theory, 38(4):1323-1338.
Matsukoto, H. and Inoue, H. (1992). A piecewise linear spectral mapping for supervised speaker adaptation. In Proc. of ICASSP 92, pages 1-449-452.
Montacie, C., Choukri, K., and Chollet, G. (1989). Speech recognition using temporal decomposition and multi-layer feed-forward automata. In Proc. of ICASSP
89, pages 1-409-412.
Nakamura, S. and Shikano, K. (1990). A comparative study of spectral mapping
for speaker adaptation. In Proc. of ICASSP 90, pages 1-157-160.
Platt, J. (1991). A resource-allocating network for function interpolation. Neural
Computation, 3(2):213-225.
Poggio, T. and Girosi, F. (1989). A theory of networks for approximation and
learning. A.I. Memo No. 1140, MIT.
Sakoe, H. and Chiba, S. (1978). Dynamic programming algorithm optimization for
spoken word recognition. IEEE Trans. on Acoustics, Speech, and Signal Processing, 26(1):43-49.
Watrous, R. (1994). Speaker normalization and adaptation using second-order connectionist networks. IEEE Trans. on Neural Networks, 4(1):21-30.
A Critical Comparison of Models for
Orientation and Ocular Dominance
Columns in the Striate Cortex
E. Erwin
Beckman Institute
University of Illinois
Urbana, IL 61801, USA
K. Obermayer
Technische Fakultät
Universität Bielefeld
33615 Bielefeld, FRG
K. Schulten
Beckman Institute
University of Illinois
Urbana, IL 61801, USA
Abstract
More than ten of the most prominent models for the structure
and for the activity dependent formation of orientation and ocular dominance columns in the striate cortex have been evaluated.
We implemented those models on parallel machines, we extensively
explored parameter space, and we quantitatively compared model
predictions with experimental data which were recorded optically
from macaque striate cortex.
In our contribution we present a summary of our results to date.
Briefly, we find that (i) despite apparent differences, many models
are based on similar principles and, consequently, make similar predictions, (ii) certain "pattern models" as well as the developmental
"correlation-based learning" models disagree with the experimental data, and (iii) of the models we have investigated, "competitive
Hebbian" models and the recent model of Swindale provide the
best match with experimental data.
1 Models and Data
The models for the formation and structure of orientation and ocular dominance
columns which we have investigated are summarized in table 1. Models fall into
two categories: "Pattern models", whose aim is to achieve a concise description of the observed patterns, and "developmental models", which are focussed on the processes underlying their formation.
Class            Type                        Model         Reference
Pattern Models   Structural Models           1. Icecube    Hubel and Wiesel 1977 [9]
                                             2. Pinwheel   Braitenberg and Braitenberg 1979 [6]
                                             3. Gotz       Gotz 1987 [8]
                                             4. Baxter     Baxter and Dow 1989 [1]
                 Spectral Models             5. Rojer      Rojer and Schwartz 1990 [20]
                                             6. Niebur     Niebur and Worgotter 1993 [15]
                                             7. Swindale   Swindale 1992a [21]
Develop. Models  Correlation Based Learning  8. Linsker    Linsker 1986c [12]
                                             9. Miller     Miller 1989, 1994 [13, 14]
                 Competitive Hebbian         10. SOM-h     Obermayer et al. 1990 [19]
                                             11. SOM-l     Obermayer et al. 1992 [17]
                                             12. EN        Durbin and Mitchison 1990 [7]
                 Other                       13. Tanaka    Tanaka 1991 [22]
                                             14. Yuille    Yuille et al. 1992 [23]

Table 1: Models of visual cortical maps which have been evaluated.
Pattern models come in two varieties, "structural
models" and "spectral models", which describe orientation and ocular dominance
maps in real and in Fourier space, respectively. Developmental models fall into the
categories "correlations based learning", "competitive Hebbian" learning and a few
miscellaneous models.
Models are compared with data obtained from macaque striate cortex through optical imaging [2, 3, 4, 16]. Data were recorded from the representation of the parafovea
from the superficial layers of cortex. In the following we will state that a particular
model reproduces a particular feature of the experimental data (i) if there exists a
parameter regime where the model generates appropriate patterns and (ii) if the
phenomena are robust. We will state that a particular model does not reproduce a
certain feature (i) if we have not found an appropriate parameter regime and (ii) if
there exists either a proof or good intuitive reasons that a model cannot reproduce
this feature.
One has to keep in mind, though, that model predictions are compared with a fairly
special set of data. Ocular dominance patterns, e.g., are known to vary between
species and even between different regions within area 17 of an individual. Consequently, a model which does not reproduce certain features of ocular dominance or orientation columns in the macaque may well describe those patterns in other species. Interspecies differences, however, are not the focus of this contribution; results of corresponding modelling studies will be reported elsewhere.
2 Examples of Organizing Principles and Model Predictions
It has been suggested that the most important principles underlying the pattern of orientation and ocular dominance are "continuity" and "diversity" [7, 19, 21]. Continuity, because early image processing is often local in feature space, and diversity, because, e.g., the visual system may want to avoid perceptual scotomata. The continuity and diversity principles underlie almost all descriptive and developmental
Figure 1: Typical patterns of orientation preferences as they are predicted by six
of the models list.ed in Table 1. Orientation preferences are coded by gray values,
where black - whit.e denotes preferences for vertical _ horizont.al - vertical. Top
row (left to right): Models 7, 11, 9. Bottom row (left to right) Models 5, 12, 8.
models, but maps which comply with these principles often differ in qualitative ways: The icecube model, e.g., obeys both principles but contains no singularities in the orientation preference map and no branching of ocular dominance bands. Figure 1 shows orientation maps generated by six different algorithms taken from Tab. 1. Although all patterns are consistent with the continuity and diversity constraints, closer comparison reveals differences. Thus additional elements of organization must be considered.
It has been suggested that maps are characterized by local correlations and global disorder. Figure 2 (left) shows as an example two-point correlation functions of orientation maps. The autocorrelation function [17] of one of the Cartesian coordinates of the orientation vector is plotted as a function of cortical distance. The fact that all correlation functions decay indicates that the orientation map exhibits global disorder. Global disorder is predicted by all models except the early pattern models 6, 8 and 9. Figure 2 (right) shows the corresponding power spectra. Bandpass-like spectra which are typical for the experimental data [16] are well predicted by models 10-12. Interestingly, they are not predicted by model 9, which also fails to reproduce the Mexican-hat shaped correlation functions (bold lines), and model 13.
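The statistics discussed here can be computed from a simulated orientation map with a short sketch like the one below (our own illustrative code; the published analysis in [16, 17] involves additional preprocessing and averaging):

import numpy as np

def map_statistics(theta):
    """Spatial autocorrelation and radially averaged power spectrum of an orientation map.

    theta: 2-D array of preferred orientations in [0, pi).  The statistics are computed
    for one Cartesian coordinate of the orientation vector, cos(2*theta).
    """
    z = np.cos(2.0 * theta)
    z = z - z.mean()
    F = np.fft.fft2(z)
    power = np.abs(F) ** 2
    autocorr = np.real(np.fft.ifft2(power)) / z.size       # Wiener-Khinchin relation
    ky, kx = np.indices(power.shape)
    ky = np.minimum(ky, power.shape[0] - ky)                # fold frequencies symmetrically
    kx = np.minimum(kx, power.shape[1] - kx)
    r = np.hypot(ky, kx).astype(int)
    counts = np.bincount(r.ravel())
    radial_power = np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)
    return autocorr, radial_power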
Based on the fact that experimental maps are characterized by a bandpass-like
power spectrum it has been suggested that orientation maps may be organized
Figure 2: Left: Spatial autocorrelation functions for one of the Cartesian coordinates of the orientation vector, plotted against normalized distance. Autocorrelation functions were averaged over all directions. Right: Complex power spectra of orientation maps. Power was averaged over all directions of the wave vector. Model numbers as in Tab. 1.
according to four principles [15]: continuity, diversity, homogeneity and isotropy.
If those principles are implemented using bandpass filtered noise the resulting
maps [15, 21] indeed share many properties with the experimental data. Above
principles alone, however, are not sufficient: (i) There are models such as model 5 which are based on those principles but generate different patterns, (ii) homogeneity and isotropy are hardly ever fulfilled ([16] and next paragraph), and (iii) those principles cannot account for correlations between maps of various response properties [16].
Maps of orientation and ocular dominance in the macaque are anisotropic, i.e., there exist preferred directions along which orientation and ocular dominance slabs align [16]. Those anisotropies can emerge due to different mechanisms: (i) spontaneous symmetry breaking, (ii) model equations which are not rotation invariant, and (iii) appropriately chosen boundary conditions. Figure 3 illustrates mechanisms (ii) and (iii) for model 11. Both mechanisms indeed predict anisotropic patterns; however, preferred directions of orientation and ocular dominance align in both cases (fig. 3, left and center). This is not true for the experimental data, where preferred directions tend to be orthogonal [16]. Orthogonal preferred directions can be generated by using different neighborhood functions for different components of the feature vector (fig. 3, right). However, this is not a satisfactory solution, and the issue of anisotropies is still unsolved.
The pattern of orientation preference in area 17 of the macaque exhibits four local elements of organization: linear zones, singularities, saddle points and fractures [16]. Those elements are correctly predicted by most of the pattern models, except models 1-3, and they appear in the maps generated by models 10-14. Interestingly, models 9 and 13 predict very few linear zones, which is related to the fact that those models generate orientation maps with lowpass-like power spectra. Another important property of orientation maps is that orientation preferences and their spatial layout across cortex are not correlated with each other.
Figure 3: Anisotropic orientation and ocular dominance maps generated by model 11. The figure shows Fourier spectra [17] of orientation (top row) and ocular dominance maps (bottom row). Left: Maps generated with an elliptic neighborhood function (case (ii), see text); Center: Maps generated using circular input layers and an elliptical cortical sheet (case (iii), see text); Right: Maps generated with different elliptic neighborhood functions for orientation preference and ocular dominance. '+' symbols indicate the locations of the origin.
One consequence is that there exist singularities, near which the curl of the orientation vector field does not vanish (fig. 4, left). This rules out a class of pattern models where the orientation map is derived from the gradient of a potential function, model 5. Figure 4 (right) shows another consequence of this property. In those figures cortical area is plotted against the angular difference between the iso-orientation lines and the local orientation preference. The even distribution found in the experimental data is correctly predicted by models 1, 6, 7 and 10-12. Model 8, however, predicts a preference for large difference angles while model 9 - over a wide range of parameters - predicts a preference for small difference angles (bold lines).
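A sketch of the two computations behind these observations - locating half-integer singularities by their winding number, and measuring the angle between the local orientation preference and the direction of the orientation gradient - is given below (illustrative code under our own discretization assumptions, not the authors' analysis pipeline):

import numpy as np

def singularity_charges(theta):
    """Topological charge around each 2x2 plaquette of an orientation map (mod-pi field)."""
    def wrap(d):
        # orientation differences are only defined modulo pi; map them into (-pi/2, pi/2]
        return (d + np.pi / 2) % np.pi - np.pi / 2
    a, b = theta[:-1, :-1], theta[:-1, 1:]
    c, d = theta[1:, 1:], theta[1:, :-1]
    loop = wrap(b - a) + wrap(c - b) + wrap(d - c) + wrap(a - d)
    return loop / (2 * np.pi)           # approximately +-1/2 at singularities, 0 elsewhere

def difference_angles(theta):
    """Angle between preferred orientation and the local iso-orientation line, in degrees."""
    def wrap(d):
        return (d + np.pi / 2) % np.pi - np.pi / 2
    gy = wrap(np.diff(theta, axis=0))[:, :-1]   # wrapped finite differences
    gx = wrap(np.diff(theta, axis=1))[:-1, :]
    grad_dir = np.arctan2(gy, gx)               # direction of steepest change of orientation
    iso_dir = grad_dir + np.pi / 2              # iso-orientation lines run perpendicular to it
    return np.degrees(np.abs(wrap(theta[:-1, :-1] - iso_dir)))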
Finally, let us consider correlations between the patterns of orientation preference
and ocular dominance. Among the more prominent relationships present in macaque
data are [3, 16, 21]: (i) Singularities are aligned with the centers of ocular dominance
bands, (ii) fractures are either aligned or run perpendicular, and (iii) iso-orientation
bands in linear zones intersect ocular dominance bands at approximately right angles. Those relationships are readily reproduced only by models 7 and 10-12. For
model 9 reasonable orientation and ocular dominance patterns have not been generated at the same time. It would seem as if the parameter regime where reasonable
orientation columns emerge is incompatible with the parameter regime where ocular
dominance patterns are formed.
Figure 4: Left: This singularity is an example of a feature in the experimental data which is not allowed by model 5. The arrows indicate orientation vectors, whose angular component is twice the value of the local orientation preference. Right: Percentage of area as a function of the angular difference (degrees) between preferred orientation and the local orientation gradient vector. Model numbers as in Table 1.
3 The Current Status of the Model Comparison Project
Lack of space prohibits a detailed discussion of our findings, but we have summarized the current status of our project in Tables 2 and 3. Given the models listed in Tab. 1 and given the properties of the orientation and ocular dominance patterns in macaque striate cortex listed in Tables 2 and 3, it is models 7 and 10-12 which currently are in best agreement with the data. Those models, however, are fairly abstract and simplified, and they cannot easily be extended to predict receptive field structure. Biological realism and predictions about receptive fields are the advantages of models 8 and 9. Those models, however, cannot account for the observed orientation patterns. It would, therefore, be of high interest if elements of both approaches could be combined to achieve a better description of the data.
The main conclusion, however, is that there are now enough data available to allow a better evaluation of model approaches than just by visual comparison of the generated patterns. It is our hope that future studies will address at least those properties of the patterns which are known and well described, some of which are listed in Tables 2 and 3. In the case of developmental models, more stringent tests require experiments which (i) monitor the actual time-course of pattern formation, and which (ii) study pattern development under experimentally modified conditions (deprivation experiments). Currently there is not enough data available to constrain models, but the experiments are under way [5, 10, 11, 18].
Acknowledgements
We are very much indebted to Drs. Linsker, Tanaka and Yuille for sharing modelling data. E.E. thanks the Beckman Institute for support. K.O. thanks ZiF (Universität Bielefeld) for its hospitality. Computing time on a CM-2 and a CM-5 was made available by NCSA.
(Table 2 body: entries '+', '-', 'n', or '?' for each of models 1-14 on the properties disorder, bandpass, linear zones, saddle points, singularities ±1/2, fractures, independent coordinates, high specificity, anisotropy, and OR bias; see the caption below for definitions.)
Table 2: Evaluation of orientation (OR) map models. Properties of the experimental maps include (left to right): global disorder; bandpass-like power spectra; the presence of linear zones in roughly 50% of the map area; the presence of saddle points, singularities (±1/2 with equal densities), and fractures; independence between cortical and orientation preference coordinates; a distribution favoring high values of orientation specificity; global anisotropy; and a possible orientation bias. Symbols: '+': there exists a parameter regime in which a model generates maps with this property; '-': the model cannot reproduce this property; 'n': the model makes no predictions; '?': not enough data available. 1 Models agree with the data only if one assumes that fractures are loci of rapid orientation change rather than real discontinuities. 2 One of several cases.
References
[1] W. T. Baxter and B. M. Dow. Biol. Cybern., 61:171-182, 1989.
[2] G. G. Blasdel. J. Neurosci., 12:3115-3138, 1992.
[3] G. G. Blasdel. J. Neurosci., 12:3139-3161, 1992.
[4] G. G. Blasdel and G. Salama. Nature, 321:579-585, 1986.
[5] T. Bonhoeffer, D. Kim, and W. Singer. Soc. Neurosci. Abs., 19:1800, 1993.
[6] V. Braitenberg and C. Braitenberg. Biol. Cybern., 33:179-186, 1979.
[7] R. Durbin and G. Mitchison. Nature, 343:341-344, 1990.
[8] K. G. Gotz. Biol. Cybern., 56:107-109, 1987.
[9] D. Hubel and T. N. Wiesel. Proc. Roy. Soc. Lond. B, 198:1-59, 1977.
[10] D. Hubel, T. N. Wiesel, and S. LeVay. Phil. Trans. Roy. Soc. Lond. B, 278:377-409, 1977.
[11] D. Kim and T. Bonhoeffer. Soc. Neurosci. Abs., 19:1800, 1993.
[12] R. Linsker. Proc. Nat. Acad. Sci., USA, 83:8779-8783, 1986.
(Table 3 body: entries '+', '-', 'n', or '?' for each of models 1-14 on the OD-map properties segregation, disorder, anisotropy, OD bias, and strabismus, and on the OR/OD correlations global orthogonality, local orthogonality, singularities vs. OD, and specificity vs. OD; see the caption below for definitions.)
Table 3: Left: Evaluation of ocular dominance (OD) map models. Properties of the experimental maps include (left to right): segregated bands of eye dominance; global disorder; bandpass-like power spectra; global anisotropy; a bias to the representation of one eye; and OD patterns in animals with strabismus. Right: Evaluation of correlations between OD and OR. Experimental maps show (left to right): local and global orthogonality between OR and OD slabs; singularities preferably in monocular regions; and lower OR specificity in monocular regions. 1 Authors treated OD and OR in independent models, but we consider a combined version. 2 Correlations are stronger than in the experimental data.
[13] K. D. Miller. J. Neurosci., 14:409-441, 1994.
[14] K. D. Miller, J. B. Keller, and M. P. Stryker. Science, 245:605-615, 1989.
[15] E. Niebur and F. Worgotter. In F. H. Eeckman and J. M. Bower, Computation and Neural Systems, pp. 409-413. Kluwer Academic Publishers, 1993.
[16] K. Obermayer and G. G. Blasdel. J. Neurosci., 13:4114-4129, 1993.
[17] K. Obermayer, G. G. Blasdel, and K. Schulten. Phys. Rev. A, 45:7568-7589, 1992.
[18] K. Obermayer, L. Kiorpes, and G. G. Blasdel. In J. D. Cowan et al., Advances in Neural Information Processing Systems 6, pages 543-550. Morgan Kaufmann, 1994.
[19] K. Obermayer, H. Ritter, and K. Schulten. Proc. Nat. Acad. Sci., USA, 87:8345-8349, 1990.
[20] A. S. Rojer and E. L. Schwartz. Biol. Cybern., 62:381-391, 1990.
[21] N. V. Swindale. Biol. Cybern., 66:217-230, 1992.
[22] S. Tanaka. Biol. Cybern., 65:91-98, 1991.
[23] A. L. Yuille, J. A. Kolodny, and C. W. Lee. TR 91-3, Harvard Robotics
Laboratory, 1991.
Generalization in Reinforcement Learning:
Safely Approximating the Value Function
Justin A. Boyan and Andrew W. Moore
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
jab@cs.cmu.edu, awm@cs.cmu.edu
Abstract
A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the
lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show
that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce
an entirely wrong policy. We then introduce Grow-Support, a new
algorithm which is safe from divergence yet can still reap the benefits
of successful generalization .
1 INTRODUCTION
Reinforcement learning-the problem of getting an agent to learn to act from sparse,
delayed rewards-has been advanced by techniques based on dynamic programming
(DP). These algorithms compute a value function which gives, for each state, the minimum possible long-term cost commencing in that state. For the high-dimensional
and continuous state spaces characteristic of real-world control tasks, a discrete representation of the value function is intractable; some form of generalization is required.
A natural way to incorporate generalization into DP is to use a function approximator,
rather than a lookup table, to represent the value function. This approach, which
dates back to uses of Legendre polynomials in DP [Bellman et al., 1963], has recently
worked well on several dynamic control problems [Mahadevan and Connell, 1990, Lin,
1993] and succeeded spectacularly on the game of backgammon [Tesauro, 1992, Boyan,
1992]. On the other hand, many sensible implementations have been less successful
[Bradtke, 1993, Schraudolph et al., 1994]. Indeed, given the well-established success
on backgammon, the absence of similarly impressive results appearing for other games
is perhaps an indication that using function approximation in reinforcement learning
does not always work well.
In this paper, we demonstrate that the straightforward substitution of function approximators for lookup tables in DP is not robust and, even in very benign cases, may
diverge, resulting in an entirely wrong control policy. We then present Grow-Support,
a new algorithm designed to converge robustly. Grow-Support grows a collection of
states over which function approximation is stable. One-step backups based on Bellman error are not used; instead, values are assigned by performing "rollouts" -explicit
simulations with a greedy policy. We discuss potential computational advantages of
this method and demonstrate its success on some example problems for which the
conventional DP algorithm fails.
2 DISCRETE AND SMOOTH VALUE ITERATION
Many popular reinforcement learning algorithms, including Q-learning and TD(0), are based on the dynamic programming algorithm known as value iteration [Watkins, 1989, Sutton, 1988, Barto et al., 1989], which for clarity we will call discrete value iteration. Discrete value iteration takes as input a complete model of the world as a Markov Decision Task, and computes the optimal value function J*:

    J*(x) = the minimum possible sum of future costs starting from x

To assure that J* is well-defined, we assume here that costs are nonnegative and that some absorbing goal state, with all future costs 0, is reachable from every state. For simplicity we also assume that state transitions are deterministic. Note that J* and the world model together specify a "greedy" policy which is optimal for the domain:

    optimal action from state x = argmin_{a in A} (COST(x, a) + J*(NEXT-STATE(x, a)))
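As a concrete reference point, a minimal deterministic value-iteration loop over a finite state set might look like the sketch below (our own illustrative code; it assumes the action model maps every sampled state back into the set):

def discrete_value_iteration(states, goals, actions, next_state, cost, tol=1e-9):
    """Deterministic value iteration over a finite state set (lookup-table case)."""
    J = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goals:
                continue
            new_v = min(cost(s, a) + J[next_state(s, a)] for a in actions)
            delta = max(delta, abs(new_v - J[s]))
            J[s] = new_v
        if delta < tol:
            return J

def greedy_action(s, J, actions, next_state, cost):
    return min(actions, key=lambda a: cost(s, a) + J[next_state(s, a)])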
We now consider extending discrete value iteration to the continuous case: we replace
the lookup table over all states with a function approximator trained over a sample of
states. The smooth value iteration algorithm is given in the appendix. Convergence
is no longer guaranteed; we instead recognize four possible classes of behavior:
good convergence The function approximator accurately represents the intermediate value functions at each iteration (that is, after m iterations, the value
function correctly represents the cost of the cheapest m-step path), and successfully converges to the optimal J* value function.
lucky convergence The function approximator does not accurately represent the
intermediate value functions at each iteration; nevertheless, the algorithm
manages to converge to a value function whose greedy policy is optimal.
bad convergence The algorithm converges, i.e. the target J-values for the N training points stop changing, but the resulting value function and policy are
poor.
divergence Worst of all: small fitter errors may become magnified from one iteration
to the next, resulting in a value function which never stops changing.
The hope is that the intermediate value functions will be smooth and we will achieve
"good convergence." Unfortunately, our experiments have generated all four of these
behaviors-and the divergent behavior occurs frequently, even for quite simple problems.
2.1 DIVERGENCE IN SMOOTH VALUE ITERATION
We have run simulations in a variety of domains-including a continuous gridworld,
a car-on-the-hill problem with nonlinear dynamics, and tic-tac-toe versus a stochastic opponent-and using a variety of function approximators, including polynomial
regression, backpropagation, and local weighted regression. In our experiments, none
of these function approximators was immune from divergence.
The first set of results is from the 2-D continuous gridworld, described in Figure 1. By quantizing the state space into a 100 x 100 grid, we can compute J* with discrete value iteration, as shown in Figure 2. The optimal value function is exactly linear: J*(x, y) = 20 - 10x - 10y.
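The gridworld dynamics are simple enough to state directly; the sketch below is our reading of the description in Figure 1 (step length 0.05, step cost 0.5, goal in the upper-right corner), with the exact size of the goal region an assumption:

GOAL = 0.95          # assumed extent of the goal corner: x > GOAL and y > GOAL
ACTIONS = [(0.05, 0.0), (-0.05, 0.0), (0.0, 0.05), (0.0, -0.05)]   # compass steps

def is_goal(state):
    return state[0] > GOAL and state[1] > GOAL

def next_state(state, action):
    x, y = state
    dx, dy = action
    # clip to the unit square
    return (min(max(x + dx, 0.0), 1.0), min(max(y + dy, 0.0), 1.0))

def cost(state, action):
    return 0.0 if is_goal(state) else 0.5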
Since J* is linear, one would hope smooth value iteration could converge to it with a
function approximator as simple as linear or quadratic regression. However, the intermediate value functions of Figure 2 are not smooth and cannot be fit accurately by
a low-order polynomial. Using linear regression on a sample of 256 randomly-chosen
states, smooth value iteration took over 500 iterations before "luckily" converging to
optimal. Quadratic regression, though it always produces a smaller fit error than linear regression, did not converge (Figure 3). The quadratic function, in trying to both
be flat in the middle of state space and bend down toward 0 at the goal corner, must
compensate by underestimating the values at the corner opposite the goal. These
underestimates then enlarge on each iteration, as the one-step DP lookaheads erroneously indicate that points can lower their expected cost-to-go by stepping farther
away from the goal. The resulting policy is anti-optimal.
Figure 1: In the continuous gridworld domain, the state is a point (x, y) in [0, 1]^2. There are four actions corresponding to short steps (length 0.05, cost 0.5) in each compass direction, and the goal region is the upper right-hand corner. J*(x, y) is linear. (Left panel: the domain; right panel: J*(x, y).)
Figure 2: Computation of J* by discrete value iteration (iterations 12, 25, and 40 shown).
Figure 3: Divergence of smooth value iteration with quadratic regression (iterations 17, 43, and 127 shown; note z-axis).
Figure 4: The 2-D continuous gridworld with puddles, its optimal value function J*(x, y), and a diverging approximation of the value function by Local Weighted Regression at iteration 144 (note z-axis).
Figure 5: The car-on-the-hill domain (J* as a function of position and velocity). When the velocity is below a threshold, the car must reverse up the left hill to gain enough speed to reach the goal, so J* is discontinuous.
Figure 6: Divergence of smooth value iteration with backpropagation for car-on-the-hill (iterations 11, 101, and 201 shown). The neural net, a 2-layer MLP with 80 hidden units, was trained for 2000 epochs per iteration.
It may seem as though the divergence of smooth value iteration shown above can be
attributed to the global nature of polynomial regression. In fact, when the domain
is made slightly less trivial, the same types of instabilities appear with even a highly
Table 1: Summary of convergence results: Smooth value iteration

Domain             Linear   Quadratic   LWR       Backprop
2-D grid world     lucky    diverge     good      lucky
2-D puddle world   -        -           diverge   diverge
Car-on-the-hill    -        -           good      diverge
local memory-based function approximator such as local weighted regression (LWR)
[Cleveland and Delvin, 1988]. Figure 4 shows the continuous gridworld augmented
to include two oval "puddles" through which it is costly to step. Although LWR can
fit the corresponding J* function nearly perfectly, smooth value iteration with LWR
nonetheless reliably diverges. On another two-dimensional domain, the car-on-the-hill
(Figure 5), smooth value iteration with LWR did converge, but a neural net trained
by backpropagation did not (see Figure 6). Table 1 summarizes our results.
In light of such experiments, we conclude that the straightforward combination of
DP and function approximation is not robust. A general-purpose learning method
will require either using a function approximator constrained to be robust during DP
[Yee, 1992], or an algorithm which explicitly prevents divergence even in the face of
imperfect function approximation, such as the Grow-Support algorithm we present
in Section 3.
2.2 RELATED WORK
Theoretically, it is not surprising that inserting a smoothing process into a recursive
DP procedure can lead to trouble. In [Thrun and Schwartz, 1993] one case is analyzed
with the assumption that errors due to function approximation bias are independently
distributed. Another area of theoretical analysis concerns inadequately approximated
J* functions. In [Singh and Yee, 1994] and [Williams, 1993] bounds are derived for the
maximum reduction in optimality that can be produced by a given error in function
approximation. If a basis function approximator is used, then the reduction can be
large [Sabes, 1993]. These results assume generalization from a dataset containing
true optimal values; the true reinforcement learning scenario is even harder because
each iteration of DP requires its own function approximation.
3 THE GROW-SUPPORT ALGORITHM
The Grow-Support algorithm is designed to construct the optimal value function with
a generalizing function approximator while being robust and stable. It recognizes that
function approximators cannot always be relied upon to fit the intermediate value
functions produced by DP. Instead, it assumes only that the function approximator
can represent the final J* function accurately. The specific principles of Grow-Support
are these:
1. We maintain a "support" set of states whose final J* values have been computed, starting with goal states, and growing this set out from the goal. The
fitter is trained only on these values, which we assume it is capable of fitting.
2. Instead of propagating values by one-step DP backups, we use simulations
with the current greedy policy, called "rollouts". They explicitly verify the
achievability of a state's cost-to-go estimate before adding that state to the
374
Justin Boyan, Andrew W. Moore
support. In a rollout, the J values are derived from costs of actual paths to the
goal, not from the values of the previous iteration's function approximation.
This prevents divergence .
3. We take maximum advantage of generalization. Each iteration, we add to
the support set any sample state which can, by executing a single action,
reach a state that passes the rollout test. In a discrete environment, this
would cause the support set to expand in one-step concentric "shells" back
from the goal. But in our continuous case, the function approximator may
be able to extrapolate correctly well beyond the support region-and when
this happens, we can add many points to the support set at once. This leads
to the very desirable behavior that the support set grows in big jumps in
regions where the value function is smooth.
Figure 7: Grow-Support with quadratic regression on the gridworld; |SUPPORT| = 4, 12, and 256 at iterations 1, 2, and 3. (Compare Figure 3.)
Figure 8: Grow-Support with LWR on the two-puddle gridworld; |SUPPORT| = 3, 213, and 253 at iterations 1, 2, and 5. (Compare Figure 4.)
Figure 9: Grow-Support with backprop on car-on-the-hill; |SUPPORT| = 79, 134, and 206 at iterations 3, 8, and 14. (Compare Figure 6.)
The algorithm, again restricted to the deterministic case for simplicity, is outlined in
the appendix. In Figures 7-9, we illustrate its convergence on the same combinations
of domain and function approximator which caused smooth value iteration to diverge.
In Figure 8, all but three points are added to the support within only five iterations,
and the resulting greedy policy is optimal. In Figure 9, after 14 iterations, the algorithm terminates. Although 50 states near the discontinuity were not added to the
support set, the resulting policy is optimal within the support set. Grow-support
converged to a near-optimal policy for all the problems and fitters in Table 1.
The Grow-Support algorithm is more robust than value iteration. Empirically, it was
also seen to be no more computationally expensive (and often much cheaper) despite
the overhead of performing rollouts. Reasons for this are (1) the rollout test is not
expensive; (2) once a state has been added to the support, its value is fixed and it
needs no more computation; and most importantly, (3) the aggressive exploitation
of generalization enables the algorithm to converge in very few iterations. However,
with a nondeterministic problem, where multiple rollouts are required to assess the
accuracy of a prediction, Grow-Support would become more expensive.
It is easy to prove that Grow-Support will always terminate after a finite number
of iterations. If the function approximator is inadequate for representing the J*
function, Grow-Support may terminate before adding all sample states to the support
set. When this happens, we then know exactly which of the sample states are having
trouble and which have been learned. This suggests potential schemes for adaptively
adding sample states to the support in problematic regions. Investigation of these
ideas is in progress.
In conclusion, we have demonstrated that dynamic programming methods may diverge when their tables are replaced by generalizing function approximators. Our
Grow-Support algorithm uses rollouts, rather than one-step backups, to assign training values and to keep inaccurate states out of the training set. We believe these
principles will contribute substantially to producing practical, robust, reinforcement
learning.
Acknowledgements
We thank Scott Fahlman, Geoff Gordon, Mary Lee, Michael Littman and Marc Ringuette for
their suggestions, and the NDSEG fellowship and NSF Grant IRI-9214873 for their support.
APPENDIX: ALGORITHMS
Smooth Value Iteration(X, G, A, NEXT-STATE, COST, FITJ):
Given: - a finite collection of states X = {x_1, x_2, ..., x_N} sampled from the
         continuous state space (a subset of R^n), and goal region G
       - a finite set of allowable actions A
       - a deterministic transition function NEXT-STATE: states x A -> states
       - the 1-step cost function COST: states x A -> R
       - a smoothing function approximator FITJ

iter := 0
J^(0)[i] := 0 for all i = 1 ... N
repeat
    Train FITJ^(iter) to approximate the training set {x_1 -> J^(iter)[1], ..., x_N -> J^(iter)[N]}
    iter := iter + 1
    for i := 1 ... N do
        J^(iter)[i] := 0                                                               if x_i in G
                       min_{a in A} (COST(x_i, a) + FITJ^(iter-1)(NEXT-STATE(x_i, a)))  otherwise
until the J array stops changing
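A compact Python rendering of this loop, with any regressor exposing fit/predict standing in for FITJ, might look like the following (an illustrative sketch; the fitter interface and stopping test are our own choices):

import numpy as np

def smooth_value_iteration(X, is_goal, actions, next_state, cost, fitter,
                           max_iters=200, tol=1e-3):
    """Fitted value iteration over sampled states X (list of state vectors)."""
    J = np.zeros(len(X))
    for _ in range(max_iters):
        fitter.fit(np.array(X), J)             # train the approximator on current targets
        new_J = np.array([
            0.0 if is_goal(x) else
            min(cost(x, a) + float(fitter.predict([next_state(x, a)])[0])
                for a in actions)
            for x in X
        ])
        if np.max(np.abs(new_J - J)) < tol:    # the J array stops changing
            return fitter, new_J
        J = new_J
    return fitter, J                            # may not have converged (see text)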
subroutine RolloutCost(x, J):
Starting from state x, follow the greedy policy defined by value function J until either reaching the goal, or exceeding a total path cost of J(x) + epsilon. Then return:
    -> the actual total cost of the path, if the goal is reached from x with cost <= J(x) + epsilon
    -> infinity, if the goal is not reached within cost J(x) + epsilon.
Grow-Support(X, G, A, NEXT-STATE, COST, FITJ):
Given: exactly the same inputs as Smooth Value Iteration.

SUPPORT := { (xi |-> 0) : xi in G }
repeat
    train FITJ to approximate the training set SUPPORT
    for each xi not in SUPPORT do
        c := min over a in A of [ COST(xi, a) + RolloutCost(NEXT-STATE(xi, a), FITJ) ]
        if c < infinity then
            add (xi |-> c) to the training set SUPPORT
until SUPPORT stops growing or includes all sample points.
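A hedged Python sketch of Grow-Support in the same style follows. The rollout budget epsilon, the fitter interface, and the greedy-policy helper are assumptions for illustration rather than the authors' code; it also assumes strictly positive 1-step costs so that rollouts terminate.

import numpy as np

def rollout_cost(x, fitter, actions, next_state, cost, in_goal, epsilon, value_at):
    """Follow the greedy policy defined by the fitted J until the goal is reached
    or the accumulated cost exceeds J(x) + epsilon; return the cost or infinity."""
    budget = value_at(fitter, x) + epsilon
    total = 0.0
    while not in_goal(x):
        # greedy action under the current fitted value function
        a = min(actions, key=lambda a: cost(x, a) + value_at(fitter, next_state(x, a)))
        total += cost(x, a)
        x = next_state(x, a)
        if total > budget:
            return np.inf
    return total

def grow_support(X, in_goal, actions, next_state, cost, fitter, epsilon=1.0):
    """X is an (N, d) numpy array of sampled states."""
    value_at = lambda f, s: float(f.predict(np.atleast_2d(s))[0])
    support = {i: 0.0 for i, x in enumerate(X) if in_goal(x)}     # {x_i -> 0 | x_i in G}
    while True:
        idx = list(support)
        fitter.fit(X[idx], np.array([support[i] for i in idx]))   # train FITJ on SUPPORT
        grew = False
        for i, x in enumerate(X):
            if i in support:
                continue
            c = min(cost(x, a) + rollout_cost(next_state(x, a), fitter, actions,
                                              next_state, cost, in_goal, epsilon, value_at)
                    for a in actions)
            if np.isfinite(c):
                support[i] = c                                    # add (x_i -> c), then keep it fixed
                grew = True
        if not grew or len(support) == len(X):
            return support, fitter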
References
[Barto et al., 1989] A. Barto, R. Sutton, and C. Watkins. Learning and sequential decision making. Technical Report COINS 89-95, Univ. of Massachusetts, 1989.
[Bellman et al., 1963] R. Bellman, R. Kalaba, and B. Kotkin. Polynomial approximation - a new computational technique in dynamic programming: Allocation processes. Mathematics of Computation, 17, 1963.
[Boyan, 1992] J. A. Boyan. Modular neural networks for learning context-dependent game strategies. Master's thesis, Cambridge University, 1992.
[Bradtke, 1993] S. J. Bradtke. Reinforcement learning applied to linear quadratic regulation. In S. J. Hanson, J. Cowan, and C. L. Giles, editors, NIPS-5. Morgan Kaufmann, 1993.
[Cleveland and Delvin, 1988] W. S. Cleveland and S. J. Delvin. Locally weighted regression: An approach to regression analysis by local fitting. JASA, 83(403):596-610, September 1988.
[Lin, 1993] L.-J. Lin. Reinforcement Learning for Robots Using Neural Networks. PhD thesis, Carnegie Mellon University, 1993.
[Mahadevan and Connell, 1990] S. Mahadevan and J. Connell. Automatic programming of behavior-based robots using reinforcement learning. Technical report, IBM T. J. Watson Research Center, NY 10598, 1990.
[Sabes, 1993] P. Sabes. Approximating Q-values with basis function representations. In Proceedings of the Fourth Connectionist Models Summer School, 1993.
[Schraudolph et al., 1994] N. Schraudolph, P. Dayan, and T. Sejnowski. Using TD(λ) to learn an evaluation function for the game of Go. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, NIPS-6. Morgan Kaufmann, 1994.
[Singh and Yee, 1994] S. P. Singh and R. Yee. An upper bound on the loss from approximate optimal-value functions. Machine Learning, 1994. Technical Note (to appear).
[Sutton, 1988] R. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3, 1988.
[Tesauro, 1992] G. Tesauro. Practical issues in temporal difference learning. Machine Learning, 8(3/4), May 1992.
[Thrun and Schwartz, 1993] S. Thrun and A. Schwartz. Issues in using function approximation for reinforcement learning. In Proceedings of the Fourth Connectionist Models Summer School, 1993.
[Watkins, 1989] C. Watkins. Learning from Delayed Rewards. PhD thesis, Cambridge University, 1989.
[Williams, 1993] R. Williams. Tight performance bounds on greedy policies based on imperfect value functions. Technical Report NU-CCS-93-13, Northeastern University, 1993.
[Yee, 1992] R. Yee. Abstraction in control learning. Technical Report COINS 92-16, Univ. of Massachusetts, 1992.
23 | 1,019 | A Mixture Model System for Medical and
Machine Diagnosis
Magnus Stensmo
Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
10010 North Torrey Pines Road
La Jolla, CA 92037, U.S.A.
{magnus,terry}@salk.edu
Abstract
Diagnosis of human disease or machine fault is a missing data problem
since many variables are initially unknown. Additional information needs
to be obtained. The joint probability distribution of the data can be used to
solve this problem. We model this with mixture models whose parameters
are estimated by the EM algorithm. This gives the benefit that missing
data in the database itself can also be handled correctly. The request for
new information to refine the diagnosis is performed using the maximum
utility principle. Since the system is based on learning it is domain
independent and less labor intensive than expert systems or probabilistic
networks. An example using a heart disease database is presented.
1 INTRODUCTION
Diagnosis is the process of identifying diseases in patients or disorders in machines by
considering history, symptoms and other signs through examination. Diagnosis is a common
and important problem that has proven hard to automate and formalize. A procedural
description is often hard to attain since experts do not know exactly how they solve a
problem.
In this paper we use the information about a specific problem that exists in a database
of cases. The disorders or diseases are determined by variables from observations and
the goal is to find the probability distribution over the disorders, conditioned on what has
been observed. The diagnosis is strong when one or a few of the possible outcomes are
differentiated from the others. More information is needed if it is inconclusive. Initially
there are only a few clues and the rest of the variables are unknown. Additional information
is obtained by asking questions and doing tests. Since tests may be dangerous, time
consuming and expensive, it is generally not possible or desirable to find the answer to
every question. Unnecessary tests should be avoided.
There have been many attempts to automate diagnosis. Early work [Ledley & Lusted, 1959]
realized that the problem is not always tractable due to the large number of influences that
can exist between symptoms and diseases. Expert systems, e.g. the INTERNIST system
for internal medicine [Miller et al., 1982], have rule-bases which are very hard and time
consuming to build. Inconsistencies may arise when new rules are added to an existing
database. There is also a strong domain dependence so knowledge bases can rarely be
reused for new applications.
Bayesian or probabilistic networks [Pearl, 1988] are a way to model a joint probability
distribution by factoring using the chain rule in probability theory. Although the models
are very powerful when built, there are presently no general learning methods for their
construction. A considerable effort is needed. In the Pathfinder system for lymph node
pathology [Heckerman et al., 1992] about 14,000 conditional probabilities had to be assessed
by an expert pathologist. It is inevitable that errors will occur when such large numbers of
manual assessments are involved.
Approaches to diagnosis that are based on domain-independent machine learning alleviate
some of the problems with knowledge engineering. For decision trees [Quinlan, 1986], a
piece of information can only be used if the appropriate question comes up when traversing
the tree. This means that irrelevant questions can not be avoided. Feedforward multilayer
perceptrons for diagnosis [Baxt, 1990] can classify very well, but they need full information
about a case. None of these these methods have adequate ways to handle missing data during
learning or classification.
The exponentially growing number of probabilities involved can make exact diagnosis
intractable. Simple approximations such as independence between all variables and conditional independence given the disease (naive Bayes) introduce errors since there usually are
dependencies between the symptoms. Even though systems based on these assumptions
work surprisingly well, correct diagnosis is not guaranteed. This paper will avoid these
assumptions by using mixture models.
2 MIXTURE MODELS
Diagnosis can be formulated as a probability estimation problem with missing inputs. The
probabilities of the disorders are conditioned on what has currently been observed. If we
model the joint probability distribution it is easy to marginalize to get any conditional
probability. This is necessary in order to be able to handle missing data in a principled
way [Ahmad & Tresp, 1993]. Using mixture models [McLachlan & Basford, 1988], a
simple closed form solution to optimal regression with missing data can be formulated. The
EM algorithm, a method from parametric statistics for parameter estimation, is especially
interesting in this context since it can also be formulated to handle missing data in the
training examples [Dempster et al., 1977; Ghahramani & Jordan, 1994].
2.1 THE EM ALGORITHM
The data underlying the model is assumed to be a set of N D-dimensional vectors X = {x_1, ..., x_N}. Each data point is assumed to have been generated independently from a mixture density with M components

p(x) = \sum_{j=1}^{M} p(x, \omega_j; \theta_j) = \sum_{j=1}^{M} P(\omega_j)\, p(x \mid \omega_j; \theta_j),    (1)

where each mixture component is denoted by \omega_j, P(\omega_j) is the a priori probability for mixture \omega_j, and \theta = (\theta_1, ..., \theta_M) are the model parameters.
To estimate the parameters for the different mixtures so that it is likely that the linear combination of them generated the set of data points, we use maximum likelihood estimation. A
good method is the iterative Expectation-Maximization, or EM, algorithm [Dempster et al.,
1977].
Two steps are repeated. First a likelihood is formulated and its expectation is computed in
the E-step. For the type of models that we will use, this step will calculate the probability
that a certain mixture component generated the data point in question. The second step
is the M-step where the parameters that maximize the expectation are found. This can be
found analytically for models that can be written in an exponential form, e.g. Gaussian
functions. Equations can be derived for both batch and on-line learning. Update equations
for Gaussian distributions with and without missing data will be given here, other distributions are possible, e.g. binomial or multinomial [Stensmo & Sejnowski, 1994]. Details and
derivations can be found in [Dempster et al., 1977; Nowlan, 1991; Ghahramani & Jordan,
1994; Stensmo & Sejnowski, 1994].
From (1) we form the log likelihood of the data

L(\theta \mid X) = \sum_{i=1}^{N} \log p(x_i; \theta) = \sum_{i=1}^{N} \log \sum_{j=1}^{M} P(\omega_j)\, p(x_i \mid \omega_j; \theta_j).
There is unfortunately no analytic solution to the logarithm of the sum in the right hand side
of the equation. However, if we were to know which of the mixtures generated which data
point we could compute it. The EM algorithm solves this by introducing a set of binary
indicator variables Z = {z_{ij}}, where z_{ij} = 1 if and only if the data point x_i was generated by mixture component j. The log likelihood can then be manipulated to a form that does not contain the log of a sum.
The expectation of the z_{ij} under the current parameter values \theta^{(k)} is used, since the z_{ij} are not known directly. This is the E-step of the EM algorithm. The expected value is then maximized in the M-step. The two steps are iterated until convergence. The likelihood will never decrease
after an iteration [Dempster et al., 1977]. Convergence is fast compared to gradient descent.
One of the main motivations for the EM-algorithm was to be able to handle missing values
for variables in a data set in a principled way. In the complete data case we introduced
missing indicator variables that helped us solve the problem. With missing data we add
the missing components to the Z already missing [Dempster et al., 1977; Ghahramani &
Jordan, 1994].
2.2 GAUSSIAN MIXTURES
We specialize here the EM algorithm to the case where the mixture components are radial Gaussian distributions. For mixture component j with mean \mu_j and covariance matrix \Sigma_j this is

G_j(x) = (2\pi)^{-D/2} |\Sigma_j|^{-1/2} \exp[ -\tfrac{1}{2}(x - \mu_j)^T \Sigma_j^{-1} (x - \mu_j) ].

The form of the covariance matrix is often constrained to be diagonal or to have the same values on the diagonal, \Sigma_j = \sigma_j^2 I. This corresponds to axis-parallel oval-shaped and radially symmetric Gaussians, respectively. Radial and diagonal basis functions can function well in applications [Nowlan, 1991], since several Gaussians together can form complex shapes in the space. With fewer parameters over-fitting is minimized. In the radial case, with variance \sigma_j^2,

G_j(x) = (2\pi\sigma_j^2)^{-D/2} \exp[ -\|x - \mu_j\|^2 / (2\sigma_j^2) ].
In the E-step the expected value of the likelihood is computed. For the Gaussian case this
becomes the probability that Gaussian j generated the data point
p_j(x) = \frac{P(\omega_j)\, G_j(x)}{\sum_{k=1}^{M} P(\omega_k)\, G_k(x)}.
The M-step finds the parameters that maximize the likelihood from the E-step. For complete data the new estimates are

P(\omega_j) = S_j / N, \quad \mu_j = \frac{1}{S_j} \sum_{i=1}^{N} p_j(x_i)\, x_i, \quad \sigma_j^2 = \frac{1}{D S_j} \sum_{i=1}^{N} p_j(x_i)\, \|x_i - \mu_j\|^2,    (2)

where S_j = \sum_{i=1}^{N} p_j(x_i).
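As a concrete illustration of the E- and M-steps just given, here is a minimal NumPy sketch of one EM pass for a radial (spherical) Gaussian mixture on complete data. The initialization and stopping rule are omitted, and the function and variable names are choices made here rather than the paper's.

import numpy as np

def em_step(X, priors, means, variances):
    """One EM iteration for a mixture of radial Gaussians (complete data).

    X         : (N, D) data matrix
    priors    : (M,)   mixture weights P(omega_j)
    means     : (M, D) component means mu_j
    variances : (M,)   per-component variances sigma_j^2
    """
    N, D = X.shape

    # E-step: responsibilities p_j(x_i) = P(omega_j) G_j(x_i) / sum_k P(omega_k) G_k(x_i)
    sq_dist = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)            # (N, M)
    log_g = -0.5 * (D * np.log(2 * np.pi * variances)[None, :] + sq_dist / variances[None, :])
    log_post = np.log(priors)[None, :] + log_g
    log_post -= log_post.max(axis=1, keepdims=True)                             # numerical stability
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)                                     # (N, M)

    # M-step: re-estimate the parameters as in Eqn. 2, with S_j = sum_i p_j(x_i)
    S = post.sum(axis=0)                                                        # (M,)
    new_priors = S / N
    new_means = (post.T @ X) / S[:, None]
    new_sq_dist = ((X[:, None, :] - new_means[None, :, :]) ** 2).sum(axis=2)
    new_variances = (post * new_sq_dist).sum(axis=0) / (D * S)
    return new_priors, new_means, new_variances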
When input variables are missing, G_j(x) is only evaluated over the set of observed dimensions O. Missing (unobserved) dimensions are denoted by U. The update equation for P(\omega_j) is unchanged. To estimate \mu_j we set x_i^U = \mu_j^U and use (2); the variance update is modified correspondingly.
A least squares regression was used to fill in missing data values during classification. For
missing variables and Gaussian mixtures this becomes the same approach used by [Ahmad &
Tresp, 1993]. The result of the regression when the outcome variables are missing is a
probability distribution over the disorders. This can be reduced to a classification for
comparison with other systems by picking the outcome with the maximum of the estimated
probabilities.
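To make the regression step concrete: for a Gaussian mixture, the least-squares (conditional-expectation) estimate of the unobserved variables given the observed ones is a responsibility-weighted average of the component means, as in [Ahmad & Tresp, 1993]. The sketch below assumes radial components and reuses the variable names of the EM sketch above; it is an illustration, not the authors' code.

import numpy as np

def fill_in_missing(x, priors, means, variances):
    """Conditional-expectation regression for a radial Gaussian mixture.

    x : (D,) vector with np.nan marking the missing (unobserved) dimensions U.
    Returns a completed copy of x with E[x_U | x_O] filled in.
    """
    observed = ~np.isnan(x)
    D_o = observed.sum()

    # responsibilities computed over the observed dimensions O only
    sq_dist = ((x[observed][None, :] - means[:, observed]) ** 2).sum(axis=1)
    w = np.log(priors) - 0.5 * (D_o * np.log(2 * np.pi * variances) + sq_dist / variances)
    w = np.exp(w - w.max())
    w /= w.sum()

    # E[x_U | x_O] = sum_j p_j(x_O) * mu_{j,U}   (for radial components)
    completed = x.copy()
    completed[~observed] = (w[:, None] * means[:, ~observed]).sum(axis=0)
    return completed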
3 REQUESTING MORE INFORMATION
During the diagnosis process, the outcome probabilities are refined at each step based on
newly acquired knowledge. It it important to select the questions that lead to the minimal
number of necessary tests. There is generally a cost associated with each test and the goal
is to minimize the total cost. Early work on automated diagnosis [Ledley & Lusted, 1959]
acknowledged the problem of asking as few questions as possible and suggested the use of
decision analysis for the solution. An important idea from the field of decision theory is the
maximum expected utility principle [von Neuman & Morgenstern, 1947]: A decision maker
should always choose the alternative that maximizes some expected utility of the decision .
For diagnosis it is the cost of misclassification . Each pair of outcomes has a utility u(x, y)
when the correct diagnosis is x but y has been incorrectly determined. The expectation can
be computed when we know the probabilities of the outcomes.
The utility values have to be assessed manually in what can be a lengthy and complicated
process. For this reason a simplification of this function has been suggested by [Heckerman et al., 1992]: The utility u(x, y) is 1 when both x and y are benign or both are malign,
and 0 otherwise. This simplification has been found to work well in practice. Another
complication with maximum expected utility principle can also make it intractable. In the
ideal case we would evaluate every possible sequence of future choices to see which is
the best. Since the size of the search tree of possibilities grows exponentially this is often
not possible. A simplification is to look ahead only one or a few steps at a time. This
nearsighted or myopic approach has been tested in practice with good results [Gorry &
Barnett, 1967; Heckerman et al., 1992] .
4 THE DIAGNOSIS SYSTEM
The system we have developed has two phases. First there is a learning phase where a
probabilistic model is built. This model is then used for inference in the diagnosis phase.
In the learning phase, the joint probability distribution of the data is modeled using mixture
models. Parameters are determined from a database of cases by the EM algorithm. The
k-means algorithm is used for initialization. Input and output variables for each case are
combined into one vector per case to form the set of training patterns. The outcomes and
other nominal variables are coded as J of N . Continuous variables are interval coded.
In the diagnosis phase, myopic one-step look-ahead was used and utilities were simplified
as above. The following steps were performed:
1. Initial observations were entered.
2. Conditional expectation regression was used to fill in unknown variables.
3. The maximum expected utility principle was used to recommend the next observation to make. Stop if nothing would be gained by further observations.
4. The user was asked to determine the correct value for the recommended observation. Any other observations could be made, instead of or in addition to this.
5. Continue with step 2.
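A compact sketch of this loop, under the simplified 0/1 utilities, is given below. The model interface (posterior and predictive methods), the scoring of candidate questions, and the stopping threshold are placeholders standing in for the paper's myopic one-step look-ahead, not the authors' implementation.

def diagnose(model, variables, initial_obs, ask_user, min_gain=1e-3):
    """Myopic one-step look-ahead diagnosis loop with simplified 0/1 utilities.

    model.posterior(obs)       -> dict disorder -> probability
    model.predictive(var, obs) -> list of (value, probability) for an unknown variable
    ask_user(var)              -> the value the user reports for that variable
    """
    obs = dict(initial_obs)                               # step 1: initial observations
    while True:
        post = model.posterior(obs)                       # step 2: condition on what is known
        current_utility = max(post.values())              # P(the single best diagnosis is right)

        best_var, best_gain = None, 0.0
        for var in variables:
            if var in obs:
                continue
            # expected utility after observing `var`, averaged over its predicted values
            expected = sum(p * max(model.posterior({**obs, var: v}).values())
                           for v, p in model.predictive(var, obs))
            gain = expected - current_utility
            if gain > best_gain:
                best_var, best_gain = var, gain

        if best_var is None or best_gain < min_gain:      # step 3: stop if nothing would be gained
            return post
        obs[best_var] = ask_user(best_var)                # step 4: get the answer, then loop (step 5)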
Table 1: The Cleveland Heart Disease database.

 #   Observation   Description                                     Values
 1   age           Age in years                                    continuous
 2   sex           Sex of subject                                  male/female
 3   cp            Chest pain                                      four types
 4   trestbps      Resting blood pressure                          continuous
 5   chol          Serum cholesterol                               continuous
 6   fbs           Fasting blood sugar                             lt or gt 120 mg/dl
 7   restecg       Resting electrocardiogr.                        five values
 8   thalach       Max heart rate achieved                         continuous
 9   exang         Exercise induced angina                         yes/no
10   oldpeak       ST depr. induced by exercise relative to rest   continuous
11   slope         Slope of peak exercise ST segment               up/flat/down
12   ca            # major vess. col. flourosc.                    0-3
13   thal          Defect type                                     normal/fixed/reversible

 #   Disorder      Description                                     Values
14   num           Heart disease                                   Not present/4 types
5 EXAMPLE
The Cleveland heart disease data set from UC, Irvine has been used to test the system. It
contains 303 examples of four types of heart disease and its absence. There are thirteen
continuous- or nominally-valued variables (Table 1). The continuous variables were interval
coded with one unit per standard deviation away from the mean value. This was chosen since
they were approximately normally distributed. Nominal variables were coded with one unit
per value. In total the 14 variables were coded with 55 units. The EM steps were repeated
until convergence (60-150 iterations). A varying number of mixture components (20-120)
were tried.
Previously reported results have used only presence or absence of the heart disease. The
best of these has been a classification rate of 78.9% using a system that incrementally
built prototypes [Gennari et al., 1989]. We have obtained 78.6% correct classification
with 60 radial Gaussian mixtures as described above. Performance increased with the
number of mixture components. It was not sensitive to a varying number of mixture
components during training unless there were too few of them. Previous investigators have
pointed out that there is not enough information in the thirteen variables in this data set to
reach 100% [Gennari et al., 1989].
An annotated transcript of a diagnosis session is shown in Figure 1.
6 CONCLUSIONS AND FURTHER WORK
Several properties of this model remain to be investigated. It should be tested on several
more databases. Unfortunately databases are typically proprietary and difficult to obtain.
Future prospects for medical databases should be good since some hospitals are now using
computerized record systems instead of traditional paper-based. It should be fairly easy to
The leftmost number of the five numbers in a line is the estimated probability for no heart disease, followed by the probabilities for the four types of heart disease. The entropy, defined as -\sum_i p_i \log p_i, of the diagnoses is given at the same time as a measure of how decisive the current conclusion is. A completely determined diagnosis has entropy 0. Initially all of the variables are unknown and starting diagnoses are the unconditional prior probabilities.

Disorders (entropy = 1.85):
0.541254  0.181518  0.118812  0.115512  0.042904
What is cp ? 3

The first question is chest pain, and the answer changes the estimated probabilities. This variable is continuous. The answer is to be interpreted as how far from the mean the observation is, in standard deviations. As the decision becomes more conclusive, the entropy decreases.

Disorders (entropy = 0.69):
0.888209  0.060963  0.017322  0.021657  0.011848
What is age ? 0

Disorders (entropy = 0.57):
0.91307619  0.00081289  0.02495360  0.03832095  0.02283637
What is oldpeak ? -2

Disorders (entropy = 0.38):
0.94438718  0.00089016  0.02539957  0.02691099  0.00241210
What is chol ? -1

Disorders (entropy = 0.11):
0.98848758  0.00028553  0.00321580  0.00507073  0.00294036

We have now determined that the probability of no heart disease in this case is 98.8%. The remaining 1.2% is spread out over the other possibilities.

Figure 1: Diagnosis example.
generate data for machine diagnosis.
An alternative way to choose a new question is to evaluate the variance change in the output
variables when a variable is changed from missing to observed. The idea is that a variable
known with certainty has zero variance. The variable with the largest resulting conditional
variance could be selected as the query, similar to [Cohn et al., 1995].
One important aspect of automated diagnosis is the accompanying explanation for the
conclusion, a factor that is important for user acceptance. Since the basis functions have
local support and since we have estimates for the probability of each basis function having
generated the observed data, explanations for the conclusions could be generated.
Instead of using the simplified utilities with values 0 and 1 for the expected utility calculations they could be learned by reinforcement learning. A trained expert would evaluate the
quality of the diagnosis performed by the system, followed by adjustment of the utilities.
The 0 and 1 values can be used as starting values.
Acknowledgements
The heart disease database is from the University of California, Irvine Repository of
Machine Learning Databases and originates from R. Detrano, Cleveland Clinic Foundation.
Peter Dayan provided helpful comments on an earlier version of this paper.
References
Ahmad, S. & Tresp, V. (1993). Some solutions to the missing feature problem in vision. In Advances in Neural Information Processing Systems, vol. 5, pp. 393-400. Morgan Kaufmann, San Mateo, CA.
Baxt, W. (1990). Use of an artificial neural network for data analysis in clinical decision-making: The diagnosis of acute coronary occlusion. Neural Computation, 2(4), 480-489.
Cohn, D. A., Ghahramani, Z. & Jordan, M. I. (1995). Active learning with statistical models. In Advances in Neural Information Processing Systems, vol. 7. Morgan Kaufmann, San Mateo, CA.
Dempster, A., Laird, N. & Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39, 1-38.
Gennari, J., Langley, P. & Fisher, D. (1989). Models of incremental concept formation. Artificial Intelligence, 40, 11-62.
Ghahramani, Z. & Jordan, M. (1994). Supervised learning from incomplete data via an EM approach. In Advances in Neural Information Processing Systems, vol. 6, pp. 120-127. Morgan Kaufmann, San Mateo, CA.
Gorry, G. A. & Barnett, G. O. (1967). Experience with a model of sequential diagnosis. Computers and Biomedical Research, 1, 490-507.
Heckerman, D., Horvitz, E. & Nathwani, B. (1992). Toward normative expert systems: Part I. The Pathfinder project. Methods of Information in Medicine, 31, 90-105.
Ledley, R. S. & Lusted, L. B. (1959). Reasoning foundations of medical diagnosis. Science, 130(3366), 9-21.
McLachlan, G. J. & Basford, K. E. (1988). Mixture Models: Inference and Applications to Clustering. Marcel Dekker, Inc., New York, NY.
Miller, R. A., Pople, H. E. & Myers, J. D. (1982). Internist-I: An experimental computer-based diagnostic consultant for general internal medicine. New England Journal of Medicine, 307, 468-476.
Nowlan, S. J. (1991). Soft Competitive Adaptation: Neural Network Learning Algorithms based on Fitting Statistical Mixtures. PhD thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA.
Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81-106.
Stensmo, M. & Sejnowski, T. J. (1994). A mixture model diagnosis system. Tech. Rep. INC-9401, Institute for Neural Computation, University of California, San Diego.
von Neuman, J. & Morgenstern, O. (1947). Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ.
24 | 102 | 186
AN APPLICATION OF THE PRINCIPLE OF
MAXIMUM INFORMATION PRESERVATION
TO LINEAR SYSTEMS
Ralph Linsker
IBM T. J. Watson Research Center, Yorktown Heights, NY 10598
ABSTRACT
This paper addresses the problem of determining the weights for a
set of linear filters (model "cells") so as to maximize the
ensemble-averaged information that the cells' output values jointly
convey about their input values, given the statistical properties of
the ensemble of input vectors. The quantity that is maximized is the
Shannon information rate, or equivalently the average mutual
information between input and output. Several models for the role
of processing noise are analyzed, and the biological motivation for
considering them is described. For simple models in which nearby
input signal values (in space or time) are correlated, the cells
resulting from this optimization process include center-surround
cells and cells sensitive to temporal variations in input signal.
INTRODUCTION
I have previously proposed [Linsker, 1987, 1988] a principle of "maximum
information preservation," also called the "infomax" principle, that may account for
certain aspects of the organization of a layered perceptual network. The principle
applies to a layer L of cells (which may be the input layer or an intermediate layer
of the network) that provides input to a next layer M. The mapping of the input
signal vector L onto an output signal vector M, f: L -> M, is characterized by a
conditional probability density function ("pdf") P(M|L). The set S of allowed
mappings f is specified. The input pdf P_L(L) is also given. (In the cases considered
here, there is no feedback from M to L.) The infomax principle states that a
mapping f should be chosen for which the Shannon information rate [Shannon, 1949]

R(f) = \int dL\, P_L(L) \int dM\, P(M|L)\, \log[ P(M|L) / P_M(M) ]    (1)

is a maximum (over all f in the set S). Here P_M(M) = \int dL\, P_L(L)\, P(M|L) is the pdf
of the output signal vector M. R is identical to the average mutual information
between L and M.
To understand better how the infomax principle may be applied to biological systems
and complex synthetic networks, it is useful to solve the infomax optimization
problem explicitly for simpler systems whose properties are nonetheless biologically
motivated. This paper therefore deals with the practical computation of infomax
solutions for cases in which the mappings f are constrained to be linear.
INFOMAX SOLUTIONS FOR A SET OF LINEAR FILTERS
We consider the case of linear model "neurons" with multivariate Gaussian input
and additive Gaussian noise. There are N input (L) cells and N' output (M) cells.
The input column vector L = (L_1, L_2, ..., L_N)^T is randomly selected from an
N-dimensional Gaussian distribution having mean zero. That is,

P_L(L) = [(2\pi)^N \det Q^L]^{-1/2} \exp[ -\tfrac{1}{2} L^T (Q^L)^{-1} L ],    (2)

where Q^L is the covariance matrix of the input activities, Q^L_{ij} = \int dL\, P_L(L)\, L_i L_j.
(Superscript T denotes the matrix transpose.)
To specify the set S of allowed mappings !:L .... M, we define a processing model
that includes a description of (i) how noise enters during processing, (ii) the
independent variables over which we are to maximize R, and (iii) any constraints
on their values. Figure 1 shows several such models. We shall analyze the simplest,
then explain the motivation for the more complex models and analyze them in turn.
Model A -- Additive noise of constant variance
In Model A of Fig. 1 the output signal value of the nth M cell is:

M_n = \sum_i C_{ni} L_i + \nu_n.    (3)

The noise components \nu_n are independently and identically distributed ("i.i.d.")
random variables drawn from a Gaussian distribution having a mean of zero and
variance B.
Each mapping f: L -> M is characterized by the values of the {C_{ni}} and the noise
parameter B. The elements of the covariance matrix of the output activities are
(using Eqn. 3)

Q^M_{nm} = B\,\delta_{nm} + \sum_{i,j} C_{ni} Q^L_{ij} C_{mj},    (4)

where \delta_{nm} = 1 if n = m and 0 otherwise.
Evaluating Eqn. 1 for this processing model gives the information rate:

R(f) = (1/2) \ln \det W(f),    (5)

where W_{nm} = Q^M_{nm} / B. (R is the difference of two entropy terms. See [Shannon,
1949], p. 57, for the entropy of a Gaussian distribution.)
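Eqns. 3-5 can be evaluated numerically in a few lines. The following NumPy sketch computes R for Model A given a weight matrix, an input covariance, and a noise variance; the function name and array shapes are choices made here for illustration, not taken from the paper.

import numpy as np

def info_rate_model_a(C, Q_L, B):
    """Information rate of Eqns. 4-5: R = (1/2) ln det W, with W = Q_M / B.

    C   : (N', N) weight matrix with entries C_ni
    Q_L : (N, N)  input covariance matrix
    B   : scalar noise variance
    """
    Q_M = B * np.eye(C.shape[0]) + C @ Q_L @ C.T     # Eqn. 4
    sign, logdet = np.linalg.slogdet(Q_M / B)        # Eqn. 5
    return 0.5 * logdet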
If the components Cni of the C matrix are allowed to be arbitrarily large, then the
information rate can be made arbitrarily large, and the effects of noise become
arbitrarily small. One way to limit C is to impose a "resource constraint" on each
M cell. An example of such a constraint is \sum_i C_{ni}^2 = 1 for all n. One can then attempt
directly, using numerical methods, to maximize Eqn. 5 over all allowed C for given
B. However, when some additional conditions (below) are satisfied, further
analytical progress can be made.
Suppose the N L-cells are uniformly spaced along the line interval [0,1] with periodic
boundary conditions, so that cell N is next to cell 1. [The analysis can be extended
to a two- (or higher-) dimensional array in a straightforward manner.] Suppose also
that (for given N) the covariance Q^L_{ij} of the input values at cells i and j is a function
Q^L(s_{ij}) only of the displacement s_{ij} from i to j. (We deal with the periodicity by
defining s_{ab} = b - a - \gamma_{ab} N and choosing the integer \gamma_{ab} such that
-N/2 <= s_{ab} < N/2.) Then Q^L is a Toeplitz matrix, and its eigenvalues {\lambda_k} are the
components of the discrete Fourier transform ("F.T.") of Q^L(s):

\lambda_k = \sum_s Q^L(s) \exp(-2\pi i k s / N),    (-N/2) <= k < N/2.    (6)
We now impose two more conditions: (1) N' = N. This simplifies the resulting
expressions, but is otherwise inessential, as we shall discuss. (2) We constrain each
M cell to have the same arrangement of C-values relative to the M cell's position.
That is, Cnj is to be a function C(Sni) only of the displacement Sni from n to i. This
constraint substantially reduces the computational demands. We would not expect
Figure 1. Four processing models (A)-(D): Each diagram shows a single M cell (indexed by n) having output activity M_n. Inputs {L_i} may be common to many M cells. All noise contributions (dotted lines) are uncorrelated with one another and with {L_i}. GC = gain control (see text).
it to hold in general in a biologically realistic model -- since different M cells should
be allowed to develop different arrangements of weights -- although even then it
could be used as an Ansatz to provide a lower bound on R. The section,
"Temporally-correlated input patterns," deals with a situation in which it is
biologically plausible to impose this constraint.
Under these conditions, Q^M is also a Toeplitz matrix. Its eigenvalues are the
components of the F.T. of Q^M(s_{nm}). For N' = N these eigenvalues are (B + \lambda_k z_k),
where z_k = |c_k|^2 and c_k \equiv \sum_s C(s) \exp(-2\pi i k s / N) is the F.T. of C(s). [This
expression for the eigenvalues is obtained by rewriting Eqn. 4 as
Q^M(s_{nm}) = B\,\delta_{n-m,0} + \sum_{i,j} C(s_{ni}) Q^L(s_{ij}) C(s_{mj}), and taking the F.T. of both sides.]
Therefore

R = (1/2) \sum_k \ln[ 1 + \lambda_k z_k / B ].    (7)
We want to maximize R subject to \sum_s C(s)^2 = 1, which is equivalent to \sum_k z_k = N.
Using the Lagrange multiplier method, we maximize \Lambda \equiv R + \mu(\sum_k z_k - N) over all
nonnegative {z_k}. Solving \partial\Lambda/\partial z_k = 0 and requiring z_k >= 0 for all k gives the
solution:

z_k = \max[ (-1/2\mu) - (B/\lambda_k),\ 0 ],    (8)

where (given B) \mu is chosen such that \sum_k z_k = N.
Note that while the optimal {z_k} are uniquely determined, the phases of the {c_k} are
completely arbitrary [except that since the {C(s)} are real, we must have c_k^* = c_{-k}
for all k]. The {C(s)} values are therefore not uniquely determined. Fig. 2a shows
two of the solutions for an example in which Q^L(s) = \exp[-(s/s_0)^2] with s_0 = 6,
N = N' = 64, and B = 1. Both solutions have z_{0,\pm 1,...,\pm 6} = 5.417, 5.409, 5.378,
5.306, 5.134, 4.689, 3.376, and all other z_k = 0. Setting all c_k phases to zero yields
the solid curve; a particular random choice of phases yields the dotted curve. We
shall later see that imposing locality conditions on the {C(s)} (e.g., penalizing
nonzero C(s) for large |s|) can remove the phase ambiguity.
Our solution (Eqn. 8) can be described in terms of a so-called "water-filling"
analogy: If one plots B/\lambda_k versus k, then z_k is the depth of "water" at k when one
"pours" into the "vessel" defined by the B/\lambda_k curve a total quantity of "water" that
corresponds to \sum_k z_k = N and brings the "water level" to (-1/2\mu).
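For readers who want to experiment with Eqns. 7-8, the following NumPy sketch computes the water-filling allocation {z_k} and the resulting rate R for given eigenvalues {lambda_k} and noise variance B. The bisection search on the water level is an implementation choice made here, not taken from the paper; the example parameters follow the Fig. 2a case described above.

import numpy as np

def waterfill(lam, B, iters=100):
    """Infomax allocation of Eqn. 8: z_k = max(level - B/lambda_k, 0), with sum_k z_k = N."""
    lam = np.asarray(lam, dtype=float)
    N = len(lam)
    floor = B / lam                                   # the "vessel" profile B / lambda_k
    lo, hi = floor.min(), floor.max() + N             # bracket the water level (-1/2 mu)
    for _ in range(iters):                            # bisection on the level
        level = 0.5 * (lo + hi)
        if np.maximum(level - floor, 0.0).sum() > N:
            hi = level
        else:
            lo = level
    z = np.maximum(0.5 * (lo + hi) - floor, 0.0)
    R = 0.5 * np.sum(np.log1p(lam * z / B))           # information rate of Eqn. 7
    return z, R

# example: eigenvalues of Q^L(s) = exp(-(s/6)^2) on a ring of N = 64 cells, B = 1
s = np.arange(64)
s = np.minimum(s, 64 - s)                             # circular displacement
lam = np.fft.fft(np.exp(-(s / 6.0) ** 2)).real
z, R = waterfill(np.maximum(lam, 1e-12), B=1.0)       # clamp tiny eigenvalues for safety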
Let us contrast this problem with two other problems to which the "water-filling"
analogy has been applied in the information-theory literature. In our notation, they
are:
1.
Given a transfer function {C(s)} and the noise variance B, how should a given
total input signal power \sum_k \lambda_k be apportioned among the various wavenumbers
k so as to maximize the information rate R [Gallager, 1968]? Our problem is
complementary to this: we fix the input signal properties and seek an optimal
transfer function subject to constraints.
2.
Rate-distortion (R-D) calculation [Berger, 1971]: Given a distortion measure
(that defines a "distance" between the actual input signal and an estimate of it
that can be reconstructed from the channel's output), and the input power
spectrum {\lambda_k}, what choice of {z_k} minimizes the average distortion for given
information rate (or minimizes the required rate for given distortion)? In the
R-D problem there is a process of reconstruction, and a given measure for
assessing the "goodness" of reconstruction. In contrast, in our network there
is no reconstruction of the input signal, and no criterion of the "goodness" of
such a hypothetical reconstruction is provided.
Note also that infomax optimization is not the same as computing which channel
(that is, which mapping !:L .... M) selected from an allowed set has the maximum
information-theoretic capacity. In that problem, one is free to encode the inputs
before transmission so as to make optimal use of (i.e., "achieve the capacity of") the
channel. In our case, there is no such pre-encoding; the input ensemble is prescribed
(by the environment or by the output of an earlier processing stage) and we need to
maximize the channel rate for that ensemble.
The simplifying condition that N = N' (above) is unnecessarily restrictive. Eqn. 7
can be easily generalized to the case in which N is a mUltiple of N' and the N' M cells
are uniformly spaced on the unit interval. Moreover, in the limit that 1/N' is much
smaller than the correlation length scale of QL, it can be shown that R is unchanged
when we simultaneously increase N' and B by the same factor. (For example, two
adjacent M cells each having noise variance 2B jointly convey the same information
[Figure 2 appears here: three panels, (a), (b), (c), plotting C(s) versus s.]

Figure 2. Example infomax solutions C(s) for locally-correlated inputs: (a) Model A; region of nonnegligible C(s) extends over all s; phase ambiguity in c_k yields non-unique C(s) solutions, two of which are shown. See text for details. (b) Models C (solid curve) and D (dotted curve) with Gaussian g(s)^{-1} favoring short connections; shows center-surround receptive fields, more pronounced in Model D. (c) "Temporal receptive field" using Model D for temporally correlated scalar input to a single M cell; C(s) is the weight applied to the input signal that occurred s time steps ago. Spacing between ordinate marks is 0.1; \sum_s C(s)^2 = 1 in each case.
about L as one M cell having noise variance B.) For biological applications we are
mainly interested in cases in which there are many L cells [so that C(s) can be
treated as a function of a continuous variable] and many M cells (so that the effect
of the noise process is described by the single parameter B/ N).
The analysis so far shows two limitations of Model A. First, the constraint
\sum_i C_{ni}^2 = 1 is quite arbitrary. (It certainly does not appear to be a biologically natural
constraint to impose!) Second, for biological applications we are interested in
predicting the favored values of {C(s)}, but the phase ambiguity prevents this. In
the next section we show that a modified noise model leads naturally, without
arbitrary constraints on ~iqi' to the same results derived above. We then turn to a
model that favors local connections over long-range ones, and that resolves the
phase ambiguity issue.
Model B -- Independent noise on each input line
In Model B of Fig. 1 each input L_i to the nth M cell is corrupted by i.i.d. Gaussian
noise \nu_{ni} of mean zero and variance B. The output is

M_n = \sum_i C_{ni} (L_i + \nu_{ni}).    (9)

Since each \nu_{ni} is independent of all other noise terms (and of the inputs {L_i}), we find

Q^M_{nm} = \sum_{i,j} C_{ni} Q^L_{ij} C_{mj} + B\,\delta_{nm} \sum_i C_{ni}^2.    (10)

We may rewrite the last term as B\,\delta_{nm} (\sum_i C_{ni}^2)^{1/2} (\sum_j C_{mj}^2)^{1/2}. The information rate is
then R = (1/2) \ln \det W, where

W_{nm} = \delta_{nm} + \frac{\sum_{i,j} C_{ni} Q^L_{ij} C_{mj}}{B\, (\sum_i C_{ni}^2)^{1/2} (\sum_j C_{mj}^2)^{1/2}}.    (11)

Define C'_{ni} \equiv C_{ni} (\sum_k C_{nk}^2)^{-1/2}; then W_{nm} = \delta_{nm} + (\sum_{i,j} C'_{ni} Q^L_{ij} C'_{mj}) / B. Note that this is
identical (except for the replacement C -> C') to the expression following Eqn. (5),
in which Q^M was given by Eqn. (4). By definition, the {C'_{ni}} satisfy \sum_i C'^2_{ni} = 1 for
all n. Therefore, the problem of maximizing R for this model (with no constraints
on \sum_i C_{ni}^2) is identical to the problem we solved in the previous section.
Model C -- Favoring of local connections
Since the arborizations of biological cells tend to be spatially localized in many cases,
we are led to consider constraints or cost terms that favor localization. There are
various ways to implement this. Here we present a way of modifying the noise
process so that the infomax principle itself favors localized solutions, without
requiring additional terms unrelated to information transmission.
Model C of Fig. 1 is the same as Model B, except that now the longer connections
are "noisier" than the shorter ones. That is, the variance of VIIi is <V;i> = B~(sn;)
where g(s) increases with 1s I. [Equivalently, one could attenuate the signal on the
(i ~ n) line by g(sll;) 1/2 and have the same noise variance Bo on all lines.]
This change causes the last term of Eqn. 10 to be replaced by B_0\,\delta_{nm} \sum_i g(s_{ni}) C_{ni}^2.
Under the conditions discussed earlier (Toeplitz Q^L and Q^M, and N' = N), we derive

R = (1/2) \sum_k \ln[ 1 + \lambda_k z_k / (B_0 \sum_s g(s) C(s)^2) ].    (12)
Recall that the {ck } are related to {C(s)} by a Fourier transform (see just before Eqn.
7). To compute which choice of {C(s)} maximizes R for a given problem, we used
a gradient ascent algorithm several times, each time using a different random set of
initial {C(s)} values. For the problems whose solutions are exhibited in Figs. 2b and
2c, multiple starting points usually yielded the same solution to within the error
tolerance specified for the algorithm [apart from an arbitrary factor by which all of
the C(s)'s can be multiplied without affecting R], and that solution had the largest
R of any obtained for the given problem. That is, a limitation sometimes associated
with gradient ascent algorithms -- namely, that they may yield multiple "solutions"
that are local, but far from global, maxima -- did not appear to be a difficulty in these
cases.
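The optimization procedure just described can be mimicked in a few lines of code. The sketch below performs plain gradient ascent on an arbitrary rate function R(C) from several random starts, using a finite-difference gradient; the step size, restart count, and the shape of rate_fn are all assumptions made here for illustration, and no claim is made that this reproduces the authors' algorithm or tolerances.

import numpy as np

def infomax_ascent(rate_fn, N, n_starts=5, step=0.05, iters=2000, fd=1e-5, seed=0):
    """Maximize a rate function R(C) over a length-N connection profile C(s)."""
    rng = np.random.default_rng(seed)
    best_C, best_R = None, -np.inf
    for _ in range(n_starts):                         # several random initial {C(s)}
        C = rng.standard_normal(N)
        C /= np.linalg.norm(C)
        for _ in range(iters):
            base = rate_fn(C)
            grad = np.empty(N)                        # finite-difference gradient of R
            for i in range(N):
                Cp = C.copy()
                Cp[i] += fd
                grad[i] = (rate_fn(Cp) - base) / fd
            C = C + step * grad
        R = rate_fn(C)
        if R > best_R:
            best_C, best_R = C.copy(), R
    return best_C, best_R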
Fig. 2b (solid curve) shows the infomax solution for an example having
Q^L(s) = \exp[-(s/s_0)^2] and g(s) = \exp[(s/s_1)^2] with s_0 = 4, s_1 = 6, N = N' = 32,
and Bo = 0.1. There is a central excitatory peak flanked by shallow inhibitory
sidelobes (and weaker additional oscillations). (As noted, the negative of this
solution, having a central inhibitory region and excitatory sidelobes, gives the same
R.) As Bo is increased (a range from 0.001 to 20 was studied), the peak broadens,
the sidelobes become shallower (relative to the peak), and the receptive fields of
nearby M cells increasingly overlap. This behavior is an example of the
"redundancy-diversity" tradeoff discussed in [Linsker, 1988].
Model D -- Bounded output variance
Our previous models all produce output values Mn whose variance is not explicitly
constrained. More biologically realistic cells have limited output variance. For
example, a cell's firing rate must lie between zero and some maximum value. Thus,
the output of a model nonlinear cell is often taken to be a sigmoid function of (\sum_i C_{ni} L_i).

Within the context of linear cell models, we can capture the effect of a bounded
output variance by using Model D of Fig. 1. We pass the intermediate output
\sum_i C_{ni}(L_i + \nu_{ni}) through a gain control GC that normalizes the output variance to
unity, then we add a final (i.i.d. Gaussian) noise term \nu'_n of variance B_1. That is,

M_n = [\sum_i C_{ni}(L_i + \nu_{ni})] / V(C)^{1/2} + \nu'_n.    (13)

Without the last term, this model would be identical to Model C, since multiplying
both the signal and the \nu_{ni} noise by the same factor GC would not affect R. The last
term in effect fixes the number of output values that can be discriminated (i.e., not
confounded with each other by the noise process \nu'_n) to be of order B_1^{-1/2}.
The information rate for this model is derived to be (cf. Eqn. 12):

R = (1/2) \sum_k \ln[ 1 + \lambda_k z_k / (B_0 \sum_s g(s) C(s)^2 + B_1 V(C)) ],    (14)

where V(C) is the variance of the intermediate output before it is passed through GC:

V(C) = \sum_{i,j} C_{ni} Q^L_{ij} C_{nj} + B_0 \sum_i g(s_{ni}) C_{ni}^2.    (15)
Fig. 2b (dotted curve) shows the infomax solution (numerically obtained as above)
for the same QL(S) and g(s) functions and parameter values as were used to generate
the solid curve (for Model C), but with the new parameter Bl = 0.4. The effect of
the new Bl noise process in this case is to deepen the inhibitory sidelobes (relative
to the central peak). The more pronounced center-surround character of the
resulting M cell dampens the response of the cell to differences (between different
input patterns) in the spatially uniform component of the input pattern. This
response property allows the L .... M mapping to be info max-optimal when the
dynamic range of the cells' output response is constrained.? (A competing effect can
complicate the analysis: If Bl is increased much further, for example to 50 in the
case discussed, the sidelobes move to larger s and become shallower. This behavior
resembles that discussed at the end of the previous section for the case of increasing
Bo; in the present case it is the overall noise level that is being increased when Bl
increases and Bo is kept constant.)
Temporally-correlated input patterns
Let us see how infomax can be used to extract regularities in input time series, as
contrasted with the spatially-correlated input patterns discussed above. We consider
a single M cell that, at each discrete time denoted by n, can process inputs {L_i} from
earlier times i <= n (via delay lines, for example). We use the same Model D as
before. There are two differences: First, we want g(s) = \infty for all s > 0 (input lines
from future times are "infinitely noisy"). [A technical point: Our use of periodic
boundary conditions, while computationally convenient, means that the input value
that will occur s time steps from now is the same value that occurred (N - s) steps
ago. We deal with this by choosing g(s) to equal 1 at s = 0, to increase as
s -> -N/2 (going into the past), and to increase further as s decreases from +N/2
to 1, corresponding to increasingly remote past times. The periodicity causes no
unphysical effects, provided that we make g(s) increase rapidly enough (or make N
large enough) so that C(s) is negligible for time intervals comparable to N.] Second,
the fact that C_{ni} is a function only of s_{ni} is now a consequence of the constancy of
connection weights C(s) of a single M cell with time, rather than merely a convenient
Ansatz to facilitate the infomax computation for a set of many M cells (as it was in
previous sections).
The infomax solution is shown in Fig. 2c for an example having
Q^L(s) = \exp[-(s/s_0)^2]; g(s) = \exp[-t(s)/s_1] with t(s) = s for s <= 0 and
t(s) = s - N for s >= 1; s_0 = 4, s_1 = 6, N = 32, B_0 = 0.1, and B_1 = 0.4. The result is
that the "temporal receptive field" of the M cell is excitatory for recent times, and
inhibitory for somewhat more remote times (with additional weaker oscillations).
The cell's output can be viewed approximately as a linear combination of a smoothed
input and a smoothed first time derivative of the input, just as the output of the
center-surround cell of Fig. 2b can be viewed as a linear combination of a smoothed
input and a smoothed second spatial derivative of the input. As in Fig. 2b, setting
B_1 = 0 (not shown) lessens the relative inhibitory contribution.
SUMMARY
To gain insight into the operation of the principle of maximum information
preservation, we have applied the principle to the problem of the optimal design of
an array of linear filters under various conditions. The filter models that have been
used are motivated by certain features that appear to be characteristic of biological
networks. These features include the favoring of short connections and the
constrained range of output signal values. When nearby input signals (in space or
time) are correlated, the infomax-optimal solutions for the cases studied include (1)
center-surround cells and (2) cells sensitive to temporal variations in input. The
results of the mathematical analysis presented here apply also to arbitrary input
covariance functions of the form QL( I i - j I). We have also presented more general
expressions for the information rate, which can be used even when QL is not of this
form. The cases discussed illustrate the operation of the infomax principle in some
relatively simple but instructive situations. The analysis and results suggest how the
principle may be applied to more biologically realistic networks and input ensembles.
References
T. Berger, Rate Distortion Theory (Prentice-Hall, Englewood Cliffs, N.J., 1971),
chap. 4.
R. G. Gallager, Information Theory and Reliable Communication (John Wiley and
Sons, N.Y., 1968), p. 388.
R. Linsker, in: Neural Information Processing Systems (Denver, Nov. 1987), ed.
D. Z. Anderson (Amer. Inst. of Physics, N.Y.), pp. 485-494.
R. Linsker, Computer 21 (3) 105-117 (March 1988).
C. E. Shannon and W. Weaver, The Mathematical Theory of Communication (Univ.
of Illinois Press, Urbana, 1949).
25 | 1,020 | A Computational Model of Prefrontal
Cortex Function
Todd S. Braver
Dept. of Psychology
Carnegie Mellon Univ.
Pittsburgh, PA 15213
Jonathan D. Cohen
Dept. of Psychology
Carnegie Mellon Univ .
Pittsburgh , PA 15213
David Servan-Schreiber
Dept. of Psychiatry
Univ . of Pittsburgh
Pittsburgh , PA 15232
Abstract
Accumulating data from neurophysiology and neuropsychology
have suggested two information processing roles for prefrontal cortex (PFC): 1) short-term active memory; and 2) inhibition. We
present a new behavioral task and a computational model which
were developed in parallel. The task was developed to probe both
of these prefrontal functions simultaneously, and produces a rich
set of behavioral data that act as constraints on the model. The
model is implemented in continuous-time , thus providing a natural
framework in which to study the temporal dynamics of processing
in the task. We show how the model can be used to examine the behavioral consequences of neuromodulation in PFC . Specifically, we
use the model to make novel and testable predictions regarding the
behavioral performance of schizophrenics, who are hypothesized to
suffer from reduced dopaminergic tone in this brain area.
1
Introduction
Prefrontal cortex (PFC) is an area of the human brain which is significantly expanded relative to other animals. There is general consensus that the PFC is centrally involved in higher cognitive activities such as planning , problem solving and
language. Recently, the PFC has been associated with two specific information processing mechanisms : short-term active memory and inhibition . Active memory is
the capacity of the nervous system to maintain information in the form of sustained
activation states (e.g. , cell firing) for short periods of time. This can be distinguished from forms of memory that are longer in duration and are instantiated as
modified values of physiological parameters (e.g., synaptic strength). Over the last
two decades, there have been a large number of neurophysiological studies focusing
on the cellular basis of active memory in prefrontal cortex. These studies have revealed neurons in PFC that fire selectively to specific stimuli and response patterns,
and that remain active during a delay between these. Investigators such as Fuster
(1989) and Goldman-Rakic (1987) have argued from this data that PFC maintains
temporary information needed to guide behavioral responses through sustained patterns of neural activity. This hypothesis is consistent with behavioral findings from
both animal and human lesion studies, which suggest that PFC is required for tasks
involving delayed responses to prior stimuli (Fuster, 1989; Stuss & Benson, 1986).
In addition to its role in active memory, many investigators have focused on the
inhibitory functions of PFC. It has been argued that PFC representations are required to overcome reflexive or previously reinforced response tendencies in order
to mediate a contextually appropriate - but otherwise weaker - response (Cohen &
Servan-Schreiber, 1992). Clinically, it has been observed that lesions to PFC are often associated with a syndrome of behavioral disinhibition, in which patients act in
impulsive and often socially inappropriate ways (Stuss & Benson, 1986). This syndrome has often been cited as evidence that PFC plays an important role inhibiting
behaviors which are compelling but socially inappropriate.
While the involvement of PFC in both active memory and inhibition is generally
agreed upon, computational models can play an important role in providing mechanisms by which to explain how these two information processing functions arise.
There are several computational models now in the literature which have focused
on either the active memory (Zipser, 1991), or inhibitory (Levine & Pruiett, 1989)
functions of PFC, or both functions together (Dehaene & Changeux, 1989; Cohen & Servan-Schreiber, 1992). These models have been instrumental in explaining
the role of PFC in a variety of behavioral tasks (e.g., the Wisconsin Card Sort and
Stroop). However, these earlier models are limited by their inability to fully capture the dynamical processes underlying active memory and inhibition. Specifically,
none of the simulations have been tightly constrained by the temporal parameters
found in the behavioral tasks (e.g., durations of stimuli, delay periods, and response
latencies). This limitation is not found solely in the models, but is also a feature of
the behavioral tasks themselves. The tasks simulated were not structured in ways
that could facilitate a dynamical analysis of processing.
In this paper we address the limitations of the previous work by describing both a
new behavioral task and a computational model of PFC. These have been developed
in parallel and, together, provide a useful framework for exploring the temporal
dynamics of active memory and inhibition and their consequences for behavior. We
then go on to describe how this framework can be used to examine neuromodulatory
effects in PFC, which are believed to playa critical role in both normal functioning
and in psychiatric disorders, such as schizophrenia.
2
Behavioral Assessment of Human PFC Function
We have developed a task paradigm which incorporates two components central to
the function of prefrontal cortex - short-term active memory and inhibition - and
that can be used to study the dynamics of processing. The task is a variant of the
continuous performance test (CPT), which is commonly used to study attention in
behavioral and clinical research. In a standard version of the task (the CPT-AX),
letters are presented one at a time in the middle of a computer screen. Subjects are
instructed to press the target button to the letter X (probe stimulus) but only when
it is preceded by an A (the cue stimulus). In previous versions of the CPT, subjects
only responded on target trials. In the present version of the task, a two response
forced-choice procedure is employed; on non-A-X trials subjects are asked to press
the non-target button. This procedure allows for response latencies to be evaluated
on every trial , thus providing more information about the temporal dimensions of
processing in the task .
Two additional modifications were made to the standard paradigm in order to
maximally engage PFC activity. The memory function of PFC is tapped by manipulating the delay between stimuli. In the CPT-AX , the prior stimulus (cue or
non-cue) provides the context necessary to decide how to respond to the probe letter . However, with a short delay (750 msec .), there is little demand on memory
for the prior stimulus. This is supported by evidence that PFC lesions have been
shown to have no effect on performance when there is only a short delay (Stuss &
Benson, 1986). With a longer delay (5000 msec.), however, it becomes necessary to
maintain a representation of the prior stimulus in order for it to be used as context
for responding to the current one. The ability of the PFC to sustain contextual
representations over the delay period can be determined behaviorally by comparing
performance on short delay trials (50%) against those with long delays (50%).
The inhibitory function of PFC is probed by introducing a prepotent response
tendency that must be overcome to respond correctly. This tendency is introduced
into the task by increasing the frequency of target trials (A followed by X). In the
remaining trials, there are three types of distractors: 1) a cue followed by a nontarget probe letter (e.g. , A-Y); 2) a non-cue followed by the target probe letter (e.g .,
B-X); and a non-cue followed by a non-target probe letter (e.g., B-Y). Target trials
occur 70% of the time, while each type of distractor trial occurs only 10% of the
time. The frequency of targets promotes the development of a strong tendency to
respond to the target probe letter whenever it occurs , regardless of the identity of
the cue (since a response to the X itself is correct 7 out of 8 times).
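As a concrete illustration of these trial statistics, the short sketch below generates a stimulus sequence with the 70/10/10/10 mix and the 50/50 split over the two delay conditions described above; the function name, seed handling, and tuple format are illustrative assumptions rather than the exact experimental randomization.

import random

def generate_cpt_ax_trials(n_trials, short_delay_ms=750, long_delay_ms=5000, seed=0):
    # Target (A-X) trials occur 70% of the time; each distractor type
    # (A-Y, B-X, B-Y) occurs 10% of the time.  Half the trials use the
    # short delay and half the long delay.
    rng = random.Random(seed)
    trial_types = ["AX"] * 7 + ["AY", "BX", "BY"]   # 70/10/10/10 mix
    trials = []
    for _ in range(n_trials):
        cue, probe = rng.choice(trial_types)        # e.g. "AX" -> cue "A", probe "X"
        delay_ms = short_delay_ms if rng.random() < 0.5 else long_delay_ms
        is_target = (cue == "A" and probe == "X")
        trials.append((cue, delay_ms, probe, is_target))
    return trials

print(generate_cpt_ax_trials(5))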
The ability to inhibit this response tendency can be examined by comparing accuracy on trials when the target occurs in the absence of the cue (B-X trials) , with
those made when neither the cue nor target occurs (i.e., B-Y trials , which provide a
measure of non-specific response bias and random responding). Trials in which the
cue but not the target probe appears (A-Y trials) are also particularly interesting
with respect to PFC function. These trials measure the cumulative influence of
active representations of context in guiding responses. In a normally functioning
system, context representations should stabilize and increase in strength as time
progresses. Thus , it is expected that A- Y accuracy will tend to decrease for long
delay trials relative to short ones .
As mentioned above, the primary benefit of this paradigm is that it provides a
framework in which to simultaneously probe the inhibitory and memory functions
associated with PFC. This is supported by preliminary neuroimaging data from
our laboratory (using PET) which suggests that PFC is, in fact, activated during
performance of the task. Although it is simple in structure, the task also generates
a rich set of behavioral data. There are four stimulus conditions crossed with two
delay conditions for which both accuracy and reaction time performance can be
measured. Figure 1 shows data gathered from 36 college-age subjects performing
this task.
[Figure 1: Subject behavioral data with model performance superimposed. Top panels: accuracy across both delays in all four conditions. Bottom panels: reaction times for both correct and incorrect responses in all conditions. Bars represent standard error of measurement for the empirical data. Panels plot accuracy (%) and reaction time (msec) against trial condition (AX, AY, BX, BY), comparing MODEL and DATA at short and long delays and for correct and incorrect responses.]
In brief, we found that: 1) Accuracy was relatively unchanged in the long delays
compared to the short, demonstrating that active memory was adequately supporting performance; 2) A-Y accuracy, however, did slightly decrease at long delays,
reflecting the normal build-up of context representations over time; 3) Accuracy
on B-X trials was relatively high, supporting the assumption that subjects could
effectively use context representations to inhibit prepotent responses ; 4) A distinct
pattern emerged in the latencies of correct and incorrect responses , providing information on the temporal dynamics of processing (i .e. , responses to A-Y trials are
slow on correct trials and fast on incorrect ones; the pattern is reversed for B-X trials) . Taken together, the data provides specific, detailed information about normal
PFC functioning, which act as constraints on the development and evaluation of a
computational model.
3
A Computational Model of the CPT-AX
We have developed a recurrent network model which produces detailed information
regarding the temporal course of processing in the CPT-AX task. The network is
composed of three modules: an input module, a memory module, and an output
module. The memory module implements the memory and inhibitory functions
believed to be carried out by PFC. Figure 2 shows a diagram of the model.
[Figure 2: A diagram of the CPT-AX model (input layer and output layer labeled).]
Each unit in the input module represents a different stimulus condition: A, B, X &
Y. Units in the input module make excitatory connections on the response module,
both directly and indirectly through the memory module. Lateral inhibition within
each layer produces competition for representations . Activity from the cue stimulus
flows to the memory module, which is responsible for maintaining a trace of the
relevant context in each trial. Units in the memory module have self-excitatory
connections, which allow for the activity generated by the cue to be sustained in
the absence of input. The recurrent connectivity utilized by each unit in this module
is assumed to be a simpler, but formally equivalent analogue of a fully connected
recurrent cell assembly. Further, Zipser (1991) has used this type of connectivity to
produce temporal activity patterns which are highly similar to the firing patterns
of neurons in memory-associated areas of cortex, such as PFC. Activity from the
input and memory modules is integrated in the output module. The output of this
module determines whether a target (T) or non-target (N) response is made.
To simulate the CPT-AX task we have purposefully kept the network architecture
and size as simple as possible in order to maximize the model's interpretability. We
have therefore not attempted to simulate neural information processing in a neuronby-neuron manner. Rather, the populations of a few units are seen as capturing the
information processing characteristics of much larger populations of real neurons.
In this way, it is possible to capture the stochastic, distributed, and dynamical
properties of real neural networks with small and analytically tractable simulations.
The simulation is run in a temporally continuous framework in which processing is
governed by the following difference equation:
a_j(t) = (1 - dt) a_j(t-1) + dt I_j(t)   (1)
with output
y_j(t) = 1 / (1 + e^(-(γ a_j(t) + β)))   (2)
where a_j(t) is the state of unit j, I_j is the total input to j, dt is the time-step of integration, γ
is the gain and β is the bias. The continuous framework is preferable to a discrete
event-based one in that it allows for a plausible way to scale events appropriately
to the exact temporal specifications of the task (i.e., the duration of stimuli and
the delay between cue and probe). In addition, the continuous character of the
simulation naturally provides a framework for inferring the reaction times in the
various conditions.
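A minimal sketch of one integration step, assuming the forms of equations (1) and (2) as written above, with an optional zero-mean Gaussian noise term on the net input (used later, in the simulations of Section 4); the function signature and default values are illustrative.

import math
import random

def update_unit(state, net_input, dt=0.1, gain=1.0, bias=-2.5, noise_sd=0.0,
                rng=random):
    # Equation (1): leaky integration of the (optionally noisy) net input.
    if noise_sd > 0.0:
        net_input = net_input + rng.gauss(0.0, noise_sd)
    new_state = (1.0 - dt) * state + dt * net_input
    # Equation (2): logistic activation with gain and bias.
    activation = 1.0 / (1.0 + math.exp(-(gain * new_state + bias)))
    return new_state, activation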
4
Simulations of Behavioral Performance
We used a continuous recurrent generalization of backpropagation (Pearlmutter ,
1989) to train the network to perform the CPT-AX. All of the connection weights
were developed entirely by the training procedure, with the constraint that all
self and between layer weights were forced to be positive and all within layer weights
were forced to be negative. Training consisted of repeated presentation of each of
the 8 conditions in the task (A-X,A-Y,B-X ,B-Y, at both long and short delays), with
the presentation frequency of each condition matching that of the behavioral task .
Weights were updated after the presentation of each trial, biases (β) were fixed at
-2.5, and dt was set at 0.1. The network was trained deterministically ; completion
of training occurred when network accuracy reached 100% for each condition.
Following training, weights were fixed. Errors and reaction time distributions were
then simulated by adding zero-mean Gaussian noise to the net input of each unit
at every time step during trial presentation. A trial consisted of the presentation
of the cue stimulus, a delay period and then the probe stimulus. As mentioned
above, the duration of these events was appropriately scaled to match the temporal
parameters of the task (e.g ., 300 msec. duration for cue and probe presentation,
750 msec. for short delays, 5000 msec. for long delays). A time constant (τ) of 50
msec. was used for simulation in the network. This scaling factor provided sufficient
temporal resolution to capture the relationship between the two task delays while
still permitting a tractable way of simulating the events .
Responses were determined by noting which output unit reached a threshold value
first following presentation of the probe stimulus. Response latency was determined
by calculating the number of time steps taken by the model to reach threshold
multiplied by the time constant τ. To facilitate comparisons with the experimental
reaction times, a constant k was added to all values produced . This parameter might
correspond to the time required to execute a motor response. The value of k was
determined by a least mean squares fit to the data. 1000 trials of each condition
were run in order to obtain a reliable estimate of performance under stochastic
conditions. The standard deviation of the noise distribution (σ) and the threshold
(T) of the response units were adjusted to produce the best fit to the subject data.
Figure 1 compares the results of the simulation against the behavioral data.
As can be seen in the figure, the model provides a good fit to the behavioral data
in both the pattern of accuracy and reaction times . The model not only matches
the qualitative pattern of errors and reaction times but produces very similar quantitative results as well. The match between model and experimental results is particularly striking when it is considered that there are a total of 24 data points that
this model is fitting, with only 4 free parameters (σ, T, τ, k). The model's ability to
successfully account for the pattern of behavioral performance provides convincing
evidence that it captures the essential principles of processing in the task. We can
then feel confident in not only examining normal processing, but also in extending
the model to explore the effects of specific disturbances to processing in PFC .
5
Behavioral Effects of Neuromodulation in PFC
In a previous meeting of this conference a simulation of a simpler version of the CPT
was discussed (Servan-Schreiber, Printz, & Cohen, 1990). In this simulation the
effects of system-wide changes in catecholaminergic tone were captured by changing
the gain (γ) parameter of network units. Changes in gain are thought to correspond to
the action of modulatory neurotransmitters in modifying the responsivity of neurons
to input signals (Servan-Schreiber et al., 1990; Cohen & Servan-Schreiber, 1992).
[Figure 3: Comparison of model performance with normal and reduced gain. The graph illustrates the effect of reducing gain in the memory layer on task performance. In the baseline network γ = 1; in the reduced-gain network γ = 0.8. Panels plot accuracy at short and long delays against trial condition (AX, AY, BX, BY); legend: MODEL (Normal Gain), MODEL (Reduced Gain), DATA (Controls).]
The current simulation of the CPT offers the opportunity to explore the effects
of neuromodulation on the information processing functions specific to PFC. The
transmitter dopamine is known to modulate activity in PFC , and manipulations
to prefrontal dopamine have been shown to have effects on both memory-related
neuronal activity and behavioral performance (Sawaguchi & Goldman-Rakic, 1991).
Furthermore, it has been hypothesized that reductions of the neuromodulatory effects of dopamine in PFC are responsible for some of the information processing
deficits seen in schizophrenia. To simulate the behavior of schizophrenic subjects,
we therefore reduce the gain (γ) of units in the memory module of the network.
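The effect of this manipulation on a single unit can be seen directly from the activation function: lowering the gain flattens the logistic, so differences in the unit's state produce smaller differences in its output, blurring signal from noise. The state values below are arbitrary illustrations.

import math

def activation(state, gain, bias=-2.5):
    return 1.0 / (1.0 + math.exp(-(gain * state + bias)))

for gain in (1.0, 0.8):                      # normal vs. reduced gain
    outputs = [round(activation(s, gain), 3) for s in (0.0, 2.5, 5.0)]
    print("gain", gain, "-> outputs for states 0, 2.5, 5:", outputs)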
With reduced gain in the memory module, there are striking changes in the model's
performance of the task. As can be seen in Figure 3, in the short delay conditions
the performance of the reduced-gain model is relatively similar to that of control
subjects (and the intact model). However, at long delays , the reduced-gain model
produces a qualitatively different pattern of performance. In this condition, the
model has a high B-X error rate but a low A-Y error rate, a pattern which is opposite
to that seen in the control subjects. This double dissociation in performance is a
robust effect of the reduced-gain simulation (i.e. , it seems relatively uninfluenced
by other parameter adjustments) .
Thus, the model makes clear-cut predictions which are both novel and highly
testable. Specifically, the model predicts that: 1) Differences in performance between control and schizophrenic subjects will be most apparent at long delays; 2)
Schizophrenics will perform significantly worse than control subjects on B-X trials
at long delays; 3) Schizophrenics will perform significantly better than control subjects on A-Y trials at long delays. This last prediction is especially interesting given
the fact that tasks in which schizophrenics show superior performance relative to
controls are relatively rare in experimental research.
Furthermore, the model not only makes predictions regarding schizophrenic behavioral performance, but also offers explanations as to their mechanisms. Analyses of
the trajectories of activation states in the memory module reveals that both of the
dissociations in performance are due to failures in maintaining representations of
the context set up by the cue stimulus. Reducing gain in the memory module blurs
the distinction between signal and noise , and causes the context representations to
decay over time. As a result, in the long delay trials , there is a higher probability
that the model will show both failures of inhibition (more B-X errors) and memory
(less A- Y errors) .
6
Conclusions
The results of this paper show how a computational analysis of the temporal dynamics of PFC information processing can aid in understanding both normal and disturbed behavior. We have developed a behavioral task which simultaneously probes
both the inhibitory and active memory functions of PFC. We have used this task in
combination with a computational model to explore the effects of neuromodulatory
dysfunction, making specific predictions regarding schizophrenic performance in the
CPT-AX. Confirmation of these predictions now awaits further testing.
References
Cohen, J. & Servan-Schreiber, D. (1992). Context, cortex, and dopamine: A connectionist
approach to behavior and biology in schizophrenia. Psychological Review, 99, 45-77.
Dehaene, S. & Changeux, J. (1989). A simple model of prefrontal cortex function in delayed-response tasks. Journal of Cognitive Neuroscience, 1(3), 244-261.
Fuster, J. (1989). The prefrontal cortex. New York: Raven Press.
Goldman-Rakic, P. (1987). Circuitry of primate prefrontal cortex and regulation of behavior by representational memory. In F. Plum (Ed.), Handbook of physiology - the nervous system, Vol. V. Bethesda, MD: American Physiological Society, 373-417.
Levine, D. & Pruiett, P. (1989). Modeling some effects of frontal lobe damage: novelty
and perseveration. Neural Networks, 2 , 103-116.
Pearlmutter, B. (1989). Learning state space trajectories in recurrent neural networks.
Neural Computation, 1 , 263-269.
Sawaguchi, T. & Goldman-Rakic, P. (1991). D1 dopamine receptors in prefrontal cortex:
Involvement in working memory. Science , 251 , 947-950.
Servan-Schreiber, D., Printz, H., & Cohen, J. (1990). The effect of catecholamines on
performance: From unit to system behavior. In D. Touretzky (Ed.), Neural information
processing systems 2. San Mateo, CA: Morgan Kaufmann, 100-108.
Stuss, D. & Benson, D. (1986). The frontal lobes. New York: Raven Press.
Zipser, D. (1991). Recurrent network model of the neural mechanism of short-term active
memory. Neural Computation, 3, 179-193.
| 1020 |
26 | 1,021 | The Gamma MLP for Speech Phoneme
Recognition
Steve Lawrence*, Ah Chung Tsoi, Andrew D. Back
{lawrence,act,back}@elec.uq.edu.au
Department of Electrical and Computer Engineering
University of Queensland
St. Lucia Qld 4072 Australia
Abstract
We define a Gamma multi-layer perceptron (MLP) as an MLP
with the usual synaptic weights replaced by gamma filters (as proposed by de Vries and Principe (de Vries and Principe, 1992)) and
associated gain terms throughout all layers. We derive gradient
descent update equations and apply the model to the recognition
of speech phonemes. We find that both the inclusion of gamma
filters in all layers, and the inclusion of synaptic gains, improves
the performance of the Gamma MLP. We compare the Gamma
MLP with TDNN, Back-Tsoi FIR MLP, and Back-Tsoi I1R MLP
architectures, and a local approximation scheme. We find that the
Gamma MLP results in an substantial reduction in error rates.
1
INTRODUCTION
1.1
THE GAMMA FILTER
Infinite Impulse Response (IIR) filters have a significant advantage over Finite Impulse Response (FIR) filters in signal processing: the length of the impulse response
is uncoupled from the number of filter parameters. The length of the impulse response is related to the memory depth of a system, and hence IIR filters allow a
greater memory depth than FIR filters of the same order. However, IIR filters are
*http://www.neci.nj.nec.com/homepages/lawrence
not widely used in practical adaptive signal processing. This may be attributed
to the fact that a) there could be instability during training and b) the gradient
descent training procedures are not guaranteed to locate the global optimum in the
possibly non-convex error surface (Shynk, 1989).
De Vries and Principe proposed using gamma filters (de Vries and Principe, 1992),
a special case of IIR filters, at the input to an otherwise standard MLP. The gamma
filter is designed to retain the uncoupling of memory depth to the number of parameters provided by IIR filters, but to have simple stability conditions.
The output of a neuron in a multi-layer perceptron is computed using¹
y_k^l = f( Σ_{i=0}^{N_{l-1}} w_{ki}^l y_i^{l-1} ).
De Vries and Principe consider adding short-term memory with delays:
y_k^l(t) = f( Σ_{i=0}^{N_{l-1}} Σ_{j=0}^{K} g_{kij}(t - j) y_i^{l-1}(t - j) ),
where g_μ^j(t) = (μ^j / (j-1)!) t^{j-1} e^{-μt}, j = 1, ..., K. The depth of the memory is controlled
by μ, and K is the order of the filter. For the discrete time case, we obtain the
recurrence relation: z_0(t) = x(t) and z_j(t) = (1 - μ) z_j(t - 1) + μ z_{j-1}(t - 1) for
j = 1, ..., K. In this form, the gamma filter can be interpreted as a cascaded series
of filter modules, where each module is a first order IIR filter with the transfer function μ / (q - (1 - μ)), where q z_j(t) ≡ z_j(t + 1). We have a filter with K poles, all located
at 1 - μ. Thus, the gamma filter may be considered as a low pass filter for μ < 1.
The value of μ can be fixed, or it can be adapted during training.
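A small sketch of the discrete-time gamma memory defined by this recurrence; the class name and interface are illustrative, and the stability bound in the assertion (poles at 1 - mu must lie inside the unit circle, i.e. 0 < mu < 2) is a property of the filter rather than something taken from the text above.

class GammaMemory:
    # Cascade of K identical first-order stages:
    #   z_0(t) = x(t)
    #   z_j(t) = (1 - mu) * z_j(t-1) + mu * z_{j-1}(t-1),  j = 1, ..., K
    def __init__(self, order, mu):
        assert 0.0 < mu < 2.0, "poles at 1 - mu must lie inside the unit circle"
        self.mu = mu
        self.taps = [0.0] * (order + 1)

    def step(self, x):
        prev = list(self.taps)                  # z(t-1)
        self.taps[0] = x                        # z_0(t) = x(t)
        for j in range(1, len(self.taps)):
            self.taps[j] = (1.0 - self.mu) * prev[j] + self.mu * prev[j - 1]
        return list(self.taps)                  # [z_0(t), ..., z_K(t)]

# e.g. gm = GammaMemory(order=5, mu=0.7); taps = [gm.step(x) for x in signal]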
2
NETWORK MODELS
Figure 1: A gamma filter synapse with an associated gain term 'c'.
We have defined a gamma MLP as a multi-layer perceptron where every synapse
contains a gamma filter and a gain term, as shown in figure 1. The motivation
behind the inclusion of the gain term is discussed later. A separate J.t parameter
is used for each filter. Update equations are derived in a manner analogous to the
standard MLP and can be found in Appendix A. The model is defined as follows.
1 where y_k^l is the output of neuron k in layer l, N_l is the number of neurons in layer l,
w_{ki}^l is the weight connecting neuron k in layer l to neuron i in layer l - 1, y_0 = 1 (bias),
and f is commonly a sigmoid function.
Definition 1 A Gamma MLP with L layers excluding the input layer (0, 1, ..., L),
gamma filters of order K, and N_0, N_1, ..., N_L neurons per layer, is defined as
y_k^l(t) = f(x_k^l(t))
x_k^l(t) = Σ_{i=0}^{N_{l-1}} c_{ki}^l(t) Σ_{j=0}^{K} w_{kij}^l(t) z_{kij}(t)
z_{kij}(t) = (1 - μ_{ki}^l(t)) z_{kij}(t - 1) + μ_{ki}^l(t) z_{ki(j-1)}(t - 1),   1 ≤ j ≤ K
z_{ki0}(t) = y_i^{l-1}(t)   (1)
where y(t) = neuron output, c_{ki}^l = synaptic gain, f(a) = (e^{a/2} - e^{-a/2}) / (e^{a/2} + e^{-a/2}), k =
1, 2, ..., N_l (neuron index), l = 0, 1, ..., L (layer), and z_{kij}|_{i=0} = 1, w_{kij}^l|_{i=0, j≠0} =
0, c_{ki}^l|_{i=0} = 1 (bias).
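A sketch of the forward pass of a single Gamma MLP layer following Definition 1; the bias unit, weight initialization, and class interface are illustrative simplifications, and training (Appendix A) is not shown.

import math
import random

class GammaLayer:
    # One layer of a Gamma MLP: every synapse holds a gamma filter of order K
    # (states z_kij) plus a synaptic gain c_ki; f(a) = tanh(a/2) as in Definition 1.
    def __init__(self, n_in, n_out, order, mu=0.7, seed=0):
        rng = random.Random(seed)
        self.n_in, self.n_out, self.order = n_in, n_out, order
        self.mu = [[mu] * n_in for _ in range(n_out)]
        self.gain = [[1.0] * n_in for _ in range(n_out)]
        self.w = [[[rng.uniform(-0.1, 0.1) for _ in range(order + 1)]
                   for _ in range(n_in)] for _ in range(n_out)]
        self.z = [[[0.0] * (order + 1) for _ in range(n_in)]
                  for _ in range(n_out)]

    def forward(self, y_prev):
        out = []
        for k in range(self.n_out):
            x_k = 0.0
            for i in range(self.n_in):
                prev = list(self.z[k][i])
                self.z[k][i][0] = y_prev[i]                 # z_ki0(t) = y_i(t)
                m = self.mu[k][i]
                for j in range(1, self.order + 1):
                    self.z[k][i][j] = (1.0 - m) * prev[j] + m * prev[j - 1]
                x_k += self.gain[k][i] * sum(self.w[k][i][j] * self.z[k][i][j]
                                             for j in range(self.order + 1))
            out.append(math.tanh(x_k / 2.0))                # f(a) = tanh(a/2)
        return out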
For comparison purposes, we have used the TDNN (Time Delay Neural Network)
architecture2, the Back-Tsoi FIR3 and IIR MLP architectures (Back and Tsoi,
1991a) where every synapse contains an FIR or IIR filter and a gain term, and the
local approximation algorithm used by Casdagli (k-NN LA) (Casdagli, 1991)4. The
Gamma MLP is a special case of the IIR MLP.
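For reference, the sketch below implements the kind of k-NN local approximation attributed to Casdagli here (an affine model fitted by least squares to the k nearest training neighbours of each test point, as described in footnote 4); the use of numpy's least-squares solver is an implementation choice, not part of the original description.

import numpy as np

def knn_local_affine_predict(x_test, X_train, y_train, k=5):
    # Fit y ~ a_0 + sum_i a_i * x_i on the k nearest neighbours of x_test,
    # then evaluate the fitted affine model at x_test.
    dists = np.linalg.norm(X_train - x_test, axis=1)
    idx = np.argsort(dists)[:k]
    A = np.hstack([np.ones((k, 1)), X_train[idx]])
    coeffs, *_ = np.linalg.lstsq(A, y_train[idx], rcond=None)
    return float(np.concatenate(([1.0], x_test)) @ coeffs)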
3
TASK
3.1
MOTIVATION
Accurate speech recognition requires models which can account for a high degree
of variability in the data. Large amounts of data may be available but it may be
impractical to use all of the information in standard neural network models.
Hypothesis: As the complexity of a problem increases (higher dimensionality, greater
variety of training data), the error surface of a neural network becomes more complex. It may contain a number of local minima5 many of which may be much worse
than the global minimum. The training (parameter estimation) algorithms become
"stuck" in local minima which may be increasingly poor compared to the global
optimum. The problem suffers from the so-called "curse of dimensionality" and the
2We use TDNN to refer to an MLP with a time window of inputs, not the replicated
architecture introduced by Lang (Lang et al., 1990) .
3We distinguish the Back-Tsoi FIR network from the Wan FIR network in that the
Wan architecture has no synaptic gains, and the update algorithms are different. The
Back-Tsoi update algorithm has provided better convergence in previous experiments.
4 Casdagli created an affine model of the following form for each test pattern: y^j =
a_0 + Σ_{i=1}^{n} a_i x_i^j, where k is the number of neighbors, j = 1, ..., k, and n is the input
dimension. The resulting model is used to find y for the test pattern.
5We note that it can be difficult to distinguish a true local minimum from a long plateau
in the standard backpropagation algorithm.
difficulty in optimizing a function with limited control over the nature of the error
surface.
We can identify two main reasons why the application of the Gamma MLP may
be superior to the standard TDNN for speech recognition: a) the gamma filtering
operation allows consideration of the input data using different time resolutions and
can account for more past history of the signal which can only be accounted for in
an FIR or TDNN system by increasing the dimensionality of the model, and b)
the low pass filtering nature of the gamma filter may create a smoother function
approximation task, and therefore a smoother error surface for gradient descent 6 .
3.2
TASK DETAILS
[Figure 2: PLP input data format and the corresponding network target functions for the phoneme "aa". Labels in the figure: model input window, frames of RASTA data, sequence end, network outputs 1 and 2, target function, classification.]
Our data consists of phonemes extracted from the TIMIT database and organized
as a number of sequences as shown in figure 2 (example for the phoneme "aa").
One model is trained for each phoneme. Note that the phonemes are classified in
context, with a number of different contexts, and that the surrounding phonemes
are labelled only as not belonging to the target phoneme class. Raw speech data
was pre-processed into a sequence of frames using the RASTA-PLP v2.0 software7 .
We used the default options for PLP analysis. The analysis window (frame) was
20 ms. Each succeeding frame overlaps with the preceding frame by 10 ms. 9
PLP coefficients together with the signal power are extracted and used as features
describing each frame of data. Phonemes used in the current tests were the vowel
"aa" and the fricative "s" . The phonemes were extracted from speakers coming
from the same demographic region in the TIMIT database. Multiple speakers were
used and the speakers used in the test set were not contained in the training set.
The training set contained 4000 frames, where each phoneme is roughly 10 frames.
The test set contained 2000 frames, and an additional validation set containing 2000
frames was used to control generalization.
6If we consider a very simple network and derive the relationship of the smoothness of
the required function approximation to the smoothness of the error surface this statement
appears to be valid. However, it is difficult to show a direct relationship for general
networks.
7 Obtained from ftp://ftp.icsi.berkeley.edu/pub/speech/rasta2.0.tar.Z.
4
RESULTS
Two outputs were used in the neural networks as shown by the target functions in
figure 2, corresponding to the phoneme being present or not. A confidence criterion
was used: Y_max × (Y_max − Y_min) (for softmax outputs). The initial learning rate was
0.1, 10 hidden nodes were used, FIR and Gamma orders were 5 (6 taps), the TDNN
and k-NN models had an input window of 6 steps in time, the tanh activation function was used, target outputs were scaled between -0.8 and 0.8, stochastic update
was used, and initial weights were chosen from a set of candidates based on training
set performance. The learning rate was varied over time according to the schedule:
η = η_0 / (N/2 + max(1, ·)), where the argument of the max decreases with the current
epoch and is governed by the constants c_1 = 50 and c_2 = 0.65; here η is the learning rate, η_0 the initial
learning rate, N the total number of epochs, and n the current epoch. This is
similar to the schedule proposed in (Darken and Moody, 1991) with an additional
term to decrease the learning rate towards zero over the final epochs 8 .
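As an illustration only, the sketch below implements a generic "search then converge" learning-rate schedule in the spirit of Darken and Moody (1991), with an extra factor that drives the rate toward zero over the final fraction of epochs; the functional form and the way c_1 and c_2 enter are assumptions, not the exact schedule given above.

def learning_rate(epoch, total_epochs, eta0=0.1, c1=50.0, c2=0.65):
    # Assumed form: roughly constant early on, ~1/epoch decay later,
    # then a linear ramp to zero over the last (1 - c2) fraction of training.
    base = eta0 / (1.0 + epoch / c1)
    if epoch <= c2 * total_epochs:
        final_decay = 1.0
    else:
        final_decay = max(0.0, (total_epochs - epoch) / ((1.0 - c2) * total_epochs))
    return base * final_decay

print([round(learning_rate(e, 100), 4) for e in (0, 25, 50, 75, 100)])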
[Table 1: Results comparing the architectures and the use of filters in all layers and synaptic gains for the FIR and Gamma MLP models. Rows: FIR MLP, Gamma MLP, TDNN, and k-NN LA (with 2-NN and 5-NN); column groups: 1st layer only, all layers, gains with 1st layer, gains with all layers; reported measures: train error %, test error %, test false positives, and test false negatives. The NMSE is followed by the standard deviation. The TDNN results are listed under an arbitrary column heading (gains and 1st layer/all layers do not apply).]
The results of the simulations are shown in Table 1⁹. Each result represents an
average over four simulations with different random seeds - the standard deviation
of the four individual results is also shown. The FIR and Gamma MLP networks
have been tested both with and without synaptic gains, and with and without
filters in the output layer synapses. These results are for the models trained on
the "s" phoneme, results for the "aa" phoneme exhibit the same trend. "Test false
negative" is probably the most important result here, and is shown graphically
in figure 3. This is the percentage of times a true classification (ie. the current
8Without this term we have encountered considerable parameter fluctuation over the
last epoch.
9 NMSE = Σ_{k=1}^{N} (d(k) − y(k))² / Σ_{k=1}^{N} (d(k) − (1/N) Σ_{k=1}^{N} d(k))².
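A direct transcription of this NMSE definition as a small helper function; the name and pure-Python style are illustrative.

def nmse(desired, predicted):
    # Sum of squared errors divided by the variance (about the mean)
    # of the desired signal.
    n = len(desired)
    mean_d = sum(desired) / n
    num = sum((d - y) ** 2 for d, y in zip(desired, predicted))
    den = sum((d - mean_d) ** 2 for d in desired)
    return num / den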
[Figure 3: Percentage of false negative classifications on the test set. NG = no gains, G = gains, 1L = filters in the first layer only, AL = filters in all layers. The error bars show plus and minus one standard deviation. The synaptic gains case for the FIR MLP is not shown as the poor performance compresses the remainder of the graph. Top to bottom, the lines correspond to: k-NN LA (left), TDNN, FIR MLP, and Gamma MLP. X-axis categories: 2-NN, 5-NN, NG 1L, NG AL, G 1L, G AL; y-axis: test false negatives (%).]
phoneme is present) is incorrectly reported as false. From the table we can see
that the Gamma MLP performs significantly better than the FIR MLP or standard
TDNN models for this problem. Synaptic gains and gamma filters in all layers
improve the performance of the Gamma MLP, while the inclusion of synaptic gains
presented difficulty for the FIR MLP. Results for the IIR MLP are not shown - we
have been unable to obtain significant convergence¹⁰. We investigated values of k
not listed in the table for the k-NN LA model, but it performed poorly in all cases.
5
CONCLUSIONS
We have defined a Gamma MLP as an MLP with gamma filters and gain terms in
every synapse. We have shown that the model performs significantly better on our
speech phoneme recognition problem when compared to TDNN, Back-Tsoi FIR and
IIR MLP architectures, and Casdagli's local approximation model. The percentage
of times a phoneme is present but not recognized for the Gamma MLP was 44%
lower than the closest competitor, the Back-Tsoi FIR MLP model.
The inclusion of gamma filters in all layers and the inclusion of synaptic gains improved the performance of the Gamma MLP. The improvement due to the inclusion
of synaptic gains may be considered non-intuitive to many - we are adding degrees
of freedom, but no additional representational power. The error surface will be different in each case, and the results indicate that the surface for the synaptic gains
case is more amenable to gradient descent. One view of the situation is seen by
Back & Tsoi with their FIR and IIR MLP networks (Back and Tsoi, 1991b): From
a signal processing perspective the response of each synapse is determined by polezero positions. With no synaptic gains, the weights determine both the static gain
and the pole-zero positions of the synapses. In an experimental analysis performed
by Back & Tsoi it was observed that some synapses devoted themselves to modellOTheoretically, the IIR MLP model is the most powerful model used here. Though it
is prone to stability problems, the stability of the model can and was controlled in the
simulations performed here (basically, by reflecting poles that move outside the unit circle
back inside). The most obvious hypothesis for the difficulty in training the model is related
to the error surface and the nature of gradient descent. We expect the error surface to be
considerably more complex for the IIR MLP model, and for gradient descent update to
experience increased difficulty optimizing the function.
The Gamma MLP for Speech Phoneme Recognition
791
ing the dynamics of the system in question, while others "sacrificed" themselves to
provide the necessary static gains l l to construct the required nonlinearity.
APPENDIX A: GAMMA MLP UPDATE EQUATIONS
Δw_{kij}^l(t) = -η ∂J(t)/∂w_{kij}^l(t) = η δ_k^l(t) c_{ki}^l(t) z_{kij}(t)   (2)
Δc_{ki}^l(t)   (3)
Δμ_{ki}^l(t)   (4)
α_{kij}^l(t) = (1 - μ_{ki}^l(t)) α_{kij}^l(t-1) + μ_{ki}^l(t) α_{ki(j-1)}^l(t-1) + z_{ki(j-1)}(t-1) - z_{kij}(t-1),   1 ≤ j ≤ K   (5)
l = L,   1 ≤ j ≤ K   (6)
β_{kij}^l(t) = (1 - μ_{ki}^l(t)) β_{kij}^l(t-1) + μ_{ki}^l(t) β_{ki(j-1)}^l(t-1),   1 ≤ j ≤ K   (7)
Acknowledgments
This work has been partially supported by the Australian Research Council (ACT and
ADB) and the Australian Telecommunications and Electronics Research Board (SL).
References
Back, A. and Tsoi, A. (1991a). FIR and IIR synapses, a new neural network architecture
for time series modelling. Neural Computation, 3(3):337-350.
Back, A. D. and Tsoi, A. C. (1991b). Analysis of hidden layer weights in a dynamic locally
recurrent network. In Simula, O., editor, Proceedings International Conference on
Artificial Neural Networks, ICANN-91, volume 1, pages 967-976, Espoo, Finland.
Casdagli, M. (1991). Chaos and deterministic versus stochastic non-linear modelling. J.R.
Statistical Society B, 54(2):302-328.
Darken, C. and Moody, J. (1991). Note on learning rate schedules for stochastic optimization. In Neural Information Processing Systems 3, pages 832-838. Morgan Kaufmann.
de Vries, B. and Principe, J. (1992). The gamma model- a new neural network for temporal
processing. Neural Networks, 5(4):565-576.
Lang, K. J., Waibel, A. H., and Hinton, G. E. (1990). A time-delay neural network
architecture for isolated word recognition. Neural Networks, 3:23-43.
Shynk, J . (1989). Adaptive IIR filtering. IEEE ASSP Magazine, pages 4-21.
11 The neurons were observed to have gone into saturation, providing a constant output.
PART VII
VISION
| 1021 |
27 | 1,022 | A Multiscale Attentional Framework for
Relaxation Neural Networks
Dimitris I. Tsioutsias
Dept. of Electrical Engineering
Yale University
New Haven, CT 06520-8285
Eric Mjolsness
Dept. of Computer Science & Engineering
University of California, San Diego
La Jolla, CA 92093-0114
tsioutsias@cs.yale.edu
emj@cs.ucsd.edu
Abstract
We investigate the optimization of neural networks governed by
general objective functions. Practical formulations of such objectives are notoriously difficult to solve; a common problem is the
poor local extrema that result by any of the applied methods. In
this paper, a novel framework is introduced for the solution of large-scale optimization problems. It assumes little about the objective
function and can be applied to general nonlinear, non-convex functions; objectives in thousands of variables are thus efficiently minimized by a combination of techniques - deterministic annealing,
multiscale optimization, attention mechanisms and trust region optimization methods.
1
INTRODUCTION
Many practical problems in computer vision, pattern recognition , robotics and other
areas can be described in terms of constrained optimization . In the past decade,
researchers have proposed means of solving such problems with the use of neural
networks [Hopfield & Tank, 1985; Koch et ai., 1986], which are thus derived as
relaxation dynamics for the objective functions codifying the optimization task.
One disturbing aspect of the approach soon became obvious , namely the apparent inability of the methods to scale up to practical problems , the principal reason
being the rapid increase in the number of local minima present in the objectives as
the dimension of the problem increases. Moreover most objectives, E( v), are highly
nonlinear, non-convex functions of v , and simple techniques (e.g. steepest descent)
will , in general , locate the first minimum from the starting point.
In this work, we propose a framework for solving large-scale instances of such optimization problems. We discuss several techniques which assist in avoiding spurious
minima and whose combined result is an objective function solution that is computationally efficient, while at the same time being globally convergent. In section 2.1
we discuss the use of deterministic annealing as a means of avoiding getting trapped
into local minima. Section 2.2 describes multiscale representations of the original
objective in reduced spatial domains. In section 2.3 we present a scheme for reducing the computational requirements of the optimization method used, by means of
a focus of attention mechanism. Then, in section 2.4 we introduce a trust region
method for the relaxation phase of the framework, which uses second order information (i.e. curvature) of the objective function. In section 3 we present experimental
results on the application of our framework to a 2-D region segmentation objective
with discontinuities. Finally, section 4 summarizes our presentation.
2
THEORETICAL FRAMEWORK
Our optimization framework takes the form of a list of nested loops indicating the
order of conceptual (and computational) phases that occur: from the outer to the
inner loop we make use of deterministic annealing, a multiscale representation , an
attentional mechanism and a trust region optimization method.
2.1
ANNEALING NETS
The usefulness of statistical mechanics for designing optimization procedures has
recently been established; prime examples are simulated annealing and its various
mean field theory approximations [Hopfield & Tank, 1985; Durbin & Willshaw,
1987]. The success of such methods is primarily due to entropic terms included in
the objective (i .e. syntactic terms), but the price to pay is their highly nonlinear
form. Interestingly, those terms can effectively be convexified by the use of a "temperature" parameter, T , allowing for a reduction in the number of minima and the
ability to track the solution through "temperature".
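As a rough illustration of this outer loop (not the authors' code; the toy objective, cooling schedule, and relaxation rule below are placeholder assumptions), the temperature is lowered geometrically and the solution found at each temperature seeds the next, more rugged, stage:

```python
import numpy as np

def relax(v, grad_E, steps=300, lr=0.01):
    """Plain gradient relaxation at a fixed temperature (a stand-in for the
    inner multiscale / attentional / trust-region phases of sections 2.2-2.4)."""
    for _ in range(steps):
        v = v - lr * grad_E(v)
    return v

def anneal(v0, grad_E_at_T, T0=10.0, T_min=0.01, cool=0.7):
    """Track the minimizer while the temperature is lowered geometrically;
    the solution found at one temperature seeds the next, more rugged, stage."""
    v, T = np.array(v0, dtype=float), T0
    while T > T_min:
        v = relax(v, grad_E_at_T(T))
        T *= cool
    return v

# Toy objective: E(v, T) = v^4 - v^2 + (T/2) v^2 - 0.05 v.  The T-scaled term
# convexifies the double well at high T; the small linear term breaks symmetry.
def grad_E_at_T(T):
    return lambda v: 4 * v**3 - 2 * v + T * v - 0.05

print(anneal([0.0], grad_E_at_T))   # ends near the v ~ +0.71 well
```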
2.2
MULTISCALE REPRESENTATION
To solve large-scale problems in thousands of variables , we need to speed up the
convergence of the method while still retaining valid state-space trajectories. To
accomplish this we introduce smaller, approximate versions of the problem at coarser
spatial scales [Mjolsness et al. , 1991] ; the nonlinearity of the original objective is
maintained at all scales, as opposed to other approaches where the objectives and
their derivatives are either approximated by the use of finite difference methods ,
or solved for by multigrid techniques where a quadratic objective is still assumed .
Consequently, the multiscale representation exploits the effective smoothness in the
objectives: by alternating relaxation phases between coarser and finer scales, we
use the former to identify extrema and the latter to localise them.
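A minimal sketch of how a coarse-scale version of the same nonlinear objective can be obtained through a block-partition matrix B; the block structure and the placeholder objective below are illustrative assumptions, but the point is that the coarse objective keeps the original nonlinearity rather than a quadratic approximation:

```python
import numpy as np

N, K = 16, 4                         # fine variables, coarse blocks
B = np.zeros((N, K))                 # sparse 0/1 partition, sum_a B[i, a] = 1
for i in range(N):
    B[i, i * K // N] = 1.0

def E_fine(v):
    """Placeholder nonlinear, non-convex objective on the fine variables."""
    return np.sum(np.cos(v) + 0.1 * v**4)

def E_coarse(V):
    """Coarse objective: every fine variable inherits its block's value, so the
    original nonlinearity is retained at the reduced scale."""
    return E_fine(B @ V)

V = np.random.randn(K)          # relax these K block-neurons first ...
v = B @ V                       # ... then prolong to N variables and refine locally
print(E_coarse(V), E_fine(v))   # identical by construction
```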
2.3
FOCUS OF ATTENTION
To further reduce the computational requirements of larg~scale optimization (and
indirectly control its temporal behavior), we use a focus of attention (FoA) mechanism [Mjolsness & Miranker , 1993], reminiscent of the spotlight hypothesis argued
to exist in early vision systems [Koch & Ullman, 1985; Olshausen et al., 1993]. The
effect of a FoA is to support efficient, responsive analysis: it allows resources to be
focused on selected areas of a computation and can rapidly redirect them as the
task requirements evolve.
Specifically, the FoA becomes a characteristic function, π(χ), determining which of the N neurons are active and which are clamped during relaxation, by use of a discrete-valued vector, χ, and by the rule: π_i(χ) = 1 if neuron v_i is in the FoA, and zero otherwise. Moreover, a limited number, n, of neurons v_i are active at any given instant: Σ_i π_i(χ) = n, with n ≪ N and n chosen as an optimal FoA size. To tie the attentional mechanism to the multiscale representation, we introduce a partition of the neurons v_i into blocks indexed by a (corresponding to coarse-scale block-neurons), via a sparse rectangular matrix B_ia ∈ {0, 1} such that Σ_a B_ia = 1, ∀i, with i = 1, ..., N, a = 1, ..., K and K ≪ N. Then π_i(χ) = Σ_a B_ia χ_a, and we use each component of χ for switching a different block of the partition; thus, a neuron v_i is in the FoA iff its coarse-scale block a is in the FoA, as indicated by χ_a. As
a result, our FoA need not necessarily have a single region of activity: it may well
have a distributed activity pattern as determined by the partitions Bia. 1
Clocked objective function notation [Mjolsness & Miranker, 1993] makes the task
more apparent: during the active-χ phase the FoA is computed for the next active-v phase, determining the subset of neurons v_i on which optimization is to be carried out. We introduce the quantity E_;i[v] ≡ ∂E/∂τ_i (τ_i is a time axis for v_i) [Mjolsness & Miranker, 1993] as an estimate of the predicted dE arising from each v_i if it joins the FoA. For Hopfield/Grossberg dynamics this measure becomes:

E_;i[v] = -g_i'(g_i^{-1}(v_i)) (∂E/∂v_i)^2 ≡ -g_i'(u_i) (E_,i)^2     (1)

with E_,i defined as ∇_i E, and g_i the transfer function for neuron v_i (e.g. a sigmoid function). Eq. (1) is used here analogously to saliency measures introduced into neurophysiological work [Koch & Ullman, 1985]; we propose it as a global measure
of conspicuousness. As a result, attention becomes a k-winner-take-all (kWTA)
network, where l refers to the scale for which the FoA is being determined (l = 1, ..., L), ⊕
conforms with the clocked objective notation, and the last summand corresponds
to the subspace on which optimization is to be performed, as determined by the
current FoA.2 Periodically, an analogous FoA through spatial scales is run, allowing
re-direction of system resources to the scale which seems to be having the largest
combined benefit and cost effect on the optimization [Tsioutsias & Mjolsness, 1995].
The combined effect of multiscale optimization and FoA is depicted schematically in
Fig. 1: reduced-dimension functionals are created and a FoA beam "shines" through
scales picking the neurons to work on.
¹ Preferably, B_ia will be chosen to minimize the number of inter-block connections.
² Before computing a new FoA we update the neighbors of all neurons that were included in the last focus; this has a similar effect to an implicit spreading of activation.
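The saliency-driven selection can be sketched as follows. The sigmoid transfer function, the random energy gradient, and the block sizes are placeholder assumptions, but the selection rule follows eq. (1): score each candidate by -g'(u_i) (∂E/∂v_i)^2 and let the n most conspicuous blocks win (a k-winner-take-all step):

```python
import numpy as np

def g(u):            # assumed sigmoid transfer function for each neuron
    return 1.0 / (1.0 + np.exp(-u))

def g_prime(u):
    s = g(u)
    return s * (1.0 - s)

def saliency(u, dE_dv):
    """Eq. (1): predicted energy change if neuron i joins the focus of attention.
    More negative = more conspicuous, so we rank by the size of the predicted drop."""
    return -g_prime(u) * dE_dv**2

def kwta_focus(u, dE_dv, B, n_blocks):
    """Pick the n coarse blocks whose summed saliency predicts the largest drop.
    Returns chi (0/1 over blocks) and pi = B @ chi (0/1 over neurons)."""
    s = saliency(u, dE_dv)                 # per-neuron conspicuousness
    block_score = B.T @ s                  # aggregate over each block a
    chi = np.zeros(B.shape[1])
    chi[np.argsort(block_score)[:n_blocks]] = 1.0   # most negative scores win
    return chi, B @ chi

# Hypothetical state: 12 neurons in 4 blocks, 2 blocks allowed into the focus.
rng = np.random.default_rng(0)
B = np.kron(np.eye(4), np.ones((3, 1)))    # B[i, a] = 1 iff neuron i is in block a
u, dE_dv = rng.normal(size=12), rng.normal(size=12)
chi, pi = kwta_focus(u, dE_dv, B, n_blocks=2)
print(chi, pi)
```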
Figure 1: Multiscale Attentional Neural Nets: FoA on a layer (e.g. L=1) competes with another FoA (e.g. L=2) to determine both preferable scale and subspace.
2.4
OPTIMIZATION PHASE
To overcome the problems generally associated with the steepest descent method, other techniques have been devised. Newton's method, although successful in small
to medium-sized problems, does not scale well in large non-convex instances and is
computationally intensive. Quasi-Newton methods are efficient to compute , have
quadratic termination but are not globally convergent for general nonlinear, nonconvex functions. A method that guarantees global convergence is the trust region
method [Conn et al., 1993]. The idea is summarized as follows: Newton's method suffers from non-positive definite Hessians; in such a case, the underlying function m^(k)(δ) obtained from the 2nd-order Taylor expansion of E(v_k + δ) does not have a minimum and the method is not defined, or equivalently, the region around the current point v_k in which the Taylor series is adequate does not include a minimizing point of m^(k)(δ). To resolve this, we can define a neighborhood Ω_k of v_k such that m^(k)(δ) agrees with E(v_k + δ) in some sense; then, we pick v_{k+1} = v_k + δ_k, where δ_k minimizes m^(k)(δ), ∀(v_k + δ) ∈ Ω_k. Thus, we seek a solution to the resulting subproblem:

min_δ m^(k)(δ)   subject to   ||δ||_p ≤ Δ_k,     (3)

where ||·||_p is any kind of norm (for instance, the L2 norm leads to the Levenberg-Marquardt methods), and Δ_k is the radius of Ω_k, adaptively modified based on an accuracy ratio r_k = ΔE^(k)/Δm^(k) = (E^(k) - E(v_k + δ_k)) / (m^(k)(0) - m^(k)(δ_k)); ΔE^(k) is the "actual reduction" in E^(k) when step δ_k is taken, and Δm^(k) the "predicted reduction". The closer r_k is to unity, the better the agreement between the local quadratic model of E^(k) and the objective itself, and Δ_k is modified adaptively to reflect this [Conn et al., 1993].
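A bare-bones trust-region update in the spirit described above can be sketched as follows. This is only an illustration: it takes a Cauchy-point step and uses simple radius rules, rather than the preconditioned conjugate-gradient solver with truncated Newton steps that the authors describe; the toy objective is not the segmentation energy of eq. (4):

```python
import numpy as np

def cauchy_step(grad, hess, delta):
    """Minimize the quadratic model m(d) = g.d + 0.5 d.H.d along -g with ||d|| <= delta."""
    gnorm = np.linalg.norm(grad)
    if gnorm == 0.0:
        return np.zeros_like(grad)
    curv = grad @ hess @ grad
    tau = 1.0 if curv <= 0 else min(1.0, gnorm**3 / (delta * curv))
    return -tau * delta / gnorm * grad

def trust_region(E, grad, hess, v, delta=1.0, iters=50):
    for _ in range(iters):
        g, H = grad(v), hess(v)
        d = cauchy_step(g, H, delta)
        pred = -(g @ d + 0.5 * d @ H @ d)          # predicted reduction, m(0) - m(d)
        actual = E(v) - E(v + d)                   # actual reduction
        r = actual / pred if pred > 0 else 0.0     # accuracy ratio r_k
        if r > 0.75:
            delta = min(2.0 * delta, 10.0)         # model trusted: grow the region
        elif r < 0.25:
            delta *= 0.5                           # poor agreement: shrink it
        if r > 0.0:
            v = v + d                              # accept only improving steps
    return v

# Toy non-convex objective (placeholder only).
E = lambda v: np.sum(np.cos(3 * v) + v**2)
grad = lambda v: -3 * np.sin(3 * v) + 2 * v
hess = lambda v: np.diag(-9 * np.cos(3 * v) + 2)
print(trust_region(E, grad, hess, np.array([1.0, -2.0])))
```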
We need to make some brief points here (a complete discussion will be given elsewhere [Tsioutsias & Mjolsness, 1995]):
• At each spatial scale of our multiscale representation, we optimize the corresponding objective by applying a trust region method. To obtain sufficient
relaxation progress as we move through scales we have to maintain meaningful region sizes, Δ_k; to that end we use a criterion based on the curvature
of the functionals along a searching direction.
• The dominant relaxation computation within the algorithm is the solution
of eq. (3). We have chosen to solve this subproblem with a preconditioned
conjugate gradient method (PCG) that uses a truncated Newton step to
speed up the computation; steps are accepted when a sufficiently good
approximation to the quasi-Newton step is found. 3 In our case, the norm
in eq. (3) becomes the elliptical norm ||δ||_C = δᵀCδ, where a diagonal
preconditioner to the Hessian is used as the scaling matrix C.
• If the neuronal connectivity pattern of the original objective is sparse (as
happens for most practical combinatorial optimization problems), the pattern of the resulting Hessian can readily be represented by sparse static data
structures,4 as we have done within our framework. Moreover, the partition
matrices, Bia, introduce a moderate fill-in in the coarser objectives and the
sparsity of the corresponding Hessians is again taken into account.
3
EXPERIMENTS
We have applied our proposed optimization framework to a spatially structured
objective from low-level vision, namely smooth 2-D region segmentation with the
inclusion of discontinuity detection processes:
where d is the set of image intensities, j is the real-valued smooth surface to be fit to
the data, lV and lh are the discrete-valued line processes indicating a non-zero value
in the intensity gradient, and φ(x) = -(2g_0)^(-1) [ln x + ln(1 - x)] is a barrier function
restricting each variable into (0,1) by infinite barriers at the borders. Eq. (4) is
a mixed-nonlinear objective involving both continuous and binary variables ; our
framework optimizes vectors j, lh and lV simultaneously at any given scale as continuous variables, instead of earlier two-step, alternate continuous/discrete-phase
approaches [Terzopoulos, 1986].
We have tested our method on gradually increasing objectives, from a "small" size
of N=12,288 variables for a 64x64 image, up to a large size of N=786 ,432 variables
for a 512x512 image; the results seem to coincide with our theoretical expectations:
a significant reduction in computational cost was observed and consistent convergence towards the optimum of the objective was found for various numbers of coarse
scales and FoA sizes. The dimension of the objective at any scale I was chosen via
a power law: N^((L-l+1)/L), where L is the total number of scales and N the size of the original objective.

³ The algorithm can also handle directions of negative curvature.
⁴ This property becomes important in a neural net implementation.
The effect of our multiscale optimization with and without a FoA is shown in Fig. 2
for the 128x128 and the 512x512 nets, where E( v*) is the best final configuration
with a one-level no-FoA net , and cumulative cost is an accumulated measure in the
number of connection updates at each scale; a consistent scale-up in computational
efficiency can be noted when L > 1, while the cost measure also reflects the relative
total wall-clock times needed for convergence. Fig. 3 shows part of a comparative
study we made for saliency measures alternative to eq. (1) (e.g. g_i'|E_,i|), in order to investigate the validity of eq. (1) as a predictor of ΔE: the more prominent
"linearity" in the left scatterplot seems to justify our choice of saliency.
[Figure 2 panels: "MS/AT Nets (128²): L = 1, 2, 3" (left) and "MS/AT Nets (512²): L = 1, 2, 3, 4" (right).]
Figure 2: Multiscale Optimization (curves labeled by number of scales used): #numbered curves correspond to nets without a FoA , simply-numbered ones to nets
with a FoA used at all scales. The lowest costs result from the combined use of
multiscale optimization and FoA.
4
CONCLUSION
We have presented a framework for the optimization of large-scale objective functions using neural networks that incorporate a multiscale attentional mechanism.
Our method allows for a continuous adaptation of the system resources to the computational requirements of the relaxation problem through the combined use of
several techniques. The framework was applied to a 2-D image segmentation objective with discontinuities; formulations of this problem with tens to hundreds of
thousands of variables were then successfully solved.
Acknowledgements
This work was supported partly by AFOSR-F49620-92-J-0465 and the Yale Center of Theoretical and Applied Neuroscience.
[Figure 3 panels: "(128²): Focus on 1st level - proposed saliency" (left) and "(128²): Focus on 1st level - absolute gradient" (right); horizontal axes show average Delta-E per block.]
Figure 3: Saliency Comparison: (left), saliency as in eq. (1); (right), the absolute
gradient was used instead.
References
A. Conn, N. Gould, A. Sartenaer, & Ph. Toint. (1993) Global Convergence of a
Class of Trust Region Algorithms for Optimization Using Inexact Projections on
Convex Constraints. SIAM J. of Optimization, 3(1) :164-221.
R. Durbin & D. Willshaw. (1987) An Analogue Approach to the TSP Problem
Using an Elastic Net Method. Nature , 326:689-691.
J. Hopfield & D. W. Tank. (1985) Neural Computation of Decisions in Optimization
Problems. Biol. Cybern., 52:141-152.
C. Koch , J. Marroquin & A. Yuille. (1986) Analog 'Neuronal ' Networks in Early
Vision . Proc . of the National Academy of Sciences USA, 83:4263-4267.
C . Koch, & S. Ullman . (1985) Shifts in Selective Visual Attention : Towards the
Underlying Neural Circuitry. Human Neurobiology , 4 :219-227 .
E. Mjolsness, C. Garrett, & W. Miranker. (1991) Multiscale Optimization in Neural
Nets. IEEE Trans. on Neural Networks , 2(2):263-274 .
E. Mjolsness & W. Miranker. (1993) Greedy Lagrangians for Neural Networks:
Three Levels of Optimization in Relaxation Dynamics. YALEU/DCS/TR-945.
(URL file://cs.ucsd.edu/pub/emj/papers/yale-TR-945.ps.Z)
B. Olshausen, C. Anderson, & D. Van Essen. (1993) A Neurobiological Model of
Visual Attention and Invariant Pattern Recognition Based on Dynamic Routing of
Information. The Journal of Neuroscience , 13(11):4700-4719 .
D. Terzopoulos. (1986) Regularization of Inverse Visual Problems Involving Discontinuities. IEEE Trans. PAMI, 8:419-429 .
D. I. Tsioutsias & E. Mjolsness. (1995) Global Optimization in Neural Nets: A
Novel Relaxation Framework . To appear as a UCSD-CSE-TR, Dec. 1995.
28 | 1,023 | Correlated Neuronal Response:
Time Scales and Mechanisms
Wyeth Bair
Howard Hughes Medical Inst.
NYU Center for Neural Science
4 Washington PI., Room 809
New York, NY 10003
Ehud Zohary
Dept. of Neurobiology
Institute of Life Sciences
The Hebrew University, Givat Ram
Jerusalem, 91904 ISRAEL
Christof Koch
Computation and Neural Systems
Caltech, 139-74
Pasadena, CA 91125
Abstract
We have analyzed the relationship between correlated spike count
and the peak in the cross-correlation of spike trains for pairs of simultaneously recorded neurons from a previous study of area MT
in the macaque monkey (Zohary et al., 1994). We conclude that
common input, responsible for creating peaks on the order of ten
milliseconds wide in the spike train cross-correlograms (CCGs),
is also responsible for creating the correlation in spike count observed at the two second time scale of the trial. We argue that
both common excitation and inhibition may play significant roles
in establishing this correlation.
1
INTRODUCTION
In a previous study of pairs of MT neurons recorded using a single extracellular
electrode, it was found that the spike count during two seconds of visual motion
stimulation had an average correlation coefficient of r = 0.12 and that this correlation could significantly limit the usefulness of pooling across increasingly large
populations of neurons (Zohary et al., 1994). However, correlated spike count between two neurons could in principle occur at several time-scales. Correlated drifts
in the excitability of the cells, for example due to normal biological changes or
electrode induced changes, could cause correlation at a time scale of many minutes. Alternatively, attentional or priming effects from higher areas could change
the responsivity of the cells at the time scale of an experimental trial. Or, as suggested here, common input that changes on the order of milliseconds could cause
correlation in spike count. The first section determines the time scale at which the
neurons are correlated by analyzing the relationship between the peak in the spike
train cross-correlograms (CCGs) and the correlation between the spike counts using
a construct we call the trial CCG. The second section examines temporal structure
that is indicative of correlated suppression of firing, perhaps due to inhibition, which
may also contribute to the spike count correlation.
2
THE TIME SCALE OF CORRELATION
At the time scale of the single trial, the correlation, r_se, of spike counts x and y from two neurons recorded during nominally identical two second stimuli was computed using Pearson's correlation coefficient,

r_se = (E[xy] - E[x]E[y]) / (σ_x σ_y),     (1)

where E is expected value and σ² is variance. If spike counts are converted to z-scores, i.e., zero mean and unity variance, then r_se = E[xy], and r_se may be interpreted as the zero-lag value of the cross-correlation of the z-scored spike counts.
The trial CCGs resulting from this procedure are shown for two pairs of neurons in
Fig. 1.
To distinguish between cases like the two shown in Fig. 1, the correlation was broken
into a long-term component, rlt, the average value (computed using a Gaussian
window of standard deviation 4 trials) surrounding the zero-lag value, and a shortterm component, rst, the difference between the zero-lag value and rlt. Across 92
pairs of neurons from three monkeys, the average rst was 0.10 (s.d. 0.17) while rlt
was not significantly different from zero (mean 0.01, s.d. 0.11). The mean of rst
was similar to the overall correlation of 0.12 reported by Zohary et al. (1994).
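A sketch of this decomposition follows; the spike counts below are simulated (a shared slow drift plus shared trial-by-trial fluctuations), and only the 4-trial Gaussian window comes from the text, the rest being illustrative:

```python
import numpy as np

def trial_ccg(x, y, max_lag=50):
    """Cross-correlate z-scored spike counts across trials; lag 0 is r_se."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    ccg = np.array([np.mean(zx[max(0, -L):len(zx) - max(0, L)] *
                            zy[max(0, L):len(zy) - max(0, -L)]) for L in lags])
    return lags, ccg

def split_short_long(lags, ccg, sd_trials=4.0):
    """r_lt: Gaussian-weighted average of the CCG around (but excluding) lag 0;
    r_st: zero-lag value minus r_lt."""
    w = np.exp(-lags.astype(float)**2 / (2 * sd_trials**2))
    w[lags == 0] = 0.0
    r_lt = np.sum(w * ccg) / np.sum(w)
    r_st = ccg[lags == 0][0] - r_lt
    return r_st, r_lt

# Hypothetical counts: a shared slow drift plus shared trial-by-trial fluctuations.
rng = np.random.default_rng(1)
n = 300
drift = np.cumsum(rng.normal(0, 0.3, n))
common = rng.normal(0, 1, n)
x = 20 + drift + common + rng.normal(0, 2, n)
y = 20 + drift + common + rng.normal(0, 2, n)
lags, ccg = trial_ccg(x, y)
print(split_short_long(lags, ccg))
```

With counts built this way, r_st isolates the trial-scale component while r_lt picks up the shared drift.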
Under certain assumptions, including that the time scale of correlation is less than
the trial duration, rst can be estimated from the area under the spike train CCG
and the areas under the autocorrelations (derivation omitted). Under the additional
assumption that the spike trains are individually Poisson and have no peak in the
autocorrelation except that which occurs by definition at lag zero, the correlation
coefficient for spike count can be estimated by

r_peak ≈ √(λ_A λ_B) · Area,     (2)

where λ_A and λ_B are the mean firing rates of neurons A and B, and Area is the area
under the spike train CCG peak, like that shown in Fig. 2 for one pair of neurons.
Taking Area to be the area under the CCG between ±32 msec gives a good estimate
of short-term rst, as shown in Fig. 3. In addition to the strong correlation (r = 0.71)
between rpeak and rst, rpeak is a less noisy measure, having standard deviation (not
shown) on average one fourth as large as those of rst.
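A sketch of the estimate in eq. (2), with made-up spike trains standing in for recorded data; the CCG normalization (coincidences relative to the Poisson expectation, as in Fig. 2) and the ±32 ms window follow the text, while the bin size and the synthetic common-input spikes are assumptions:

```python
import numpy as np

def ccg_area(spikes_a, spikes_b, duration, bin_ms=1.0, window_ms=32.0):
    """Area of the spike-train CCG peak, normalized so that independent Poisson
    trains give 1.0 per lag bin; the area is taken over +/- window_ms around lag 0.
    np.roll wraps the train, which is negligible for short lags on a 2 s trial."""
    nbins = int(duration * 1000 / bin_ms)
    a, _ = np.histogram(spikes_a, bins=nbins, range=(0, duration))
    b, _ = np.histogram(spikes_b, bins=nbins, range=(0, duration))
    lags = np.arange(-int(window_ms), int(window_ms) + 1)
    rate_a, rate_b = len(spikes_a) / duration, len(spikes_b) / duration
    expected = rate_a * rate_b * (bin_ms / 1000.0)**2 * nbins   # Poisson coincidences per lag
    ccg = np.array([np.sum(a * np.roll(b, L)) for L in lags]) / expected
    return np.sum(ccg - 1.0) * (bin_ms / 1000.0)                # area above chance, seconds

def r_peak(spikes_a, spikes_b, duration):
    lam_a = len(spikes_a) / duration
    lam_b = len(spikes_b) / duration
    return np.sqrt(lam_a * lam_b) * ccg_area(spikes_a, spikes_b, duration)

# Hypothetical pair: independent spikes plus a handful of shared (common-input) spikes.
rng = np.random.default_rng(2)
T = 2.0
shared = np.sort(rng.uniform(0, T, 10))
sa = np.sort(np.concatenate([rng.uniform(0, T, 60), shared]))
sb = np.sort(np.concatenate([rng.uniform(0, T, 60), shared + rng.normal(0, 0.002, 10)]))
print(r_peak(sa, sb, T))
```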
We conclude that the common input that causes the peaks in the spike train CCGs is
also responsible for the correlation in spike count that has been previously reported.
Figure 1: Normalized responses for two pairs of neurons and their trial crosscorrelograms (CCGs). The upper traces show the z-scored spike counts for all
trials in the order they occurred. Spikes were counted during the 2 sec stimulus,
but trials occurred on average 5 sec apart, so 100 trials represents about 2.5 minutes. The lower traces show the trial CCGs. For the pair of cells in the left panel,
responsivity drifts during the experiment. The CCG (lower left) shows that the drift
is correlated between the two neurons over nearly 100 trials. For the pair of cells
in the right panel, the trial CCG shows a strong correlation only for simultaneous
trials. Thus, the measured correlation coefficient (trial CCG at zero lag) seems to
occur at a long time scale on the left but a short time scale (less than or equal to one
trial) on the right. The zero-lag value can be broken into two components, r_st and r_lt (short term and long term, respectively, see text). The short-term component, r_st, is the value at zero lag minus the weighted average value at surrounding lag times. On the left, r_st ≈ 0, while on the right, r_lt ≈ 0.
Figure 2: A spike train CCG with central peak. The frequency histogram of widths
at half-height is shown (inset) for 92 cell pairs from three monkeys. The area of the
central peak measured between ±32 msec is used to predict the correlation coefficients, r_peak, plotted in Fig. 3. The y-axis indicates the probability of a coincidence
relative to that expected for Poisson processes at the measured firing rates .
Figure 3: The area of the peak of the spike train CCG yields a prediction, rpeak (see
Eqn. 2) , that is strongly correlated (r = 0.71, p < 0.00001), with the short-term
spike count correlation coefficient , rst . The absence of points in the lower right
corner of the plot indicates that there are no cases of a pair of cells being strongly
correlated without having a peak in the spike train CCG.
In Fig. 3, there are no pairs of neurons that have a short-term correlation and yet
do not have a peak in the ±32 msec range of the spike train CCG.
3
CORRELATED SUPPRESSION
There is little doubt that common excitatory input causes peaks like the one shown
in Fig. 2 and therefore results in the correlated spike count at the time scale of the
trial. However, we have also observed correlated periods of suppressed firing that
may point to inhibition as another contribution to the CCG peaks and consequently
to the correlated spike count.
Fig. 4 A and B show the response of one neuron to coherent preferred and null
direction motion, respectively. Excessively long inter-spike intervals (ISIs), or gaps,
appear in the response to preferred motion, while bursts appear in the response
to null motion. Across a database of 84 single neurons from a previous study
(Britten et al., 1992), the occurrence of the gaps and bursts has a symmetrical
time course-both are most prominent on average from 600-900 msec post-stimulus
onset, although there are substantial variations from cell to cell (Bair, 1995). The
gaps, roughly 100 msec long, are not consistent with the slow, steady adaptation
(presumably due to potassium currents) which is observed under current injection
in neocortical pyramidal neurons, e.g., the RS1 and RS2 neurons of Agmon and
Connors (1992).
Fig. 4 C shows spike trains from two simultaneously recorded neurons stimulated
with preferred direction motion. The longest gaps appear to occur at about the
same time. To assess the correlation with a cross-correlogram, we first transform
the spike trains to interval trains, shown in Fig. 4 D for the spike trains in C.
This emphasizes the presence of long ISIs and removes some of the information
regarding the precise occurrence times of action potentials. The interval crosscorrelation (ICC) between each pair of interval trains is computed and averaged
over all trials, and the average shift predictor is subtracted. Fig. 4 E and F show
ICCs (thick lines) for two different pairs of neurons. In 17 of 31 pairs (55%), there
were peaks in the raw ICC that were at least 4 standard errors above the level of the
shift predictor. The peaks were on average centered (mean 4.3 msec, SD 54 msec)
and had mean width at half-height of 139 msec (SD 59 msec).
To isolate the cause of the peaks, the long intervals in the trains were set to the
mean of the short intervals. Long intervals were defined as those that accounted
for 30% of the duration of the data and were longer than all short intervals. Note
that this is only a small fraction of the number of ISIs in the spike train (typically
less than about 10%), since a few long intervals consume the same amount of time
as many short intervals. Data from 300-1950 msec was processed, avoiding the
on-transient and the lack of final interval. With the longest intervals neutralized,
the peaks were pushed down to the level of the noise in the ICC (thin lines, Fig. 4
E, F). Thus, 90% of the action potentials may serve to set a mean rate, while a few
periods of long ISIs dominate the ICC peaks.
The correlated gaps are consistent with common inhibition to neurons in a local
region of cortex, and this inhibition adds area to the spike train CCG peaks in
the form of a broader base (not shown). The data analyzed here is from behaving animals, so the gaps may be related to small saccades (within the 0.5 degree
Figure 4: (A) The brisk response to coherent preferred direction motion is interrupted by occasional excessively long inter-spike intervals, i.e., gaps. (B) The suppressed response to null direction motion is interrupted by bursts of spikes. (C) Simultaneous spike trains from two neurons show correlated gaps in the preferred direction response. (D) The interval representation for the spike trains in C. (E, F) Interval cross-correlograms have peaks indicating that the gaps are correlated (see text).
fixation window) or eyelid blink. It has been hypothesized that blink suppression
and saccadic visual suppression may operate through the same pathways and are
of neuronal origin (Ridder and Tomlinson, 1993). An alternative hypothesis is that
the gaps and bursts arise in cortex from intrinsic circuitry arranged in an opponent
fashion.
4
CONCLUSION
Common input that causes central peaks on the order of tens of milliseconds wide in
spike train CCGs is also responsible for causing the correlation in spike count at the
time scale of two second long trials. Long-term correlation due to drifts in responsivity exists but is zero on average across all cell pairs and may represent a source of
noise which complicates the accurate measurement of cell-to-cell correlation. The
area of the peak of the spike train CCG within a window of ±32 msec is the basis of
a good prediction of the spike count correlation coefficient and provides a less noisy
measure of correlation between neurons. Correlated gaps observed in the response
to coherent preferred direction motion are consistent with common inhibition and
contributes to the area of the spike train CCG peak, and thus to the correlation
between spike count. Correlation in spike count is an important factor that can
limit the useful pool-size of neuronal ensembles (Zohary et al., 1994; Gawne and
Richmond, 1993).
Acknowledgements
We thank William T. Newsome, Kenneth H. Britten, Michael N. Shadlen, and J.
Anthony Movshon for kindly providing data that was recorded in previous studies
and for helpful discussion. This work was funded by the Office of Naval Research
and the Air Force Office of Scientific Research. W. B. was supported by the L. A.
Hanson Foundation and the Howard Hughes Medical Institute.
References
Agmon A, Connors BW (1992) Correlation between intrinsic firing patterns and
thalamocortical synaptic responses of neurons in mouse barrel cortex. J Neurosci 12:319-329.
Bair W (1995) Analysis of Temporal Structure in Spike Trains of Visual Cortical
Area MT. Ph.D. thesis, California Institute of Technology.
Britten KH, Shadlen MN, Newsome WT, Movshon JA (1992) The analysis of visual
motion: a comparison of neuronal and psychophysical performance. J Neurosci
12:4745-4765.
Gawne T J, Richmond BJ (1993) How independent are the messages carried by
adjacent inferior temporal cortical neurons? J Neurosci 13:2758-2771.
Ridder WH, Tomlinson A (1993) Suppression of contrasts sensitivity during eyelid
blinks. Vision Res 33: 1795- 1802.
Zohary E, Shadlen MN, Newsome WT (1994) Correlated neuronal discharge rate
and its implications for psychophysical performance. Nature 370:140-143.
29 | 1,024 | Onset-based Sound Segmentation
Leslie S. Smith
CCCN jDepartment of Computer Science
University of Stirling
Stirling FK9 4LA
Scotland
Abstract
A technique for segmenting sounds using processing based on mammalian early auditory processing is presented. The technique is
based on features in sound which neuron spike recording suggests
are detected in the cochlear nucleus. The sound signal is bandpassed and each signal processed to enhance onsets and offsets.
The onset and offset signals are compressed, then clustered both in
time and across frequency channels using a network of integrateand-fire neurons. Onsets and offsets are signalled by spikes, and
the timing of these spikes used to segment the sound.
1
Background
Traditional speech interpretation techniques based on Fourier transforms, spectrum
recoding, and a hidden Markov model or neural network interpretation stage have
limitations both in continuous speech and in interpreting speech in the presence
of noise, and this has led to interest in front ends modelling biological auditory
systems for speech interpretation systems (Ainsworth and Meyer 92; Cosi 93; Cole
et al 95).
Auditory modelling systems use similar early auditory processing to that used in
biological systems. Mammalian auditory processing uses two ears, and the incoming
signal is filtered first by the pinna (external ear) and the auditory canal before it
causes the tympanic membrane (eardrum) to vibrate. This vibration is then passed
on through the bones of the middle ear to the oval window on the cochlea. Inside
the cochlea, the pressure wave causes a pattern of vibration to occur on the basilar
membrane. This appears to be an active process using both the inner and outer hair
cells of the organ of Corti. The movement is detected by the inner hair cells and
turned into neural impulses by the neurons of the spiral ganglion. These pass down
the auditory nerve, and arrive at various parts of the cochlear nucleus. From there,
nerve fibres innervate other areas: the lateral and medial nuclei of the superior olive,
and the inferior colliculus, for example. (See (Pickles 88)).
Virtually all modern sound or speech interpretation systems use some form of bandpass filtering, following the biology as far as the cochlea. Most use Fourier transforms to perform a calculation of the energy in each band over some time period,
usually between 25 and 75 ms. This is not what the cochlea does. Auditory modelling front ends differ in the extent and length to which they follow animal early
auditory processing, but the term generally implies at least that wideband filters
are used, and that high temporal resolution is maintained in the initial stages. This
means the use of filtering techniques. rather than Fourier transforms in the bandpass
stage. Such filtering systems have been implemented by Patterson and Holdsworth
(Patterson and Holdsworth 90; Slaney 93), and placed directly in silicon (Lazzaro
and Mead 89; Lazzaro et al 93; Liu et al 93; Fragniere and van Schaik 94).
Some auditory models have moved beyond cochlear filtering. The inner hair cell
has been modelled by either simple rectification (Smith 94) or has been based on
the work of (Meddis 88) for example (Patterson and Holdsworth 90; Cosi 93; Brown
92). Lazzaro has experimented with a silicon version of Licklider's autocorrelation
processing (Licklider 51; Lazzaro and Mead 89). Others such as (Wu et al 1989; Blackwood et al 1990; Ainsworth and Meyer 92; Brown 92; Berthommier 93; Smith
94) have considered the early brainstem nuclei, and their possible contribution,
based on the neurophysiology of the different cell types (Pickles 88; Blackburn and
Sachs 1989; Kim et al 90).
Auditory model-based systems have yet to find their way into mainstream speech
recognition systems (Cosi 93). The work presented here uses auditory modelling
up to onset cells in the cochlear nucleus. It adds a temporal neural network to
clean up the segmentation produced. This part has been filed as a patent (Smith
95). Though the system has some biological plausibility, the aim is an effective
data-driven segmentation technique implementable in silicon.
2
Techniques used
Digitized sound was applied to an auditory front end, (Patterson and Holdsworth
90), which bandpassed the sound into channels each with bandwidth 24.7(4.37F_c + 1) Hz, where F_c is the centre frequency (in kHz) of the band (Moore and Glasberg
83). These were rectified, modelling the effect of the inner hair cells. The signals
produced bear some resemblance to that in the auditory nerve. The real system
has far more channels and each nerve channel carries spike-coded information. The
coding here models the signal in a population of neighboring auditory nerve fibres.
2.1
The onset-offset filter
The signal present in the auditory nerve is stronger near the onset of a tone than
later (Pickles 88). This effect is much more pronounced in certain cell types of the
cochlear nucleus. These fire strongly just after the onset of a sound in the band to
which they are sensitive, and are then silent. This emphasis on onsets was modelled
by convolving the signal in each band with a filter which computes two averages, a
more recent one, and a less recent one, and subtracts the less recent one from the
more recent one. One biologically possible justification for this is to consider that
a neuron is receiving the same driving input twice, one excitatorily, and the other
inhibitorily; the excitatory input has a shorter time-constant than the inhibitory
input. Both exponentially weighted averages, and averages formed using a Gaussian
filter have been tried (Smith 94), but the former place too much emphasis on the
most recent part of the signal, making the latter more effective.
The filter output for input signal s(x) is
O(t, k, r) = ∫_0^t (f(t - x, k) - f(t - x, k/r)) s(x) dx     (1)

where f(x, y) = √y exp(-y x²). k and r determine the rise and fall times of the pulses of sound that the system is sensitive to. We used k = 1000, r = 1.2, so that the SDs of the Gaussians are 24.49 ms and 22.36 ms. The convolving filter has a positive peak at 0, crosses 0 at 22.39 ms, and is then negative. With these values, the system is sensitive to energy rises and falls which occur in the envelopes of
everyday sounds. A positive onset-offset signal implies that the bandpassed signal is
increasing in intensity, and a negative onset-offset signal implies that it is decreasing
in intensity. The convolution used is a sound analog of the difference of Gaussians
operator used to extract black/white and white/black edges in monochrome images
(Marr and Hildreth 80). In (Smith 94) we performed sound segmentation directly
on this signal.
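A discrete version of the filter in eq. (1), together with the logarithmic compression of section 2.2, can be sketched as below; the rectified band signal is simulated, and the sample rate and kernel support are assumptions:

```python
import numpy as np

def gaussian_kernel(y, fs, support_sd=4.0):
    """f(x, y) = sqrt(y) * exp(-y * x**2), sampled causally at rate fs."""
    sd = 1.0 / np.sqrt(2.0 * y)
    x = np.arange(0.0, support_sd * sd, 1.0 / fs)
    return np.sqrt(y) * np.exp(-y * x**2)

def onset_offset(signal, fs, k=1000.0, r=1.2):
    """Difference-of-Gaussians smoothing of the rectified band signal:
    positive where its energy is rising, negative where it is falling."""
    recent = gaussian_kernel(k, fs)        # SD ~22.4 ms
    older = gaussian_kernel(k / r, fs)     # SD ~24.5 ms
    n = max(len(recent), len(older))
    diff = np.zeros(n)
    diff[:len(recent)] += recent
    diff[:len(older)] -= older
    out = np.convolve(signal, diff)[:len(signal)] / fs
    onset = np.log(np.maximum(out, 1.0))      # log(x) taken as 0 for x <= 1
    offset = np.log(np.maximum(-out, 1.0))
    return onset, offset

# Simulated rectified band signal: silence, a 300 ms tone burst, silence.
fs = 4000
t = np.arange(0, 1.0, 1.0 / fs)
band = np.where((t > 0.3) & (t < 0.6), np.abs(np.sin(2 * np.pi * 200 * t)), 0.0) * 500
onset, offset = onset_offset(band, fs)
print(t[np.argmax(onset)], t[np.argmax(offset)])   # peaks shortly after 0.3 s and 0.6 s
```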
2.2
Compressing the onset-offset signal
The onset-offset signal was divided into two positive-going signals, an onset signal
consisting of the positive-going part, and an offset signal consisting of the inverted
negative-going part. Both were compressed logarithmically (where log(x) was taken
as 0 for 0 ≤ x ≤ 1). This increases the dynamic range of the system, and models
compressive biological effects. The compressed onset signal models the output of a
population of onset cells. This technique for producing an onset signal is related to
that of (Wu et al 1989: Cosi 93).
2.3
The integrate-and-fire neural network
To segment the sound using the onset and offset signals, they need to be integrated
across frequency bands and across time. This temporal and tonotopic clustering
was achieved using a network of integrate-and-fire units. An integrate-and-fire unit
accumulates its weighted input over time. The activity of the unit, A, is initially 0, and alters according to

dA/dt = I(t) - γA     (2)

where I(t) is the input to the neuron and γ, the dissipation, describes the leakiness of the integration. When A reaches a threshold, the unit fires (i.e. emits a pulse), and A is reset to 0. After firing, there is a period of insensitivity to input, called the refractory period. Such neurons are discussed in, e.g., (Mirollo and Strogatz 90).
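A minimal unit of this kind can be sketched as follows; the parameter values (apart from the 50 ms refractory period of the onset system) are illustrative, not those used in the paper:

```python
import numpy as np

def integrate_and_fire(inputs, dt=0.0005, gamma=50.0,
                       threshold=1.0, refractory_s=0.05):
    """Eq. (2): dA/dt = I(t) - gamma*A; emit a spike and reset A when the
    threshold is crossed, then ignore input for the refractory period."""
    A, dead_until = 0.0, -1.0
    spikes = []
    for step, I in enumerate(inputs):
        t = step * dt
        if t < dead_until:
            continue                      # refractory: insensitive to input
        A += dt * (I - gamma * A)
        if A >= threshold:
            spikes.append(t)
            A = 0.0
            dead_until = t + refractory_s
    return spikes

# A burst of strong onset-like input around 0.2 s, weak input elsewhere.
dt = 0.0005
t = np.arange(0, 0.5, dt)
drive = np.where((t > 0.2) & (t < 0.23), 120.0, 5.0)
print(integrate_and_fire(drive, dt=dt))   # a single spike just after 0.2 s
```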
One integrate-and-fire neuron was used per channel: this neuron received input either from a single channel, or from a set of adjacent channels, all with equal positive
weighting. The output of each neuron was fed back to a set of adjacent neurons,
again with a fixed positive weight, one time step (here 0.5ms) later. Because of the
leaky nature of the accumulation of activity, excitatory input to the neuron arriving
when its activation is near' threshold has a lar'ger effect on the next firing time than
excitatory input arriving when activation is lower. Thus, if similar input is applied
to a set of neurons in adjacent channels, the effect of the inter-neuron connections
is that when the first one fires, its neighbors fire almost immediately. This allows
a network of such neurons to cluster the onset or offset signals, producing a sharp
burst of spikes across a number of channels providing unambiguous onsets or offsets.
The external and internal weights of the network were adjusted so that onset or
offset input alone allowed neurons to fire, while internal input alone was not enough
to cause firing. The refractory period used was set to 50ms for the onset system,
and 5ms for the offset system. For the onset system, the effect was to produce sharp
onset firing responses across adjacent channels in response to a sudden increase in
energy in some channels, thus grouping onsets both tonotopically and temporally.
This is appropriate for onsets, as these are generally brief and clearly marked. The
output of this stage we call the onset map. Offsets tend to be more gradual. This
is due to physical effects: for example, a percussive sound will start suddenly, as
the vibrating element starts to move, but die away slowly as the vibration ceases (see (Gaver 93) for a discussion). Even when the vibration does stop suddenly, the
sound will die away more slowly due to echoes. Thus we cannot reliably mark the
offset of a sound: instead, we reduce the refractory period of the offset neurons, and
produce a train of pulses marking the duration of the offset in this channel. We call
the output of this stage the offset map.
3
Results
As the technique is entirely data-driven, it can be applied to sound from any source. It has been applied to both speech and musical sounds. Figure 1 shows the effect of applying the techniques discussed to a short piece of speech. Fig. 1c shows that the neural network integrates the onset timings across the channels, allowing these onsets to be used for segmentation. The simplest technique is to divide up the continuous speech at each onset; however, to ensure that the occasional onset in a single channel does not confuse the system, and that onsets which occur near to each other do not result in very short segments, we demanded that a segmentation boundary have at least 6 onsets inside a period of 10 ms, and the minimum segment length was set to 25 ms.
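The boundary rule can be sketched as follows, with a hypothetical onset map (channels × time in 1 ms steps) standing in for the network output; the thresholds follow the text, the synthetic map does not:

```python
import numpy as np

def segment_boundaries(onset_map, min_onsets=6, window_ms=10, min_segment_ms=25):
    """Declare a boundary where at least `min_onsets` onset spikes fall within
    `window_ms`; discard boundaries closer than `min_segment_ms` to the last one."""
    onsets_per_ms = onset_map.sum(axis=0)                       # spikes across channels
    windowed = np.convolve(onsets_per_ms, np.ones(window_ms), mode='same')
    candidates = np.flatnonzero(windowed >= min_onsets)
    boundaries, last = [], -min_segment_ms
    for t in candidates:
        if t - last >= min_segment_ms:
            boundaries.append(int(t))
            last = t
    return boundaries

# Hypothetical onset map: 28 channels x 500 ms, with bursts of onset spikes
# across many channels near 60 ms and 250 ms, plus scattered single-channel spikes.
rng = np.random.default_rng(4)
onset_map = (rng.random((28, 500)) < 0.002).astype(int)
for t0 in (60, 250):
    onset_map[rng.choice(28, 10, replace=False), t0] = 1
print(segment_boundaries(onset_map))   # boundaries near the two bursts
```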
The utterance Ne'Uml information processing systems has phonetic representation:
/ njtlrl: anfarmeIan prosc:salJ ststalllS /
and is segmented into the following 19 segments:
/n/, jtl/, /r/, /la/, /a/, /nf/. /arm/, /e/,
/t/, /st/, /am/, /s/
/I/,
/an/, /pro/, /os/, /c:s/ , /aIJ/, /s/,
The same text spoken more slowly (over 4.38s, rather than 2.31s) has phonetic
representation:
/ njural:anftrmeIanprosc:stIJ ststams /
Segmenting using this technique gives the following 25 segments:
/n/ , /ju/ , /u/, /r/. /a/ , /al/, /1/, / /, /an/, /f/ , /um/, /e/,
/ro/, /os/, /c:s/, /tIJ/, /s/, /t/ , /st/, /am/, /s/
/I/.
/an/, /n:/, /pr/,
Although some phonemes are broken between segments, the system provides effective segmentation, and is relatively insensitive to speech rate. The system is also
effective at finding speech inside certain types of noise (such as motor-bike noise) ,
as can be seen in Fig. 1e and f.
The system has been used to segment sound from single musical instruments. Where
these have clear breaks between notes this is straightforward: in (Smith 94) correct
segmentation was achieved directly from the onset-offset signal but was not achieved
for slurred sounds, in which the notes change smoothly. As is visible in figure 2c,
the onsets here are clear using the network, and the segmentation produced is near-
Figure 1: (a-d): Onset and offset maps from author saying Neural information processing systems rapidly. a: envelope of original sound. b: onset map, from 28 channels, from 100 Hz-6 kHz. Onset filter parameters as in text; one neuron per channel, with no interconnection. Neuron refractory period is 50 ms. c: as b, but network has input applied to 6 adjacent channels, and internal feedback to 10 channels. d: offset map produced similarly, with refractory period 5 ms. e: envelope of
say, that's a nice bike with motorbike noise in background (lines mark utterance).
f, g: onset, offset maps for e.
perfect. Best results were obtained here when the input to the network is not spread
across channels.
4
Conclusions and further work
An effective data driven segmentation technique based on onset feature detection
and using integrate-and-fire neurons has been demonstrated. The system is relatively immune to broadband noise. Segmentation is not an end in itself: the
effectiveness of any technique will depend on the eventual application.
Figure 2: a: slurred flute sound, with vertical lines showing boundary between notes. b: onsets found using a single neuron per channel, and no interconnection. c: as b, but with internal feedback from each channel to 16 adjacent channels. d: offsets found with refractory period 5 ms.
The segmentation is currently not using the information on which bands the onsets
occur in. We propose to extend this work by combining the segmentation described
here with work streaming bands sharing same-frequency amplitude modulation.
The aim of this is to extract sound segments from some subset of the bands, allowing
segmentation and streaming to run concurrently.
Acknowledgements
Many thanks are due to the members of the Centre for Cognitive and Computational
Neuroscience at the University of Stirling.
References
Ainsworth W., Meyer G. Speech analysis by means of a physiologically-based model of the cochlear nerve and cochlear nucleus, in Visual Representations of Speech Signals, Cooke M., Beet S., eds, 1992.
Berthommier F. Modelling neural responses of the intermediate auditory system, in Mathematics Applied to Biology and Medicine, Demongeot J., Capasso V., Wuertz Publishing, Canada, 1993.
Blackburn C.C., Sachs M.B. Classification of unit types in the anteroventral cochlear nucleus: PST histograms and regularity analysis, J. Neurophysiology, 62, 6, 1989.
Blackwood N., Meyer G., Ainsworth W. A model of the processing of voiced plosives in the auditory nerve and cochlear nucleus, Proceedings Inst of Acoustics, 12, 10, 1990.
Brown G. Computational Auditory Scene Analysis, TR CS-92-22, Department of Computing Science, University of Sheffield, England, 1992.
Cole R., et al, The challenge of spoken language systems: research directions of the 90's, IEEE Trans Speech and Audio Processing, 3, 1, 1995.
Cosi P. On the use of auditory models in speech technology, in Intelligent Perceptual Models, LNCS 745, Springer Verlag, 1993.
Fragniere E., van Schaik A. Linear predictive coding of the speech signal using an analog cochlear model, MANTRA Internal Report 94/2, MANTRA Center for Neuro-mimetic Systems, EPFL, Lausanne, Switzerland, 1994.
Gaver W.W. What in the world do we hear?: an ecological approach to auditory event perception, Ecological Psychology, 5(1), 1-29, 1993.
Kim D.O., Sirianni J.G., Chang S.O. Responses of DCN-PVCN neurons and auditory nerve fibres in unanesthetized decerebrate cats to AM and pure tones: analysis with autocorrelation/power-spectrum, Hearing Research, 45, 95-113, 1990.
Lazzaro J., Mead C. Silicon modelling of pitch perception, Proc Natl. Acad Sciences USA, 86, 9597-9601, 1989.
Lazzaro J., Wawrzynek J., Mahowald M., Sivilotti M., Gillespie D. Silicon auditory processors as computer peripherals, IEEE Trans on Neural Networks, 4, 3, May 1993.
Licklider J.C.R. A duplex theory of pitch perception, Experientia, 7, 128-133, 1951.
Liu W., Andreou A.G., Goldstein M.H. Analog cochlear model for multiresolution speech analysis, Advances in Neural Information Processing Systems 5, Hanson S.J., Cowan J.D., Lee Giles C. (eds), Morgan Kaufmann, 1993.
Marr D., Hildreth E. Theory of edge detection, Proc. Royal Society of London B, 207, 187-217, 1980.
Meddis R. Simulation of auditory-neural transduction: further studies, J. Acoust Soc Am, 83, 3, 1988.
Moore B.C.J., Glasberg B.R. Suggested formulae for calculating auditory-filter bandwidths and excitation patterns, J Acoust Soc America, 74, 3, 1983.
Mirollo R.E., Strogatz S.H. Synchronization of pulse-coupled biological oscillators, SIAM J. Appl Math, 50, 6, 1990.
Patterson R., Holdsworth J. (1990). An Introduction to Auditory Sensation Processing, in AAM HAP, Vol 1, No 1.
Pickles J.O. (1988). An Introduction to the Physiology of Hearing, 2nd Edition, Academic Press.
Slaney M. An efficient implementation of the Patterson-Holdsworth auditory filter bank, Apple Technical Report No 35, Apple Computer Inc, 1993.
Smith L.S. Sound segmentation using onsets and offsets, J of New Music Research, 23, 1, 1994.
Smith L.S. Onset/offset coding for interpretation and segmentation of sound, UK patent no 9505956.4, March 1995.
Wu Z.L., Schwartz J.L., Escudier P. A theoretical study of neural mechanisms specialized in the detection of articulatory-acoustic events, Proc Eurospeech 89, ed Tubach J.P., Mariani J.J., Paris, 1989.
30 | 1,025 | A model of transparent motion and
non-transparent motion aftereffects
Alexander Grunewald*
Max-Planck-Institut für biologische Kybernetik
Spemannstraße 38
D-72076 Tübingen, Germany
Abstract
A model of human motion perception is presented. The model
contains two stages of direction selective units. The first stage contains broadly tuned units, while the second stage contains units
that are narrowly tuned. The model accounts for the motion aftereffect through adapting units at the first stage and inhibitory
interactions at the second stage. The model explains how two populations of dots moving in slightly different directions are perceived
as a single population moving in the direction of the vector sum,
and how two populations moving in strongly different directions are
perceived as transparent motion. The model also explains why the
motion aftereffect in both cases appears as non-transparent motion.
1
INTRODUCTION
Transparent motion can be studied using displays which contain two populations of
moving dots. The dots within each population have the same direction of motion,
but directions can differ between the two populations. When the two directions are
very similar, subjects report seeing dots moving in the average direction (Williams &
Sekuler, 1984). However, when the difference between the two directions gets large,
subjects perceive two overlapping sheets of moving dots. This percept is called
transparent motion. The occurrence of transparent motion cannot be explained by
direction averaging, since that would result in a single direction of perceived motion.
Rather than just being a quirk of the human visual system, transparent motion is
an important issue in motion processing. For example, when a robot is moving, its
* Present address: Caltech, Mail Code 216-76, Pasadena, CA 91125.
motion leads to a velocity field. The ability to detect transparent motion within
that velocity field enables the robot to detect other moving objects at the same time
that the velocity field can be used to estimate the heading direction of the robot.
Without the ability to code mUltiple directions of motion at the same location,
i.e. without the provision for transparent motion, this capacity is not available.
Traditional algorithms have failed to properly process transparent motion, mainly
because they assigned a unique velocity signal to each location, instead of allowing
the possibility for multiple motion signals at a single location. Consequently, the
study of transparent motion has recently enjoyed widespread interest.
Figure 1: Two populations of dots moving in different directions during an adaptation phase are perceived as transparent motion. Subsequent viewing of randomly
moving dots during a test phase leads to an illusory percept of unidirectional motion,
the motion aftereffect (MAE). Stimulus and percept in both phases are shown.
After prolonged exposure to an adaptation display containing dots moving in one direction, randomly moving dots in a test display appear to be moving in the opposite
direction (Hiris & Blake, 1992; Wohlgemuth, 1911). This illusory percept of motion
is called the motion aftereffect (MAE). Traditionally this is explained by assuming
that pairs of oppositely tuned direction selective units together code the presence
of motion. When both are equally active, no motion is seen. Visual motion leads
to stronger activation of one unit, and thus an imbalance in the activity of the two
units. Consequently, motion is perceived. Activation of that unit causes it to fatigue, which means its response weakens. After motion offset, the previously active
unit sends out a reduced signal compared to its partner due to adaptation. Thus
adaptation generates an imbalance between the two units, and therefore illusory
motion, the MAE, is perceived. This is the ratio model (Sutherland, 1961).
Recent psychophysical results show that after prolonged exposure to transparent
motion, observers perceive a MAE of a single direction of motion, pointing in the
vector average of the adaptation directions (Mather, 1980; Verstraten, Fredericksen, & van de Grind, 1994). Thus adaptation to transparent motion leads to a
non-transparent MAE. This is illustrated in Figure 1. This result cannot be accounted for by the ratio model, since the non-transparent MAE does not point in
the direction opposite to either of the adaptation directions. Instead, this result
suggests that direction selective units of all directions interact and thus contribute
to the MAE. This explanation is called the distribution-shift model (Mather, 1980).
However, thus far it has only been vaguely defined, and no demonstration has been
given that shows how this mechanism might work.
This study develops a model of human motion perception based on elements from
both the ratio and the distribution-shift models for the MAE. The model is also
applicable to the situation where two directions of motion are present. When the
directions differ slightly, only a single direction is perceived. When the directions
differ a lot, transparent motion is perceived. Both cases lead to a unitary MAE.
2
OUTLINE OF THE MODEL
The model consists of two stages. Both stages contain units that are direction
selective. The architecture of the model is shown in Figure 2.
Figure 2: The model contains two stages of direction selective units. Units at stage
1 excite units of like direction selectivity at stage 2, and inhibit units of opposite
directions. At stage 2 recurrent inhibition sharpens directional motion responses.
The grey level indicates the strength of interaction between units. Strong influence
is indicated by black arrows, weak influence is indicated by light grey arrows.
Units in stage 1 are broadly tuned motion detectors. In the present study the precise
mechanism of motion detection is not central, and hence it has not been modeled. It
is assumed that the bandwidth of motion detectors at this stage is about 30 degrees
(Raymond, 1993; Williams, Tweten, & Sekuler, 1991). In the absence of any visual
motion, all units are active at a baseline level; this is equivalent to neuronal noise.
Whenever motion of a particular direction is present in the input, the activity of the corresponding unit (v_i) is maximal (v_i = 9), and units of similar direction selectivity are weakly activated (v_i = 3). The activities of all other units decrease to zero. Associated with each unit i at stage 1 is a weight w_i that denotes the adaptational state of unit i to fire a unit at stage 2. During prolonged exposure
to motion these weights adapt, and their strength decreases. The equation governing
the strength of the weights is given below:
dw_i/dt = R(1 - w_i) - v_i w_i ,
where R = 0.5 denotes the rate of recovery to the baseline weight. When w_i = 1 the corresponding unit is not adapted. The further w_i is reduced from 1, the more
the corresponding unit is adapted. The products v_i w_i are transmitted to stage 2.
Each unit of stage 1 excites units coding similar directions at stage 2, and inhibits
units coding opposite directions of motion. The excitatory and inhibitory effects
between units at stages 1 and 2 are caused by kernels, shown in Figure 3.
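A minimal sketch of the stage-1 side of the model: the activation values (9 for the stimulated direction, 3 for its neighbours) and R = 0.5 are taken from the text, while the baseline level, the number of weakly activated neighbours, and the Euler step are assumptions made only for illustration.

```python
import numpy as np

N_DIRS = 24          # direction-selective units spaced 15 degrees apart
R = 0.5              # recovery rate of the adaptation weights (from the text)

def stage1_activity(stimulus_dirs, baseline=0.1):
    """Broadly tuned stage-1 activity: v = 9 at a stimulated direction, 3 at its
    immediate neighbours, 0 elsewhere; `baseline` everywhere when no motion is
    present (the baseline value and neighbourhood size are assumptions)."""
    if not stimulus_dirs:
        return np.full(N_DIRS, baseline)
    v = np.zeros(N_DIRS)
    for d in stimulus_dirs:
        v[d % N_DIRS] = 9.0
        v[(d - 1) % N_DIRS] = max(v[(d - 1) % N_DIRS], 3.0)
        v[(d + 1) % N_DIRS] = max(v[(d + 1) % N_DIRS], 3.0)
    return v

def adapt_weights(w, v, dt=0.01):
    """One Euler step of the adaptation rule dw_i/dt = R(1 - w_i) - v_i w_i."""
    return w + dt * (R * (1.0 - w) - v * w)
```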
[Figure 3 plots: kernel weight (0 to 1) as a function of direction difference (-180 to 180 degrees), with separate excitatory and inhibitory curves in each panel.]
Figure 3: Kernels used in the model. Left: excitatory and inhibitory kernels between
stages 1 and 2; right: excitatory and inhibitory feedback kernels within stage 2.
Activities at stage 2 are highly tuned for the direction of motion. The broad activation of motion signals at stage 1 is directionally sharpened at stage 2 through
the interactions between recurrent excitation and inhibition. Each unit in stage 2
excites itself, and interacts with other units at stage 2 through recurrent inhibition.
This inhibition is maximal for close directions, and falls off as the directions become more dissimilar. The kernels mediating excitatory and inhibitory interactions
within stage 2 are shown in Figure 3. Through these inhibitory interactions the
directional tuning of units at stage 2 is sharpened; through the excitatory feedback
it is ensured that one unit will be maximally active. Activities of units at stage 2
are given by M_i = max(m_i, 0), where the dynamics of m_i are driven by the following feedforward and feedback terms. F_i^+ and F_i^- denote the result of convolving the products of the activities at stage 1 and the corresponding adaptation level, v_j w_j, with the excitatory and inhibitory feedforward kernels respectively. Similarly, B_i^+ and B_i^- denote the convolution of the activities M_j at stage 2 with the feedback kernels.
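The feedforward drive to stage 2 can be sketched as a circular convolution over the direction axis. The Gaussian kernel shapes, their widths, and the centring of the inhibitory kernel on the opposite direction are assumptions standing in for the curves of Figure 3; only the general arrangement (narrow excitation from similar directions, broad inhibition from opposite directions) is taken from the text.

```python
import numpy as np

N_DIRS = 24
OFFSETS = np.arange(N_DIRS) * 15.0                 # angular offsets in degrees

def direction_kernel(center_deg, width_deg):
    """Gaussian weight over the circular angular offset from `center_deg`."""
    d = np.abs(OFFSETS - center_deg)
    d = np.minimum(d, 360.0 - d)                   # wrap to circular distance
    return np.exp(-(d ** 2) / (2.0 * width_deg ** 2))

EXC_FF = direction_kernel(0.0, 30.0)     # excites similar directions (width assumed)
INH_FF = direction_kernel(180.0, 90.0)   # broad inhibition of opposite directions

def circular_conv(signal, kernel):
    """out[i] = sum_s kernel[s] * signal[i - s], with wrap-around indexing."""
    out = np.zeros_like(signal, dtype=float)
    for s in range(N_DIRS):
        out += kernel[s] * np.roll(signal, s)
    return out

def feedforward_terms(v, w):
    """F+ and F-: the adapted stage-1 signal v*w convolved with the two kernels."""
    drive = v * w
    return circular_conv(drive, EXC_FF), circular_conv(drive, INH_FF)
```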
3 SIMULATIONS OF PSYCHOPHYSICAL RESULTS
In the simulations there were 24 units at each stage. The model was simulated
dynamically by integrating the differential equations using a fourth-order Runge-Kutta method with stepsize H = 0.01 time units. The spacing of units in direction
space was 15 degrees at both stages. Spatial interactions were not modeled. In
the simulations shown, a motion stimulus is present until t = 3. Then the motion
stimulus ceases. Activity at stage 2 after t = 3 corresponds to a MAE.
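The integration scheme itself is standard; a generic fourth-order Runge-Kutta stepper with H = 0.01 is sketched below, where the supplied function `f(t, y)` stands in for the right-hand side of the model's stage-2 equations and is an assumed interface rather than the model code itself.

```python
import numpy as np

def rk4_step(f, y, t, h=0.01):
    """One fourth-order Runge-Kutta step of dy/dt = f(t, y) with step size h."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def simulate(f, y0, t_end=6.0, h=0.01):
    """Integrate from t = 0 to t_end (the stimulus would be switched off at t = 3)."""
    y = np.asarray(y0, dtype=float)
    trajectory, t = [y], 0.0
    while t < t_end - 1e-9:
        y = rk4_step(f, y, t, h)
        trajectory.append(y)
        t += h
    return np.array(trajectory)
```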
3.1 UNIDIRECTIONAL MOTION
When adapting to a single direction of motion, the model correctly generates a
motion signal for that particular direction of motion. After offset of the motion
input, the unit coding the opposite direction of motion is activated, as in the MAE.
A simulation of this is shown in Figure 4.
[Figure 4 panels plot activity ('act') against direction (0-360 degrees) over time, for stage 1 and stage 2.]
Figure 4: Simulation of single motion input and resulting MAE. Motion input is
presented until t = 3.
During adaptation the motion stimulus excites the corresponding units at stage 1,
which in turn activate units at stage 2. Due to recurrent inhibition only one unit
at stage 2 remains active (Grossberg, 1973), and thus a very sharp motion signal
is registered at stage 2. During adaptation the weights associated with the units
that receive a motion input decrease. After motion offset, all units receive the same
baseline input. Since the weights of the previously active units are decreased, the
corresponding cells at stage 2 receive less feedforward excitation. At the same time,
the previously active units receive strong feedforward inhibition, since they receive
inhibition from units tuned to very different directions of motion and whose weights
did not decay during adaptation. Similarly, the units coding the opposite direction
of motion as those previously active receive more excitation and less inhibition.
Through recurrent inhibition the unit at stage 2 coding the opposite direction to that
which was active during adaptation is activated after motion offset: this activity
corresponds to the MAE. Thus the MAE is primarily an effect of disinhibition.
3.2 TRANSPARENT MOTION: SIMILAR DIRECTIONS
Two populations of dots moving in different, but very similar, directions lead to
bimodal activation at stage 1. Since the feedforward excitatory kernel is broadly
tuned, and since the directions of motion are similar, the ensuing distribution of
activities at stage 2 is unimodal, peaking halfway between the two directions of
motion. This corresponds to the vector average of the directions of motion of the
two populations of dots. A simulation of this is shown in Figure 5.
During adaptation the units at stage 1 corresponding to the input adapt. As before
this means that after motion offset the previously active units receive less excitatory
input and more inhibitory input. As during adaptation this signal is unimodal. Also,
the unit at stage 2 coding the opposite direction to that of the stimulus receives
[Figure 5 panels plot activity ('act') against direction (degrees) over time, for stage 1 and stage 2.]
Figure 5: Simulation of two close directions of motion. Stage 2 of the network model
registers unitary motion and a unitary MAE.
less inhibition and more excitation. Through the recurrent activities within stage
2, that unit gets maximally activated. A unimodal MAE results.
3.3 TRANSPARENT MOTION: DIFFERENT DIRECTIONS
When the directions of the two populations of dots in a transparent motion display
are sufficiently distinct, the distribution of activities at stage 2 is no longer unimodal,
but bimodal. Thus, recurrent inhibition leads to activation of two units at stage 2.
They correspond to the two stimulus directions. A simulation is shown in Figure 6.
[Figure 6 panels plot activity ('act') against direction (degrees) over time, for stage 1 and stage 2.]
Figure 6: Simulation of two distinct directions of motion. Stage 2 of the model
registers transparent motion during adaptation, but the MAE is unidirectional.
Feedforward inhibition is tuned much broader than feedforward excitation, and as a
consequence the inhibitory signal during adaptation is unimodal, peaking at the unit
of stage 2 coding the opposite direction of the average of the two previously active
directions. Therefore that unit receives the least amount of inhibition after motion
offset. It receives the same activity from stage 1 as units coding nearby directions,
since the corresponding weights at stage 1 did not adapt. Due to recurrent activities
at stage 2 that unit becomes active: non-transparent motion is registered.
4 DISCUSSION
Recently Snowden, Treue, Erickson, and Andersen (1991) have studied the effect
of transparent motion stimuli on neurons in areas V1 and MT of macaque monkey.
They simultaneously presented two populations of dots, one of which was moving
in the preferred direction of the neuron under study, and the other population was
moving in a different direction. They found that neurons in V1 were barely affected
by the second population of dots. Neurons in MT, on the other hand, were inhibited
when the direction of the second population differed from the preferred direction,
and inhibition was maximal when the second population was moving opposite to the
preferred direction. These results support key mechanisms of the model. At stage
1 there is no interaction between opposing directions of motion. The feedforward
inhibition between stages 1 and 2 is maximal between opposite directions. Thus
activities of units at stage 1 parallel neural activities recorded at VI, and activities
of units at stage 2 parallels those neural activities recorded in area MT.
Acknowledgments
This research was carried out under HFSP grant SF-354/94.
References
Grossberg, S. (1973). Contour enhancement, short term memory, and constancies in
reverberating neural networks. Studies in Applied Mathematics, LII, 213-257.
Hiris, E., & Blake, R. (1992). Another perspective in the visual motion aftereffect.
Proceedings of the National Academy of Sciences USA, 89, 9025-9028.
Mather, G. (1980). The movement aftereffect and a distribution-shift model for
coding the direction of visual movement. Perception, 9, 379-392.
Raymond, J. E. (1993). Movement direction analysers: independence and bandwidth. Vision Research, 33(5/6), 767-775.
Snowden, R. J., Treue, S., Erickson, R. G., & Andersen, R. A. (1991). The response of area MT and V1 neurons to transparent motion. Journal of Neuroscience, 11(9), 2768-2785.
Sutherland, N. S. (1961). Figural after-effects and apparent size. Quarterly Journal
of Experimental Psychology, 13, 222-228.
Verstraten, F. A. J., Fredericksen, R. E., & van de Grind, W. A. (1994). Movement
aftereffect of bi-vectorial transparent motion. Vision Research, 34, 349-358.
Williams, D., Tweten, S., & Sekuler, R. (1991). Using metamers to explore motion
perception. Vision Research, 31 (2), 275-286.
Williams, D. W., & Sekuler, R. (1984). Coherent global motion percept from
stochastic local motions. Vision Research, 24 (1), 55-62.
Wohlgemuth, A. (1911). On the aftereffect of seen movement. British Journal of
Psychology (Monograph Supplement), 1, 1-117.
31 | 1,026 | A MODEL OF AUDITORY STREAMING
Susan L. McCabe & Michael J. Denham
Neurodynamics Research Group
School of Computing
University of Plymouth
Plymouth PL4 8AA, U.K.
ABSTRACT
An essential feature of intelligent sensory processing is the ability to
focus on the part of the signal of interest against a background of
distracting signals, and to be able to direct this focus at will. In this
paper the problem of auditory scene segmentation is considered and a
model of the early stages of the process is proposed. The behaviour of
the model is shown to be in agreement with a number of well known
psychophysical results. The principal contribution of this model lies in
demonstrating how streaming might result from interactions between
the tonotopic patterns of activity of input signals and traces of previous
activity which feedback and influence the way in which subsequent
signals are processed.
1 INTRODUCTION
The appropriate segmentation and grouping of incoming sensory signals is important in
enabling an organism to interact effectively with its environment (Llinas, 1991). The
formation of associations between signals, which are considered to arise from the same
external source, allows the organism to recognise significant patterns and relationships
within the signals from each source without being confused by accidental coincidences
between unrelated signals (Bregman, 1990). The intrinsically temporal nature of sound
means that in addition to being able to focus on the signal of interest, perhaps of equal
significance, is the ability to predict how that signal is expected to progress; such
expectations can then be used to facilitate further processing of the signal. It is important
to remember that perception is a creative act (Luria, 1980). The organism creates its
interpretation of the world in response to the current stimuli, within the context of its
current state of alertness, attention, and previous experience. The creative aspects of
perception are exemplified in the auditory system where peripheral processing
decomposes acoustic stimuli. Since the frequency spectra of complex sounds generally
overlap, this poses a complicated problem for the auditory system : which parts of the
signal belong together, and which of the subgroups should be associated with each other
from one moment to the next, given the extra complication of possible discontinuities
and occlusion of sound signals? The process of streaming effectively acts to associate
those sounds emitted from the same source and may be seen as an accomplishment,
rather than the breakdown of some integration mechanism (Bregman, 1990).
The cognitive model of streaming, proposed by (Bregman, 1990), is based primarily on
Gestalt principles such as common fate, proximity, similarity and good continuation.
Streaming is seen as a multistage process, in which an initial, preattentive process
partitions the sensory input, causing successive sounds to be associated depending on the
relationship between pitch proximity and presentation rate. Further refinement of these
sound streams is thought to involve the use of attention and memory in the processing of
single streams over longer time spans.
Recently a number of computational models which implement these concepts of
streaming have been developed. A model of streaming in which pitch trajectories are
used as the basis of sequential grouping is proposed by (Cooke, 1992). In related work,
(Brown, 1992) uses data-driven grouping schema to form complex sound groups from
frequency components with common periodicity and simultaneous onset. Sequential
associations are then developed on the basis of pitch trajectory. An alternative approach
suggests that the coherence of activity within networks of coupled oscillators, may be
interpreted to indicate both simultaneous and sequential groupings (Wang, 1995),
(Brown, 1995), and can, therefore, also model the streaming of complex stimuli. Sounds
belonging to the same stream, are distinguished by synchronous activity and the
relationship between frequency proximity and stream formation is modelled by the
degree of coupling between oscillators.
A model, which adheres closely to auditory physiology, has been proposed by (Beauvois,
1991). Processing is restricted to two frequency channels and the streaming of pure
tones. The model uses competitive interactions between frequency channels and leaky
integrator model neurons in order to replicate a number of aspects of human
psychophysical behaviour. The model, described here, used Beauvois' work as a starting
point, but has been extended to include multichannel processing of complex signals. It
can account for the relationship streaming and frequency difference and time interval
(Beauvois, 1991), the temporal development and variability of streaming perceptions
(Anstis, 1985), the influence of background organisation on foreground perceptions
(Bregman, 1975), as well as a number of other behavioural results which have been
omitted due to space limitations.
2 THE MODEL
We assume the existence of tonotopic maps, in which frequency is represented as a
distributed pattern of activity across the map. Interactions between the excitatory
tonotopic patterns of activity reflecting stimulus input, and the inhibitory tonotopic
masking patterns, resulting from previous activity, form the basis of the model. In order
to simulate behavioural experiments, the relationship between characteristic frequency
and position across the arrays is determined by equal spacing within the ERB scale
(Glasberg, 1990). The pattern of activation across the tonotopic axis is represented in
terms of a Gaussian function with a time course which reflects the onset-type activity
found frequently within the auditory system.
Input signals therefore take the form:
i(x, t) = c_1 (t - t_onset) e^{-c_2 (t - t_onset)} e^{-(f_c(x) - f_s)^2 / (2σ^2)}     [1]
where i(x, t) is the probability of input activity at position x at time t, c_1 and c_2 are constants, t_onset is the starting time of the signal, f_c(x) is the characteristic frequency at position x, f_s is the stimulus frequency, and σ determines the spread of the activation.
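Equation [1] transcribes directly into code. The ERB-spaced mapping from map position to characteristic frequency is not reproduced, so the characteristic frequencies are passed in as an array; the default constants are taken from the parameter list at the end of the paper, but how their units interact with the ERB scaling is an assumption.

```python
import numpy as np

def input_activity(t, t_onset, fc, f_stim, c1=75.0, c2=100.0, sigma=0.005):
    """Equation [1]:
    i(x, t) = c1 (t - t_onset) exp(-c2 (t - t_onset))
              * exp(-(fc(x) - f_stim)^2 / (2 sigma^2))

    t      : current time (no activity is generated before t_onset)
    fc     : array of characteristic frequencies across the tonotopic axis
    f_stim : stimulus frequency, in the same units as fc
    """
    elapsed = max(t - t_onset, 0.0)
    time_course = c1 * elapsed * np.exp(-c2 * elapsed)           # onset-shaped envelope
    tuning = np.exp(-((fc - f_stim) ** 2) / (2.0 * sigma ** 2))  # frequency tuning
    return time_course * tuning
```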
In models where competitive interactions within a single network are used to model the
streaming process, such as (Beauvois, 1991), it is difficult to see how the organisation of
background sounds can be used to improve foreground perceptions (Bregman, 1975)
since the strengthening of one stream generally serves to weaken others. To overcome
this problem, the model of preattentive streaming proposed here consists of two
interacting networks, the foreground and background networks, F and B; illustrated in
figure 1. The output from F indicates the activity, if any, in the foreground, or attended
stream, and the output from B reflects any other activity. The interaction between the
two eventually ensures that those signals appearing in the output from F, i.e. in the
foreground stream, do not appear in the output from B, the background; and vice versa.
In the model, strengthening of the organisation of the background sounds, results in the
'sharpening' of the foreground stream due to the enhanced inhibition produced by a more
coherent background.
Figure 1 : Connectivity of the Streaming Networks.
Neurons within each array do not interact with each other but simply perform a
summation of their input activity. A simplified neuron model with low-pass filtering of
the inputs, and an output representing the probability of firing, is used:
p(x, t) = σ[ Σ_j v_j(x, t) ],  where σ(y) = 1 / (1 + e^{-y})     [2]
The inputs to the foreground net are:
v_1(x, t) = (1 - dt/τ_1) v_1(x, t - dt) + V_1 · φ(i(x, t)) · dt     [3]
v_2(x, t) = (1 - dt/τ_2) v_2(x, t - dt) + V_2 · φ(mFi(x, t - dt)) · dt     [4]
v_3(x, t) = (1 - dt/τ_3) v_3(x, t - dt) + V_3 · φ(mB(x, t - dt)) · dt     [5]
where x is the position across the array, t is time, dt is the sampling interval, τ_j are time constants which determine the rate of decay of activity, V_j are weights on each of the inputs, and φ(y) is a function used to simulate the stochastic properties of nerve firing, which returns a value of 1 or 0 with probability y.
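A sketch of the update equations [2]-[5] for one time step. The sampling interval, and the assignment of the first three entries of the V and τ lists to these three inputs, are assumptions; φ(y) is implemented as a Bernoulli gate returning 1 with probability y.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(y):
    """Stochastic firing gate: 1 with probability y (clipped to [0, 1]), else 0.
    `y` is expected to be an array across the tonotopic axis."""
    y = np.clip(np.asarray(y, dtype=float), 0.0, 1.0)
    return (rng.random(y.shape) < y).astype(float)

def logistic(y):
    """Equation [2] output nonlinearity: sigma(y) = 1 / (1 + exp(-y))."""
    return 1.0 / (1.0 + np.exp(-y))

def update_foreground_inputs(v1, v2, v3, i_t, mFi_prev, mB_prev,
                             V=(100.0, 5.0, 5.0), tau=(0.05, 0.6, 0.6), dt=0.001):
    """Equations [3]-[5]: leaky integration of the excitatory input and of the
    two inhibitory traces (inverse foreground and background activity)."""
    v1 = (1.0 - dt / tau[0]) * v1 + V[0] * phi(i_t) * dt
    v2 = (1.0 - dt / tau[1]) * v2 + V[1] * phi(mFi_prev) * dt
    v3 = (1.0 - dt / tau[2]) * v3 + V[2] * phi(mB_prev) * dt
    return v1, v2, v3
```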
The output activity pattern in the foreground net and its 'inverse', mF(x, t) and mFi(x, t), are found by:
mF(x, t) = σ[ v_1(x, t) - η(v_2(x, t), n) - η(v_3(x, t), n) ]     [6]
mFi(x, t) = max{ (1/N) Σ_{i=1}^{N} mF(x_i, t - dt) - mF(x, t - dt), 0 }     [7]
where η(v(x, t), n) is the mean of the activity within neighbourhood n of position x at time t, and N is the number of frequency channels. Background inputs are calculated similarly.
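Equations [6] and [7] then combine these traces into the foreground output and its 'inverse'; η(·, n) is read here as a moving average over ±n channels, which is one plausible interpretation of "the mean of the activity within neighbourhood n".

```python
import numpy as np

def neighbourhood_mean(v, n=2):
    """eta(v(x, t), n): mean activity within +/- n channels of each position."""
    out = np.empty_like(v, dtype=float)
    for x in range(len(v)):
        lo, hi = max(0, x - n), min(len(v), x + n + 1)
        out[x] = v[lo:hi].mean()
    return out

def foreground_output(v1, v2, v3, n=2):
    """Equation [6]: mF(x, t) = sigma( v1 - eta(v2, n) - eta(v3, n) )."""
    z = v1 - neighbourhood_mean(v2, n) - neighbourhood_mean(v3, n)
    return 1.0 / (1.0 + np.exp(-z))

def inverse_foreground(mF_prev):
    """Equation [7]: mFi(x, t) = max( mean_i mF(x_i, t-dt) - mF(x, t-dt), 0 )."""
    return np.maximum(mF_prev.mean() - mF_prev, 0.0)
```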
To summarise, the current activity in response to the acoustic stimulus forms an
excitatory input to both the foreground and background streaming arrays, F and B. In
addition, F receives inhibitory inputs reflecting the current background activity, and the
inverse of the current foreground activity. The interplay between the excitatory and
inhibitory activities causes the model to gradually focus the foreground stream and
exclude extraneous stimuli. Since the patterns of inhibitory input reflect the distributed
patterns of activity in the input, the relationship between frequency difference and
streaming, results simply from the graded inhibition produced by these patterns. The
relationship between tone presentation rate and streaming is determined by the time
constants in the model which can be tuned to alter the rate of decay of activity.
To enable comparisons with psychophysical results, we view the judgement of coherence
or streaming made by the model as the difference between the strength of the foreground
response to one set of tones compared to the other. The strength of the response to a
given frequency, Resp(f,t), is a weighted sum of the activity within a window centred on
the frequency:
Resp(f, t) = Σ_{i=-W}^{W} mF(x(f) + i, t) · e^{-i^2 / (2σ^2)}     [8]
where W determines the size of the window centred on position x(f), the position in the map corresponding to frequency f, and σ determines the spread of the weighting function about position x(f).
The degree of coherence between two tones, say f_1 and f_2, is assumed to depend on the difference in strength of the foreground response to the two:
Coh(f_1, f_2, t) = 1 - |Resp(f_1, t) - Resp(f_2, t)| / (Resp(f_1, t) + Resp(f_2, t))     [9]
where Coh(f_1, f_2, t) ranges between 0, when Resp(f_1, t) or Resp(f_2, t) vanishes and the difference between the responses is a maximum, indicating maximum streaming, and 1, when the responses are equal and maximally coherent. Values between these limits are interpreted as the degree of coherence, analogous to the probability of human subjects making a judgement of coherence (Anstis, 1985), (Beauvois, 1991).
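Equations [8] and [9] are straightforward to compute from the foreground output; the window half-width W and the Gaussian spread used below are free parameters chosen for illustration.

```python
import numpy as np

def response_strength(mF, x_f, W=5, spread=2.0):
    """Equation [8]: Gaussian-weighted sum of foreground activity in a window of
    +/- W channels around x_f, the map position corresponding to frequency f."""
    total = 0.0
    for i in range(-W, W + 1):
        x = x_f + i
        if 0 <= x < len(mF):
            total += mF[x] * np.exp(-(i ** 2) / (2.0 * spread ** 2))
    return total

def coherence(mF, x_f1, x_f2, W=5, spread=2.0):
    """Equation [9]: 1 - |R1 - R2| / (R1 + R2); 1 = fully coherent, 0 = streaming."""
    r1 = response_strength(mF, x_f1, W, spread)
    r2 = response_strength(mF, x_f2, W, spread)
    total = r1 + r2
    return (1.0 - abs(r1 - r2) / total) if total > 0.0 else 0.0
```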
3 RESULTS
Experiments exploring the effect of frequency interval and tone presentation rate on streaming are described in (Beauvois, 1991). Subjects were required to listen to an
alternating sequence of tones, ABABAB ... for 15 seconds, and then to judge whether at
the end of the sequence they perceived an oscillating, trill-like, temporally coherent
sequence, or two separate streams, one of interrupted high tones, the other of interrupted
low tones. Their results showed clearly an increasing tendency towards stream
segmentation both with increasing frequency difference between A and B, and
increasing tone presentation rate, results which the model substantially reproduces, as may be seen in Figure 2.
[Figure 2 panels correspond to tone presentation rates of 4.76, 5.88, 7.69, 11.11, and 20 tones/sec; each panel plots percent responses (0-100) against the frequency of tone B (1000-1500 Hz).]
Figure 2 : Mean Psychophysical '0' and Model ,*, Responses to the Stimulus ABAB ...
(A=lOOO Hz, B as indicated along X axis (Hz), tone presentation rates, as shown.)
In investigating the temporal development of stream segmentation, (Anstis, 1985) used a
similar stimulus to the experiment described above, but in this case subjects were
required to indicate continuously whether they were perceiving a coherent or streaming
signal. As can be seen in figure 3, the model clearly reproduces the principal features
found in their experiments, i.e. the probability of hearing a single, fused, stream declines
during each run, the more rapid the tone presentation rate, the quicker stream
segmentation occurs, and the judgements made were quite variable during each run.
In an experiment to investigate whether the organisation of the background sounds
affects the foreground, subjects were required to judge whether tone A was higher or
lower than B (Bregman, 1975). This judgement was easy when the two tones were
presented in isolation, but performance degraded significantly when the distractor tones,
X, were included. However, when a series of 'captor' tones, C, with frequency close to X
were added, the judgement became easier, and the degree of improvement was inversely
related to the difference in frequency between X and C. In the experiment, subjects
received an initial priming AB stimulus, followed by a set of 9 tones: CCCXABXCC. The frequency of the captor tones was manipulated to investigate how the proximity of
'captor' to 'distractor' tones affected the required AB order judgement.
Figure 3: The Probability of Perceptual Coherence as a Function of Time in Response to Two Alternating Tones. Symbols: '.' 2 tones/s, 'o' 4 tones/s, '+' 8 tones/s.
In order to model this experiment and the effect of priming, an 'attentive' input, focussed
on the region of the map corresponding to the A and B tones, was included. We assume,
as argued by Bregman, that subjects' performance in this task is related to the degree to
which they are able to stream the AB pair separately. His D parameter is a measure of
the degree to which AB/BA order can be discriminated. The model's performance is then
given by the strength of the foreground response to the AB pair as compared to the
distractor tones, and Coh([A B],X) is used to measure this difference. The model exhibits
a similar sensitivity to the distractor/captor frequency difference to that of human
subjects, and it appears that the formation of a coherent background stream allows the
model to distinguish the foreground group more clearly.
[Figure 4 panels: A) the tone sequence plotted as frequency against time; B) values between 0 and 1 plotted against captor frequency (Hz).]
Figure 4: A) Experiment to Demonstrate the Formation of Multiple Streams (Bregman, 1975). B) Model Response: Mean Degree of Coherence to XABX, 'o' Bregman's D Parameter, '+' Model's Judgement of Coherence.
4 DISCUSSION
The model of streaming which we have presented here is essentially a very simple one,
which can, nevertheless, successfully replicate a wide range of psychophysical
experiments. Embodied in the model is the idea that the characteristics of the incoming
sensory signals result in activity which modifies the way in which subsequent incoming
signals are processed. The inhibitory feedback signals effectively comprise expectations
against which later signals are processed. Processing in much of the auditory system
seems to be restricted to processing within frequency 'channels'. In this model, it is
shown how local interactions, restricted almost entirely to within-channel activity, can
form a global computation of stream formation. It is not known where streaming occurs
in the auditory system, but feedback projections both within and between nuclei are
extensive, perhaps allowing an iterative refinement of streams. Longer range projections,
originating from attentive processes or memory, may modify local interactions to
facilitate the extraction of recognised or interesting sounds.
The relationship between streaming and frequency interval, could be modelled by
systematically graded inhibitory weights between frequency channels. However, in the
model this relationship arises directly from the distributed incoming activity patterns,
which seems a more robust and plausible solution, particularly if one takes the need to
cope with developmental changes into account. Although to simplify the simulations
peripheral auditory processing was not included in the model, the activity patterns
assumed as input can be produced by the competitive processing of the output from a
cochlear model.
An important aspect of intelligent sensory processing is the ability to focus on signals of
interest against a background of distracting signals, thereby enabling the perception of
significant temporal patterns. Artificial sensory systems, with similar capabilities, could
act as robust pre-processors for other systems, such as speech recognisers, fault detection
systems, or any other application which required the dynamic extraction and temporal
linking of subsets of the overall signal.
Values Used For Model Parameters
σ = 0.005, c_1 = 75, c_2 = 100, V = [100 5 5 5 5], τ = [0.05 0.6 0.6 0.6 0.6], n = 2, N = 100
References
Anstis, S., Saida, S. (1985) J. Exptl Psych, 11(3), pp. 257-271
Beauvois, M.W., Meddis, R. (1991) J. Exptl Psych, 43A(3), pp. 517-541
Bregman, A.S., Rudnicky, A.I. (1975) J. Exptl Psych, 1(3), pp. 263-267
Bregman, A.S. (1990) 'Auditory scene analysis', MIT Press
Brown, G.J. (1992) University of Sheffield Research Reports, CS-92-22
Brown, G.J., Cooke, M. (1995) submitted to IJCAI workshop on Computational Auditory Scene Analysis
Cooke, M.P. (1992) Computer Speech and Language 6, pp. 153-173
Glasberg, B.R., Moore, B.C.J. (1990) Hearing Research, 47, pp. 103-138
Llinas, R.R., Pare, D. (1991) Neuroscience, 44(3), pp. 521-535
Luria, A. (1980) 'Higher cortical functions in man', NY: Basic
van Noorden, L.P.A.S. (1975) doctoral dissertation, published by Institute for Perception Research, PO Box 513, Eindhoven, NL
Wang, D.L. (1995) in 'Handbook of brain theory and neural networks', MIT Press
distinguish:1 accidental:1 activity:26 strength:4 scene:3 aspect:3 simulate:2 span:1 coh:2 creative:2 peripheral:2 tonotopic:5 belonging:1 across:4 em:1 making:1 restricted:3 gradually:1 meddis:1 behavioural:2 eventually:1 mechanism:1 serf:1 end:1 v2:4 appropriate:1 appearing:1 distinguished:1 neighbourhood:1 alternative:1 existence:1 include:1 graded:2 psychophysical:5 added:1 occurs:2 glasberg:2 abab:1 exhibit:1 separate:1 cochlear:1 relationship:9 difficult:1 trace:1 perform:1 allowing:1 neuron:3 enabling:2 extended:1 variability:1 interacting:1 pair:2 required:5 extensive:1 acoustic:2 coherent:5 subgroup:1 anstis:4 discontinuity:1 able:3 pattern:13 perception:7 exemplified:1 trill:1 max:1 memory:2 overlap:1 representing:1 improve:1 pare:1 inversely:1 temporally:1 pl4:1 axis:2 coupled:1 embodied:1 hif:1 interesting:1 limitation:1 filtering:1 nucleus:1 degree:7 principle:1 systematically:1 cooke:3 periodicity:1 excitatory:3 course:1 looo:1 institute:1 wide:1 focussed:1 leaky:1 distributed:3 van:1 feedback:3 overcome:1 calculated:1 world:1 cortical:1 sensory:6 made:2 refinement:2 simplified:1 cope:1 gestalt:1 reproduces:1 global:1 incoming:4 investigating:1 handbook:1 assumed:2 themodel:1 xi:1 spectrum:1 iterative:1 decomposes:1 neurodynamics:1 nature:1 channel:6 robust:2 interact:2 adheres:1 complex:4 e20:1 priming:2 vj:1 significance:1 spread:2 xab:1 arise:1 ny:1 position:8 xl:2 lie:1 perceptual:1 weighting:1 symbol:1 decay:2 organisation:4 grouping:4 essential:1 workshop:1 sequential:3 effectively:3 ci:1 easier:1 mf:5 fc:1 simply:2 egm:1 lfc:1 aa:1 determines:3 presentation:6 towards:1 oscillator:2 man:1 change:1 included:3 determined:2 perceiving:1 principal:2 pas:1 tendency:1 preattentive:2 indicating:1 arises:1 |
32 | 1,027 | REMAP: Recursive Estimation and
Maximization of A Posteriori
Probabilities - Application to
Transition-Based Connectionist Speech
Recognition
Yochai Konig, Herve Bourlard* and Nelson Morgan
{konig, bourlard, morgan}@icsi.berkeley.edu
International Computer Science Institute
1947 Center Street Berkeley, CA 94704, USA.
Abstract
In this paper, we introduce REMAP, an approach for the training
and estimation of posterior probabilities using a recursive algorithm
that is reminiscent of the EM-based Forward-Backward (Liporace
1982) algorithm for the estimation of sequence likelihoods. Although very general, the method is developed in the context of a
statistical model for transition-based speech recognition using Artificial Neural Networks (ANN) to generate probabilities for Hidden Markov Models (HMMs). In the new approach, we use local
conditional posterior probabilities of transitions to estimate global
posterior probabilities of word sequences. Although we still use
ANNs to estimate posterior probabilities, the network is trained
with targets that are themselves estimates of local posterior probabilities. An initial experimental result shows a significant decrease
in error-rate in comparison to a baseline system.
1
INTRODUCTION
The ultimate goal in speech recognition is to determine the sequence of words that
has been uttered. Classical pattern recognition theory shows that the best possible system (in the sense of minimum probability of error) is the one that chooses
the word sequence with the maximum a posteriori probability (conditioned on the
*Also affiliated with Faculte Polytechnique de Mons, Mons, Belgium
evidence). If word sequence i is represented by the statistical model M_i, and the evidence (which, for the application reported here, is acoustical) is represented by a sequence X = {x_1, ..., x_n, ..., x_N}, then we wish to choose the sequence that corresponds to the largest P(M_i|X). In (Bourlard & Morgan 1994), summarizing earlier work (such as (Bourlard & Wellekens 1989)), we showed that it was possible to compute the global a posteriori probability P(M|X) of a discriminant form of Hidden Markov Model (Discriminant HMM), M, given a sequence of acoustic vectors X. In Discriminant HMMs, the global a posteriori probability P(M|X) is computed as follows: if Γ represents all legal paths (state sequences q_1, q_2, ..., q_N) in M_i, N being the length of the sequence, then
P(M_i|X) = Σ_Γ P(M_i, q_1, q_2, ..., q_N | X)
in which q_n represents the specific state hypothesized at time n, from the set Q = {q^1, ..., q^k, ..., q^K} of all possible HMM states making up all possible models M_i. We can further decompose this into:
P(M_i, q_1, q_2, ..., q_N | X) = P(q_1, q_2, ..., q_N | X) P(M_i | q_1, q_2, ..., q_N, X)
Under the assumptions stated in (Bourlard & Morgan 1994) we can compute
P(q_1, q_2, ..., q_N | X) = Π_{n=1}^{N} p(q_n | q_{n-1}, x_n)
The Discriminant HMM is thus described in terms of conditional transition probabilities p(q_n^l | q_{n-1}, x_n), in which q_n^l stands for the specific state q^l of Q hypothesized at time n, and can be schematically represented as in Figure 1.
Figure 1: An example Discriminant HMM for the word "cat", with transitions labelled P(/k/|/k/, x), P(/ae/|/k/, x), p(/ae/|/ae/, x), P(/t/|/ae/, x), and P(/t/|/t/, x). The variable x refers to a specific acoustic observation x_n at time n.
Finally, given a state sequence we assume the following approximation:
P(M_i | q_1, q_2, ..., q_N, X) ≈ P(M_i | q_1, q_2, ..., q_N)
We can estimate the right side of this last equation from a phonological model (in the case that a given state sequence can belong to two different models). All the required (local) conditional transition probabilities p(q_n^l | q_{n-1}^k, x_n) can be estimated by the Multi-Layer Perceptron (MLP) shown in Figure 2.
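To make the role of these local probabilities concrete, the sketch below chains them into a path score and sums path scores with a forward-style recursion. Here `transition_net(prev_state, x)` is an assumed interface for any estimator (such as the MLP of Figure 2) returning a distribution over current states; the recursion ignores model topology and the phonological term P(M_i | q_1, ..., q_N), so it is an illustration of the decomposition rather than the authors' training procedure.

```python
import numpy as np

def path_posterior(transition_net, states, acoustics, initial_state=0):
    """P(q_1..q_N | X) = prod_{n=1..N} p(q_n | q_{n-1}, x_n), with q_0 fixed."""
    logp, prev = 0.0, initial_state
    for q, x in zip(states, acoustics):
        logp += np.log(transition_net(prev, x)[q] + 1e-300)  # log space avoids underflow
        prev = q
    return float(np.exp(logp))

def all_paths_posterior(transition_net, acoustics, n_states, initial_state=0):
    """Sum over all state paths via the forward recursion
    alpha_n = alpha_{n-1} @ P_n, where P_n[k, l] = p(q_n = l | q_{n-1} = k, x_n)."""
    alpha = np.zeros(n_states)
    alpha[initial_state] = 1.0
    for x in acoustics:
        P_n = np.array([transition_net(k, x) for k in range(n_states)])
        alpha = alpha @ P_n
    return float(alpha.sum())
```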
Recent work at ICSI has provided us with further insight into the discriminant HMM, particularly in light of recent work on transition-based models (Konig & Morgan 1994; Morgan et al. 1994). This new perspective has motivated us to further
develop the original Discriminant HMM theory. The new approach uses posterior
probabilities at both local and global levels and is more discriminant in nature.
In this paper, we introduce the Recursive Estimation-Maximization of A posteriori
Figure 2: An MLP that estimates local conditional transition probabilities P(current state | acoustics, previous state); its inputs are the previous state and the acoustic observations.
Probabilities (REMAP) training algorithm for hybrid HMM/MLP systems. The
proposed algorithm models a probability distribution over all possible transitions
(from all possible states and for all possible time frames n) rather than picking a
single time point as a transition target. Furthermore, the algorithm incrementally
increases the posterior probability of the correct model, while reducing the posterior
probabilities of all other models. Thus, it brings the overall system closer to the
optimal Bayes classifier.
A wide range of discriminant approaches to speech recognition have been studied
by researchers (Katagiri et al. 1991; Bengio et al. 1992; Bourlard et al. 1994). A
significant difficulty that has remained in applying these approaches to continuous
speech recognition has been the requirement to run computationally intensive algorithms on all of the rival sentences. Since this is not generally feasible, compromises
must always be made in practice. For instance, estimates for all rival sentences can
be derived from a list of the "N-best" utterance hypotheses, or by using a fully
connected word model composed of all phonemes.
2 REMAP TRAINING OF THE DISCRIMINANT HMM

2.1 MOTIVATIONS
The discriminant HMM/MLP theory as described above uses transition-based probabilities as the key building block for acoustic recognition. However, it is well known
that estimating transitions accurately is a difficult problem (Glass 1988). Due to
the inertia of the articulators, the boundaries between phones are blurred and overlapped in continuous speech. In our previous hybrid HMM/MLP system, targets
were typically obtained by using a standard forced Viterbi alignment (segmentation). For a transition-based system as defined above, this procedure would thus
yield rigid transition targets, which is not realistic.
Another problem related to the Viterbi-based training of the MLP presented in
Figure 2 and used in Discriminant HMMs, is the lack of coverage of the input space
during training. Indeed, during training (based on hard transitions), the MLP only
processes inputs consisting of "correct" pairs of acoustic vectors and correct previous
state, while in recognition the net should generalize to all possible combinations of
acoustic vectors and previous states, since all possible models and transitions will be
hypothesized for each acoustic input. For example, some hypothesized inputs may
correspond to an impossible condition that has thus never been observed, such as
the acoustics of the temporal center of a vowel in combination with a previous state
that corresponds to a plosive. It is unfortunately possible that the interpolative
capabilities of the network may not be sufficient to give these "impossible" pairs a
sufficiently low probability during recognition.
One possible solution to these problems is to use a full MAP algorithm to find transition probabilities at each frame for all possible transitions by a forward-backward-like algorithm (Liporace 1982), taking all possible paths into account.
2.2 PROBLEM FORMULATION
As described above, global maximum a posteriori training of HMMs should find the
optimal parameter set \Theta maximizing

\prod_{j=1}^{J} P(M_j \mid X_j, \Theta) \qquad (1)

in which M_j represents the Markov model associated with each training utterance
X_j, with j = 1, ..., J.
Although in principle we could use a generalized back-propagation-like gradient
procedure in \Theta to maximize (1) (Bengio et al. 1992), an EM-like algorithm should
have better convergence properties, and could preserve the statistical interpretation of the ANN outputs. In this case, training of the discriminant HMM by a
global MAP criterion requires a solution to the following problem: given a trained
MLP at iteration t providing a parameter set \Theta_t and, consequently, estimates of
P(q_n^k | x_n, q_{n-1}^\ell, \Theta_t), how can we determine new MLP targets that:

1. will be smooth estimates of conditional transition probabilities q_{n-1}^\ell \to q_n^k, for all k, \ell \in [1, K] and all n \in [1, N],

2. when training the MLP for iteration t+1, will lead to new estimates of \Theta_{t+1} and P(q_n^k | x_n, q_{n-1}^\ell, \Theta_{t+1}) that are guaranteed to incrementally increase the global posterior probability P(M_i | X, \Theta)?
In (Bourlard et al. 1994), we prove that a re-estimate of MLP targets that guarantees
convergence to a local maximum of (1) is given by¹:
(2)
where we have estimated the left-hand side using a mapping from the previous
state and the local acoustic data to the current state, thus making the estimator
realizable by an MLP with a local acoustic window .2 Thus, we will want to estimate
1 In most of the following, we consider only one particular training sequence X associated
with one particular model M. It is, however, easy to see that all of our conclusions remain
valid for the case of several training sequences X_j, j = 1, ..., J. A simple way to look
at the problem is to consider all training sequences as a single training sequence obtained
by concatenating all the X_j's with boundary conditions at every possible beginning and
ending point.
²Note that, as done in our previous hybrid HMM/MLP systems, all conditionals on x_n
can be replaced by X_{n-c}^{n+d} = {x_{n-c}, ..., x_n, ..., x_{n+d}} to take some acoustic context into
account.
the transition probability conditioned on the local data (as MLP targets) by using
the transition probability conditioned on all of the data.
In (Bourlard et al. 1994), we further prove that alternating MLP target estimation
(the "estimation" step) and MLP training (the "maximization" step) is guaranteed
to incrementally increase (1) over t.³ The remaining problem is to find an efficient
algorithm to express P(q_n^k | X, q_{n-1}^\ell, M) in terms of P(q_n^k | x_n, q_{n-1}^\ell) so that the next
iteration targets can be found. We have developed several approaches to this estimation, some of which are described in (Bourlard et al. 1994). Currently, we are
implementing this with an efficient recursion that estimates the sum of all possible
paths in a model, for every possible transition at each possible time. From these
values we can compute the desired targets (2) for network training by
P(q_n^k \mid X, M, q_{n-1}^\ell) = \frac{P(M, q_n^k, q_{n-1}^\ell \mid X)}{\sum_{j} P(M, q_n^j, q_{n-1}^\ell \mid X)} \qquad (3)

2.3 REMAP TRAINING ALGORITHM
The general scheme of the REMAP training of hybrid HMM/MLP systems can be
summarized as follows (a schematic sketch is given after the list):

1. Start from some initial net providing P(q_n^k | x_n, q_{n-1}^\ell, \Theta_t), t = 0, for all possible (k, \ell) pairs⁴.

2. Compute MLP targets P(q_n^k | X_j, q_{n-1}^\ell, \Theta_t, M_j) according to (3), for all training sentences X_j associated with HMM M_j, for all possible (k, \ell) state transition pairs in M_j and for all x_n, n = 1, ..., N in X_j (see next point).

3. For every x_n in the training database, train the MLP to minimize the relative entropy between the outputs and targets. See (Bourlard et al., 1994) for more details. This provides us with a new set of parameters \Theta_{t+1}, for t = t + 1.

4. Iterate from 2 until convergence.
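The following sketch is a hypothetical illustration of that control flow (Python pseudocode, not the authors' implementation); `compute_targets` and `train_mlp` are assumed stand-ins for the target recursion of equation (3) and the relative-entropy MLP training, respectively.

```python
from typing import Callable, List, Tuple

def remap_training(mlp,
                   training_data: List[Tuple[object, object]],
                   compute_targets: Callable,
                   train_mlp: Callable,
                   max_iters: int = 20,
                   tol: float = 1e-4):
    """Schematic REMAP loop: alternate target estimation (E) and MLP training (M).

    training_data           : list of (X_j, M_j) pairs, one per training utterance
    compute_targets(mlp, X, M) -> soft transition targets for one utterance
    train_mlp(mlp, data, targets) -> (new_mlp, training_loss)
    """
    prev_loss = float("inf")
    for t in range(max_iters):
        targets = [compute_targets(mlp, X_j, M_j)            # E step (step 2)
                   for X_j, M_j in training_data]
        mlp, loss = train_mlp(mlp, training_data, targets)    # M step (step 3)
        if abs(prev_loss - loss) < tol:                       # step 4
            break
        prev_loss = loss
    return mlp

# Toy usage with stand-in callables, just to show the loop runs:
toy_data = [("X1", "M1"), ("X2", "M2")]
final = remap_training(
    {"params": 0.0}, toy_data,
    compute_targets=lambda mlp, X, M: [0.5, 0.5],
    train_mlp=lambda mlp, data, tg: ({"params": mlp["params"] + 1},
                                     1.0 / (mlp["params"] + 1)),
)
```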
This procedure is thus composed of two steps: an Estimation (E) step, corresponding to step 2 above, and a Maximization (M) step, corresponding to step 3 above.
In this regards, it is reminiscent of the Estimation-Maximization (EM) algorithm
as discussed in (Dempster et al. 1977). However, in the standard EM algorithm,
the M step involves the actual maximization of the likelihood function. In a related
approach, usually referred to as Generalized EM (GEM) algorithm, the M step does
not actually maximize the likelihood but simply increases it (by using, e.g., a gradient procedure). Similarly, REMAP increases the global posterior function during
the M step (in the direction of targets that actually maximize that global function),
rather than actually maximizing it. Recently, a similar approach was suggested for
mapping input sequences to output sequences (Bengio & Frasconi 1995).
3Note here that one "iteration" does not stand for one iteration of the MLP training
but for one estimation-maximization iteration for which a complete MLP training will be
required.
4This can be done, for instance, by training up such a net from a hand-labeled database
like TIMIT or from some initial forward-backward estimator of equivalent local probabilities (usually referred to as "gamma" probabilities in the Baum-Welch procedure).
Table 1: Training and testing on continuous numbers, no syntax, no durational models.

System               Error Rate
DHMM, pre-REMAP      14.9%
1 REMAP iteration    13.6%
2 REMAP iterations   13.2%
3 EXPERIMENTS AND RESULTS
For testing our theory we chose the Numbers'93 corpus. It is a continuous speech
database collected by CSLU at the Oregon Graduate Institute. It consists of numbers spoken naturally over telephone lines on the public-switched network (Cole
et al. 1994). The Numbers'93 database consists of 2167 speech files of spoken numbers produced by 1132 callers. We used 877 of these utterances for training and
657 for cross-validation and testing (200 for cross-validation) saving the remaining
utterances for final testing purposes. There are 36 words in the vocabulary, namely
zero, oh, 1, 2, 3, ... ,20, 30, 40, 50, ... ,100, 1000, a, and, dash, hyphen, and double.
All our nets have 214 inputs: 153 inputs for the acoustic features, and 61 to represent the previous state (one unit for every possible previous state, one state per
phoneme in our case). The acoustic features are combined from 9 frames with 17
features each (RASTA-PLP8 + delta features + delta log gain) computed with an
analysis window of 25 ms computed every 12.5 ms (overlapping windows) and with
a sampling rate of 8 kHz. The nets have 200 hidden units and 61 outputs.
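For concreteness, here is a hypothetical NumPy sketch (not the authors' code) of how such a 214-dimensional input can be assembled from 9 context frames of 17 features plus a 61-dimensional one-hot previous-state encoding; clamping at utterance edges is an assumption made only for this example.

```python
import numpy as np

def build_input(frames, center, prev_state, n_states=61, context=4):
    """Stack 2*context+1 = 9 frames of features around `center` and append a
    one-hot previous-state vector, giving 9*17 + 61 = 214 inputs."""
    n_frames, n_feat = frames.shape
    window = []
    for offset in range(-context, context + 1):
        idx = min(max(center + offset, 0), n_frames - 1)  # clamp at the edges (assumed)
        window.append(frames[idx])
    one_hot = np.zeros(n_states)
    one_hot[prev_state] = 1.0
    return np.concatenate(window + [one_hot])

frames = np.random.randn(100, 17)          # 100 frames of 17 acoustic features each
x = build_input(frames, center=50, prev_state=12)
print(x.shape)                              # (214,)
```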
Our results are summarized in Table 1. The row entitled "DHMM, pre-REMAP"
corresponds to a Discriminant HMM using the same training approach, with hard
targets determined by the first system, and additional inputs to represent the previous state. The improvement in the recognition rate as a result of REMAP iterations
is significant at p < 0.05. However all the experiments were done using acoustic
information alone. Using our (baseline) hybrid system under equal conditions, i.e.,
no duration information and no language information, we get 31.6% word error;
adding the duration information back we get 12.4% word error. We are currently
experimenting with enforcing minimum duration constraints in our framework.
4 CONCLUSIONS
In summary:
• We have a method for MAP training and estimation of sequences.

• This can be used in a new form of hybrid HMM/MLP. Note that recurrent nets or TDNNs could also be used. As with standard HMM/MLP hybrids, the network is used to estimate local posterior probabilities (though in this case they are conditional transition probabilities, that is, state probabilities conditioned on the acoustic data and the previous state). However, in the case of REMAP these nets are trained with probabilistic targets that are themselves estimates of local posterior probabilities.

• Initial experiments demonstrate a significant reduction in error rate for this process.
Acknowledgments
We would like to thank Kristine Ma and Su-Lin Wu for their help with the Numbers'93 database. We also thank OGI, in particular Ron Cole, for providing the
database. We gratefully acknowledge the support of the Office of Naval Research,
URI No. N00014-92-J-1617 (via UCB), the European Commission via ESPRIT
project 20077 (SPRACH), and ICSI and FPMs in general for supporting this work.
References
BENGIO, Y., & P. FRASCONI. 1995. An input output HMM architecture.
In Advances in Neural Information Processing Systems, ed. by G. Tesauro,
D. Touretzky, & T. Leen, volume 7. Cambridge: MIT press.
- - , R. DE MORI, G. FLAMMIA, & R. KOMPE. 1992. Global optimization of a
neural network-hidden Markov model hybrid. IEEE trans. on Neural Networks
3.252-258.
BOURLARD, H., Y. KONIG, & N. MORGAN. 1994. REMAP: Recursive estimation
and maximization of a posteriori probabilities, application to transition-based
connectionist speech recognition. Technical Report TR-94-064, International
Computer Science Institute, Berkeley, CA.
--, & N. MORGAN. 1994. Connectionist Speech Recognition - A Hybrid Approach.
Kluwer Academic Publishers.
--, & C. J. WELLEKENS. 1989. Links between Markov models and multilayer
perceptrons. In Advances in Neural Information Processing Systems 1, ed. by
D.J. Touretzky, 502-510, San Mateo. Morgan Kaufmann.
COLE, R.A., M. FANTY, & T. LANDER. 1994. Telephone speech corpus development at CSLU. In Proceedings Int'l Conference on Spoken Language Processing,
Yokohama, Japan.
DEMPSTER, A. P., N. M. LAIRD, & D. B. RUBIN. 1977. Maximum likelihood
from incomplete data via the EM algorithm. Journal of the Royal Statistical
Society, Series B 34.1-38.
GLASS, J. R., 1988. Finding Acoustic Regularities in Speech: Applications to Phonetic Recognition. MIT dissertation.
KATAGIRI, S., C.H. LEE, & B.H. JUANG. 1991. New discriminative training
algorithms based on the generalized probabilistic descent method. In Proc. of
the IEEE Workshop on Neural Networks for Signal Processing, ed. by B.H.
Juang, S.Y. Kung, & C.A. Kamm, 299-308.
KONIG, Y., & N. MORGAN. 1994. Modeling dynamics in connectionist speech
recognition - the time index model. In Proceedings Int'l Conference on Spoken
Language Processing, 1523-1526, Yokohama, Japan.
LIPORACE, L. A. 1982. Maximum likelihood estimation for multivariate observations of markov sources. IEEE Trans. on Information Theory IT-28.729-734.
MORGAN, N., H. BOURLARD, S. GREENBERG, & H. HERMANSKY. 1994. Stochastic perceptual auditory-event-based models for speech recognition. In Proceedings Int'l Conference on Spoken Language Processing, 1943-1946, Yokohama,
Japan.
Exponentially many local minima for single
neurons
Peter Auer
Manfred K. Warmuth
Mark Herbster
Department of Computer Science
Santa Cruz, California
{pauer,mark,manfred} @cs.ucsc.edu
Abstract
We show that for a single neuron with the logistic function as the transfer
function the number of local minima of the error function based on the
square loss can grow exponentially in the dimension.
1 INTRODUCTION
Consider a single artificial neuron with d inputs. The neuron has d weights w ∈ R^d. The
output of the neuron for an input pattern x ∈ R^d is ŷ = φ(x · w), where φ : R → R
is a transfer function. For a given sequence of training examples ((x_t, y_t))_{1≤t≤m}, each
consisting of a pattern x_t ∈ R^d and a desired output y_t ∈ R, the goal of the training phase
for neural networks consists of minimizing the error function with respect to the weight
vector w ∈ R^d. This function is the sum of the losses between outputs of the neuron and
the desired outputs summed over all training examples. In notation, the error function is
E(w) = \sum_{t=1}^{m} L(y_t, \phi(x_t \cdot w)) ,

where L : R \times R \to [0, \infty) is the loss function.
A common example of a transfer function is the logistic function logistic(z) = 1/(1 + e^{-z}), which
has the bounded range (0, 1). In contrast, the identity function id(z) = z has unbounded
range. One of the most common loss functions is the square loss L(y, ŷ) = (y − ŷ)². Other
examples are the absolute loss |y − ŷ| and the entropic loss y ln(y/ŷ) + (1 − y) ln((1 − y)/(1 − ŷ)).
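To make this setting concrete, the following sketch (an assumed illustration, not code from the paper) evaluates the error function E(w) of a one-dimensional neuron with the logistic transfer function and the square loss on a grid of weights, using the two examples quoted in the caption of Figure 3a below; it should report two local minima, in line with that figure.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def error(w, examples):
    """E(w) = sum_t (y_t - logistic(x_t * w))^2 for a one-dimensional neuron."""
    return sum((y - logistic(x * w)) ** 2 for x, y in examples)

examples = [(10.0, 0.55), (0.7, 0.25)]        # the two examples of Figure 3a
grid = np.linspace(-8.0, 4.0, 2001)
E = np.array([error(w, examples) for w in grid])

# Interior grid points that are strictly lower than both neighbours.
minima = [grid[i] for i in range(1, len(grid) - 1)
          if E[i] < E[i - 1] and E[i] < E[i + 1]]
print(minima)   # roughly one minimum near w = -1.6 and one near w = 0
```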
We show that for the square loss and the logistic function the error function of a single
neuron for n training examples may have ⌊n/d⌋^d local minima. More generally, this holds
for any loss and transfer function for which the composition of the loss function with the
transfer function (in notation L(y, φ(x · w))) is continuous and has bounded range. This
proves that for any transfer function with bounded range exponentially many local minima
can occur when the loss function is the square loss.

Figure 1: Error Function with 25 Local Minima (16 Visible), Generated by 10 Two-Dimensional Examples.
The sequences of examples that we use in our proofs have the property that they are nonrealizable in the sense that there is no weight vector W E R d for which the error function
is zero, i.e. the neuron cannot produce the desired output for all examples. We show with
some minimal assumptions on the loss and transfer functions that for a single neuron there
can be no local minima besides the global minimum if the examples are realizable.
If the transfer function is the logistic function then it has often been suggested in the
literature to use the entropic loss in artificial neural networks in place of the square loss
[BW88, WD88, SLF88, Wat92]. In that case the error function of a single neuron is
convex and thus has only one minimum even in the non-realizable case. We generalize this
observation by defining a matching loss for any differentiable increasing transfer function φ:

L_\phi(y, \hat{y}) = \int_{\phi^{-1}(y)}^{\phi^{-1}(\hat{y})} (\phi(z) - y)\, dz .
The loss is the area depicted in Figure 2a. If φ is the identity function then L_φ is the square
loss; likewise, if φ is the logistic function then L_φ is the entropic loss. For the matching loss,
the gradient descent update for minimizing the error function for a sequence of examples
is simply

w_{new} := w_{old} - \eta \sum_{t=1}^{m} (\phi(x_t \cdot w_{old}) - y_t)\, x_t ,
where η is a positive learning rate. Also the second derivatives are easy to calculate for
this general setting: \partial^2 L_\phi(y_t, \phi(x_t \cdot w)) / \partial w_i \partial w_j = \phi'(x_t \cdot w)\, x_{t,i}\, x_{t,j}. Thus, if H_t(w) is the Hessian
of L_\phi(y_t, \phi(x_t \cdot w)) with respect to w, then v^T H_t(w) v = \phi'(x_t \cdot w)(v \cdot x_t)^2. Thus
H_t is positive semi-definite for any increasing differentiable transfer function. Clearly
\sum_{t=1}^{m} H_t(w) is the Hessian of the error function E(w) for a sequence of m examples and
it is also positive semi-definite. It follows that for any differentiable increasing transfer
function the error function with respect to the matching loss is always convex.

Figure 2: (a) The Matching Loss Function L_φ. (b) The Square Loss becomes Saturated, the Entropic Loss does not.
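As a brief worked check of the claim above (our own derivation, not text from the paper), evaluating the matching-loss integral for the logistic transfer function recovers the entropic loss:

```latex
% Matching loss for \phi(z) = 1/(1+e^{-z}), using \phi^{-1}(u) = \ln\frac{u}{1-u}
% and \int \phi(z)\,dz = \ln(1+e^{z}), so that \ln(1+e^{z}) = -\ln(1-u) at z = \phi^{-1}(u).
\begin{aligned}
L_\phi(y,\hat y)
  &= \int_{\phi^{-1}(y)}^{\phi^{-1}(\hat y)} \bigl(\phi(z)-y\bigr)\,dz
   = \Bigl[\ln(1+e^{z}) - yz\Bigr]_{z=\ln\frac{y}{1-y}}^{z=\ln\frac{\hat y}{1-\hat y}} \\
  &= \Bigl(-\ln(1-\hat y) - y\ln\tfrac{\hat y}{1-\hat y}\Bigr)
   - \Bigl(-\ln(1-y) - y\ln\tfrac{y}{1-y}\Bigr) \\
  &= y\ln\frac{y}{\hat y} + (1-y)\ln\frac{1-y}{1-\hat y},
\end{aligned}
```

which is exactly the entropic loss introduced earlier.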
We show that in the case of one neuron the logistic function paired with the square loss
can lead to exponentially many minima. It is open whether the number of local minima
grows exponentially for some natural data. However there is another problem with the
pairing of the logistic and the square loss that makes it hard to optimize the error function
with gradient based methods. This is the problem of flat regions. Consider one example
(x, y) consisting of a pattern x (such that x is not equal to the all zero vector) and the
desired output y. Then the square loss (logistic(x · w) − y)², for y ∈ [0, 1] and w ∈ R^d,
turns flat as a function of w when ŷ = logistic(x · w) approaches zero or one (for example
see Figure 2b where d = 1 and y = 0). It is easy to see that for all bounded transfer
functions with a finite number of minima and corresponding bounded loss functions, the
same phenomenon occurs. In other words, the composition L(y, φ(x · w)) of the square
loss with any bounded transfer function φ which has a finite number of extrema turns flat as
|x · w| becomes large. Similarly, for multiple examples the error function E(w) as defined
above becomes flat. In flat regions the gradients with respect to the weight vector w are
small, and thus gradient-based updates of the weight vector may have a hard time moving
the weight vector out of these flat regions. This phenomenon can easily be observed in
practice and is sometimes called "saturation" [Hay94]. In contrast, if the logistic function
is paired with the entropic loss (see Figure 2b), then the error function turns flat only at the
global minimum. The same holds for any increasing differentiable transfer function and its
matching loss function.
A number of previous papers discussed conditions necessary and sufficient for multiple
local minima of the error function of single neurons or otherwise small networks [WD88,
SS89, BRS89, Blu89, SS91, GT92]. This previous work only discusses the occurrence of
multiple local minima whereas in this paper we show that the number of such minima can
grow exponentially with the dimension. Also the previous work has mainly been limited
to the demonstration of local minima in networks or neurons that have used the hyperbolic
tangent or logistic function with the square loss. Here we show that exponentially many
minima occur whenever the composition of the loss function with the transfer function is
continuous and bounded.
The paper is outlined as follows. After some preliminaries in the next section, we give formal
statements and proofs of the results mentioned above in Section 3. At first (Section 3.1) we
show that n one-dimensional examples might result in n local minima of the error function
(see e.g. Figure 3a for the error function of two one-dimensional examples). From the local
minima in one dimension it follows easily that n d-dimensional examples might result in
⌊n/d⌋^d local minima of the error function (see Figure 1 and discussion in Section 3.2).

Figure 3: (a) Error Function for the Logistic Transfer Function and the Square Loss with Examples ((10, .55), (.7, .25)). (b) Sets of Minima can be Combined.
We then consider neurons with a bias (Section 4), i.e. we add an additional input that is
clamped to one. The error function for a sequence of examples S = ((x_t, y_t))_{1≤t≤m} is now

E_S(B, w) = \sum_{t=1}^{m} L(y_t, \phi(B + w \cdot x_t)) ,

where B denotes the bias, i.e. the weight of the input that is clamped to one. We can prove
that the error function might have ⌊n/(2d)⌋^d local minima if loss and transfer function are
symmetric. This holds for example for the square loss and the logistic transfer function .
The proofs are omitted due to space constraints. They are given in the full paper [AHW96] ,
together with additional results for general loss and transfer functions.
Finally we show in Section 5 that, with minimal assumptions on transfer and loss functions,
there is only one minimum of the error function if the sequence of examples is realizable
by the neuron.
The essence of the proofs is quite simple. At first observe that if loss and transfer function are
bounded and the domain is unbounded, then there exist areas of saturation where the error
function is essentially flat. Furthermore the error function is "additive" i.e. the error function
produced by examples in S ∪ S' is simply the error function produced by the examples in
S added to the error function produced by the examples in S', E_{S∪S'} = E_S + E_{S'}. Hence
the local minima of E_S remain local minima of E_{S∪S'} if they fall into an area of saturation
of E_{S'}. Similarly, the local minima of E_{S'} remain local minima of E_{S∪S'} as well (see
Figure 3b). In this way sets of local minima can be combined.
2 PRELIMINARIES
We introduce the notion of minimum-containing set which will prove useful for counting
the minima of the error function.
320
P. AUER, M. HERBSTER, M. K. WARMUTH
Definition 2.1 Let f : R^d → R be a continuous function. Then an open and bounded set
U ⊂ R^d is called a minimum-containing set for f if for each w on the boundary of U there
is a w* ∈ U such that f(w*) < f(w).

Obviously any minimum-containing set contains a local minimum of the respective function.
Furthermore each of n disjoint minimum-containing sets contains a distinct local minimum.
Thus it is sufficient to find n disjoint minimum-containing sets in order to show that a
function has at least n local minima.
3 MINIMA FOR NEURONS WITHOUT BIAS
We will consider transfer functions φ and loss functions L which have the following
property:

(P1): The transfer function φ : R → R is non-constant. The loss function L : φ(R) × φ(R) → [0, ∞) has the property that L(y, y) = 0 and L(y, ŷ) > 0 for all y ≠ ŷ ∈ φ(R). Finally, the function L(·, φ(·)) : φ(R) × R → [0, ∞) is continuous and bounded.
3.1 ONE MINIMUM PER EXAMPLE IN ONE DIMENSION
Theorem 3.1 Let φ and L satisfy (P1). Then for all n ≥ 1 there is a sequence of n
examples S = ((x_1, y), ..., (x_n, y)), x_t ∈ R, y ∈ φ(R), such that E_S(w) has n distinct
local minima.
Since L(y, φ(w)) is continuous and non-constant there are w⁻, w*, w⁺ ∈ R such that the
values φ(w⁻), φ(w*), φ(w⁺) are all distinct. Furthermore we can assume without loss
of generality that 0 < w⁻ < w* < w⁺. Now set y = φ(w*). If the error function
L(y, φ(w)) has infinitely many local minima then Theorem 3.1 follows immediately, e.g.
by setting x_1 = ... = x_n = 1. If L(y, φ(w)) has only finitely many minima then
lim_{w→∞} L(y, φ(w)) = L(y, φ(∞)) exists since L(y, φ(w)) is bounded and continuous.
We use this fact in the following lemma. It states that we get a new minimum-containing
set by adding an example in the area of saturation of the error function.
Lemma 3.2 Assume that lim_{w→∞} L(y, φ(w)) exists. Let S = ((x_1, y_1), ..., (x_n, y_n))
be a sequence of examples and 0 < w_1^- < w_1^* < w_1^+ < ... < w_n^- < w_n^* < w_n^+
such that E_S(w_t^-) > E_S(w_t^*) and E_S(w_t^*) < E_S(w_t^+) for t = 1, ..., n. Let S' =
((x_0, y), (x_1, y_1), ..., (x_n, y_n)) where x_0 is sufficiently large. Furthermore let w_0^* = w*/x_0
and w_0^± = w^±/x_0 (where w⁻, w*, w⁺, y = φ(w*) are as above). Then 0 < w_0^- < w_0^* <
w_0^+ < w_1^- < w_1^* < w_1^+ < ... < w_n^- < w_n^* < w_n^+ and
Proof. We have to show that for all x_0 sufficiently large condition (1) is satisfied, i.e. that
(2)
We get
lim_{x_0→∞} E_{S'}(w_0^*) = L(y, φ(w*)) + lim_{x_0→∞} E_S(w*/x_0) = L(y, φ(w*)) + E_S(0),

recalling that w_0^* = w*/x_0 and S' = S ∪ (x_0, y). Analogously

lim_{x_0→∞} E_{S'}(w_0^±) = L(y, φ(w^±)) + E_S(0).

Thus equation (2) holds for t = 0. For t = 1, ..., n we get

lim_{x_0→∞} E_{S'}(w_t^±) = lim_{x_0→∞} L(y, φ(w_t^± x_0)) + E_S(w_t^±) = L(y, φ(∞)) + E_S(w_t^±)

and the analogous identity for w_t^*. Since E_S(w_t^*) < E_S(w_t^±) for t = 1, ..., n, the lemma follows.
□

Proof of Theorem 3.1. The theorem follows by induction from Lemma 3.2 since each
interval (w_t^-, w_t^+) is a minimum-containing set for the error function. □
Remark. Though the proof requires the magnitude of the examples to be arbitrarily large¹,
in practice local minima show up for even moderately sized w (see Figure 3a).
3.2 CURSE OF DIMENSIONALITY: THE NUMBER OF MINIMA MIGHT GROW EXPONENTIALLY WITH THE DIMENSION
We show how the 1-dimensional minima of Theorem 3.1 can be combined to obtain d-dimensional minima.
Lemma 3.3 Let f : R → R be a continuous function with n disjoint minimum-containing
sets U_1, ..., U_n. Then the sets U_{t_1} × ... × U_{t_d}, t_j ∈ {1, ..., n}, are n^d disjoint minimum-containing sets for the function g : R^d → R, g(x_1, ..., x_d) = f(x_1) + ... + f(x_d).
Proof. Omitted. □
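For intuition, the following sketch (an assumed toy illustration, not from the paper) brute-force checks the d = 2 case of Lemma 3.3 on a grid: for a one-dimensional f with two local minima, g(x_1, x_2) = f(x_1) + f(x_2) has 2² = 4 grid-local minima.

```python
import numpy as np

def f(x):
    """A toy one-dimensional function with two local minima (near x = -1 and x = 1)."""
    return (x ** 2 - 1.0) ** 2 + 0.1 * x

def local_minima_1d(grid):
    v = f(grid)
    return [i for i in range(1, len(grid) - 1) if v[i] < v[i - 1] and v[i] < v[i + 1]]

def local_minima_2d(grid):
    """Count grid points of g(x1, x2) = f(x1) + f(x2) lower than all 4 axis neighbours."""
    v = f(grid)
    g = v[:, None] + v[None, :]
    count = 0
    for i in range(1, len(grid) - 1):
        for j in range(1, len(grid) - 1):
            if (g[i, j] < g[i - 1, j] and g[i, j] < g[i + 1, j] and
                    g[i, j] < g[i, j - 1] and g[i, j] < g[i, j + 1]):
                count += 1
    return count

grid = np.linspace(-2.0, 2.0, 201)
print(len(local_minima_1d(grid)))   # 2 minima for f
print(local_minima_2d(grid))        # 4 = 2^2 minima for g, as Lemma 3.3 predicts
```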
Theorem 3.4 Let φ and L satisfy (P1). Then for all n ≥ 1 there is a sequence of examples
S = ((x_1, y), ..., (x_n, y)), x_t ∈ R^d, y ∈ φ(R), such that E_S(w) has ⌊n/d⌋^d distinct local
minima.
Proof. By Lemma 3.2 there exists a sequence of one-dimensional examples S' =
((x_1, y), ..., (x_{⌊n/d⌋}, y)) such that E_{S'}(w) has ⌊n/d⌋ disjoint minimum-containing sets.
Thus by Lemma 3.3 the error function E_S(w) has ⌊n/d⌋^d disjoint minimum-containing
sets where S = (((x_1, 0, ..., 0), y), ..., ((x_{⌊n/d⌋}, 0, ..., 0), y), ..., ((0, ..., 0, x_1), y), ...,
((0, ..., 0, x_{⌊n/d⌋}), y)). □
4 MINIMA FOR NEURONS WITH A BIAS
Theorem 4.1 Let the transfer function φ and the loss function L satisfy φ(B_0 + z) − φ_0 =
φ_0 − φ(B_0 − z) and L(φ_0 + y, φ_0 + ŷ) = L(φ_0 − y, φ_0 − ŷ) for some B_0, φ_0 ∈ R and all
z ∈ R, y, ŷ ∈ φ(R). Furthermore let φ have a continuous second derivative and assume
that the first derivative of φ at B_0 is non-zero. At last let ∂²L(y, ŷ)/∂ŷ² be continuous in y
and ŷ, L(y, y) = 0 for all y ∈ φ(R), and (∂²L(y, ŷ)/∂ŷ²)(φ_0, φ_0) > 0. Then for all n ≥ 1
there is a sequence of examples S = ((x_1, y_1), ..., (x_n, y_n)), x_t ∈ R^d, y_t ∈ φ(R), such
that E_S(B, w) has ⌊n/(2d)⌋^d distinct local minima.
Note that the square loss along with either the hyperbolic tangent or the logistic transfer function
satisfies the conditions of the theorem.
IThere is a parallel proof where the magnitudes of the examples may be arbitrarily small.
5 ONE MINIMUM IN THE REALIZABLE CASE
We show that when transfer and loss function are monotone and the examples are realizable
then there is only a single minimal surface. A sequence of examples S is realizable if
E_S(w) = 0 for some w ∈ R^d.

Theorem 5.1 Let φ and L satisfy (P1). Furthermore let φ be monotone and L such that
L(y, y + r_1) ≤ L(y, y + r_2) for 0 ≤ r_1 ≤ r_2 or 0 ≥ r_1 ≥ r_2. Assume that for some
sequence of examples S there is a weight vector w_0 ∈ R^d such that E_S(w_0) = 0. Then for
each w_1 ∈ R^d the function h(a) = E_S((1 − a)w_0 + a w_1) is increasing for a ≥ 0.

Thus each minimum w_1 can be connected with w_0 by the line segment w_0 w_1 such that
E_S(w) = 0 for all w on w_0 w_1.
Proof of Theorem 5.1. Let S = ((x_1, y_1), ..., (x_n, y_n)). Then h(a) =
\sum_{t=1}^{n} L(y_t, φ(w_0 · x_t + a(w_1 − w_0) · x_t)). Since y_t = φ(w_0 · x_t) it suffices to show that
L(φ(z), φ(z + ar)) is monotonically increasing in a ≥ 0 for all z, r ∈ R. Let 0 ≤ a_1 ≤ a_2.
Since φ is monotone we get φ(z + a_1 r) = φ(z) + r_1, φ(z + a_2 r) = φ(z) + r_2 where
0 ≤ r_1 ≤ r_2 or 0 ≥ r_1 ≥ r_2. Thus L(φ(z), φ(z + a_1 r)) ≤ L(φ(z), φ(z + a_2 r)). □
Acknowledgments
We thank Mike Dooley, Andrew Klinger and Eduardo Sontag for valuable discussions. Peter Auer
gratefully acknowledges support from the FWF, Austria, under grant J01028-MAT. Mark Herbster
and Manfred Warmuth were supported by NSF grant IRI-9123692.
References
[AHW96] P. Auer, M. Herbster, and M. K. Warmuth. Exponentially many local minima for single
neurons. Technical Report UCSC-CRL-96-1, Univ. of Calif. Computer Research Lab,
Santa Cruz, CA, 1996. In preperation.
[Blu89]
E.K. Blum. Approximation of boolean functions by sigmoidal networks: Part i: Xor and
other two-variable functions . Neural Computation, 1:532-540, February 1989.
[BRS89]
M.L. Brady, R. Raghavan, and J. Slawny. Back propagation fails to separate where
perceptrons succeed. IEEE Transactions On Circuits and Systems, 36(5):665-674, May
1989.
[BW88]
E. Baum and F. Wilczek. Supervised learning of probability distributions by neural
networks. In D.Z. Anderson, editor, Neural Information Processing Systems, pages 52-61, New York, 1988. American Institute of Physics.
[GT92]
Marco Gori and Alberto Tesi. On the problem of local minima in backpropagation. IEEE
Transaction on Pattern Analysis and Machine Intelligence, 14(1):76-86, 1992.
[Hay94]
S. Haykin. Neural Networks: a Comprehensive Foundation. Macmillan, New York, NY,
1994.
[SLF88]
S. A. Solla, E. Levin, and M. Fleisher. Accelerated learning in layered neural networks.
Complex Systems, 2:625-639,1988.
[SS89]
E.D. Sontag and H.l. Sussmann. Backpropagation can give rise to spurious local minima
even for networks without hidden layers. Complex Systems, 3(1):91-106, February 1989.
[SS91]
E.D. Sontag and H.l. Sussmann. Back propagation separates where perceptrons do. Neural
Networks,4(3),1991.
[Wat92]
R. L. Watrous. A comparison between squared error and relative entropy metrics using
several optimization algorithms. Complex Systems, 6:495-505, 1992.
[WD88]
B.S. Wittner and J. S. Denker. Strategies for teaching layered networks classification tasks.
In D.Z. Anderson, editor, Neural Information Processing Systems, pages 850-859, New
York, 1988. American Institute of Physics.
A Practical Monte Carlo Implementation
of Bayesian Learning
Carl Edward Rasmussen
Department of Computer Science
University of Toronto
Toronto, Ontario, M5S 1A4, Canada
carl@cs.toronto.edu
Abstract
A practical method for Bayesian training of feed-forward neural
networks using sophisticated Monte Carlo methods is presented
and evaluated. In reasonably small amounts of computer time this
approach outperforms other state-of-the-art methods on 5 data-limited tasks from real world domains.
1 INTRODUCTION
Bayesian learning uses a prior on model parameters, combines this with information
from a training set , and then integrates over the resulting posterior to make predictions. With this approach, we can use large networks without fear of overfitting,
allowing us to capture more structure in the data, thus improving prediction accuracy and eliminating the tedious search (often performed using cross validation) for
the model complexity that optimises the bias/variance tradeoff. In this approach
the size of the model is limited only by computational considerations.
The application of Bayesian learning to neural networks has been pioneered by
MacKay (1992), who uses a Gaussian approximation to the posterior weight distribution. However, the Gaussian approximation is poor because of multiple modes in
the posterior. Even locally around a mode the accuracy of the Gaussian approximation is questionable, especially when the model is large compared to the amount
of training data.
Here I present and test a Monte Carlo method (Neal, 1995) which avoids the
Gaussian approximation. The implementation is complicated, but the user is not required to have extensive knowledge about the algorithm. Thus, the implementation
represents a practical tool for learning in neural nets.
1.1 THE PREDICTION TASK

The training data consists of n examples in the form of inputs x = {x^{(i)}} and
corresponding outputs y = {y^{(i)}} where i = 1 ... n. For simplicity we consider
only real-valued scalar outputs. The network is parametrised by weights w, and
hyperparameters h that control the distributions for weights, playing a role similar
to that of conventional weight decay. Weights and hyperparameters are collectively
termed θ, and the network function is written as F_θ(x), although the function value
is only indirectly dependent on the hyperparameters (through the weights).
Bayes' rule gives the posterior distribution for the parameters in terms of the likelihood, p(y|x, θ), and prior, p(θ):

p(\theta \mid x, y) = \frac{p(\theta)\, p(y \mid x, \theta)}{p(y \mid x)}
To minimize the expected squared error on an unseen test case with input x^{(n+1)},
we use the mean prediction

\hat{y}^{(n+1)} = \int F_\theta(x^{(n+1)})\, p(\theta \mid x, y)\, d\theta \qquad (1)
2 MONTE CARLO SAMPLING
The following implementation is due to Neal (1995). The network weights are
updated using the hybrid Monte Carlo method (Duane et al. 1987). This method
combines the Metropolis algorithm with dynamical simulation. This helps to avoid
the random walk behavior of simple forms of Metropolis, which is essential if we
wish to explore weight space efficiently. The hyperparameters are updated using
Gibbs sampling.
2.1 NETWORK SPECIFICATION
The networks used here are always of the same form: a single linear output unit, a
single hidden layer of tanh units and a task dependent number of input units. All
layers are fully connected in a feed forward manner (including direct connections
from input to output). The output and hidden units have biases.
The network priors are specified in a hierarchical manner in terms of hyperparameters; weights of different kinds are divided into groups, each group having its own
prior. The output-bias is given a zero-mean Gaussian prior with a std. dev. of
σ = 1000, so it is effectively unconstrained.
The hidden-biases are given a two-layer prior: the bias b is given a zero-mean
Gaussian prior b ~ N(0, σ²); the value of σ is specified in terms of the precision τ = σ^{-2},
which is given a Gamma prior with mean μ = 400 (corresponding to σ = 0.05) and
shape parameter α = 0.5; the Gamma density is given by p(τ) ~ Gamma(μ, α) ∝
τ^{α/2 − 1} exp(−τα/(2μ)). Note that this type of prior introduces a dependency between
the biases for different hidden units through the common τ. The prior for the
hidden-to-output weights is identical to the prior for the hidden-biases, except that
the variance of these weights under the prior is scaled down by the square root
of the number of hidden units, such that the network output magnitude becomes
independent of the number of hidden units. The noise variance is also given a
Gamma prior with these parameters.
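To make the parametrisation explicit, here is a small sketch of our own (assuming the density form above) that converts the (mean μ, shape α) convention into NumPy's shape/scale convention and draws precision samples:

```python
import numpy as np

def sample_precision(mu, alpha, size, rng):
    """Draw precisions tau ~ Gamma(mu, alpha) in the mean/shape parametrisation
    p(tau) ∝ tau^(alpha/2 - 1) * exp(-tau * alpha / (2 * mu)),
    i.e. a standard Gamma with shape alpha/2 and scale 2*mu/alpha (so E[tau] = mu)."""
    return rng.gamma(shape=alpha / 2.0, scale=2.0 * mu / alpha, size=size)

rng = np.random.default_rng(0)
tau = sample_precision(mu=400.0, alpha=0.5, size=100_000, rng=rng)
print(tau.mean())                  # close to 400
print(1.0 / np.sqrt(tau.mean()))   # close to sigma = 0.05
```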
The input-to-hidden weights are given a three-layer prior: again each weight is
given a zero-mean Gaussian prior w ~ N(0, σ²); the corresponding precision for
the weights out of input unit i is given a Gamma prior with mean μ and shape
parameter α_1 = 0.5: τ_i ~ Gamma(μ, α_1). The mean μ is determined on the top
level by a Gamma distribution with mean 400 and shape parameter α_0 = 1:
μ ~ Gamma(400, α_0). The direct input-to-output connections are also given this prior.
The above-mentioned 3 layer prior incorporates the idea of Automatic Relevance
Determination (ARD), due to MacKay and Neal, and discussed in Neal (1995) . The
hyperparameters, τ_i, associated with individual inputs can adapt according to the
relevance of the input; for an unimportant input, τ_i can grow very large (governed
by the top-level prior), thus forcing σ_i and the associated weights to vanish.
2.2 MONTE CARLO SPECIFICATION
Sampling from the posterior weight distribution is performed by iteratively updating
the values of the network weights and hyperparameters. Each iteration involves two
components: weight updates and hyperparameter updates. A cursory description
of these steps follows.
2.2.1 Weight Updates
Weight updates are done using the hybrid Monte Carlo method . A fictitious dynamical system is generated by interpreting weights as positions, and augmenting
the weights w with momentum variables p. The purpose of the dynamical system
is to give the weights "inertia" so that slow random walk behaviour can be avoided
during exploration of weight space. The total energy, H, of the system is the sum
of the kinetic energy, K (a function of the momenta), and the potential energy, E.
The potential energy is defined such that p(w) ∝ exp(−E). We sample from the
joint distribution for w and p given by p(w, p) ∝ exp(−E − K), under which the
marginal distribution for w is given by the posterior. A sample of weights from the
posterior can therefore be obtained by simply ignoring the momenta.
Sampling from the joint distribution is achieved by two steps: 1) finding new points
in phase space with near-identical energies H by simulating the dynamical system
using a discretised approximation to Hamiltonian dynamics, and 2) changing the
energy H by doing Gibbs sampling for the momentum variables.
Hamiltonian Dynamics. Hamilton's first order differential equations for H are
approximated by a series of discrete first order steps (specifically by the leapfrog
method). The first derivatives of the network error function enter through the
derivative of the potential energy, and are computed using backpropagation. In
the original version of the hybrid Monte Carlo method the final position is then
accepted or rejected depending on the final energy H* (which is not necessarily
equal to the initial energy H because of the discretisation). Here we use a modified
version that uses an average over a window of states instead. The step size of the
discrete dynamics should be as large as possible while keeping the rejection rate
low. The step sizes are set individually using several heuristic approximations, and
scaled by an overall parameter c. We use L = 200 iterations, a window size of 20
and a step size of c = 0.2 for all simulations.
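The discretised dynamics can be sketched as follows. This is a generic leapfrog integrator with a toy quadratic potential standing in for the network's minus log posterior; the per-parameter step-size heuristics, the acceptance window, and the backpropagation-based gradient of the real model are not reproduced here.

```python
import numpy as np

def leapfrog(w, p, grad_E, eps, L):
    """Simulate L leapfrog steps of Hamiltonian dynamics with potential energy E(w)
    (so dp/dt = -grad_E(w)) and kinetic energy 0.5 * p.p."""
    w, p = w.copy(), p.copy()
    p -= 0.5 * eps * grad_E(w)          # half step for the momenta
    for _ in range(L - 1):
        w += eps * p                    # full step for the positions (weights)
        p -= eps * grad_E(w)            # full step for the momenta
    w += eps * p
    p -= 0.5 * eps * grad_E(w)          # final half step
    return w, p

# Toy potential E(w) = 0.5 * |w|^2; in the paper grad_E comes from backpropagation.
grad_E = lambda w: w
rng = np.random.default_rng(1)
w = rng.normal(size=5)
p = rng.normal(size=5)
w_new, p_new = leapfrog(w, p, grad_E, eps=0.2, L=200)
H0 = 0.5 * w @ w + 0.5 * p @ p
H1 = 0.5 * w_new @ w_new + 0.5 * p_new @ p_new
print(abs(H1 - H0))   # small discretisation error in the total energy
```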
Gibbs Sampling for Momentum Variables. The momentum variables are
updated using a modified version of Gibbs sampling, allowing the energy H to
change. A "persistence" of 0.95 is used; the new value of the momentum is a
weighted sum of the previous value (weight 0.95) and the value obtained by Gibbs
sampling (weight (1 − 0.95²)^{1/2}). With this form of persistence, the momenta
changes approx. 20 times more slowly, thus increasing the "inertia" of the weights,
so as to further help in avoiding random walks. Larger values of the persistence will
further increase the weight inertia, but reduce the rate of exploration of H. The
advantage of increasing the weight inertia in this way rather than by increasing L is
that the hyperparameters are updated at shorter intervals, allowing them to adapt
to the rapidly changing weights.
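In code, the partial momentum refreshment described above amounts to a one-line update (a minimal sketch under the stated persistence of 0.95):

```python
import numpy as np

def refresh_momentum(p, rng, persistence=0.95):
    """Partial Gibbs update of the momenta: keep a fraction `persistence` of the old
    value and mix in fresh unit Gaussian noise so the marginal stays N(0, 1)."""
    noise = rng.standard_normal(p.shape)
    return persistence * p + np.sqrt(1.0 - persistence ** 2) * noise

rng = np.random.default_rng(2)
p = rng.standard_normal(10)
p = refresh_momentum(p, rng)
```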
2.2.2 Hyperparameter Updates
The hyperparameters are updated using Gibbs sampling. The conditional distributions for the hyperparameters given the weights are of the Gamma form, for which
efficient generators exist, except for the top-level hyperparameter in the case of the
3 layer priors used for the weights from the inputs; in this case the conditional
distribution is more complicated and a form of rejection sampling is employed.
2.3
NETWORK TRAINING AND PREDICTION
The network training consists of two levels of initialisation before sampling for
networks used for prediction. At the first level of initialisation the hyperparameters
(variance of the Gaussians) are kept constant at 1, allowing the weights to grow
during 1000 leapfrog iterations. Neglecting this phase can cause the network to get
caught for a long time in a state where weights and hyperparameters are both very
small.
The scheme described above is then invoked and run for as long as desired, eventually producing networks from the posterior distribution. The initial 1/3 of these
nets are discarded, since the algorithm may need time to reach regions of high posterior probability. Networks sampled during the remainder of the run are saved for
making predictions.
The predictions are made using an average of the networks sampled from the posterior as an approximation to the integral in eq. (1). Since the output unit is linear
the final prediction can be seen as coming from a huge (fully connected) ensemble
net with appropriately scaled output weights. All the results reported here were
for ensemble nets with 4000 hidden units. The size of the individual nets is given
by the rule that we want at least as many network parameters as we have training
examples (with a lower limit of 4 hidden units). We hope thereby to be well out of
the underfitting region. Using even larger nets would probably not gain us much
(in the face of the limited training data) and is avoided for computational reasons.
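The resulting predictor is just an average over the saved networks, for example (a minimal sketch, with `net_output` standing in for a forward pass through one sampled network; here it is a toy linear model):

```python
import numpy as np

def net_output(weights, x):
    """Stand-in forward pass of one sampled network (toy linear model for illustration)."""
    return float(weights @ x)

def predict(sampled_weights, x):
    """Approximate the posterior mean prediction of eq. (1) by averaging the outputs of
    the networks sampled from the posterior (after discarding the initial burn-in)."""
    return np.mean([net_output(w, x) for w in sampled_weights])

rng = np.random.default_rng(3)
samples = [rng.normal(size=4) for _ in range(100)]   # pretend posterior samples
print(predict(samples, x=np.ones(4)))
```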
All runs used the parameter values given above. The only check that is necessary
is that the rejection rate stays low, say below 5%; if not, the step size should
be lowered. In all runs reported here, c = 0.2 was adequate. The parameters
concerning the Monte Carlo method and the network priors were all selected based
on intuition and on experience with toy problems. Thus no parameters need to be
set by the user.
3 TESTS
The performance of the algorithm was evaluated by comparing it to other state-ofthe-art methods on 5 real-world regression tasks. All 5 data sets have previously
been studied using a 10-way cross-validation scheme (Quinlan 1993). The tasks
in these domains is to predict price or performance of an object from various discrete and real-valued attributes. For each domain the data is split into two sets
of roughly equal size, one for training and one for testing. The training data is
further subdivided into full-, half-, quarter- and eighth-sized subsets, 15 subsets in
total. Networks are trained on each of these partitions, and evaluated on the large
common test set . On the small training sets, the average performance and one
std. dev. error bars on this estimate are computed.
3.1 ALGORITHMS
The Monte Carlo method was compared to four other algorithms. For the three
neural network methods nets with a single hidden layer and direct input-output
connections were used. The Monte Carlo method was run for 1 hour on each of the
small training sets, and 2,4 and 8 hours respectively on the larger training sets. All
simulations were done on a 200 MHz MIPS R4400 processor. The Gaussian Process
method is described in a companion paper (Williams & Rasmussen 1996).
The Evidence method (MacKay 1992) was used for a network with separate hyperparameters for the direct connections, the weights from individual inputs (ARD),
hidden biases, and output biases. Nets were trained using a conjugate gradient
method, allowing 10000 gradient evaluations (batch) before each of 6 updates of
the hyperparameters. The network Hessian was computed analytically. The value
of the evidence was computed without compensating for network symmetries, since
this can lead to a vastly over-estimated evidence for big networks where the posterior Gaussians from different modes overlap. A large number of nets were trained for
each task, with the number of hidden units computed from the results of previous
nets by the following heuristics: The min and max number of hidden units in the 20%
nets with the highest evidences were found. The new architecture is picked from a
Gaussian (truncated at 0) with mean (max - min)/2 and std. dev. 2 + max - min,
which is thought to give a reasonable trade-off between exploration and exploitation. This procedure is run for 1 hour of cpu time or until more than 1000 nets have
been trained. The final predictions are made from an ensemble of the 20% (but a
maximum of 100) nets with the highest evidence.
An ensemble method using cross-validation to search over a 2-dimensional grid for
the number of hidden units and the value of a single weight decay parameter has
been included, as an attempt to have a thorough version of "common practise".
The weight decay parameter takes on the values 0, 0.01, 0.04, 0.16 , 0.64 and 2.56.
Up to 6 sizes of nets are used, from 0 hidden units (a linear model) up to a number
that gives as many weights as training examples. Networks are trained with a
conjugent gradient method for 10000 epochs on each of these up to 36 networks,
and performance was monitored on a validation set containing 1/3 of the examples,
selected at random. This was repeated 5 times with different random validation
sets, and the architecture and weight decay that did best on average was selected.
The predictions are made from an ensemble of 10 nets with this architecture, trained
on the full training set. This algorithm took several hours of cpu time for the largest
training sets.
The Multivariate Adaptive Regression Splines (MARS) method (Friedman 1991)
was included as a non-neural network approach. It is possible to vary the maximum
number of variables allowed to interact in the additive components of the model.
It is common to allow either pairwise or full interactions. I do not have sufficient
experience with MARS to make this choice. Therefore, I tried both options and
reported for each partition on each domain the best performance based on the
test error, so results as good as the ones reported here might not be obtainable in
practise. All other parameters of MARS were left at their default values. MARS
always required less than 1 minute of cpu time.
(Figure 1 panels: Auto price, Cpu, House, Mpg, Servo; legend with geometric mean over domains: x Monte Carlo 0.283, o Gaussian Evidence 0.364, + Backprop 0.339, * MARS 0.371, Gaussian Process 0.304.)
Figure 1: Squared error on test cases for the five algorithms applied to the five problems.
Errors are normalized with respect to the variance on the test cases. The x-axis gives the
number of training examples; four different set sizes were used on each domain. The error
bars give one std. dev. for the distribution of the mean over training sets. No error bar is
given for the largest size, for which only a single training set was available. Some of the
large error bars are cut off at the top. MARS was unable to run on the smallest partitions
from the Auto price and the servo domains; in these cases the means of the four other
methods were used in the reported geometric mean for MARS.
Table 1: Data Sets

domain       # training cases   # test cases   # binary inputs   # real inputs
Auto Price   80                 79             0                 16
Cpu          104                105            0                 6
House        256                250            1                 12
Mpg          192                200            6                 3
Servo        88                 79             10                2

3.2 PERFORMANCE
The test results are presented in fig . 1. On the servo domain the Monte Carlo
method is uniformly better than all other methods, although the difference should
probably not always be considered statistically significant. The Monte Carlo method
generally does well for the smallest training sets. Note that no single method does
well on all these tasks. The Monte Carlo method is never vastly out-performed by
the other methods.
The geometric mean of the performances over all 5 domains for the 4 different
training set sizes is computed. Assuming a Gaussian distribution of prediction
errors, the log of the error variance can (apart from normalising constants) be
interpreted as the amount of information unexplained by the models. Thus, the
log of the geometric means in fig. 1 give the average information unexplained by
the models. According to this measure the Monte Carlo method does best, closely
followed by the Gaussian Process method . Note that MARS is the worst, even
though the decision between pairwise and full interactions were made on the basis
of the test errors.
4 CONCLUSIONS
I have outlined a black-box Monte Carlo implementation of Bayesian learning in
neural networks, and shown that it has an excellent performance. These results suggest that Monte Carlo based Bayesian methods are serious competitors for practical
prediction tasks on data limited domains.
Acknowledgements
I am grateful to Radford Neal for his generosity with insight and software. This research
was funded by a grant to G. Hinton from the Institute for Robotics and Intelligent Systems.
References
S. Duane, A. D. Kennedy, B. J. Pendleton & D. Roweth (1987) "Hybrid Monte Carlo",
Physics Letters B, vol. 195, pp. 216-222.
J. H. Friedman (1991) "Multivariate adaptive regression splines" (with discussion), Annals
of Statistics, 19, 1-141 (March). Source: http://lib.stat.cmu.edu/general/mars3.5.
D. J. C. MacKay (1992) "A practical Bayesian framework for backpropagation networks",
Neural Computation, vol. 4, pp. 448-472.
R. M. Neal (1995) Bayesian Learning for Neural Networks, PhD thesis, Dept. of Computer
Science, University of Toronto, ftp://ftp.cs.toronto.edu/pub/radford/thesis.ps.Z.
J. R. Quinlan (1993) "Combining instance-based and model-based learning", Proc. ML '93
(ed P. E. Utgoff), San Mateo: Morgan Kaufmann.
C. K. I. Williams & C. E. Rasmussen (1996) "Regression with Gaussian processes", NIPS
8, editors D. Touretzky, M. Mozer and M. Hasselmo (this volume).
35 | 103 | 653
AN ADAPTIVE NETWORK THAT LEARNS
SEQUENCES OF TRANSITIONS
C. L. Winter
Science Applications International Corporation
5151 East Broadway, Suite 900
Tucson, Arizona 85711
ABSTRACT
We describe an adaptive network, TIN2, that learns the transition
function of a sequential system from observations of its behavior. It
integrates two subnets, TIN-I (Winter, Ryan and Turner, 1987) and
TIN-2. TIN-2 constructs state representations from examples of
system behavior, and its dynamics are the main topics of the paper.
TIN-I abstracts transition functions from noisy state representations
and environmental data during training, while in operation it produces
sequences of transitions in response to variations in input. Dynamics
of both nets are based on the Adaptive Resonance Theory of Carpenter
and Grossberg (1987). We give results from an experiment in which
TIN2 learned the behavior of a system that recognizes strings with an even number of 1's.
even number of l's .
INTRODUCTION
Sequential systems respond to variations in their input environment with sequences of
activities. They can be described in two ways. A black box description characterizes a
system as an input-output function, m = B(u), mapping a string of input symbols, u,
into a single output symbol, m. A sequential automaton description characterizes a
system as a sextuple (U, M, S, SO, f, g) where U and M are alphabets of input and output
symbols, S is a set of states, sO is an initial state and f and g are transition and output
functions respectively. The transition function specifies the current state, St, as a
function of the last state and the current input, Ut,
(1)
In this paper we do not discuss output functions because they are relatively simple. To
further simplify discussion, we restrict ourselves to binary input alphabets, although the
neural net we describe here can easily be extended to accomodate more complex alphabets.
A common engineering problem is to identify and then simulate the functionality of a
system from observations of its behavior. Simulation is straightforward when we can
actually observe the internal states of a system, since then the function f can be specified
by learning simple associations among internal states and external inputs. In robotic
systems, for instance, internal states can often be characterized by such parameters as
stepper motor settings, strain gauge values, etc., and so are directly accessible. Artificial
neural systems have been found useful in such simulations because they can associate
large, possibly noisy state space and input variables with state and output variables (Tolat
and Widrow, 1988; Winter, Ryan and Turner, 1987).
Unfortunately, in many interesting cases we must base simulations on a limited set of
examples of a system's black box behavior because its internal workings are
unobservable. The black box description is not, by itself, much use as a simulation tool
since usually it cannot be specified without resorting to infinitely large input-output
tables. As an alternative we can try to develop a sequential automaton description of the
system by observing regularities in its black box behavior. Artificial neural systems can
contribute to the development of physical machines dedicated to system identification
because i) frequently state representations must be derived from many noisy input
variables, ii) data must usually be processed in continuous time and iii) the explicit
dynamics of artificial neural systems can be used as a framework for hardware
implementations.
In this paper we give a brief overview of a neural net, TIN2, which learns and processes
state transitions from observations of correct black box behavior when the set of
observations is large enough to characterize the black box as an automaton. The TIN2
net is based on two component networks. Each uses a modified adaptive resonance circuit
(Carpenter and Grossberg, 1987) to associate heterogeneous input patterns. TIN-1
(Winter, Ryan and Turner, 1987) learns and executes transitions when given state
representations. It has been used by itself to simulate systems for which explicit state
representations are available (Winter, 1988a). TIN-2 is a highly parallel, continuous time
implementation of an approach to state representation first outlined by Nerode (1958).
Nerode's approach to system simulation relies upon the fact that every string, u, moves a
machine into a particular state, s(u), once it has been processed. The s(u) state can be
characterized by putting the system initially into s(u) (by processing u) and then
presenting a set of experimental strings, (w1, ..., wn), for further processing.
Experiments consist of observing the output mi = B(u·wi) where · indicates
concatenation. A state can then be represented by the entries in a row of a state
characterization table, C (Table 1). The rows of C are indexed by strings, u, its columns
are indexed by experiments, wi, and its entries are mi. In Table 1 annotations in
parentheses indicate nodes (artificial neurons) and subnetworks of TIN-2 equivalent to the
corresponding C table entry. During experimentation C expands as states are
distinguished from one another. The orchestration of experiments, their selection, the
TABLE 1. C Table Constructed by TIN-2
A.
A.
1
0
10
1 (Node 7)
o(Node 6)
o(Node 1)
o(Node 3)
o(Assembly 1)
1 (Assembly 2)
o(Node 2)
o(Node 9)
o(Node 5)
1 (Node 6)
o(Node 2)
1 (Node 1)
o(Node 4)
o(Node 0)
role of teachers and of the environment have been investigated by Arbib and Zeiger
(1969), Arbib and Manes (1974), Gold (1972 and 1978) and Angluin (1987) to name a
few. TIN-2 provides an architecture in which C can be embedded and expanded as
necessary. Collections of nodes within TIN-2 learn to associate triples (mi, u, wi) so that
inputting u later results in the output of the representation (m1, ..., mn) of the state
associated with u.
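A minimal sketch of this table-building idea, assuming a toy black box B and illustrative choices of strings and experiments (the TIN-2 circuitry itself is not modelled here):

# Sketch: building a state characterization table C for a black-box machine B.
# Rows are indexed by strings u, columns by experiment strings w, and each
# entry is the black-box output B(u + w) (string concatenation).
def B(s):
    # Illustrative black box: accepts strings with an even number of 1's
    # and an even number of 0's (the machine used in the example below).
    return int(s.count("1") % 2 == 0 and s.count("0") % 2 == 0)

rows = ["", "1", "0", "10"]        # strings u ("" plays the role of lambda)
experiments = ["", "0", "1"]       # experiment strings w (illustrative set)

C = {u: {w: B(u + w) for w in experiments} for u in rows}

# Two strings are treated as the same state when their rows agree on every
# experiment; a new experiment is added when it tells two rows apart.
for u in rows:
    print(repr(u), [C[u][w] for w in experiments])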
TIN-2
TIN-2 is composed of separate assemblies of nodes whose dynamics are such that each
assembly comes to correspond to a column in the state characterization table C. Thus we
call them column-assemblies. Competition among column-assemblies guarantees that
nodes of only one assembly, say the ith, learn to respond to experimental pattern wi.
Hence column-assemblies can be labelled w1, w2 and so on, but since labelings are not
assigned ahead of time, arbitrarily large sets of experiments can be learned.
The theory of adaptive resonance is implemented in TIN-2 column-assemblies through
partitioned adaptive resonance circuits (cf. Ryan, Winter and Turner, 1987). Adaptive
resonance circuits (Grossberg and Carpenter, 1987; Ryan and Winter, 1987) are composed
of four collections of nodes: Input, Comparison (FI), Recognition (F2) and Reset. In
TIN-2 Input, Comparison and Reset are split into disjoint m,.u and ~ partitions. The net
runs in either training or operational mode, and can move from one to the other as
required. The training dynamics of the circuit are such that an F2 node is stimulated by
the overall triple (m, u, w), but can be inhibited by a mismatch with any component.
During operation, input of u recalls the state representation s(u) = (m1, ..., mn).
Node activity for the kth F1 partition, F1,k, k = m, u, w, is governed by (2).
Here t < 1 scales time, Ii,k is the value of the ith input node of partition k, xi,k is
activity in the corresponding node of F1, and f is a sigmoid function with range [0, 1].
The elements of I are either 1, -1 or 0. The dynamics of the TIN-2 circuit are such that 0
indicates the absence of a symbol, while 1 and -1 represent elements of a binary alphabet.
The adaptive feedback filter, T, is a matrix (Tji) whose elements, after training, are also
1, -1 or 0.
Activity, yj, in the jth F2 node is driven by
dyj/dt = -yj + f[ Σ_{u∈F1,u} Buj h(xu) + Σ_{m∈F1,m} Bmj h(xm) ] - 4[ Σ_{l≠j} f(yl) + Ru,j + Rw ].   (3)
The feedforward filter B is composed of matrices (Buj), (Bmj) and (Bw) whose elements
are normalized to the size of the patterns memorized. Note that (Bw) is the same for
every node in a given column-assembly, i.e. the rows of (Bw) are all the same. Hence all
nodes within a column-assembly learn to respond to the same experimental pattern, w,
and it is in this sense that an assembly evolves to become equivalent to a column in table
C. During training the sum Σ_{l≠j} f(yl) in (3) runs through the recognition nodes of all
TIN-2 column-assemblies. Thus, during training only one F2 node, say the Jth, can be
active at a time across all assemblies. In operation, on the other hand, we remove
inhibition due to nodes in other assemblies so that at any time one node in each
column-assembly can be active, and an entire state representation can be recalled.
The Reset terms Ru,j and Rw in (3) actively inhibit nodes of F2 when mismatches
between memory and input occur. Ruj is specific to the jth F2 node.
dRu,j/dt = -Ru,j + f(yj) f(v ||Iu|| - ||PI,u||).   (4)
Rw affects all F2 nodes in a column-assembly and is driven by
dRw/dt = -Rw + [ Σ_{j∈F2} f(yj) ] f(v ||Iw|| - ||PI,w||).   (5)
v < 1 is a vigilance parameter (Carpenter and Grossberg, 1987): for either (4) or (5), R > 0
at equilibrium just when the intersection between memory and input, PI = T ∩ I, is
relatively small, i.e. R > 0 when v ||I|| > ||PI||. When the system is in operation, we
fix Rw = 0 and input the pattern Iw = 0. To recall the row in table C indexed by u, we
input u to all column-assemblies, and at equilibrium xi,m = Σ_{j∈F2} Tji f(yj). Thus xi,m
represents the memory of the element in C corresponding to u and the column in C with
the same label as the column-assembly. Winter (1988b) discusses recall dynamics in
more detail.
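A rough behavioral sketch of the reset test at equilibrium, with illustrative patterns and vigilance value (the differential dynamics of (4)-(5) are not modelled):

# Sketch: equilibrium form of the vigilance/reset test. A reset fires when the
# overlap between the stored pattern T and the input I is too small relative
# to the input, i.e. when v * |I| > |P| with P = T intersect I.
def reset_fires(T, I, vigilance=0.8):
    # T, I are lists over {-1, 0, +1}; |x| counts nonzero entries, and the
    # overlap keeps positions where memory and input agree (and are nonzero).
    overlap = sum(1 for t, i in zip(T, I) if t == i and t != 0)
    input_size = sum(1 for i in I if i != 0)
    return vigilance * input_size > overlap

T = [1, -1, 1, 0, 0, 0, 0, 1]   # stored feedback pattern (illustrative)
I = [1, -1, -1, 0, 0, 0, 0, 1]  # current input pattern (illustrative)
print(reset_fires(T, I))        # True: the mismatch is large enough to reset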
At equilibrium in either training or operational mode only the winning F2 node has yJ ≠ 0,
so Σ_j Tji f(yj) = TJi in (2). Hence xi,k = 0 if TJi = -Ii,k, i.e. if memory and input
mismatch; |xi,k| = 2 if TJi = Ii,k, i.e. when memory and input match; and |xi,k| = 1 if
TJi = 0, Ii,k ≠ 0 or if TJi ≠ 0, Ii,k = 0. The F1 output function h in (3) is defined so
that h(x) = 1 if x > 1, h(x) = -1 if x < -1 and h(x) = 0 if -1 ≤ x ≤ 1. The output pattern
PI = (h(x1), ..., h(xn)) reflects TJ ∩ Ik, as h(xi) ≠ 0 only if TJi = Ii,k.
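This match/mismatch logic can be sketched behaviorally as follows (illustrative patterns; a software abstraction, not the circuit equations):

# Sketch: equilibrium F1 activity and the output function h for the winning
# F2 node J. The magnitude of x is 0 on a mismatch (T_Ji = -I_i), 2 on a
# match (T_Ji = I_i), and 1 when exactly one of the two is zero.
def h(x):
    if x > 1:
        return 1
    if x < -1:
        return -1
    return 0

def f1_output(T_J, I_k):
    # At equilibrium the winning node contributes T_Ji and the input I_i,
    # so h is nonzero only where T_Ji == I_i != 0 (memory and input agree).
    return [h(t + i) for t, i in zip(T_J, I_k)]

T_J = [1, -1, 0, 1, 0]   # feedback memory of the winning node (illustrative)
I_k = [1, 1, 1, 1, 0]    # input pattern on partition k (illustrative)
print(f1_output(T_J, I_k))   # -> [1, 0, 0, 1, 0]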
The adaptive filters (Buj) and (Bmj) store normalized versions of those patterns on F1,u
and F1,m which have stimulated the jth F2 node. The evolution of Bij for i ∈ F1,u or
F1,m is driven by (6). On the other hand, (Bw) stores a normalized version of the
experiment w which labels the entire column-assembly. Thus all nodes in a
column-assembly share a common memory of w, given by (7), where w ∈ F1,w.
The feedback filters (Tuj), (Tmj) and (Tw) store exact memories of patterns on partitions
of F1: (8) for i ∈ F1,u ∪ F1,m, and (9) for i ∈ F1,w. In operation, long-term memory
modification is suspended.
EXPERIMENT
Here we report partial results from an experiment in which TIN-2 learns a state
characterization table for an automaton that recognizes strings containing even numbers of
both 1's and 0's. More details can be found in Winter (1988b). For notational
convenience in this section we will discuss patterns as if they were composed of 1's and
0's, but be aware that inside TIN-2 every 0 symbol is really a -1. Data is provided in the
form of triples (m, u, w) by a teacher; the data set for this example is given in Table 1.
Data were presented to the net in the order shown. The net consisted of three
column-assemblies. Each F2 collection contained ten nodes. Although the strings that
can be processed by an automaton of this type are in principle arbitrarily long, in practice
some limitation on the length of training strings is necessary if for no other reason than
that the memory capacity of a computer is finite. For this simple example Input and F1
partitions contain eight nodes, but in order to have a special symbol to represent λ,
strings are limited to at most six elements. With this restriction the λ symbol can be
distinguished from actual input strings through vigilance criteria. Other solutions to the
problem of representing λ are being investigated, but for now the special eight-bit
symbol, 00000011, is used to represent λ within the strings.
The net was trained using fast-learning (Carpenter and Grossberg, 1987): a triple in Table
1 was presented to the net. and all nodes were allowed to come to their equilibrium values
where they were held for about three long-term time units before the next triple was
presented. Consider the processing that follows presentation of (0, 1, 0), the first datum
in Table 1. The net can obtain equivalents to two C table entries from (0, 1, 0): the entry
in row u = 10, column w = λ, and the entry in row u = 1, column w = 0. The string 10
and the membership value 0 were displayed on the λ assembly's input slabs, and in this
case the 3rd F2 node learned the association among the two patterns. When the pattern
(0, 1, 0) was input to other column-assemblies, one F2 node (in this case the 9th in
column-assembly 1) learned to associate elements of the triple. Of course a side effect of
this was that column-assembly 1 was labelled by w = 0 thereafter. When (1, 1, 1) was
input next, node 9 in column-assembly 1 tried to respond to the new triple, all nodes in
column-assembly 1 were then inhibited by a mismatch on w, and finally node 1 on
column-assembly 2 learned (1, 1, 1). From that point on column-assembly 2 was
labelled by 1.
LEARNING TRANSITIONS
The TIN-I net (Winter. Ryan and Turner, 1987) is composed of i) a partitioned adaptive
resonance circuit with dynamics similar to (2) - (9) for learning state transitions and ii) a
Control Circuit which forces transitions once they have been learned. Transitions are
unique in the sense that a previous state and current input completely determine the
current state. The partitioned adaptive resonance circuit has three input fields: one for the
previous state, one for the current input and one for the next state. TIN-l's F2 nodes
learn transitions by associating patterns in the three input fields. Once trained. TIN-l
processes strings sequentially. bit-by-bit.
Adaptive Network That Learns Sequences of Transitions
Figure 1. Training TIN2.
The architecture of TIN2, the net that integrates TIN-2 and TIN-1, is shown in Figure 1.
The system resorts to the TIN-2 nets only to learn transitions. If TIN-2 has learned a C
table in which examples of all transitions appear, TIN-I can easily learn the automaton's
state transitions. A C table contains an example of a transition from state si to state Sj
forced by current input u, if it contains i) a row labelled by a string ui which leaves the
automaton in si after processing and ii) a row labelled by the string ui·u which leaves the
automaton in sj. To teach TIN-1 the transition we simply present ui to the lower TIN-2
in Figure 1, ui·u to the upper TIN-2 net and u to TIN-1.
CONCLUSIONS
We have described a network, TIN-2, which learns the equivalent of state characterization
tables (Gold, 1972). The principle reasons for developing a neural net implementation are
i) neural nets are intrinsically massively parallel and so provide a nice model for systems
that must process large data sets, ii) although in the interests of brevity we have not
stressed the point, neural nets are robust against noisy data, iii) neural nets like the
partitioned adaptive resonance circuit have continuous time activity dynamics and so can
be synchronized with other elements of a larger real-time system through simple scaling
parameters, and iv) the continuous time dynamics and precise architectural specifications
of neural nets provide a blueprint for hardware implementations.
We have also sketched a neural net, TIN2, that learns state transitions by integrating
TIN-2 nets with the TIN-I net (Winter, Ryan and Turner, 1987). When a complete state
characterization table is available from TIN-2, TIN2 can be taught transitions from
examples of system behavior. However, the ultimate goal of a net like this lies in
developing a system that "or,rates acceptably" with a partial state characterization table.
To operate acceptably TIN must perform transitions correctly when it can, recognize
when it cannot, signal for new data when it is required and expand the state charcterization
taole when it must. Happily TIN2 already provides the first two capabilities, and
combinations of TIN2 with rule-based controllers and with auxiliary control networks are
currently being explored as approachws to satisfy the latter (Winter, 1988b).
Nets like TIN2 may eventually prove useful as control elements in physical machines
because sequential automata can respond to unpredictable environments with a wide range
of behavior. Even very simple automata can repeat activities and make decisions based
upon environmental variations. Currently, most physical machines that make decisions
are dedicated to a single task; applying one to a new task requires re-programming by a
skilled technician. A programmer must, furthermore, determine a priori precisely which
machine state - environment associations are significant enough to warrant insertion in
the control structure of a given machine. TIN2, on the other hand, is trained, not
programmed, and can abstract significant associations from noisy input. It is a "blank
slate" that learns the structure of a particular sequential machine from examples.
References
D. Angluin, "Learning Regular Sets from Queries and Counterexamples", Information
and Computation, 75 (2), 1987.
M. A. Arbib and E. G. Manes, "Machines in a Category: an Expository Introduction",
SIAM Review, 16 (2), 1974.
M. A. Arbib and H. P. Zeiger, "On the Relevance of Abstract Algebra to Control
Theory", Automatica, 5, 1969.
G. Carpenter and S. Grossberg, "A Massively Parallel Architecture for a Self-Organizing
Neural Pattern Recognition Machine", Comput. Vision Graphics Image Process. 37 (54),
1987.
E. M. Gold, "System Identification Via State Characterization", Automatica, 8, 1972.
E. M. Gold, "Complexity of Automaton Identification from Given Data", Info. and
Control, 37, 1978.
A. Neroda, "Linear Automaton Transformations", Proc. Am. Math. Soc., 9, 1958.
T. W. Ryan and C. L. Winter, "Variations on Adaptive Resonance", in Proc. 1st IntI.
Conf. on Neural Networks, IEEE, 1987.
T. W. Ryan, C. L. Winter and C. J. Turner, "Dynamic Control of an Artificial Neural
System: the Property Inheritance Network", Appl. Optics, 261 (23) 1987.
V. V. Tolat and B. Widrow, "An Adaptive Neural Net Controller with Visual Inputs",
Neural Networks, 1, Suppl. 1, 1988.
C. L. Winter, T. W. Ryan and C. J. Turner, "TIN: A Trainable Inference Network", in
Proc. 1st Inti. Conf. on Neural Networks, 1987.
C. L. Winter, "An Adaptive Network that Flees Pursuit", Neural Networks, I, Supp.l,
1988a.
C. L. Winter, "TIN2: An Adaptive Controller", SAIC Tech. Rpt., SAIC, 5151 E.
Broadway, Tucson, AZ, 85711, 1988b.
Part V
Implementation
36 | 1,030 | Neuron-MOS Temporal Winner Search
Hardware for Fully-Parallel Data
Processing
Tadashi SHIBATA, Tsutomu NAKAI, Tatsuo MORIMOTO
Ryu KAIHARA, Takeo YAMASHITA, and Tadahiro OHMI
Department of Electronic Engineering
Tohoku University
Aza-Aoba, Aramaki, Aobaku, Sendai 980-77 JAPAN
Abstract
A unique architecture of winner search hardware has been developed using a novel neuron-like high functionality device called
Neuron MOS transistor (or vMOS in short) [1,2] as a key circuit
element. The circuits developed in this work can find the location
of the maximum (or minimum) signal among a number of input
data on the continuous-time basis, thus enabling real-time winner
tracking as well as fully-parallel sorting of multiple input data. We
have developed two circuit schemes. One is an ensemble of self-loop-selecting vMOS ring oscillators finding the winner as an oscillating node. The other is an ensemble of vMOS variable threshold
inverters receiving a common ramp-voltage for competitive excitation where data sorting is conducted through consecutive winner
search actions. Test circuits were fabricated by a double-polysilicon
CMOS process and their operation has been experimentally verified.
1
INTRODUCTION
Search for the largest (or the smallest) among a number of input data, Le., the
winner-take-all (WTA) action, is an essential part of intelligent data processing
such as data retrieval in associative memories [3], vector quantization circuits [4],
Kohonen's self-organizing maps [5] etc. In addition to the maximum or minimum
search, data sorting also plays an essential role in a number of signal processing
such as median filtering in image processing, evolutionary algorithms in optimizing
problems [6] and so forth. Usually such data processing is carried out by software
running on general purpose computers, but the computation time increases explosively with the increase in the volume of data. In order to build electronic systems
having a real-time-response capability, the direct implementation of fully parallel
algorithms on the integrated circuits hardware is critically demanded.
A variety of WTA [4, 7, 8] circuits have been implemented so far based on analog
current-mode circuit technologies. A number of cells, each composed of a current
source, competitively share the total current specified by a global current sink and
the winner is identified through the current concentration toward the cell via tacit
positive feedback mechanisms. The circuit implementations using MOSFET's operating in the subthreshold regime [4, 7] are ideal for large scale integration due to
its ultra low power nature. Although they are inherently slow at circuit levels, the
performance at a system level is far superior to digital counterparts owing to the
flexible computing algorithms of analog. In order to achieve a high speed operation, MOSFET's biased at strong inversion are also utilized in Ref. [8]. However,
cost must be traded off for increased power.
What we are presenting in this paper is a unique WTA architecture implemented
by vMOS technology [1,2]. In vMOS circuits the summation of multiples of voltage
signals is conducted on the vMOS floating gate (or better be called "temporary floating gate" when used in a clocked scheme [9]) via charge sharing among capacitors,
and the result of the summation controls the transistor action. The voltage-mode
summation capability of vMOS has been uniquely utilized to produce the WTA
action. No DC current flows for the sum operation itself in contrast to the Kirchhoff sum. In vMOS transistors, however, DC current flows in a CMOS inverter
configuration when the floating gate is biased in the transition region. Therefore
the power consumption is larger than in the subthreshold circuitries. However, the
vMOS WTA's presented in this article will give an opportunity of high speed operation at much less power consumption than current-mode circuitries operating in the
strong inversion mode. In the following we present two kinds of winner search hardware featuring very fast operation. The winner can be tracked in a continuous-time
regime with a detection delay time of about 100 psec, while the sorting of multiple
data is conducted in a fixed frame of time of about 100nsec.
2
NEURON-MOS CONTINUOUS-TIME WTA
Fig. 1(a) shows a schematic circuit diagram of a vMOS continuous-time WTA
for four input signals. Each signal is fed to an input-stage vMOS inverter-A: a
Figure 1: (a) Circuit diagram of the vMOS continuous-time WTA circuit. (b)-(d) Response of VA1-VA4 as a function of the floating-gate potential of vMOS inverter-A.
CMOS inverter in which the common gate is made floating and its potential φFA
is determined via capacitance coupling with three input terminals. V1 (~ V4) and
VR are equally coupled to the floating gate and a small capacitance pulls down the
floating gate to ground. The vMOS inverter-B is designed to turn on when the
number of 1's in its inputs (VA1 ~ VA4) is more than 1. When a feedback loop is
formed as shown in the figure, it becomes a ring oscillator composed of odd numbers
of inverter stages.
When V1 ~ V4 = 0, the circuit is stable with VR = 1 because inverter-A's do not
turn on. This is because the small grounded capacitor pulls down the floating gate
potential φFA a little smaller than its inverting threshold (VDD/2) (see Fig. 1(b)).
If non-zero signals are given to input terminals, more than one inverter-A turns on
(see Fig. 1(c)) and the inverter-B also turns on, thus initiating the transition of VR
from VDD to 0. According to the decrease in VR, some of the inverter-A's turn off
but the inverter-B (number 1 detector) still stays at on-state until the last inverter-A turns off. When the last inverter-A, the one receiving the largest voltage input,
turns off, the inverter-B also turns off and VR begins to increase. As a result, ring
oscillation occurs only in the loop including the largest-input inverter-A (Fig. 1(d)).
In this manner, the winner is identified as an oscillating node. The inverter-B can
be altered to a number "2" detector or a number "3" detector etc. by just reducing
the input voltage to the largest coupling capacitor. Then it is possible for top two
or top three to be winners.
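The winner-identification behavior can be summarized by a simple software abstraction (a sketch assuming equal capacitive coupling and an inverting threshold of VDD/2; it is not a transistor-level model, and the input values are illustrative):

# Sketch: behavioral model of the continuous-time WTA idea. V_R ramps down
# while more than one unit is "on"; each inverter-A turns off when its input
# plus the shared V_R falls below the threshold, so the unit left on is the
# one with the largest input.
def winner(inputs, v_dd=3.0, steps=300):
    v_r = v_dd
    on = [True] * len(inputs)
    for _ in range(steps):
        on = [v + v_r > v_dd for v in inputs]   # simplified turn-on condition
        if sum(on) <= 1:
            break
        v_r -= v_dd / steps                      # inverter-B discharges V_R
    return on.index(True) if any(on) else None

print(winner([0.8, 2.1, 1.4, 0.3]))   # -> 1 (the largest input)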
Figure 2: (a) Measured waveforms of the four-input WTA depicted in Fig. 1(a) (bread-board experiment). (b) Simulation results for the non-oscillating WTA explained in Fig. 3.
Fig. 2(a) demonstrates the measured wave forms of a bread-board test circuit
composed of discrete components for verifying the circuit idea. It is clearly seen that
ring oscillation occurs only at the temporal winner. However, the ring oscillation
increases the power dissipation, and therefore, non-oscillating circuitry would be
preferred. An example of simulation results for such a non-oscillating circuit is
demonstrated in Fig. 2(b).
Fig. 3(a) gives the circuit diagram of a non-oscillating version of the vMOS
Figure 3: (a) Circuit diagram of the non-oscillating-mode WTA. HSPICE simulation results: (b) combinations of R and CEXT for the non-oscillating mode; (c) winner detection delay as a function of capacitance load.
continuous-time WTA. In order to suppress the oscillation, the loop gain is reduced
by removing the two-stage CMOS inverters in front of the inverter-B, and an RC delay
element is inserted in the feedback loop. The small grounded capacitors were removed in inverter-A's. The waveforms demonstrated in Fig. 2(b) are the HSPICE
simulation results with R = 0 and CEXT = 20Cgate (Cgate: input capacitance of an
elemental CMOS inverter = 5.16 fF). The circuit was simulated assuming a typical
double-poly 0.5-μm CMOS process. Fig. 3(b) indicates the combinations of R and
CEXT yielding the non-oscillating mode of operation obtained by HSPICE simulation. It is important to note that if CEXT ≥ 15Cgate, the non-oscillating mode appears
with R = 0. This means the output resistance of the inverter-B plays the role of
R. When the number of inverter-A's is increased, the increased capacitance load
serves as CEXT. Therefore, a WTA having more than 19 input signals can operate in
the non-oscillating mode. Fig. 3(c) represents the detection delay as a function of
CEXT. It is known that the increase in CEXT, and therefore the increase in the number
of input signals to the WTA, does not significantly increase the detection delay and
that the delay is only in the range of 100 to 200 psec.
A photomicrograph of a test circuit of the non-oscillating mode WTA fabricated
by the Tohoku University standard double-polysilicon CMOS process on 3-μm design
rules, and the measurement results, are shown in Fig. 4(a) and (b), respectively.
Figure 4: (a) Photomicrograph of a test circuit for the 4-input continuous-time WTA. Chip size is 800 μm x 500 μm including all peripherals (3-μm rules). The core circuit of Fig. 3(a) occupies approximately 0.12 mm2. (b) Measured wave forms.
3
NEURON-MOS DATA SORTING CIRCUITRY
The elemental idea of this circuit was first proposed at ISSCC '93 [3] as an application of the vMOS WTA circuit. In the present work, a clocked-vMOS technique [9]
was introduced to enhance the accuracy and reliability of vMOS circuit operation
and test circuits were fabricated and their operation has been verified.
Fig. 5(a) shows the circuit diagram of a test circuit for sorting three analog data VA,
VB, and VC, and a photomicrograph of a fabricated test circuit designed on 3-μm
rules is shown in Fig. 5(b). Each input stage is a vMOS inverter: a CMOS inverter
in which the common gate is made floating and its potential φF is determined
by two input voltages via equally-weighted capacitance coupling, namely φF =
(VA + VRAMP)/2. The reset signal forces the floating node to be grounded, thus
cancelling the charge on the vMOS floating gate each time before sorting. This is
quite essential in achieving long-term reliability of vMOS operation. In the second
stage are flip-flop memory cells to store sorting results. The third stage is a circuit
which counts the number of 1's at its three input terminals and outputs the result in
binary code. The concept of the vMOS A/D converter design [10] has been utilized
in the circuit.
Figure 5: (a) Circuit diagram of the vMOS data-sorting circuit ((1) vMOS inverter, (2) data latch, (3) counter). (b) Photomicrograph of a test circuit fabricated by the Tohoku Univ. standard double-polysilicon CMOS process (3-μm rules). Chip size is 1250 μm x 800 μm including all peripherals.
The sorting circuit is activated by ramping up VRAMP from 0 V to VDD. Then the
vMOS inverter receiving the largest input turns on first and the output data of the
counter at this moment (0,0) is latched in the respective memory cells. The counter
output changes to (0,1) after gate delays in the counter and this code is latched
when the vMOS inverter receiving the second largest turns on. Then the counter
counts up to (1,0). In this manner, all the input data are numbered according to
the order of their magnitudes after a ramp voltage scan is completed.
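The sorting action amounts to a ramp scan with a counter latch, which can be sketched in software as follows (assuming equal coupling and a VDD/2 threshold, with illustrative input values):

# Sketch: ramp-scan sorting. Each vMOS inverter turns on when
# (V_A + V_RAMP)/2 crosses the inverting threshold; the counter value at
# that moment is latched, so the largest input gets rank 0, the next 1, ...
def ramp_sort(inputs, v_dd=5.0, threshold=None, steps=1000):
    threshold = v_dd / 2 if threshold is None else threshold
    rank = {}
    counter = 0
    for step in range(steps + 1):
        v_ramp = v_dd * step / steps
        for i, v_a in enumerate(inputs):
            if i not in rank and (v_a + v_ramp) / 2 > threshold:
                rank[i] = counter        # latch the current counter output
                counter += 1             # counter counts up after the latch
    return rank

print(ramp_sort([5.0, 4.0, 2.0]))   # -> {0: 0, 1: 1, 2: 2}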
The measurement results are demonstrated in Fig. 6(a) in comparison with the
HSPICE simulation results. Simulation was carried out on the same architecture
circuit designed on 0.5-μm design rules and operated under a 3 V power supply. For
three analog input voltages, VA = 5 V, VB = 4 V, and VC = 2 V, the codes (0,0), (0,1),
Figure 6: (a) Wave forms of the test circuit shown in Fig. 5(a) measured without buffer circuitry (left) and simulation results of a circuit designed with 0.5-μm rules (right). (b) Minimum scan time vs. sorting accuracy for a three-input sorter. (c) Minimum scan time vs. sorting accuracy for a 15-input sorter.
and (1,0) are latched, respectively, after the ramp voltage scan, thus accomplishing
correct sorting. Slow operation of the test circuit is due to the loading effect caused
by the direct probing of the node voltage without output buffer circuitries. The
simulation with a 0.5-μm-design-rule circuit indicates the sorting is accomplished
within a scan time of 40 nsec.
In Fig. 6(b), the minimum scan time obtained by simulation is plotted as a function of the bit accuracy in sorting analog data. N-bit accuracy means the minimum
voltage difference required for winner discrimination is VDD/2^N. If the ramp rate
is too fast, the vMOS inverter receiving the next largest data turns on before the
correct counting results become available, leading to an erroneous operation. The
scan time/accuracy relation in Fig. 6(b) is primarily determined by the response
delay in the counter. It should be noted that the number of inverter stages in the
counter (vMOS A/D converter) is always three, independent of the number of output
bits; namely, the delay would not increase significantly with the increase in the number of input data. In order to investigate this, a 15-input counter was designed and
the delay time was evaluated by HSPICE simulation. It was 312 psec in comparison
with 110 psec of the 3-input counter of Fig. 5(a). The scan time/accuracy relation
for the 15-input sorting circuit is shown in Fig. 6(c), indicating the sorting of 15
input data can be accomplished in 100 nsec with 8-bit accuracy.
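For instance, with the 3 V supply assumed in the 0.5-μm simulations, 8-bit accuracy corresponds to resolving input differences of roughly 3 V / 2^8 ≈ 12 mV (an illustrative figure; the attainable resolution also depends on the ramp rate and counter delay discussed above).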
4
CONCLUSIONS
A novel neuron-like functional device, vMOS, has been successfully utilized in constructing intelligent electronic circuits which can carry out search for the temporal
winner. As a result, it has become possible to perform data sorting as well as
winner search in an instant, both requiring very time-consuming sequential data
processing on a digital computer. The hardware algorithms presented here are typical examples of the vMOS binary-multivalue-analog merged computation scheme,
which would play an important role in the future flexible data processing.
Acknowledgements
This work was partially supported by Grant-in-Aid for Scientific Research
(06402038) from the Ministry of Education, Science, Sports, and Culture, Japan. A
part of this work was carried out in the Super Clean Room of Laboratory for Electronic Intelligent Systems, Research Institute of Electrical communication, Tohoku
University.
References
[1] T. Shibata and T . Ohmi, "A functional MOS transistor featuring gate-level
weighted sum and threshold operations," IEEE Trans. Electron Devices, Vol. 39,
No.6, pp.1444-1455 (1992).
[2] T. Shibata, K. Kotani, T. Yamashita, H. Ishii, H. Kosaka, and T. Ohmi, "Implementing interlligence on silicon using neuron-like functional MOS transistors," in
Advances in Neural Information Processing Systems 6 (San Francisco, CA: Morgan
Kaufmann 1994) pp. 919-926.
[3] T. Yamashita, T. Shibata, and T. Ohmi, "Neuron MOS winner-take-all circuit
and its application to associative memory," in ISSCC Dig. Tech. Papers, Feb. 1993,
FA 15.2, pp. 236-237.
[4] G. Cauwenberghs and V. Pedroni, "A charge-based CMOS parallel analog vector
quantizer," in Advances in Neural Information Processing Systems 7 (Cambridge,
MA: The MIT Press 1995) pp. 779-786.
[5] T. Kohonen, Self-Organization and Associative Memory, 2nd ed. (New York:
Springer-Verlag 1988).
[6] M. Kawamata, M. Abe, and T. Higuchi, "Evolutionary digital filters," in Proc.
Int. Workshop on Intelligent Signal Processing and Communication Systems, Seoul,
Oct., 1994, pp. 263-268.
[7] J. Lazzaro, S. Ryckebusch, M. A. Mahowald, and C. A. Mead, "Winner-Take-All networks of O(N) complexity," in Advances in Neural Information Processing
Systems 1 (San Mateo, CA: Morgan Kaufmann 1989) pp. 703-711.
[8] J. Choi and B. J. Sheu, "A high-precision VLSI winner-take-all circuit for self-organizing neural networks," IEEE J. Solid State Circuits, Vol. 28, No. 5, pp. 576-584 (1993).
[9] K. Kotani, T. Shibata, M. Imai, and T. Ohmi, "Clocked-Neuron-MOS logic
circuits employing auto-threshold-adjustment," in ISSCC Dig. Technical Papers,
Feb. 1995, FA 19.5, pp. 320-321.
[10] T. Shibata and T. Ohmi, "Neuron MOS binary-logic integrated circuits: Part
II, Simplifying techniques of circuit configuration and their practical applications,"
IEEE Trans. Electron Devices, Vol. 40, No.5, 974-979 (1993).
37 | 1,031 | Dynamics of Attention as Near
Saddle-Node Bifurcation Behavior
Hiroyuki Nakahara"
Kenji Doya
General Systems Studies
University of Tokyo
3-8-1 Komaba, Meguro
Tokyo 153, Japan
nakahara@vermeer.c.u-tokyo.ac.jp
ATR Human Information Processing
Research Laboratories
2-2 Hikaridai, Seika, Soraku
Kyoto 619-02, Japan
doya@hip.atr.co.jp
Abstract
In consideration of attention as a means for goal-directed behavior in non-stationary environments, we argue that the dynamics of
attention should satisfy two opposing demands: long-term maintenance and quick transition. These two characteristics are contradictory within the linear domain. We propose the near saddlenode bifurcation behavior of a sigmoidal unit with self-connection
as a candidate of dynamical mechanism that satisfies both of these
demands. We further show in simulations of the 'bug-eat-food'
tasks that the near saddle-node bifurcation behavior of recurrent
networks can emerge as a functional property for survival in nonstationary environments.
1
INTRODUCTION
Most studies of attention have focused on the selection process of incoming sensory
cues (Posner et al., 1980; Koch et al., 1985; Desimone et al., 1995). Emphasis was
placed on the phenomena of causing different percepts for the same sensory stimuli.
However, the selection of sensory input itself is not the final goal of attention. We
consider attention as a means for goal-directed behavior and survival of the animal.
In this view, dynamical properties of attention are crucial. While attention has
to be maintained long enough to enable robust response to sensory input, it also
has to be shifted quickly to a novel cue that is potentially important. Long-term
maintenance and quick transition are critical requirements for attention dynamics.
*Currently at Dept. of Cognitive Science and Institute for Neural Computation,
U. C. San Diego, La Jolla, CA 92093-0515. hnakahar@cogsci.ucsd.edu
We investigate a possible neural mechanism that enables those dynamical characteristics of attention.
First, we analyze the dynamics of a network of sigmoidal units with self-connections.
We show that both long-term maintenance and quick transition can be achieved
when the system parameters are near a "saddle-node bifurcation" point . Then, we
test if such a dynamical mechanism can actually be helpful for an autonomously
behaving agent in simulations of a 'bug-eat-food' task. The result indicates that
near saddle-node bifurcation behavior can emerge in the course of evolution for
survival in non-stationary environments.
2
NEAR SADDLE-NODE BIFURCATION BEHAVIOR
When a pulse-like input is given to a linear system, the rising and falling phases
of the response have the same time constants. This means that long-term maintenance and quick transition cannot be simultaneously achieved by linear dynamics.
Therefore, it is essential to consider a nonlinear dynamical mechanism to achieve
these two demands.
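For example, a first-order linear system y(t+1) = λ y(t) + u(t) approaches a step input and relaxes after its removal with the same time constant τ = -1/ln λ; with λ = 0.9, roughly ten steps are needed both to build the activity up and to let it go, so long maintenance forces slow transitions.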
2.1
DYNAMICS OF A SELF-RECURRENT UNIT
First, we consider the dynamics of a single sigmoidal unit with the self-connection
weight a and the bias b.
y(t + 1) = F(ay(t) + b),   (1)
F(x) = 1 / (1 + exp(-x)).   (2)
The parameters (a, b) determine the qualitative behavior of the system such as the
number of fixed points and their stabilities. As we change the parameters , the
qualitative behavior of the system may suddenly change. This is referred to as
"bifurcation" (Guckenheimer, et al., 1983). A typical example is a "saddle-node
bifurcation" in which a pair of fixed points, one stable and one unstable, emerges.
In our system, this occurs when the state transition curve y(t + 1) = F(ay(t) + b) is
tangent to y(t + 1) = y(t). Let y* be this point of tangency. We have the following
condition for saddle-node bifurcation:
F(ay* + b) = y*,   (3)
dF(ay + b)/dy |_{y=y*} = 1.   (4)
These equations can be solved, by noting F'(x) = F(x)(1 - F(x)), as
a = 1 / (y*(1 - y*)),   (5)
b = F^{-1}(y*) - ay* = F^{-1}(y*) - 1/(1 - y*).   (6)
By changing the fixed point value y* between 0 and 1, we can plot a curve in the
parameter space (a, b) on which saddle-node bifurcation occurs, as shown in Figure
1 (left). A pair of a saddle point and a stable fixed point emerges or disappears
when the parameters pass across the cusp like curve (cases 2 and 4) . The system
has only one stable fixed point when the parameters are outside the cusp (case 1)
and three fixed points inside the cusp (case 3).
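A minimal numerical sketch of (5)-(6) and of the near-bifurcation response (a Python illustration; the pulse timing and size are illustrative, while a and b follow the example used later in the text):

# Sketch: saddle-node bifurcation curve of a self-recurrent sigmoidal unit,
# and the long-maintenance / quick-transition response just outside the cusp.
import math

def F(x):
    return 1.0 / (1.0 + math.exp(-x))

def bifurcation_point(y_star):
    # Equations (5)-(6): the (a, b) at which the state-transition curve
    # y(t+1) = F(a y(t) + b) is tangent to the diagonal at y = y_star.
    a = 1.0 / (y_star * (1.0 - y_star))
    b = math.log(y_star / (1.0 - y_star)) - a * y_star   # F^{-1}(y*) - a y*
    return a, b

a, b = bifurcation_point(0.9)   # a = 11.11..., b = -7.80...
b -= 0.10                       # just outside the cusp (b = -7.90)

y, trace = 0.0, []
for t in range(30):
    u = 7.0 if t in (2, 3) else 0.0   # brief external pulse (illustrative size)
    y = F(a * y + b + u)
    trace.append(round(y, 3))
print(trace)  # quick jump to ~1, many steps lingering near 0.9, then a fast drop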
Figure 1: Bifurcation Diagram of a Self-Recurrent Unit. Left: the curve in the parameter space (a, b) on which saddle-node bifurcation is seen. Right: state transition diagrams for four different cases.
y ( t ? l)
Y
(t:l)ll.ll11
1d
=~9
o'~=Lil
l. 1111 b =- 7. 9
o.
0.6
O.
I
0. 4
0 .2
a
"
0.20. 4 0 . 60. 81
I
I
O.
O.
I
... "
b
y et)
I
o
o . 20 . 4 0
yet)
60. 8 1
yet)
y et )
L
o.~
do. &
o.
0.21
o.
o.
o
"
,I
5
10 15 2 (f i me (t)
0 . 41
o
5
1 0 l S 2a:'i rne l t )
Figure 2: Temporal Responses of Self-Recurrent Units. Left : near saddle-node
bifurcation. Right : far from bifurcation.
An interesting behavior can be seen when the parameters are just outside the cusp,
as shown in Figure 2 (left) . The system has only one fixed point near Y = 0, but
once the unit is activated (y ~ 1) , it stays "on" for many time steps and then goes
back to the fixed point quickly. Such a mechanism may be useful in satisfying the
requirements of attention dynamics: long-term maintenance and quick transition.
2.2
NETWORK OF SELF-RECURRENT UNITS
Next, we consider the dynamics of a network of the above self-recurrent units.
Yi(t
+ 1) = F[aYi(t) + b + L
CijYj(t)
+ diUi(t)],
(7)
j,jti
where a is the self connection weight , b is the bias, Cij is the cross connection weight,
and di is the input connection weight , and Ui(t) is the external input. The effect of
lateral and external inputs is equivalent to the change in the bias, which slides the
sigmoid curve horizontally without changing the slope.
For example, one parameter set of the bifurcation at y* = 0.9 is a = 11.11 and
b ~ -7.80. Let b = -7.90 so that the unit has a near saddle-node bifurcation
behavior when there is no lateral or external inputs. For a fixed a = 11.11, as we
increase b, the qualitative behavior of the system appears as case 3 in Figure 1, and
41
Dynamics of Attention as Near Saddle-node Bifurcation Behavior
Sensory
Inputs
,
,
'olr lood
Actions
Network Structure
"
- '--,~ ,- \, :/~ - - - .
r-.~ "==:-:"~-
~~ .......
'on:'oocl
~~
-
.....
Creature
. . ...
...
Creature
Inpol. IJrI .111'2
Figure 3: A Creature's Sensory Inputs(Left), Motor System(Center) and Network
Architecture(Right)
then, it changes again at b:::::: -3.31, where the fixed point at Y = 0.1, or another
bifurcation point , appears as case 4 in Figure L Therefore , ifthe input sum is large
enough, i.e . L j ,j;Ci CijYj + diuj > -3.31- (-7.90) :::::: 4.59, the lower fixed point
at Y = 0.1 disappears and the state jumps up to the upper fixed point near Y = 1,
quickly turning the unit "on". If the lateral connections are set properly, this can
in turn suppress the activation of other units. Once the external input goes away,
as we see in Figure 2 (left), the state stays "on" for a long time until it returns to
the fixed point near Y = O.
3
EVOLUTION OF NEAR BIFURCATION DYNAMICS
In the above section, we have theoretically shown the potential usefulness of near
saddle-node bifurcation behavior for satisfying demands for attention dynamics. We
further hypothesize that such behavior is indeed useful in animal behaviors and can
be found in the course of learning and evolution of the neural system.
To test our hypothesis, we simulated a 'bug-eat-food ' task . Our purpose in t.his
simulation was to see whether the attention dynamics discussed in the previous
section would help obtain better performance in a non-stationary environment. Vve
used evolutionary programming (Fogel et aI, 1990) to optimize the performance of
recurrent networks and feedforward networks.
3.1
THE BUG AND THE WORLD
In our simulation, a simple creature traveled around a non-stationary environment.
In the world, there were a certain number of food items. Each item was fixed at a
certain place in the world but appeared or disappeared in a stochastic fashion, as
determined by a two-state Markov system. In order to survive, A creature looked
for food by traveling the world . The amount of food a creature found in a certain
time period was the measure of its performance.
A creature had five sensory inputs, each of which detected food in the sector of 45
degrees (Figure 3, right). Its output level was given by L J' .l..,
where Tj ,"vas the
rJ
distance to the j-th food item within the sector. Note that the format of the input
contained information about distance and also that the creature could only receive
the amount of the input but could not distinguish each food from others.
For the sake of simplicity, we assumed that the creature lived in a grid-like world .
On each time step, it took one of three motor commands: L: turn left (45 degrees),
H. NAKAHARA, K. DOYA
42
Density of Food
Markov Transition Matrix
of each food
Random Walk
Nearest Visible
FeedForward
Recurrent
Nearest Visible/Invisible
0.05
.5 .5 .8 .8
.5 .5 .2 .2
7.0
6.9
42.7 18.6
58.6 37.3
65.7 43.6
97.7 97.1
0.10
.5 .5
.8 .8
.2 .2
.5 .5
13.8
13.9
65.3
32.4
84.8
60.0
94.0
66.1
129.1 128.8
Table 1: Performances of the Recurrent Network and Other Strategies.
C: step forward, and R: turn right (Figure 3, center). Simulations were run with
different Markov transition matrices of food appearance and with different food
densities. A creature got the food when it reached the food, whether it was visible
or invisible. When a creature ate a food item, a new food item was placed randomly.
The size of the world was 10x10 and both ends were connected as a torus.
A creature was composed of two layers: visual layer and motor layer (Figure 3,
left). There were five units 1 in visual layer, one for each sensory input, and their
dynamics were given by Equation (7). The self-connection a, the bias b and the
input weight di were the same for all units. There were three units in motor layer ,
each coding one of three motor commands, and their state was given by
ek
+ L: fkiYi(t),
exp(xk(t))
L:/ exp(x/(t)) '
(8)
(9)
where ek was the bias and fki was the feedforward connection weight. 2 One of the
three motor commands (L,C,R) was chosen stochastically with the probability Pk
(k=L,C,R). The activation pattern in visual layer was shifted when the creature
made a turn, which should give proper mapping between the sensory input and the
working memory.
3.2
EVOLUTIONARY PROGRAMMING
Each recurrent network was characterized by the parameters (a,b,Cij,di,ek,lkd,
some of which were symmetrically shared, e.g. C12 = C21. For comparison, we
also tested feedforward networks where recurrent connections were removed, i.e.
a
Cij
O.
=
=
A population of 60 creatures was tested on each generation. The initial population
was generated with random parameters. Each of the top twenty scoring creatures
produced three offspring; one identical copy of the parameters of the parent's and
two copies of these parameters with a Gaussian fluctuation. In this paper, we report
the result after 60 generations.
3.3
PERFORMANCE
1 We denote each unit in visual layer by Ul, U2, U3, U4, Us from the left to the right for
the later convenience
2In this simulation reported here, we set ek = O.
Dynamics of Attention as Near Saddle-node Bifurcation Behavior
-,
-, ,
-,
-, ,
-,
-7 ,
_7
,
- 10
- 12 . 5
,
43
......: ....
"
-L25
a
b
"Transition matrix
= ( :~
.5 )
.5
bTransition matrix = (
:~
:~ )
Figure 4: The Convergence of the Parameter of (a , b) by Evolutionary Programming
Plotted in the Bifurcation Diagram. The food density is 0.10 in both examples
above .
Table 1 shows the average of food found after 60 generations. As a reference of
performance level, we also measured the performances of three other simple algorithms: 1) random walk : one of the three motor commands is taken randomly with
equal probability. 2) nearest visible: move toward the nearest food visible at the
time within the creature's field of view of (U2, U3, U4). 3) nearest visible/invisible:
move toward the nearest food within the view of (U2, U3, U4) no matter if it is visible
or not, which gives an upper bound of performance.
The performance of recurrent network is better than that of feedforward network
and 'nearest visible'. This suggests that the ability of recurrent network to remember the past is advantageous.
The performance of feedforward network is better than that of 'nearest visible '.
One reason is that feedforward network could cover a broader area to receive inputs
than 'nearest visible' . In addition, two factors, the average time in which a creature
reaches the food and the average time in which the food disappears, may influence
the performance of feedforward network and 'nearest visible'. Feedforward network
could optimize its output to adapt two factors with its broader view in evolution
while 'nearest visible' did not have such adaptability.
It should be noted that both of 'nearest visible/invisible ' and 'nearest visible' explicitly assumed the higher-order sensory processing: distinguishing each food item from
the others and measuring the distance between each food and its body. Since its performance is so different regardless of its higher-order sensory processing, it implies
the importance of remembering the past. We can regard recurrent network as compromising two characteristics, remembering the past as 'nearest visible/invisible'
did and optimizing the sensitivity as feedforward network did , although recurrent
network did not have a perfect memory as 'nearest visible/invisible' .
3.4
CONVERGENCE TO NEAR-BIFURCATION REGIME
We plotted the histogram of the performance in each generation and the history of
the performance of a top-scoring creature over generations. Though they are not
shown here, the performance was almost optimal after 60 generations.
Figure 4 shows that two examples of a graph in which we plotted the parameter
H. NAKAHARA, K. DOYA
44
set (a , b) of top twenty scoring creatures in the 60th generation in the bifurcation
diagram. In the left graph, we can see the parameter set has converged to a regime
that gives a near saddle-node bifurcation behavior. On the other hand, in the right
graph, the parameter set has converged into the inside of cusp. It is interesting
to note that the area inside of the cusp gives bistable dynamics. Hence, if the
input is higher than a repelling point, it goes up and if the input is lower , it goes
down . The reason of the convergence to that area is because of the difference of
the world setting, that is, a Markov transition matrix. Since food would disappear
more quickly and stay invisible longer in the setting of the right graph, it should
be beneficial for a creature to remember the direction of higher inputs longer . In
most of cases reported in Table 1, we obtained the convergence into our predicted
regime and/or the inside of the cusp.
4
DISCUSSION
Near saddle-node bifurcation behavior can have the long-term maintenance and
quick transition, which characterize attention dynamics. A recurrent network
has better performance than memoryless systems for tasks in our simulated nonstationary environment. Clearly, near saddle-node bifurcation behavior helped a
creature's survival and in fact, creatures actually evolved to our expected parameter regime . However, we also obtained the convergence into another unexpected
regime which gives bistable dynamics . How the bistable dynamics are used remains
to be investigated.
Acknowledgments
H.N . is grateful to Ed Hutchins for his generous support, to John Batali and David
Fogel for their advice on the implementation of evolutionary programming and to
David Rogers for his comments on the manuscript of this paper.
References
R. Desimone, E. K. Miller , L. Chelazzi, & A. Lueschow. (1995) Multiple Memory
Systems in the Visual Cortex. In M. Gazzaniga (ed .) , The Cognitive Neurosciences,
475-486. MIT Press.
D. B. Fogel, L. J. Fogel, & V. W . Porto. (1990) Evolving Neural Networks. Biological cybernetics 63:487-493.
J. Guckenheimer & P. Homes. (1983) Nonlinear Oscillations, Dynamical Systems,
and Bifurcation of Vector Fields
C. Koch & S. Ullman . (1985) Shifts in selective visual attention:towards the underlying neural circuitry. Human Neurobiology 4:219-227 .
M. Posner , C .. R .R. Snyder, & B. J. Davidson. (1980) Attention and the detection
of signals. Journal of Experimental Psychology: General 109:160-174
| 1031 |@word rising:1 advantageous:1 pulse:1 simulation:6 initial:1 past:3 repelling:1 activation:2 yet:2 john:1 visible:16 enables:1 motor:7 hypothesize:1 plot:1 stationary:4 cue:2 item:6 xk:1 node:18 sigmoidal:3 five:2 qualitative:3 inside:4 theoretically:1 expected:1 indeed:1 behavior:21 seika:1 food:26 underlying:1 evolved:1 temporal:1 remember:2 unit:16 ly:2 offspring:1 fluctuation:1 emphasis:1 suggests:1 co:1 c21:1 directed:2 acknowledgment:1 area:3 evolving:1 got:1 cannot:1 convenience:1 selection:2 influence:1 optimize:2 equivalent:1 quick:6 center:2 go:4 attention:19 regardless:1 focused:1 simplicity:1 posner:2 his:3 stability:1 population:2 diego:1 pt:2 programming:4 distinguishing:1 hypothesis:1 satisfying:2 u4:3 solved:1 connected:1 autonomously:1 removed:1 environment:6 ui:1 dynamic:20 grateful:1 cogsci:1 detected:1 outside:2 ability:1 itself:1 final:1 ifthe:1 took:1 propose:1 causing:1 achieve:1 meguro:1 bug:4 parent:1 convergence:5 requirement:2 disappeared:1 perfect:1 help:1 recurrent:16 ac:1 measured:1 nearest:15 kenji:1 predicted:1 implies:1 direction:1 tokyo:3 compromising:1 porto:1 stochastic:1 human:2 cusp:7 enable:1 bistable:3 rogers:1 biological:1 koch:2 around:1 exp:3 mapping:1 circuitry:1 u3:3 generous:1 purpose:1 currently:1 guckenheimer:2 mit:1 clearly:1 gaussian:1 command:4 broader:2 properly:1 indicates:1 l25:1 am:1 helpful:1 selective:1 animal:2 bifurcation:29 equal:1 once:2 field:2 identical:1 survive:1 others:2 stimulus:1 komaba:1 report:1 randomly:2 composed:1 simultaneously:1 phase:1 opposing:1 detection:1 investigate:1 activated:1 tj:3 desimone:2 walk:2 plotted:3 hip:1 versi:1 cover:1 measuring:1 lood:1 hutchins:1 usefulness:1 characterize:1 reported:2 density:3 sensitivity:1 stay:3 quickly:4 again:1 cognitive:2 external:4 ek:4 stochastically:1 return:1 ullman:1 japan:2 potential:1 c12:1 coding:1 matter:1 satisfy:1 explicitly:1 tion:1 view:4 later:1 ayi:1 lkd:1 analyze:1 helped:1 reached:1 slope:1 ni:1 characteristic:3 percept:1 miller:1 fki:1 produced:1 cybernetics:1 history:1 converged:2 reach:1 ed:2 ty:1 di:3 emerges:2 hiroyuki:1 adaptability:1 actually:2 back:1 appears:2 manuscript:1 higher:4 response:3 though:1 just:1 olr:1 until:1 traveling:1 working:1 hand:1 nonlinear:2 effect:1 evolution:4 hence:1 memoryless:1 laboratory:1 se3:1 ll:1 self:10 maintained:1 noted:1 ay:5 jti:1 invisible:7 consideration:1 novel:1 sigmoid:1 functional:1 jp:2 discussed:1 ai:1 grid:1 had:1 stable:3 longer:2 behaving:1 cortex:1 optimizing:1 jolla:1 certain:3 yi:1 scoring:3 seen:2 remembering:2 determine:1 period:1 signal:1 multiple:1 rj:1 kyoto:1 x10:1 characterized:1 adapt:1 cross:1 long:8 va:1 maintenance:6 df:1 histogram:1 achieved:2 ion:1 condi:1 receive:2 addition:1 diagram:4 crucial:1 comment:1 nonstationary:2 near:20 noting:1 symmetrically:1 feedforward:10 rne:1 enough:2 psychology:1 architecture:1 shift:1 whether:2 ul:1 soraku:1 action:1 useful:2 amount:2 slide:1 shifted:2 neuroscience:1 snyder:1 four:1 falling:1 tangency:1 changing:2 graph:4 sum:1 run:1 place:1 almost:1 doya:5 oscillation:1 home:1 dy:1 batali:1 layer:7 bound:1 distinguish:1 chelazzi:1 sake:1 eat:3 format:1 across:1 ate:1 beneficial:1 taken:1 equation:2 remains:1 turn:4 mechanism:5 end:1 away:1 top:3 hikaridai:1 disappear:1 suddenly:1 move:2 looked:1 occurs:2 strategy:1 evolutionary:4 distance:3 atr:2 lateral:3 simulated:2 me:1 argue:1 unstable:1 toward:2 reason:2 cij:3 sector:2 potentially:1 suppress:1 implementation:1 lil:1 lived:1 proper:1 twenty:2 upper:2 markov:4 t:1 neurobiology:1 ucsd:1 
david:2 pair:2 connection:10 fogel:4 gazzaniga:1 dynamical:6 pattern:1 appeared:1 regime:5 oj:1 memory:3 critical:1 turning:1 disappears:3 traveled:1 tangent:1 interesting:2 generation:7 agent:1 degree:2 lo:1 course:2 placed:2 copy:2 bias:5 institute:1 emerge:2 regard:1 curve:5 transition:12 world:7 sensory:11 forward:1 made:1 jump:1 san:1 far:1 incoming:1 assumed:2 davidson:1 table:3 robust:1 ca:2 investigated:1 domain:1 did:4 pk:1 body:1 vve:1 advice:1 referred:1 tl:1 creature:22 fashion:1 torus:1 candidate:1 t1x:1 down:1 survival:4 essential:1 importance:1 ci:1 demand:4 lt:1 saddle:19 appearance:1 visual:6 horizontally:1 unexpected:1 contained:1 u2:3 satisfies:1 goal:3 nakahara:5 towards:1 shared:1 change:4 typical:1 determined:1 contradictory:1 pas:1 experimental:1 la:1 support:1 dept:1 tested:2 phenomenon:1 |
38 | 1,032 | VLSI Model of Primate Visual Smooth Pursuit
Ralph Etienne-Cummings
Jan Van der Spiegel
Department of Electrical Engineering,
Southern Illinois University, Carbondale,
IL 62901
Moore School of Electrical Engineering,
University of Pennsylvania, Philadelphia,
PA 19104
Paul Mueller
Corticon, Incorporated,
3624 Market Str, Philadelphia,
PA 19104
Abstract
A one dimensional model of primate smooth pursuit mechanism has
been implemented in 2 11m CMOS VLSI. The model consolidates
Robinson's negative feedback model with Wyatt and Pola's positive
feedback scheme, to produce a smooth pursuit system which zero's the
velocity of a target on the retina. Furthermore, the system uses the
current eye motion as a predictor for future target motion. Analysis,
stability and biological correspondence of the system are discussed. For
implementation at the focal plane, a local correlation based visual
motion detection technique is used. Velocity measurements, ranging
over 4 orders of magnitude with < 15% variation, provides the input to
the smooth pursuit system. The system performed successful velocity
tracking for high contrast scenes. Circuit design and performance of the
complete smooth pursuit system is presented.
1 INTRODUCTION
The smooth pursuit mechanism of primate visual systems is vital for stabilizing a region
of the visual field on the retina. The ability to stabilize the image of the world on the
retina has profound architectural and computational consequences on the retina and visual
cortex, such as reducing the required size, computational speed and communication
hardware and bandwidth of the visual system (Bandera, 1990; Eckert and Buchsbaum,
1993). To obtain similar benefits in active machine vision, primate smooth pursuit can
be a powerful model for gaze control. The mechanism for smooth pursuit in primates
was initially believed to be composed of a simple negative feedback system which
attempts to zero the motion of targets on the fovea, figure I (a) (Robinson, 1965).
However, this scheme does not account for many psychophysical properties of smooth
707
VLSI Model of Primate Visual Smooth Pursuit
pursuit, which led Wyatt and Pola (1979) to proposed figure l(b), where the eye
movement signal is added to the target motion in a positive feed back loop. This
mechanism results from their observation that eye motion or apparent target motion
increases the magnitude of pursuit motion even when retinal motion is zero or constant.
Their scheme also exhibited predictive qualities, as reported by Steinbach (1976). The
smooth pursuit model presented in this paper attempts the consolidate the two models
into a single system which explains the findings of both approaches.
Target
Moticn
Eye
Motion
Retinal
Motion
e~
lee
G
ee = e t G+l
~;
>
I
G ~ co G r
Target
Motion
Eye
Motion
e~~
>
=0
(b)
(a)
Figure I: System Diagrams of Primate Smooth Pursuit Mechanism.
(a) Negative feedback model by Robinson (1965). (b) Positive
feedback model by Wyatt and Pola (1979).
The velocity based smooth pursuit implemented here attempts to zero the relative velocity
of the retina and target. The measured retinal velocity, is zeroed by using positive
feedback to accumulate relative velocity error between the target and the retina, where the
accumulated value is the current eye velocity. Hence, this model uses the Robinson
approach to match target motion, and the Wyatt and Pola positive feed back loop to
achieve matching and to predict the future velocity of the target. Figure 2 shows the
system diagram of the velocity based smooth pursuit system. This system is analyzed
and the stability criterion is derived. Possible computational blocks for the elements in
figure I (b) are also discussed. Furthermore, since this entire scheme is implemented on a
single 2 /lm CMOS chip, the method for motion detection, the complete tracking circuits
and the measured results are presented.
Retinal
Motion
Eye
Motion
er
Figure 2: System Diagram of VLSI Smooth Pursuit Mechanism.
is target velocity in space, Bt is projected target velocity, Be is the eye
velocity and Br is the measured retinal velocity.
2 VELOCITY BASED SMOOTH PURSUIT
Although figure I (b) does not indicate how retinal motion is used in smooth pursuit, it
provides the only measurement of the projected target motion. The very process of
calculating retinal motion realizes negative feed back between the eye movement and the
target motion, since retinal motion is the difference between project target and eye
motion. If Robinson's model is followed, then the eye movement is simply the
amplified version of the retinal motion. If the target disappears from the retina, the eye
motion would be zero. However, Steinbach showed that eye movement does not cea~
when the target fades off and on, indicating that memory is used to predict target motion.
Wyatt and Palo showed a direct additive influence of eye movement on pursuit. However,
the computational blocks G' and a of their model are left unfilled.
R. ETIENNE-CUMMINGS, J. VAN DER SPIEGEL, P. MUELLER
708
In figure 2, the gain G models the internal gain of the motion detection system , and the
internal representation of retinal velocity is then Vr. Under zero-slip tracking, the retinal
velocity is zero. This is obtained by using positive feed back to correct the velocity error
and eye,
The delay element represents a memory of the last eye
between target,
velocity while the current retinal motion is measured. If the target disappears, the eye
motion continues with the last value, as recorded by Steinbach, thus anticipating the
position of the target in space. The memory also stores the current eye velocity during
perfect pursuit. The internal representation of eye velocity, Ve , is subsequently amplified
by H and used to drive the eye muscles. The impulse response of the system is given in
equations (I). Hence, the relationship between eye velocity and target velocity is recursive
and given by equations (2). To prove the stability of this system, the retinal velocity can
be expressed in terms of the target motion as given in equations (3a). The ideal condition
for accurate performance is for GH = 1. However, in practice, gains of different amplifiers
er,
()
z-)
=GH--_-)
-.f..(z)
1- Z
(}r
ee.
()
(a); ~(I1)
(}r
=GH[-8(11) + u(n)]
(I)
(b)
n-)
(}e(n)
= (},(n) -
(}r(n)
=GH[-8(n) + u(n)] * (}r(n) = GHL(},.(k)
(2)
k=O
() r ( 11)
() r (n)
= (),( n ) (1
11
~
00
)
11
- GH)
0
if 11 -
=> () r( 1l )
I
GH < 1
= 0 if
GH
= 1 =>
() in)
= (),( 11 )
=> 0 < GH < 2 for stability
(
a)
(3)
( b)
are rarely perfectly matched. Equations (3b) shows that stability is assured for O<GH< 2.
Figure 3 shows a plot of eye motion versus updates for various choices of GH. At each
update, the retinal motion is computed. Figure 3(a) shows the eye's motion at the on-set
of smooth pursuit. For GH = 1, the eye movement tracks the target's motion exactly,
and lags slightly only when the target accelerates. On the other hand, if GH? I, the
eye's motion always lags the target's. If GH -> 2, the system becomes increasing
unstable, but converges for GH < 2. The three cases presented correspond to the smooth
pursuit system being critically, over and under damped, respectively.
3 HARDWARE IMPLEMENTATION
Using the smooth pursuit mechanism described, a single chip one dimensional tracking
system has been implemented. The chip has a multi-layered computational architecture,
similar to the primate's visual system. Phototransduction, logarithmic compression,
edge detection, motion detection and smooth pursuit control has been integrated at the
focal-plane. The computational layers can be partitioned into three blocks, where each
block is based on a segment of biological oculomotor systems.
3.1
IMAGING AND PREPROCESSING
The first three layers of the system mimics the photoreceptors, horizontal cells arx:l
bipolar cells of biological retinas. Similar to previous implementations of silicon
retinas, the chip uses parasitic bipolar transistors as the photoreceptors. The dynamic
range of photoreceptor current is compressed with a logarithmic response in low light arx:l
square root response in bright light. The range compress circuit represents 5-6 orders of
magnitude of light intensity with 3 orders of magnitude of output current dynamic range.
Subsequently, a passive resistive network is used to realize a discrete implementation of a
Laplacian edge detector. Similar to the rods and cones system in primate retinas, the
response time, hence the maximum detectable target speed, is ambient intensity dependent
(160 (12.5) Ils in 2.5 (250) IlW/cm2). However, this does prevent the system from
handling fast targets even in dim ambient lighting.
VLSI Model of Primate Visual Smooth Pursuit
~
g
u
>
709
20
20
15
15
10
10
5
~
5
0
]
-5
>" -5
- 10
?
? 10
Target
- -Eye: GH=I 99
- E ye GH=IOO
__ . Eye: GH=O_IO
-15
0
? 15
-20
-20
100
50
0
150
500
600
Updates
(a)
700
800
Updates
900
1000
(b)
Figure 3: (a) The On-Set of Smooth Pursuit for Various GH Values.
(b) Steady-State Smooth Pursuit.
3.2
MOTION MEASUREMENT
This computational layer measures retinal motion. The motion detection technique
implemented here differs from those believed to exist in areas V 1 and MT of the primate
visual cortex. Alternatively, it resembles the fly's and rabbit's retinal motion detection
system (Reichardt, 1961; Barlow and Levick, 1965; Delbruck, 1993). This is not
coincidental, since efficient motion detection at the focal plane must be performed in a
small areas and using simple computational elements in both systems.
The motion detection scheme is a combination of local correlation for direction
determination, and pixel transfer time measurement for speed. In this framework, motion
is defined as the disappearance of an object, represented as the zero-crossings of its edges,
at a pixel , followed by its re-appearance at a neighboring pixel. The (dis)appearance of
the zero-crossing is determined using the (negative) positive temporal derivative at the
pixel. Hence, motion is detected by AND gating the positive derivative of the zerocrossing of the edge at one pixel with the negative derivative at a neighboring pixel. The
direction of motion is given by the neighboring pixel from which the edge disappeared.
Provided that motion has been detected at a pixel, the transfer time of the edge over the
pixel's finite geometry is inversely proportional to its speed.
Equation (4) gives the mathematical representation of the motion detection process for an
object moving in +x direction. In the equation. f,(l.'k ,y.t) is the temporal response of
pixel k as the zero crossing of an edge of an object passes over its 2a aperture. Equation
(4) gives the direction of motion, while equation (5) gives the speed. The schematic of
motion _ x = [
f f,( l: k, y, t) > 0] [ f f t(l.' k + J, y, t) < 0] =0
motion+x=[~f,(l.'k-J,y,t)<O][~f/l.'k , y,t?O]
= 8[t
Motion.' t m =
Speed + x
=
t
-
(b)
(4)
2a(k-n)-a
v
]8[x - 2ak]
x
2a(k -n) -a
vx
J
- t
( a)
vx
2a
Disappear .' t d
2a(k -n) +a
= --~--?
vx
(5)
d
m
the VLSI circuit of the motion detection model is shown in figure 4(a). Figure 4(b)
shows reciprocal of the measured motion pulse-width for 1 D motion. The on-chip speed,
et, is the projected target speed. The measured pulse-widths span 3-4 orders magnitude,
710
R. ETIENNE-CUMMINGS, J. VAN DER SPIEGEL, P. MUELLER
One-Over Pulse-Width vs On-Chip Speed
?
O.R
~
0.4
"
~ -0.0 +--------::II~-----__+
M
~ -0.4
---e-- \IPW_Lefi
-0 .8
- - . - - IIPW_ Rlght
- 1.2 +-'----'--''-+--'--'--'--t---'--''--'-+-'--'--'-t-'--'--'-t---'--'--'-+
-40
00
4.0
8.0
12.0
-12.0
-R.O
On-Chip Speed rcml~J
Right
Left
(b)
(a)
Figure 4: (a) Schematic of the Motion Detection Circuit.
Measured Output of the Motion Detection Circuit.
(b)
depending on the ambient lighting, and show less than 15% variation between chips,
pixels, and directions (Etienne-Cummings, 1993).
3.3
THE SMOOTH PURSUIT CONTROL SYSTEM
The one dimensional smooth pursuit system is implemented using a 9 x I array of
motion detectors. Figure 5 shows the organization of the smooth pursuit chip. In this
system, only diverging motion is computed to reduce the size of each pixel. The outputs
of the motion detectors are grouped into one global motion signal per direction. This
grouping is performed with a simple, but delayed, OR, which prevents pulses from
neighboring motion cells from overlapping. The motion pulse trains for each direction
are XOR gated, which allows a single integrator to be used for both directions, thus
limiting mis-match_ The final value of the integrator is inversely proportional to the
target's speed. The OR gates conserve the direction of motion. The reciprocal of the
integrator voltage is next computed using the linear mode operation of a MOS transistor
(Etienne-Cummings, 1993). The unipolar integrated pulse allows a single inversion
circuit to be used for both directions of motion, again limiting mis-match. The output of
the "one-over" circuit is amplified, and the polarity of the measured speed is restored.
This analog voltage is proportional to retinal speed.
The measured retinal speed is subsequently ailed to the stored velocity. Figure 6 shows
the schematic for the retinal velocity accumulation (positive feedback) and storage (analog
Wave Forms
Motion Pulse Integration
and "One-Over"
V = GIRetinal Velocityl
Polarity
Restoration
Retinal Velocity
Accumulation
and Sample/Hold
Figure 5: Architecture of the VLSI Smooth Pursuit System. Sketches
of the wave forms for a fast leftward followed by a slow rightward
retinal motion are shown.
711
VLSI Model of Primate Visual Smooth Pursuit
memory). The output of the XOR gate in figure 5 is used by the sample-and-hold circuit
to control sampling switches S I and S2. During accumulation, the old stored velocity
value, which is the current eye velocity, is isolated from the summed value. At the
falling edge of the XOR output, the stored value on C2 is replaced by the new value on
Cl. This stored value is amplified using an off chip motor driver circuit, and used to
move the chip. The gain of the motor driver can be finely controlled for optimal
operation.
Motor
Retinal
Velocity
System
Accumulatiun
Target
Velocity
Two Phase Sample/Hold
Figure 6: Schematic Retinal Velocity Error Accumulation, Storage and
Motor Driver Systems.
Figure 7(a) shows a plot of one-over the measured integrated voltage as a function of on
chip target speed. Due to noise in the integrator circuit, the dynamic range of the motion
detection system is reduced to 2 orders of magnitude. However, the matching between left
and right motion is unaffected by the integrator. The MaS "one-over" circuit, used to
compute the analog reciprocal of the integrated voltage, exhibits only 0.06% deviation
from a fitted line (Etienne-Cummings, 1993b). Figure 7(b) shows the measured
increments in stored target velocity as a function of retinal (on-chip) speed. This is a test
of all the circuit components of the tracking system. Linearity between retinal velocity
increments and target velocity is observed, however matching between opposite motion
has degraded. This is caused by the polarity restoration circuit since it is the only
location where different circuits are used for opposite motion. On average, positive
increments are a factor of 1.2 times larger than negative increments. The error bars shows
the variation in velocity increments for different motion cells and different Chips. The
deviation is less than 15 %. The analog memory has a leakage of 10 mV/min and an
asymmetric swing of 2 to -1 V, caused by the buffers. The dynamic range of the
complete smooth pursuit system is measured to be 1.5 orders magnitude. The maximum
speed of the system is adjustable by varying the integrator charging time. The maximum
speed is ambient intensity dependent and ranges from 93 cmls to 7 cm/s on-chip speed in
Velocity Error Increment vs On-Chip Speed
Integrated Pulse vs On-Chip Speed
1.4
24
~
16
~
8
~
0
il
?
oS
.'
._
1.2
~
l'! 1.0
"e~
u
-t--------",/II!...------+
-8
.s
O.R
g 0 .6
LLl
.::;.
g 04
:: -16
-e--lnlPuI~_l..xft
-24
_ _? _
JntPlllo;e_Rl~hl
-32 -t-'---'---'-'--+-'--~~-t--'"-'-~_t_--"--''---'---"-t
10.0
-100
-5.0
0.0
5.0
On-Chip Speed lemlsl
(a)
OJ
>
- - - - . Nc~_ Jn c rt~nl
02
__ ? _ _Po,,_Incremclll
0.0
0
4
6
On-Chip Speed lem/s)
(b)
Figure 7. (a) Measured integrated motion pulse voltage. (b) Measured
output for the complete smooth pursuit system.
10
R. ETIENNE-CUMMINGS, J. VAN DER SPIEGEL, P. MUELLER
712
bright (250 JlW/cm 2) and dim (2.5 JlW/cm 2) lighting, respectively. However, for any
maximum speed chosen, the minimum speed is a factor of 0.03 slower. The minimum
speed is limited by the discharge time of the temporal differentiators in the motion
detection circuit to 0.004 cmls on chip. The contrast sensitivity of this system proved to
be the stumbling block, and it can not track objects in normal indoor lighting. However,
all circuits components tested successfully when a light source is used as the target.
Additional measured data can be found in (Etienne-Cummings, 1995). Further work will
improve the contrast sensitivity, combat noise and also consider two dimensional
implementations with target acquisition (saccades) capabilities.
4
CONCLUSION
A model for biological and silicon smooth pursuit has been presented. It combines the
negative feed back and positive feedback models of Robinson and Wyatt and Pola. The
smooth pursuit system is stable if the gain product of the retinal velocity detection
system and the eye movement system is less than 2. VLSI implementation of this
system has been performed and tested. The performance of the system suggests that wide
range (92.9 - 0.004 cmls retinal speed) target tracking is possible with a single chip focal
plane system. To improve this chip's performance, care must be taken to limit noise,
improve matching and increase contrast sensitivity. Future design should also include a
saccadic component to re-capture escaped targets, similar to biological systems.
References
C. Bandera, "Foveal Machine Vision Systems", Ph.D. Thesis, SUNY Buffalo, New
York, ]990
H. Barlow and W. Levick, 'The Mechanism for Directional Selective Units in Rabbit' s
Retina", Journal of Physiology, Vol. 178, pp. 477-504, ]965
T. Delbruck, "Silicon Retina with Correlation-Based, Velocity-Tuned Pixels ", IEEE
Transactions on Neural Networks, Vol. 4:3, pp. 529-41, 1993
M. Eckert and G. Buchsbaum, "Effect of Tracking Strategies on the Velocity Structure of
Two-Dimensional Image Sequences", J. Opt. Soc. Am., Vol. AIO:7, pp. 1582-85, 1993
R. Etienne-Cummings et at., "A New Temporal Domain Optical Flow Measurement
Technique for Focal Plane VLSI Implementation", Proceedings of CAMP 93, M.
Bayoumi, L. Davis and K. Valavanis (Eds.), pp. 24]-25] , 1993
R. Etienne-Cummings, R. Hathaway and J. Van der Spiegel, "An Accurate and Simple
CMOS 'One-Over' Circuit", Electronic Letters, Vol. 29-18, pp. ]618-]620, 1993b
R. Etienne-Cummings et aI., "Real-Time Visual Target Tracking: Two Implementations
of Velocity Based Smooth Pursuit", Visual Information Processing IV, SPIE Vol. 2488,
Orlando, 17-18 April 1995
W. Reichardt, "Autocorrelation, A Principle for the Evaluation of Sensory Information by
the Central Nervous System", Sensory Communication, Wiley, New York, 1961
D. Robinson, "The Mechanism of Human Smooth Pursuit Eye Movement", Journal of
Physiology ( London) Vol. 180, pp. 569-591 , 1965
M. Steinbach, "Pursuing the Perceptual Rather than the Retinal Stimuli", Vision
Research, Vol. 16, pp. 1371-1376,1976
H. Wyatt and J. Pola, "The Role of Perceived Motion in Smooth Pursuit Eye
Movements", Vision Research, Vol. 19, pp. 613-618, 1979
| 1032 |@word version:1 inversion:1 compression:1 cm2:1 pulse:9 t_:1 foveal:1 tuned:1 current:7 must:2 realize:1 additive:1 motor:4 plot:2 update:4 v:3 nervous:1 plane:5 reciprocal:3 provides:2 location:1 mathematical:1 c2:1 direct:1 profound:1 driver:3 prove:1 resistive:1 combine:1 autocorrelation:1 market:1 multi:1 integrator:6 str:1 lll:1 increasing:1 becomes:1 project:1 provided:1 matched:1 linearity:1 circuit:18 coincidental:1 cm:3 finding:1 temporal:4 combat:1 bipolar:2 exactly:1 control:4 unit:1 positive:11 engineering:2 local:2 limit:1 consequence:1 ak:1 resembles:1 suggests:1 co:1 limited:1 range:7 recursive:1 block:5 practice:1 differs:1 jan:1 area:2 physiology:2 matching:4 unipolar:1 layered:1 storage:2 influence:1 accumulation:4 rabbit:2 stabilizing:1 fade:1 array:1 stability:5 variation:3 increment:6 limiting:2 discharge:1 target:41 us:3 steinbach:4 slip:1 pa:2 velocity:41 element:3 crossing:3 conserve:1 continues:1 asymmetric:1 observed:1 role:1 fly:1 electrical:2 capture:1 region:1 movement:9 dynamic:4 segment:1 predictive:1 rightward:1 po:1 chip:22 various:2 represented:1 train:1 fast:2 london:1 detected:2 apparent:1 lag:2 larger:1 compressed:1 ability:1 spiegel:5 final:1 sequence:1 transistor:2 product:1 neighboring:4 loop:2 achieve:1 amplified:4 produce:1 disappeared:1 cmos:3 perfect:1 converges:1 object:4 depending:1 measured:15 school:1 soc:1 implemented:6 indicate:1 direction:10 correct:1 subsequently:3 vx:3 human:1 explains:1 orlando:1 opt:1 biological:5 hold:3 normal:1 predict:2 mo:1 lm:1 perceived:1 realizes:1 palo:1 grouped:1 successfully:1 always:1 rather:1 varying:1 voltage:5 derived:1 contrast:4 am:1 camp:1 dim:2 mueller:4 dependent:2 accumulated:1 entire:1 bt:1 integrated:6 initially:1 vlsi:10 selective:1 i1:1 zerocrossing:1 ralph:1 pixel:13 integration:1 summed:1 field:1 sampling:1 represents:2 future:3 mimic:1 stimulus:1 retina:12 composed:1 ve:1 delayed:1 replaced:1 geometry:1 phase:1 attempt:3 amplifier:1 detection:16 organization:1 evaluation:1 analyzed:1 nl:1 light:4 damped:1 accurate:2 ambient:4 edge:8 carbondale:1 iv:1 old:1 re:2 isolated:1 fitted:1 aio:1 wyatt:7 delbruck:2 restoration:2 deviation:2 predictor:1 delay:1 successful:1 reported:1 stored:5 sensitivity:3 lee:1 off:2 gaze:1 again:1 thesis:1 recorded:1 central:1 derivative:3 ghl:1 account:1 retinal:28 stabilize:1 caused:2 mv:1 performed:4 root:1 wave:2 capability:1 il:3 square:1 bright:2 xor:3 degraded:1 correspond:1 directional:1 critically:1 lighting:4 drive:1 unaffected:1 detector:3 ed:1 acquisition:1 pp:8 mi:2 spie:1 gain:5 proved:1 anticipating:1 back:5 levick:2 feed:5 cummings:11 response:5 april:1 furthermore:2 correlation:3 hand:1 sketch:1 horizontal:1 o:1 overlapping:1 mode:1 quality:1 impulse:1 effect:1 ye:1 barlow:2 swing:1 hence:4 moore:1 xft:1 unfilled:1 during:2 width:3 davis:1 steady:1 criterion:1 complete:4 motion:74 gh:18 passive:1 ranging:1 image:2 mt:1 discussed:2 analog:4 accumulate:1 measurement:5 silicon:3 ai:1 focal:5 phototransduction:1 illinois:1 moving:1 stable:1 cortex:2 showed:2 leftward:1 store:1 buffer:1 stumbling:1 der:5 muscle:1 minimum:2 additional:1 care:1 arx:2 signal:2 ii:2 smooth:36 match:2 determination:1 believed:2 escaped:1 laplacian:1 schematic:4 controlled:1 vision:4 cell:4 diagram:3 source:1 finely:1 exhibited:1 pass:1 flow:1 ee:2 ideal:1 vital:1 switch:1 buchsbaum:2 pennsylvania:1 bandwidth:1 perfectly:1 architecture:2 reduce:1 opposite:2 br:1 rod:1 york:2 differentiator:1 ph:1 hardware:2 reduced:1 exist:1 track:2 per:1 discrete:1 vol:8 falling:1 
suny:1 prevent:1 imaging:1 cone:1 letter:1 powerful:1 architectural:1 electronic:1 pursuing:1 consolidate:1 accelerates:1 layer:3 followed:3 correspondence:1 scene:1 speed:27 span:1 min:1 optical:1 consolidates:1 department:1 combination:1 slightly:1 partitioned:1 primate:12 lem:1 hl:1 taken:1 equation:8 detectable:1 mechanism:9 pursuit:39 operation:2 gate:2 slower:1 jn:1 compress:1 include:1 etienne:11 calculating:1 disappear:1 leakage:1 psychophysical:1 move:1 added:1 restored:1 strategy:1 saccadic:1 rt:1 disappearance:1 southern:1 exhibit:1 fovea:1 unstable:1 relationship:1 polarity:3 nc:1 ilw:1 negative:8 implementation:8 design:2 adjustable:1 gated:1 observation:1 finite:1 buffalo:1 incorporated:1 communication:2 intensity:3 required:1 robinson:7 bar:1 indoor:1 oculomotor:1 oj:1 memory:5 charging:1 cea:1 scheme:5 improve:3 eye:31 inversely:2 disappears:2 philadelphia:2 ailed:1 reichardt:2 bayoumi:1 relative:2 proportional:3 versus:1 zeroed:1 principle:1 eckert:2 last:2 dis:1 wide:1 van:5 benefit:1 feedback:8 world:1 sensory:2 projected:3 preprocessing:1 transaction:1 aperture:1 global:1 active:1 photoreceptors:2 alternatively:1 transfer:2 ioo:1 cl:1 domain:1 assured:1 s2:1 noise:3 paul:1 slow:1 vr:1 wiley:1 position:1 perceptual:1 gating:1 er:2 grouping:1 magnitude:7 led:1 logarithmic:2 simply:1 appearance:2 visual:13 prevents:1 expressed:1 tracking:8 hathaway:1 saccade:1 ma:1 pola:6 determined:1 reducing:1 diverging:1 photoreceptor:1 indicating:1 rarely:1 parasitic:1 internal:3 tested:2 handling:1 |
39 | 1,033 | Gradient and Hamiltonian Dynamics
Applied to Learning in Neural Networks
James W. Howse
Chaouki T. Abdallah
Gregory L. Heileman
Department of Electrical and Computer Engineering
University of New Mexico
Albuquerque, NM 87131
Abstract
The process of machine learning can be considered in two stages: model
selection and parameter estimation. In this paper a technique is presented
for constructing dynamical systems with desired qualitative properties. The
approach is based on the fact that an n-dimensional nonlinear dynamical
system can be decomposed into one gradient and (n - 1) Hamiltonian systems. Thus, the model selection stage consists of choosing the gradient and
Hamiltonian portions appropriately so that a certain behavior is obtainable.
To estimate the parameters, a stably convergent learning rule is presented.
This algorithm has been proven to converge to the desired system trajectory
for all initial conditions and system inputs. This technique can be used to
design neural network models which are guaranteed to solve the trajectory
learning problem.
1
Introduction
A fundamental problem in mathematical systems theory is the identification of dynamical systems. System identification is a dynamic analogue of the functional approximation problem. A set of input-output pairs {u(t), y(t)} is given over some time
interval t E [7i, 1j]. The problem is to find a model which for the given input sequence
returns an approximation of the given output sequence. Broadly speaking, solving an
identification problem involves two steps. The first is choosing a class of identification models which are capable of emulating the behavior of the actual system. The
second is selecting a method to determine which member of this class of models best
emulates the actual system. In this paper we present a class of nonlinear models and
a learning algorithm for these models which are guaranteed to learn the trajectories
of an example system. Algorithms to learn given trajectories of a continuous time
system have been proposed in [6], [8], and [7] to name only a few. To our knowledge,
no one has ever proven that the error between the learned and desired trajectories
vanishes for any of these algorithms. In our trajectory learning system this error is
guaranteed to vanish. Our models extend the work in [1] by showing that Cohen's
systems are one instance of the class of models generated by decomposing the dynamics into a component normal to some surface and a set of components tangent to the
same surface. Conceptually this formalism can be used to design dynamical systems
with a variety of desired qualitative properties. Furthermore, we propose a provably
convergent learning algorithm which allows the parameters of Cohen's models to be
learned from examples rather than being programmed in advance. The algorithm is
275
Gradient and Hamiltonian Dynamics Applied to Learning in Neural Networks
convergent in the sense that the error between the model trajectories and the desired trajectories is guaranteed to vanish. This learning procedure is related to one
discussed in [5] for use in linear system identification.
2
Constructing the Model
First some terminology will be defined. For a system of n first order ordinary differential equations, the phase space of the system is the n-dimensional space of all state
components. A solution trajectory is a curve in phase space described by the differential equations for one specific starting point. At every point on a trajectory there
exists a tangent vector. The space of all such tangent vectors for all possible solution
trajectories constitutes the vector field for this system of differential equations.
The trajectory learning models in this paper are systems of first order ordinary differential equations. The form of these equations will be obtained by considering the
system dynamics as motion relative to some surface. At each point in the state space
an arbitrary system trajectory will be decomposed into a component normal to this
surface and a set of components tangent to this surface. This approach was suggested
to us by the results in [4], where it is shown that an arbitrary n-dimensional vector
field can be decomposed locally into the sum of one gradient vector field and (n - 1)
Hamiltonian vector fields. The concept of a potential function will be used to define these surfaces. A potential function V(:z:) is any scalar valued function of the
system states :z: = [Xl, X2, ??? , Xn.] t which is at least twice continuously differentiable
(Le. V(:z:) E or : r ~ 2). The operation [.]t denotes the transpose of the vector. If
there are n components in the system state, the function V{:z:), when plotted with
respect all of the state components, defines a surface in an (n + 1)-dimensional space.
There are two curves passing through every point on this potential surface which are
of interest in this discussion, they are illustrated in Figure 1(a). The dashed curve is
(z - zo)t \7 ... v (z)l ...o = 0
(a)
(b)
V(z) = K-
Figure 1: (a) The potential function V(z) = X~ (Xl _1)2 +x~ plotted versus its two dependent variables Xl and X2. The dashed curve is called a level surface and is given
by V(z) = 0.5. The solid curve follows the path of steepest descent through Zo.
(b) The partitioning of a 3-dimensional vector field at the point Zo into a 1dimensional portion which is normal to the surface V(z) = K- and a 2-dimensional
portion which is tangent to V(z) = K-. The vector -\7 ... V(z) 1"'0 is the normal vector to the surface V(z) = K- at the point Zo. The plane (z - zo)t \7 ... V (z) 1"'0 = 0
contains all of the vectors which are tangent to V(z) = K- at Zo. Two linearly
independent vectors are needed to form a basis for this tangent space, the pair
Q2(z) \7 ... V (z)l ... o and Q3(Z) \7 ... V (z)l ... o that are shown are just one possibility.
referred to as a level surface, it is a surface along which V(:z:) = K for some constant
K. Note that in general this level surface is an n-dimensional object. The solid curve
276
J. W. HOWSE, C. T. ABDALLAH, G. L. HEILEMAN
moves downhill along V (X) following the path of steepest descent through the point
Xo. The vector which is tangent to this curve at Xo is normal to the level surface
at Xo. The system dynamics will be designed as motion relative to the level surfaces
of V(x). The results in [4] require n different local potential functions to achieve
arbitrary dynamics. However, the results in [1] suggest that a considerable number
of dynamical systems can be achieved using only a single global potential function.
A system which is capable of traversing any downhill path along a given potential
surface V(x), can be constructed by decomposing each element of the vector field
into a vector normal to the level surface of V(x) which passes through each point
and a set of vectors tangent to the level surface of V(x) which passes through the
same point. So the potential function V(x) is used to partition the n-dimensional
phase space into two subspaces. The first contains a vector field normal to some
level surface V(x) = }( for }( E IR, while the second subspace holds a vector field
tangent to V(x) = IC. The subspace containing all possible normal vectors to the
n-dimensional level surface at a given point, has dimension one. This is equivalent
to the statement that every point on a smooth surface has a unique normal vector.
Similarly, the subspace containing all possible tangent vectors to the level surface at
a given point has dimension (n - 1). An example of this partition in the case of a
3-dimensional system is shown in Figure 1(b). Since the space of all tangent vectors
at each point on a level surface is (n - I)-dimensional, (n - 1) linearly independent
vectors are required to form a basis for this space.
Mathematically, there is a straightforward way to construct dynamical systems which
either move downhill along V(x) or remain at a constant height on V(x). In this
paper, dynamical systems which always move downhill along some potential surface
are called gradient-like systems. These systems are defined by differential equations
of the form
x = -P(x) VII:V(x),
(1)
where P(x) is a matrix function which is symmetric (Le. pt = P) and positive
:z~]f. These systems
definite at every point x, and where VIII V(x) =
are similar to the gradient flows discussed in [2]. The trajectories of the system
formed by Equation (1) always move downhill along the potential surface defined by
V(x). This can be shown by taking the time derivative of V(x) which is V(x) =
-[VII: V (x)]t P(x) [VII: V(x)] :5 O. Because P(x) is positive definite, V(x) can only be
zero where V II: V (x) = 0, elsewhere V(x) is negative. This means that the trajectories
of Equation (1) always move toward a level surface of V(x) formed by "slicing" V(x)
at a lower height, as pointed out in [2]. It is also easy to design systems which remain
at a constant height on V(x). Such systems will be denoted Hamiltonian-like systems.
They are specified by the equation
x = Q(x) VII: V(x),
(2)
where Q(x) is a matrix function which is skew-symmetric (Le. Qt = -Q) at every
point x. These systems are similar to the Hamiltonian systems defined in [2]. The
elements of the vector field defined by Equation (2) are always tangent to some level
surface of V (x). Hence the trajectories ofthis system remain at a constant height on
the potential surface given by V(x). Again this is indicated by the time derivative
of V(x), which in this case is V(x) = [VII: V(x)]f Q(x)[VII: V(x)] = o. This indicates
that the trajectories of Equation (2) always remain on the level surface on which the
system starts. So a model which can follow an arbitrary downhill path along the
potential surface V(x) can be designed by combining the dynamics of Equations (1)
and (2) . The dynamics in the subspace normal to the level surfaces of V(x) can be
[g;: , g;: ,... ,
Gradient and Hamiltonian Dynamics Applied to Learning in Neural Networks
277
defined using one equation of the form in Equation (1). Similarly the dynamics in the
subspace tangent to the level surfaces of Vex) can be defined using (n - 1) equations
of the form in Equation (2). Hence the total dynamics for the model are
n
z= -P(x)VIDV(x) + LQi(X)VIDV(x).
(3)
i=2
For this model the number and location of equilibria is determined by the function
Vex), while the manner in which the equilibria are approached is determined by the
matrices P(x) and Qi(x).
If the potential function Vex) is bounded below (i.e. Vex) > Bl V x E IRn , where
Bl is a constant), eventually increasing (i.e. limlllDlI-+oo Vex) ~ 00) , and has only
a finite number of isolated local maxima and minima (i.e. in some neighborhood
of every point where V III V (x) = 0 there are no other points where the gradient
vanishes), then the system in Equation (3) satisfies the conditions of Theorem 10
in [1]. Therefore the system will converge to one of the points where V ID Vex) = 0,
called the critical points of Vex), for all initial conditions. Note that this system
is capable of all downhill trajectories along the potential surface only if the (n - 1)
vectors Qi(X) V ID Vex) V i = 2, ... , n are linearly independent at every point x. It
is shown in [1] that the potential function
V(z) = C (
1:., (-y) d-y +
t, [~
(XI - I:.,(xd)'
+~
J:'
1:., h )II:.: (-y)]' d-y
1
(4)
satisfies these three criteria. In this equation ?.i(Xt} Vi = 1, ... , n are interpolation
polynomials, C is a real positive constant, Xi Vi = 1, ... , n are real constants chosen
so that the integrals are positive valued, and ?.Hxt} ==
f:-.
3
The Learning Rule
In Equation (3) the number and location of equilibria can be controlled using the
potential function Vex), while the manner in which the equilibria are approached can
be controlled with the matrices P(x) and Qi(X). If it is assumed that the locations
of the equilibria are known, then a potential function which has local minima and
maxima at these points can be constructed using Equation (4). The problem of
trajectory learning is thereby reduced to the problem of parameterizing the matrices
P(x) and Qi(x) and finding the parameter values which cause this model to best
emulate the actual system. If the elements P(x) and Qi(x) are correctly chosen,
then a learning rule can be designed which makes the model dynamics converge to
that of the actual system. Assume that the dynamics given by Equation (3) are a
parameterized model of the actual dynamics. Using this model and samples of the
actual system states, an estimator for states of the actual system can be designed. The
behavior of the model is altered by changing its parameters, so a parameter estimator
must also be constructed. The following theorem provides a form for both the state
and parameter estimators which guarantees convergence to a set of parameters for
which the error between the estimated and target trajectories vanishes.
Theorem 3.1. Given the model system
k
Z = LAili(x) +Bg(u)
(5)
i=l
where Ai E IRnxn and BE IRnxm are unknown, and li(') and g(.) are known smooth
functions such that the system has bounded solutions for bounded inputs u(t). Choose
J. W. HOWSE, C. T. ABDALLAH, G. L. HEILEMAN
278
a state estimator of the form
k
~ = 'R. B (x - x) +
L Ai fi(x) + iJ g(u)
(6)
i=1
where'R. B is an (n x n) matrix of real constants whose eigenvalues must all be in the
left half plane, and Ai and iJ are the estimates of the actual parameters. Choose
parameter estimators of the form
~
t
Ai = -'R.p (x - x) [fi(x)] V i = 1, ... , k
(7)
= -'R.p (x - x) [g(u)]t
B
where 'R. p is an (n x n) matrix of real constants which is symmetric and positive
definite, and (x - x) [.]t denotes an outer product. For these choices of state and
parameter estimators limt~oo(x(t) -x(t? = 0 for all initial conditions. Furthermore,
this remains true if any of the elements of Ai or iJ are set to 0, or if any of these
matrices are restricted to being symmetric or skew-symmetric.
The proof of this theorem appears in [3]. Note that convergence of the parameter
estimates to the actual parameter values is not guaranteed by this theorem. The
model dynamics in Equation (3) can be cast in the form of Equation (5) by choosing
each element of P(x) and Qi(X) to have the form
I-I
n
n
I-I
= LL~rBjkt?k(Xj)
and
QrB = LLArBjk ek(Xj),
(8)
j=1 k=O
j=1 k=O
where {t?o(Xj), t?1 (Xj), ... ,t?I-1 (Xj)} and {eo(Xj), el (Xj), ... ,el-l (Xj)} are a set of 1
orthogonal polynomials which depend on the state Xj' There is a set of such polynomials for every state Xj, j = 1,2, ... , n. The constants ~rBjk and ArBjk determine
the contribution of the kth polynomial which depends on the jth state to the value
of Prs and Qrs respectively. In this case the dynamics in Equation (3) become
PrB
:i:
=
t. ~ {
S;. [11.(x;) V. V (z)j
+
t,
A;;. [e;.(x;)
v. V(z)j } + T g(u(t))
(9)
where $\Phi_{jk}$ is the (n x n) matrix of all values $\Phi_{rsjk}$ which have the same value of j and
k. Likewise $\Lambda_{ijk}$ is the (n x n) matrix of all values $\Lambda_{rsjk}$, having the same value of
j and k, which are associated with the ith matrix $Q_i(x)$. This system has m inputs,
which may explicitly depend on time, that are represented by the m-element vector
function u(t). The m-element vector function g(.) is a smooth, possibly nonlinear,
transformation of the input function. The matrix $\Upsilon$ is an (n x m) parameter matrix
which determines how much input $s \in \{1, \ldots, m\}$ affects state $r \in \{1, \ldots, n\}$.
Appropriate state and parameter estimators can be designed based on Equations (6)
and (7) respectively.
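To make the estimator equations concrete, the following is a minimal sketch (not code from the paper) that integrates the reconstructed Equations (5)-(7) with a forward-Euler scheme. The two-state plant, the basis functions f_i, the input, the gain matrices R_B and R_P, and the step size are all illustrative assumptions.

```python
import numpy as np

# Forward-Euler sketch of the state/parameter estimators of Theorem 3.1.
# The plant below (A_true, f, g) is a stand-in chosen for illustration only.
n, k = 2, 2
f = [lambda x: np.tanh(x), lambda x: x]          # known smooth basis functions f_i
g = lambda u: np.array([u, 0.0])                 # known input transformation g(u)
A_true = [np.array([[-1.0, 0.5], [-0.5, -1.0]]), np.array([[0.0, 0.2], [-0.2, 0.0]])]
B_true = np.eye(n)

R_B = -2.0 * np.eye(n)        # eigenvalues in the left half plane
R_P = 0.5 * np.eye(n)         # symmetric positive definite

dt, T = 1e-3, 20.0
x = np.zeros(n)                                   # actual state
x_hat = np.zeros(n)                               # state estimate
A_hat = [np.zeros((n, n)) for _ in range(k)]      # parameter estimates
B_hat = np.zeros((n, n))

for step in range(int(T / dt)):
    u = np.sin(0.5 * step * dt)                   # an arbitrary bounded input
    # Actual system, Equation (5): x_dot = sum_i A_i f_i(x) + B g(u)
    x_dot = sum(A_true[i] @ f[i](x) for i in range(k)) + B_true @ g(u)
    # State estimator, Equation (6)
    e = x_hat - x
    xh_dot = R_B @ e + sum(A_hat[i] @ f[i](x_hat) for i in range(k)) + B_hat @ g(u)
    # Parameter estimators, Equation (7): outer products of the state error
    A_dot = [-R_P @ np.outer(e, f[i](x_hat)) for i in range(k)]
    B_dot = -R_P @ np.outer(e, g(u))
    x, x_hat = x + dt * x_dot, x_hat + dt * xh_dot
    A_hat = [A_hat[i] + dt * A_dot[i] for i in range(k)]
    B_hat = B_hat + dt * B_dot

print("final state error:", np.linalg.norm(x - x_hat))
```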
4 Simulation Results
Now an example is presented in which the parameters of the model in Equation (9)
are trained, using the learning rule in Equations (6) and (7), on one input signal and
then are tested on a different input signal. The actual system has three equilibrium
points, two stable points located at (1,3) and (3,5), and a saddle point located at
(2 - ~,4 + ~). In this example the dynamics of both the actual system and the
model are given by
$$\begin{pmatrix}\dot{z}_1\\ \dot{z}_2\end{pmatrix} =
\begin{pmatrix}\varphi_1 + \varphi_2 z_1 + \varphi_3 z_2 & 0\\ 0 & \varphi_4 + \varphi_5 z_1 + \varphi_6 z_2\end{pmatrix}
\begin{pmatrix}\frac{\partial V}{\partial z_1}\\ \frac{\partial V}{\partial z_2}\end{pmatrix}
+ \begin{pmatrix}0 & -(\varphi_7 + \varphi_8 z_1 + \varphi_9 z_2)\\ \varphi_7 + \varphi_8 z_1 + \varphi_9 z_2 & 0\end{pmatrix}
\begin{pmatrix}\frac{\partial V}{\partial z_1}\\ \frac{\partial V}{\partial z_2}\end{pmatrix}
+ \begin{pmatrix}\varphi_{10}\\ 0\end{pmatrix} u(t) \qquad (10)$$
where V(x) is defined in Equation (4) and u(t) is a time-varying input. For the actual
system the parameter values were φ1 = φ4 = -4, φ2 = φ5 = -2, φ3 = φ6 = -1,
φ7 = 1, φ8 = 3, φ9 = 5, and φ10 = 1. In the model the 10 elements φi are
treated as the unknown parameters which must be learned. Note that the first matrix
function is positive definite if the parameters φ1-φ6 are all negative valued. The
second matrix function is skew-symmetric for all values of φ7-φ9. The two input
signals used for training and testing were Ul = 10000 (sin! 1000t + sin ~ 1000t) and
u2 = 5000 sin 1000t. The phase space responses of the actual system to the inputs u1
and u2 are shown by the solid curves in Figures 3(b) and 3(a), respectively. Notice that
both of these inputs produce a periodic attractor in the phase space of Equation (10).
In order to evaluate the effectiveness of the learning algorithm the Euclidean distance
between the actual and learned state and parameter values was computed and plotted
versus time. The results are shown in Figure 2. Figure 2(a) shows these statistics when
[Figure 2 plots: {‖Δz‖, ‖Δφ‖} versus time t, panels (a) and (b); see the caption below.]
Figure 2: (a) The state and parameter errors for training using input signal u1. The solid
curve is the Euclidean distance between the state estimates and the actual states
as a function of time. The dashed curve shows the distance between the estimated
and actual parameter values versus time.
(b) The state and parameter errors for training using input signal u2.
training with input u1, while Figure 2(b) shows the same statistics for input u2. The
solid curves are the Euclidean distance between the learned and actual system states,
and the dashed curves are the distance between the learned and actual parameter
values. These statistics have two noteworthy features. First, the error between the
learned and desired states quickly converges to very small values, regardless of how
well the actual parameters are learned. This result was guaranteed by Theorem 3.1.
Second, the final error between the learned and desired parameters is much lower when
the system is trained with input u1. Intuitively this is because input u1 excites more
frequency modes of the system than input u2. Recall that in a nonlinear system the
frequency modes excited by a given input do not depend solely on the input because
the system can generate frequencies not present in the input. The quality of the
learned parameters can be qualitatively judged by comparing the phase plots using
the learned and actual parameters for each input, as shown in Figure 3. In Figure 3(a)
the system was trained using input u1 and tested with input u2, while in Figure 3(b)
the situation was reversed. The solid curves are the system response using the actual
parameter values, and the dashed curves are the response for the learned parameters.
The Euclidean distance between the target and test trajectories in Figure 3(a) is in
the range (0,0.64) with a mean distance of 0.21 and a standard deviation of 0.14. The
distance between the target and test trajectories in Figure 3(b) is in the range
(0,4.53) with a mean distance of 0.98 and a standard deviation of 1.35. Qualitatively,
both sets of learned parameters give an accurate response for non-training inputs.
Figure 3: (a) A phase plot of the system response when trained with input u1 and tested
with input u2. The solid line is the response to the test input using the actual
parameters. The dotted line is the system response using the learned parameters.
(b) A phase plot of the system response when trained with input u2 and tested
with input u1.
Note that even when the error between the learned and actual parameters is large,
the periodic attractor resulting from the learned parameters appears to have the same
"shape" as that for the actual parameters.
5 Conclusion
We have presented a conceptual framework for designing dynamical systems with
specific qualitative properties by decomposing the dynamics into a component normal
to some surface and a set of components tangent to the same surface. We have
presented a specific instance of this class of systems which converges to one of a finite
number of equilibrium points. By parameterizing these systems, the manner in which
these equilibrium points are approached can be fitted to an arbitrary data set. We
present a learning algorithm to estimate these parameters which is guaranteed to
converge to a set of parameter values for which the error between the learned and
desired trajectories vanishes.
Acknowledgments
This research was supported by a grant from Boeing Computer Services under Contract
W-300445. The authors would like to thank Vangelis Coutsias, Tom Caudell, and Bill
Home for stimulating discussions and insightful suggestions.
References
[1] M.A. Cohen. The construction of arbitrary stable dynamics in nonlinear neural networks.
Neural Networks, 5(1):83-103, 1992.
[2] M.W. Hirsch and S. Smale. Differential equations, dynamical systems, and linear algebra,
volume 60 of Pure and Applied Mathematics. Academic Press, Inc., San Diego, CA, 1974.
[3] J.W. Howse, C.T. Abdallah, and G.L. Heileman. A gradient-hamiltonian decomposition
for designing and learning dynamical systems. Submitted to Neural Computation, 1995.
[4] R.V. Mendes and J.T. Duarte. Decomposition of vector fields and mixed dynamics.
Journal of Mathematical Physics, 22(7):1420-1422, 1981.
[5] K.S. Narendra and A.M. Annaswamy. Stable adaptive systems. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1989.
[6] B.A. Pearlmutter. Learning state space trajectories in recurrent neural networks. Neural
Computation, 1(2):263-269, 1989.
[7] D. Saad. Training recurrent neural networks via trajectory modification. Complex Systems, 6(2) :213-236, 1992.
[8] M.-A. Sato. A real time learning algorithm for recurrent analog neural networks. Biological Cybernetics, 62(2):237-241, 1990.
| 1033 |@word polynomial:4 simulation:1 decomposition:2 excited:1 thereby:1 solid:7 irnxn:1 initial:3 contains:2 selecting:1 z2:6 comparing:1 must:3 zll:2 partition:2 shape:1 designed:5 plot:3 half:1 p7:3 plane:2 steepest:2 hamiltonian:10 ith:1 provides:1 location:3 height:4 mathematical:2 along:8 constructed:3 differential:6 become:1 qualitative:3 consists:1 manner:3 p8:1 behavior:3 pio:1 decomposed:3 p9:2 actual:24 considering:1 increasing:1 bounded:3 q2:1 finding:1 transformation:1 nj:1 guarantee:1 every:8 xd:1 partitioning:1 grant:1 positive:6 service:1 engineering:1 local:3 annaswamy:1 heileman:5 id:2 cliff:1 path:4 interpolation:1 noteworthy:1 solely:1 twice:1 programmed:1 range:2 unique:1 acknowledgment:1 testing:1 definite:4 procedure:1 suggest:1 selection:2 judged:1 prentice:1 equivalent:1 bill:1 straightforward:1 regardless:1 starting:1 pure:1 slicing:1 rule:4 parameterizing:2 estimator:7 pt:1 target:3 construction:1 diego:1 designing:2 element:8 jk:1 located:2 electrical:1 vanishes:4 ui:5 dynamic:22 trained:5 depend:3 solving:1 algebra:1 basis:2 emulate:1 represented:1 zo:6 approached:3 choosing:3 neighborhood:1 whose:1 solve:1 valued:3 statistic:3 final:1 sequence:2 differentiable:1 eigenvalue:1 propose:1 product:1 p4:1 combining:1 achieve:1 convergence:2 p:2 produce:1 converges:2 object:1 oo:2 recurrent:3 excites:1 ij:3 qt:1 p2:1 involves:1 pii:2 require:1 biological:1 mathematically:1 hold:1 considered:1 ic:1 normal:11 hall:1 equilibrium:8 narendra:1 estimation:1 always:5 rather:1 varying:1 q3:1 indicates:1 sense:1 duarte:1 dependent:1 el:2 irn:1 provably:1 denoted:1 field:10 construct:1 having:1 constitutes:1 few:1 phase:8 attractor:2 interest:1 englewood:1 possibility:1 accurate:1 integral:1 capable:3 traversing:1 orthogonal:1 euclidean:4 desired:8 plotted:3 isolated:1 fitted:1 instance:2 formalism:1 ordinary:2 deviation:2 periodic:2 gregory:1 fundamental:1 contract:1 physic:1 continuously:1 quickly:1 caudell:1 again:1 nm:1 containing:2 choose:2 possibly:1 ek:1 derivative:2 return:1 li:1 potential:17 inc:2 explicitly:1 vi:2 bg:1 depends:1 portion:3 start:1 contribution:1 formed:2 ir:1 emulates:1 likewise:1 conceptually:1 identification:5 albuquerque:1 trajectory:25 cybernetics:1 submitted:1 frequency:3 james:1 proof:1 associated:1 prb:1 recall:1 knowledge:1 ut:1 obtainable:1 appears:2 follow:1 lqi:1 response:8 tom:1 furthermore:2 just:1 stage:2 p6:2 nonlinear:5 defines:1 mode:2 stably:1 quality:1 indicated:1 name:1 effect:1 concept:1 true:1 hence:2 symmetric:6 illustrated:1 ll:1 sin:3 criterion:1 pearlmutter:1 motion:2 fi:2 functional:1 cohen:3 volume:1 extend:1 discussed:2 analog:1 ai:5 mathematics:1 similarly:2 pointed:1 stable:3 surface:36 certain:1 minimum:2 eo:1 converge:4 determine:2 dashed:5 ii:4 signal:5 smooth:3 academic:1 controlled:2 qi:7 limt:1 achieved:1 interval:1 abdallah:5 appropriately:1 saad:1 pass:2 member:1 flow:1 effectiveness:1 iii:1 easy:1 variety:1 xj:10 zi:1 ul:3 speaking:1 passing:1 cause:1 locally:1 reduced:1 generate:1 notice:1 dotted:1 estimated:2 correctly:1 broadly:1 terminology:1 changing:1 sum:1 parameterized:1 p3:1 home:1 vex:9 guaranteed:7 convergent:3 howse:5 sato:1 x2:2 department:1 remain:4 qrs:1 modification:1 intuitively:1 restricted:1 pr:1 xo:3 equation:30 remains:1 skew:3 eventually:1 needed:1 decomposing:3 operation:1 appropriate:1 denotes:2 bl:2 move:5 gradient:11 kth:1 subspace:6 distance:9 reversed:1 thank:1 outer:1 toward:1 viii:1 mexico:1 statement:1 smale:1 negative:2 boeing:1 design:3 unknown:2 finite:2 descent:2 situation:1 
emulating:1 ever:1 arbitrary:6 pair:2 required:1 specified:1 cast:1 z1:2 learned:17 suggested:1 dynamical:10 below:1 analogue:1 critical:1 treated:1 altered:1 tangent:15 relative:2 mixed:1 suggestion:1 proven:2 versus:3 pi:3 elsewhere:1 supported:1 transpose:1 jth:1 taking:1 curve:14 dimension:2 xn:1 author:1 qualitatively:2 san:1 global:1 hirsch:1 conceptual:1 assumed:1 xi:2 continuous:1 learn:2 ca:1 complex:1 constructing:2 linearly:3 referred:1 downhill:7 xl:4 vanish:2 theorem:6 specific:3 xt:1 showing:1 insightful:1 exists:1 ofthis:1 hxt:1 vii:6 saddle:1 scalar:1 u2:8 satisfies:2 determines:1 stimulating:1 considerable:1 determined:2 called:3 total:1 ijk:1 evaluate:1 tested:4 mendes:1 |
40 | 1,034 | Is Learning The n-th Thing Any Easier Than
Learning The First?
Sebastian Thrun I
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213-3891
World Wide Web: http://www.cs.cmu.edu/~thrun
Abstract
This paper investigates learning in a lifelong context. Lifelong learning
addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge
across multiple learning tasks, in order to generalize more accurately from
less training data. In this paper, several different approaches to lifelong
learning are described, and applied in an object recognition domain. It
is shown that across the board, lifelong learning approaches generalize
consistently more accurately from less training data, by their ability to
transfer knowledge across learning tasks.
1 Introduction
Supervised learning is concerned with approximating an unknown function based on examples. Virtually all current approaches to supervised learning assume that one is given a set
of input-output examples, denoted by X, which characterize an unknown function, denoted
by f. The target function f is drawn from a class of functions, F, and the learner is given a
space of hypotheses, denoted by H, and an order (preference/prior) with which it considers
them during learning. For example, H might be the space of functions represented by an
artificial neural network with different weight vectors.
While this formulation establishes a rigid framework for research in machine learning, it
dismisses important aspects that are essential for human learning. Psychological studies
have shown that humans often employ more than just the training data for generalization.
They are often able to generalize correctly even from a single training example [2, 10]. One
of the key aspects of the learning problem faced by humans, which differs from the vast
majority of problems studied in the field of neural network learning, is the fact that humans
encounter a whole stream of learning problems over their entire lifetime. When faced with
a new thing to learn, humans can usually exploit an enormous amount of training data and
1 Also affiliated with: Institut für Informatik III, Universität Bonn, Römerstr. 164, Germany
experiences that stem from other, related learning tasks. For example, when learning to drive
a car, years of learning experience with basic motor skills, typical traffic patterns, logical
reasoning, language and much more precede and influence this learning task. The transfer of
knowledge across learning tasks seems to play an essential role for generalizing accurately,
particularly when training data is scarce.
A framework for the study of the transfer of knowledge is the lifelong learning framework.
In this framework, it is assumed that a learner faces a whole collection of learning problems
over its entire lifetime. Such a scenario opens the opportunity for synergy. When facing its
n-th learning task, a learner can re-use knowledge gathered in its previous n - 1 learning
tasks to boost the generalization accuracy.
In this paper we will be interested in the most simple version of the lifelong learning problem,
in which the learner faces a family of concept learning tasks. More specifically, the functions
to be learned over the lifetime of the learner, denoted by f1, f2, f3, ... ∈ F, are all of the type
f : I → {0, 1} and sampled from F. Each function f ∈ {f1, f2, f3, ...} is an indicator
function that defines a particular concept: a pattern x ∈ I is a member of this concept if
and only if f(x) = 1. When learning the n-th indicator function, fn, the training set X
contains examples of the type (x, fn(x)) (which may be distorted by noise). In addition to
the training set, the learner is also given n − 1 sets of examples of other concept functions,
denoted by Xk (k = 1, ..., n − 1). Each Xk contains training examples that characterize
fk. Since this additional data is desired to support learning fn, Xk is called a support set
for the training set X.
An example of the above is the recognition of faces [5, 7]. When learning to recognize the
n-th person, say fBob, the learner is given a set of positive and negative examples of face
images of this person. In lifelong learning, it may also exploit training information stemming
from other persons, such as f ∈ {fRich, fMike, fDave, ...}. The support sets usually cannot be
used directly as training patterns when learning a new concept, since they describe different
concepts (hence have different class labels). However, certain features (like the shape of the
eyes) are more important than others (like the facial expression, or the location of the face
within the image). Once the invariances of the domain are learned, they can be transferred
to new learning tasks (new people) and hence improve generalization.
To illustrate the potential importance of related learning tasks in lifelong learning, this
paper does not present just one particular approach to the transfer of knowledge. Instead,
it describes several, all of which extend conventional memory-based or neural network
algorithms. These approaches are compared with more traditional learning algorithms, i.e.,
those that do not transfer knowledge. The goal of this research is to demonstrate that,
independent of a particular learning approach, more complex functions can be learned from
less training data iflearning is embedded into a lifelong context.
2 Memory-Based Learning Approaches
Memory-based algorithms memorize all training examples explicitly and interpolate them
at query-time. We will first sketch two simple, well-known approaches to memory-based
learning, then propose extensions that take the support sets into account.
2.1 Nearest Neighbor and Shepard's Method
Probably the most widely used memory-based learning algorithm is K-nearest neighbor
(KNN) [15]. Suppose x is a query pattern for which we would like to know the output y.
KNN searches the set of training examples X for those K examples (xi, yi) ∈ X whose
input patterns xi are nearest to x (according to some distance metric, e.g., the Euclidean
distance). It then returns the mean output value (1/K) Σ yi of these K nearest neighbors.
Another commonly used method, which is due to Shepard [13], averages the output values
of all training examples but weights each example according to the inverse distance to the
query point x:
$$\hat{y} \;=\; \left(\sum_{(x_i,y_i)\in X}\frac{y_i}{\|x - x_i\| + \epsilon}\right)
\cdot\left(\sum_{(x_i,y_i)\in X}\frac{1}{\|x - x_i\| + \epsilon}\right)^{-1} \qquad (1)$$
Here ε > 0 is a small constant that prevents division by zero. Plain memory-based learning
uses exclusively the training set X for learning. There is no obvious way to incorporate the
support sets, since they carry the wrong class labels.
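As a concrete reading of Equation (1), here is a minimal Python sketch of the Shepard-weighted prediction, with plain K-nearest neighbor shown for comparison; the toy data, the Euclidean metric, and the value of ε are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def knn_predict(X, Y, x, K=3):
    """Mean output of the K training examples closest to query x."""
    d = np.linalg.norm(X - x, axis=1)
    nearest = np.argsort(d)[:K]
    return Y[nearest].mean()

def shepard_predict(X, Y, x, eps=1e-6):
    """Distance-weighted average of all training outputs, Equation (1)."""
    d = np.linalg.norm(X - x, axis=1)
    w = 1.0 / (d + eps)
    return np.sum(w * Y) / np.sum(w)

# Toy usage: a handful of 2-D patterns with binary concept labels.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
Y = np.array([0.0, 0.0, 1.0, 1.0])
query = np.array([0.8, 0.9])
print(knn_predict(X, Y, query, K=3), shepard_predict(X, Y, query))
```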
2.2 Learning A New Representation
The first modification of memory-based learning proposed in this paper employs the support
sets to learn a new representation of the data. More specifically, the support sets are employed
to learn a function, denoted by g : I → I′, which maps input patterns in I to a new space,
I′. This new space I′ forms the input space for a memory-based algorithm.
Obviously, the key property of a good data representation is that multiple examples of a
single concept should have a similar representation, whereas the representation of an example
and a counterexample of a concept should be more different. This property can directly be
transformed into an energy function for g:
$$E \;:=\; \sum_{k=1}^{n-1}\;\sum_{(x,\,y=1)\in X_k}\left(\sum_{(x',\,y'=1)\in X_k}\|g(x)-g(x')\|
\;-\; \sum_{(x',\,y'=0)\in X_k}\|g(x)-g(x')\|\right) \qquad (2)$$
Adjusting g to minimize E forces the distance between pairs of examples of the same
concept to be small, and the distance between an example and a counterexample of a concept
to be large. In our implementation, g is realized by a neural network and trained using the
Back-Propagation algorithm [12].
Notice that the new representation, g, is obtained through the support sets. Assuming that
the learned representation is appropriate for new learning tasks, standard memory-based
learning can be applied using this new representation when learning the n-th concept.
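To illustrate how Equation (2) could be evaluated for a candidate mapping g, the sketch below computes the energy directly from a list of support sets; the toy support sets and the linear stand-in for g are assumptions, and the sign convention follows the reconstruction of Equation (2) above (same-concept distances enter positively, example/counterexample distances negatively).

```python
import numpy as np

def representation_energy(support_sets, g):
    """Energy of Equation (2): small when examples of the same concept map close
    together and examples/counterexamples map far apart."""
    E = 0.0
    for Xk in support_sets:                      # one support set per earlier task
        pos = [x for x, y in Xk if y == 1]
        neg = [x for x, y in Xk if y == 0]
        for x in pos:
            for xp in pos:
                E += np.linalg.norm(g(x) - g(xp))
            for xn in neg:
                E -= np.linalg.norm(g(x) - g(xn))
    return E

# Toy usage with a hand-picked linear map standing in for the learned network g.
W = np.array([[1.0, 0.0], [0.0, 0.1]])           # assumed projection for illustration
g = lambda x: W @ np.asarray(x)
support_sets = [
    [([0.0, 0.3], 1), ([0.1, 0.9], 1), ([1.0, 0.5], 0)],
    [([0.9, 0.1], 1), ([1.0, 0.8], 1), ([0.0, 0.4], 0)],
]
print(representation_energy(support_sets, g))
```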
2.3 Learning A Distance Function
An alternative way for exploiting support sets to improve memory-based learning is to learn
a distance function [3, 9]. This approach learns a function d : I × I → [0, 1] which accepts
two input patterns, say x and x′, and outputs whether x and x′ are members of the same
concept, regardless of what the concept is. Training examples for d are
((x, x′), 1)   if y = y′ = 1
((x, x′), 0)   if (y = 1 and y′ = 0) or (y = 0 and y′ = 1).
They are derived from pairs of examples (x, y), (x′, y′) ∈ Xk taken from a single support
set Xk (k = 1, ..., n − 1). In our implementation, d is an artificial neural network trained
with Back-Propagation. Notice that the training examples for d lack information concerning
the concept for which they were originally derived. Hence, all support sets can be used to
train d. After training, d can be interpreted as the probability that two patterns x, x′ ∈ I are
examples of the same concept.
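The training pairs for d can be enumerated mechanically from each support set, as sketched below; the toy support sets are assumptions, and pairs are never formed across different support sets, matching the description above.

```python
def distance_training_pairs(support_sets):
    """Build ((x, x'), label) pairs for the distance function d from the support sets."""
    pairs = []
    for Xk in support_sets:                       # pairs never mix different support sets
        for x, y in Xk:
            for xp, yp in Xk:                     # self-pairs are included for simplicity
                if y == 1 and yp == 1:
                    pairs.append(((x, xp), 1))    # both are examples of the same concept
                elif (y == 1 and yp == 0) or (y == 0 and yp == 1):
                    pairs.append(((x, xp), 0))    # example paired with a counterexample
    return pairs

# Toy usage with two tiny support sets of (pattern, label) tuples.
support_sets = [
    [("img_a1", 1), ("img_a2", 1), ("img_b1", 0)],
    [("img_c1", 1), ("img_d1", 0)],
]
print(len(distance_training_pairs(support_sets)))
```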
Once trained, d can be used as a generalized distance function for a memory-based approach.
Suppose one is given a training set X and a query point x ∈ I. Then, for each positive
example (x′, y′ = 1) ∈ X, d(x, x′) can be interpreted as the probability that x is a member
of the target concept. Votes from multiple positive examples (x1, 1), (x2, 1), ... ∈ X are
combined using Bayes' rule, yielding
$$\mathrm{Prob}(f_n(x) = 1) \;:=\; 1 - \left(1 + \prod_{(x',\,y'=1)\in X}\frac{d(x,x')}{1 - d(x,x')}\right)^{-1} \qquad (3)$$
Notice that d is not a distance metric. It generalizes the notion of a distance metric, because
the triangle inequality need not hold, and because an example of the target concept x′ can
provide evidence that x is not a member of that concept (if d(x, x') < 0.5).
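Given a trained d, the votes of the positive training examples can be combined as in Equation (3); the sketch below assumes d is available as a Python callable with values in (0, 1) and clips its output to keep the product finite, which is an implementation choice not taken from the paper.

```python
import numpy as np

def prob_member(x, training_set, d, eps=1e-6):
    """Combine votes of positive examples with Bayes' rule, Equation (3)."""
    odds = 1.0
    for xp, yp in training_set:
        if yp == 1:
            p = float(np.clip(d(x, xp), eps, 1.0 - eps))   # keep the ratio finite
            odds *= p / (1.0 - p)
    return 1.0 - 1.0 / (1.0 + odds)

# Toy usage: a stand-in distance function based on Euclidean proximity.
d = lambda a, b: np.exp(-np.linalg.norm(np.asarray(a) - np.asarray(b)))
training_set = [([0.0, 0.0], 1), ([0.2, 0.1], 1), ([1.0, 1.0], 0)]
print(prob_member([0.1, 0.0], training_set, d))
```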
3 Neural Network Approaches
To make our comparison more complete, we will now briefly describe approaches that rely
exclusively on artificial neural networks for learning fn.
3.1 Back-Propagation
Standard Back-Propagation can be used to learn the indicator function fn, using X as training
set. This approach does not employ the support sets, hence is unable to transfer knowledge
across learning tasks.
3.2 Learning With Hints
Learning with hints [1, 4, 6, 16] constructs a neural network with n output units, one for
each function fk (k = 1, 2, ..., n). This network is then trained to simultaneously minimize
the error on both the support sets {Xk} and the training set X. By doing so, the internal
representation of this network is not only determined by X but also shaped through the
support sets {Xk}. If similar internal representations are required for all functions fk
(k = 1, 2, ..., n), the support sets provide additional training examples for the internal
representation.
3.3 Explanation-Based Neural Network Learning
The last method described here uses the explanation-based neural network learning algorithm (EBNN), which was originally proposed in the context of reinforcement learning
[8, 17]. EBNN trains an artificial neural network, denoted by h : I → [0, 1], just like
Back-Propagation. However, in addition to the target values given by the training set X,
EBNN estimates the slopes (tangents) of the target function fn for each example in X. More
specifically, training examples in EBNN are of the sort (x, fn(x), ∇x fn(x)), which are fit
using the Tangent-Prop algorithm [14]. The input x and target value fn(x) are taken from
the training set X. The third term, the slope ∇x fn(x), is estimated using the learned distance
function d described above. Suppose (x′, y′ = 1) ∈ X is a (positive) training example.
Then, the function dx′ : I → [0, 1] with dx′(z) := d(z, x′) maps a single input pattern to
[0, 1], and is an approximation to fn. Since d(z, x′) is represented by a neural network and
neural networks are differentiable, the gradient ∂dx′(z)/∂z is an estimate of the slope of fn
at z. Setting z := x yields the desired estimate of ∇x fn(x). As stated above, both the target
value fn(x) and the slope vector ∇x fn(x) are fit using the Tangent-Prop algorithm for each
training example x ∈ X.
The slope ∇x fn provides additional information about the target function fn. Since d is
learned using the support sets, the EBNN approach transfers knowledge from the support sets
to the new learning task. EBNN relies on the assumption that d is accurate enough to yield
helpful sensitivity information. However, since EBNN fits both training patterns (values)
and slopes, misleading slopes can be overridden by training examples. See [17] for a more
detailed description of EBNN and further references.
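One way to picture the slope extraction is to differentiate z ↦ d(z, x′) at the query point. In the paper d is a neural network and the gradient is available analytically; the sketch below instead uses central finite differences on a smooth stand-in for d, so both the stand-in and the step size are illustrative assumptions.

```python
import numpy as np

def slope_estimate(d, x, x_pos, h=1e-5):
    """Estimate the slope of f_n at x as the gradient of z -> d(z, x_pos) at z = x,
    using central finite differences (the paper differentiates the network d directly)."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (d(x + step, x_pos) - d(x - step, x_pos)) / (2.0 * h)
    return grad

# Toy usage: a smooth stand-in for the learned distance function d.
d = lambda a, b: float(np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2)))
x_query, x_positive = [0.2, 0.4], [0.0, 0.0]
print(slope_estimate(d, x_query, x_positive))   # approximates grad_x f_n(x_query)
```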
4 Experimental Results
All approaches were tested using a database of color camera images of different objects
(see Fig. 3.3). Each of the objects in the database has a distinct color or size. The n-th
Figure 1: The support sets were compiled out of a hundred images of a bottle, a hat, a hammer,
a coke can, and a book. The n-th learning task involves distinguishing the shoe from the
sunglasses. Images were subsampled to a 100x100 pixel matrix (each pixel has a color,
saturation, and a brightness value), shown on the right side.
learning task was the recognition of one of these objects, namely the shoe. The previous
n - 1 learning tasks correspond to the recognition of five other objects, namely the bottle,
the hat, the hammer, the coke can, and the book. To ensure that the latter images could
not be used simply as additional training data for fn, the only counterexample of the shoe
was the seventh object, the sunglasses. Hence, the training set for fn contained images of
the shoe and the sunglasses, and the support sets contained images of the other five objects.
The object recognition domain is a good testbed for the transfer of knowledge in lifelong
learning. This is because finding a good approximation to fn involves recognizing the target
object invariant of rotation, translation, scaling in size, change of lighting and so on. Since
these invariances are common to all object recognition tasks, images showing other objects
can provide additional information and boost the generalization accuracy.
Transfer of knowledge is most important when training data is scarce. Hence, in an initial
experiment we tested all methods using a single image of the shoe and the sunglasses only.
Those methods that are able to transfer knowledge were also provided 100 images of each
of the other five objects. The results are intriguing. The generalization accuracies
              KNN      Shepard   repr. g+Shep.   distance d   Back-Prop   hints     EBNN
              60.4%    60.4%     74.4%           75.2%        59.7%       62.1%     74.8%
              ±8.3%    ±8.3%     ±18.5%          ±18.9%       ±9.0%       ±10.2%    ±11.1%
illustrate that all approaches that transfer knowledge (printed in bold font) generalize significantly better than those that do not. With the exception of the hint learning technique,
the approaches can be grouped into two categories: Those which generalize approximately
60% of the testing set correctly, and those which achieve approximately 75% generalization accuracy. The former group contains the standard supervised learning algorithms, and
the latter contains the "new" algorithms proposed here, which are capable of transferring
knowledge. The differences within each group are statistically not significant, while the
differences between them are (at the 95% level). Notice that random guessing classifies 50%
of the testing examples correctly.
These results suggest that the generalization accuracy merely depends on the particular
choice of the learning algorithm (memory-based vs. neural networks). Instead, the main
factor determining the generalization accuracy is the fact whether or not knowledge is
transferred from past learning tasks.
[Figure 2 plots: generalization accuracy (50%-95%) versus the number of training examples (2-20) for the distance function d, Shepard's method with representation g, Shepard's method, and Back-Propagation.]
Figure 2: Generalization accuracy as a function of training examples, measured on an
independent test set and averaged over 100 experiments. 95%-confidence bars are also
displayed.
What happens as more training data arrives? Fig. 2 shows generalization curves with
increasing numbers of training examples for some of these methods. As the number of
training examples increases, prior knowledge becomes less important. After presenting 20
training examples, the results
              KNN      Shepard   repr. g+Shep.   distance d   Back-Prop   hints       EBNN
              81.0%    70.5%     81.7%           87.3%        88.4%       n. avail.   90.8%
              ±3.4%    ±4.9%     ±2.7%           ±0.9%        ±2.5%                   ±2.7%
illustrate that some of the standard methods (especially Back-Propagation) generalize about
as accurately as those methods that exploit support sets. Here the differences in the underlying
learning mechanisms become more dominant. However, when comparing lifelong learning
methods with their corresponding standard approaches, the latter ones are still inferior: Back-Propagation (88.4%) is outperformed by EBNN (90.8%), and Shepard's method (70.5%)
generalizes less accurately when the representation is learned (81.7%) or when the distance
function is learned (87.3%). All these differences are significant at the 95% confidence level.
5 Discussion
The experimental results reported in this paper provide evidence that learning becomes easier
when embedded in a lifelong learning context. By transferring knowledge across related
learning tasks, a learner can become "more experienced" and generalize better. To test
this conjecture in a more systematic way, a variety of learning approaches were evaluated
and compared with methods that are unable to transfer knowledge. It is consistently found
that lifelong learning algorithms generalize significantly more accurately, particularly when
training data is scarce.
Notice that these results are well in tune with other results obtained by the author. One of
the approaches here, EBNN, has extensively been studied in the context of robot perception
[11], reinforcement learning for robot control, and chess [17]. In all these domains, it has
consistently been found to generalize better from less training data by transferring knowledge
from previous learning tasks. The results are also consistent with observations made about
human learning [2, 10], namely that previously learned knowledge plays an important role
in generalization, particularly when training data is scarce. [18] extends these techniques to
situations where most support sets are not related.
However, lifelong learning rests on the assumption that more than a single task is to be
learned, and that learning tasks are appropriately related. Lifelong learning algorithms
are particularly well-suited in domains where the costs of collecting training data is the
dominating factor in learning, since these costs can be amortized over several learning tasks.
Such domains include, for example, autonomous service robots which are to learn and
improve over their entire lifetime. They include personal software assistants which have
to perform various tasks for various users. Pattern recognition, speech recognition, time
series prediction, and database mining might be other, potential application domains for the
techniques presented here.
References
[1] Y. S. Abu-Mostafa. Learning from hints in neural networks. Journal of Complexity, 6: 192-198,
1990.
[2] W.-K. Ahn and W. F. Brewer. Psychological studies of explanation-based learning. In
G. Dejong, editor, Investigating Explanation-Based Learning . Kluwer Academic Publishers,
BostonlDordrechtILondon, 1993.
[3] C. A. Atkeson. Using locally weighted regression for robot learning. In Proceedings of the 1991
IEEE International Conference on Robotics and Automation, pages 958-962, Sacramento, CA,
April 1991.
[4] J. Baxter. Learning internal representations. In Proceedings of the Conference on Computation
Learning Theory, 1995.
[5] D. Beymer and T. Poggio. Face recognition from one model view. In Proceedings of the
International Conference on Computer Vision, 1995.
[6] R. Caruana. MuItitask learning: A knowledge-based of source of inductive bias. In P. E. Utgoff,
editor, Proceedings of the Tenth International Conference on Machine Learning, pages 41-48,
San Mateo, CA, 1993. Morgan Kaufmann.
[7] M. Lando and S. Edelman. Generalizing from a single view in face recognition. Technical Report
CS-TR 95-02, Department of Applied Mathematics and Computer Science, The Weizmann
Institute of Science, Rehovot 76100, Israel, January 1995.
[8] T. M. Mitchell and S. Thrun. Explanation-based neural network learning for robot control. In
S. J. Hanson, J. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing
Systems 5, pages 287-294, San Mateo, CA, 1993. Morgan Kaufmann.
[9] A. W. Moore, D. J. Hill, and M. P. Johnson. An Empirical Investigation of Brute Force to choose
Features, Smoothers and Function Approximators. In S. Hanson, S. Judd, and T. Petsche, editors,
Computational Learning Theory and Natural Learning Systems, Volume 3. MIT Press, 1992.
[10] Y. Moses, S. Ullman, and S. Edelman. Generalization across changes in illumination and viewing
position in upright and inverted faces. Technical Report CS-TR 93-14, Department of Applied
Mathematics and Computer Science, The Weizmann Institute of Science, Rehovot 76100, Israel,
1993.
[11] J. O'Sullivan, T. M. Mitchell, and S. Thrun. Explanation-based neural network learning from
mobile robot perception. In K. Ikeuchi and M. Veloso, editors, Symbolic Visual Learning. Oxford
University Press, 1995.
[12] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error
propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing.
Vol. I + II. MIT Press, 1986.
[13] D. Shepard. A two-dimensional interpolation function for irregularly spaced data. In 23rd
National Conference ACM, pages 517-523, 1968.
[14] P. Simard, B. Victorri, Y. LeCun, and J. Denker. Tangent prop - a formalism for specifying
selected invariances in an adaptive network. In 1. E. Moody, S. J. Hanson, and R. P. Lippmann,
editors, Advances in Neural Information Processing Systems 4, pages 895-903, San Mateo, CA,
1992. Morgan Kaufmann.
[15] C. Stanfill and D. Waltz. Towards memory-based reasoning. Communications of the ACM,
29(12): 1213-1228, December 1986.
[16] S. C. Suddarth and A. Holden. Symbolic neural systems and the use of hints for developing
complex systems. International Journal of Machine Studies, 35, 1991.
[17] S. Thrun. Explanation-Based Neural Network Learning: A Lifelong Learning Approach. Kluwer
Academic Publishers, Boston, MA, 1996. to appear.
[18] S. Thrun and J. O'Sullivan. Clustering learning tasks and the selective cross-task transfer
of knowledge. Technical Report CMU-CS-95-209, Carnegie Mellon University, School of
Computer Science, Pittsburgh, PA 15213, November 1995.
| 1034 |@word briefly:1 version:1 seems:1 open:1 brightness:1 euclidian:1 tr:2 carry:1 initial:1 contains:4 exclusively:2 series:1 past:1 current:1 comparing:1 coke:2 dx:1 intriguing:1 stemming:1 fn:1 shape:1 motor:1 v:1 selected:1 xk:5 ebnn:12 provides:1 location:1 preference:1 five:3 become:1 ik:3 edelman:2 repro:2 increasing:1 becomes:3 provided:1 classifies:1 underlying:1 what:2 israel:2 interpreted:2 dejong:1 finding:1 collecting:1 wrong:1 control:2 unit:1 brute:1 appear:1 positive:4 service:1 oxford:1 interpolation:1 approximately:2 might:2 studied:2 mateo:3 specifying:1 statistically:1 averaged:1 weizmann:2 camera:1 lecun:1 testing:2 differs:1 backpropagation:1 sullivan:2 empirical:1 significantly:2 printed:1 confidence:2 suggest:1 symbolic:2 cannot:1 context:5 influence:1 www:1 conventional:1 map:2 williams:1 regardless:1 sacramento:1 rule:1 notion:1 autonomous:1 target:9 play:2 suppose:3 user:1 us:2 distinguishing:1 hypothesis:1 pa:2 amortized:1 rumelhart:2 recognition:10 particularly:4 database:3 role:2 exk:4 complexity:1 utgoff:1 personal:1 trained:4 overridden:1 division:1 learner:9 triangle:1 represented:2 various:2 train:2 distinct:1 describe:2 artificial:4 query:4 whose:1 widely:1 dominating:1 say:2 ability:1 knn:4 obviously:1 differentiable:1 propose:1 achieve:1 description:1 exploiting:1 object:12 illustrate:3 measured:1 nearest:4 school:1 c:4 involves:2 memorize:1 ning:1 hammer:2 f4:1 human:6 viewing:1 generalization:12 investigation:1 extension:1 hold:1 mostafa:1 al1:1 assistant:1 outperformed:1 precede:1 label:2 ilg:2 grouped:1 establishes:1 weighted:1 mit:2 mobile:1 derived:2 consistently:3 fur:1 helpful:1 rigid:1 entire:3 transferring:3 holden:1 transformed:1 selective:1 interested:1 germany:1 pixel:2 denoted:7 field:1 once:2 construct:1 shaped:1 others:1 report:3 hint:7 employ:3 simultaneously:1 recognize:1 interpolate:1 national:1 subsampled:1 mining:1 arrives:1 yielding:1 accurate:1 waltz:1 capable:1 experience:2 poggio:1 facial:1 institut:1 re:1 desired:2 psychological:2 formalism:1 giles:1 caruana:1 cost:2 hundred:1 recognizing:1 seventh:1 johnson:1 characterize:2 universitat:1 reported:1 combined:1 person:3 international:4 sensitivity:1 systematic:1 moody:1 choose:1 book:2 simard:1 return:1 ullman:1 account:1 potential:2 bold:1 automation:1 explicitly:1 depends:1 stream:2 view:2 doing:1 traffic:1 bayes:1 sort:1 parallel:1 slope:7 minimize:2 accuracy:7 kaufmann:3 gathered:1 yield:2 correspond:1 spaced:1 generalize:9 accurately:6 informatik:1 lighting:1 drive:1 sebastian:1 energy:1 obvious:1 sampled:1 adjusting:1 logical:1 mitchell:2 knowledge:22 car:1 color:3 back:9 originally:1 supervised:3 april:1 formulation:1 evaluated:1 lifetime:4 just:3 sketch:1 web:1 propagation:7 lack:1 defines:1 concept:18 former:1 hence:6 inductive:1 moore:1 ll:1 during:1 inferior:1 generalized:1 presenting:1 hill:1 complete:1 demonstrate:1 l1:1 reasoning:2 image:11 common:1 rotation:1 shepard:7 volume:1 extend:1 kluwer:2 mellon:2 significant:2 counterexample:3 rd:1 mathematics:2 language:1 robot:6 compiled:1 ahn:1 dominant:1 scenario:2 certain:1 inequality:1 approximators:1 yi:2 stanfill:1 inverted:1 morgan:3 additional:5 employed:1 ii:6 smoother:1 multiple:3 suddarth:1 stem:1 distanced:1 technical:3 academic:2 veloso:1 cross:1 concerning:1 prediction:1 basic:1 regression:1 vision:1 cmu:2 metric:3 robotics:1 addition:2 whereas:1 victorri:1 source:1 publisher:2 appropriately:1 rest:1 probably:1 virtually:1 thing:5 member:4 cowan:1 december:1 ikeuchi:1 iii:2 enough:1 concerned:1 baxter:1 
variety:1 fit:3 whether:2 expression:1 speech:1 detailed:1 tune:1 amount:1 extensively:1 locally:1 category:1 mcclelland:1 http:1 notice:5 moses:1 estimated:1 correctly:3 rehovot:2 carnegie:2 vol:1 abu:1 group:2 key:2 enormous:1 drawn:1 lti:1 tenth:1 vast:1 merely:1 year:1 inverse:1 prob:1 distorted:1 extends:1 family:1 eee:1 scaling:1 investigates:1 x2:1 software:1 bonn:1 aspect:2 romerstr:1 conjecture:1 edul:1 transferred:2 department:3 developing:1 according:2 across:7 describes:1 modification:1 happens:1 chess:1 invariant:1 taken:2 previously:1 brewer:1 mechanism:1 know:1 irregularly:1 generalizes:2 denker:1 appropriate:1 petsche:1 alternative:1 encounter:1 hat:2 clustering:1 ensure:1 include:2 opportunity:2 exploit:3 especially:1 approximating:1 realized:1 font:1 traditional:1 guessing:1 gradient:1 distance:14 unable:2 thrun:9 majority:1 considers:1 assuming:1 ify:1 negative:1 stated:1 implementation:2 affiliated:1 unknown:2 perform:1 observation:1 november:1 displayed:1 january:1 situation:2 hinton:1 communication:1 pair:2 required:1 bottle:2 namely:3 hanson:3 learned:10 accepts:1 testbed:1 boost:2 address:1 able:2 bar:1 usually:2 pattern:11 perception:2 saturation:1 memory:13 explanation:7 natural:1 force:2 rely:1 indicator:3 scarce:4 improve:3 misleading:1 eye:1 sunglass:4 faced:2 prior:2 tangent:4 determining:1 embedded:2 facing:1 consistent:1 editor:7 translation:1 last:1 xln:4 l_:1 side:1 bias:1 institute:2 lifelong:16 wide:1 face:9 neighbor:3 distributed:1 curve:1 plain:1 judd:1 world:1 author:1 collection:1 commonly:1 reinforcement:2 made:1 san:3 atkeson:1 adaptive:1 skill:1 lippmann:1 synergy:1 ml:1 investigating:1 pittsburgh:2 assumed:1 xi:4 search:1 learn:6 transfer:14 ca:4 complex:2 domain:7 main:1 whole:3 noise:1 fig:2 board:1 experienced:1 position:1 third:1 learns:1 showing:1 evidence:2 essential:2 trai:1 ih:1 importance:1 illumination:1 easier:5 boston:1 suited:1 generalizing:2 ilx:2 simply:1 beymer:1 shoe:5 visual:1 prevents:1 contained:2 aa:1 relies:1 acm:2 ma:1 prop:5 goal:1 towards:1 change:2 typical:1 specifically:3 determined:1 upright:1 called:1 invariance:3 experimental:2 xin:1 vote:1 exception:1 internal:5 support:21 people:1 latter:3 incorporate:1 tested:2 ex:4 |
41 | 1,035 | A Dynamical Model of Context Dependencies for the
Vestibulo-Ocular Reflex
Terrence J. Sejnowskit
Olivier J.M.D. Coenen*
Computational Neurobiology Laboratory
Howard Hughes Medical Institute
The Salk Institute for Biological Studies
10010 North Torrey Pines Road
La Jolla, CA 92037, U.S.A.
Departments oftBiology and *tPhysics
University of California, San Diego
La Jolla, CA 92093, U.S.A
{olivier,terry}@salk.edu
Abstract
The vestibulo-ocular reflex (VOR) stabilizes images on the retina during rapid
head motions. The gain of the VOR (the ratio of eye to head rotation velocity)
is typically around -1 when the eyes are focused on a distant target. However, to
stabilize images accurately, the VOR gain must vary with context (eye position,
eye vergence and head translation). We first describe a kinematic model of the
VOR which relies solely on sensory information available from the semicircular
canals (head rotation), the otoliths (head translation), and neural correlates of eye
position and vergence angle. We then propose a dynamical model and compare it
to the eye velocity responses measured in monkeys. The dynamical model reproduces the observed amplitude and time course of the modulation of the VOR and
suggests one way to combine the required neural signals within the cerebellum and
the brain stem. It also makes predictions for the responses of neurons to multiple
inputs (head rotation and translation, eye position, etc.) in the oculomotor system.
1 Introduction
The VOR stabilizes images on the retina during rapid head motions: Rotations and translations of
the head in three dimensions must be compensated by appropriate rotations of the eye. Because the
head's rotation axis is not the same as the eye's rotation axis, the calculations for proper image stabilization of an object must take into account diverse variables such as object distance from each eye,
gaze direction, and head translation (Viire et al., 1986). The stabilization is achieved by integrating
infonnation from different sources: head rotations from the semicircular canals of the inner ear, head
translations from the otolith organs, eye positions, viewing distance, as well as other context infonnation, such as posture (head tilts) or activity (walking, running) (Snyder and King, 1992; Shelhamer
et al.,1992; Grossman et al., 1989). In this paper we concentrate on the context modulation of the
VOR which can be described by the kinematics of the reflex, i.e. eye position, eye vergence and
head translation.
2 The Vestibulo-Ocular Reflex: Kinematic Model
[Figure 1 diagram — Definition of Vectors (top view of the head): labels include Target Object, Coordinate System, Gaze Vector, Gaze Angle, Interocular Distance, Eye Position Vector, Rotation Axis, Semicircular Canals and Otoliths, and Origin of the coordinate system (arbitrary).]
Figure 1: Diagram showing the definition of the vectors used in the equation of the kinematic model of the
vestibulo-ocular reflex.
The ideal VOR response is a compensatory eye movement which keeps the image fixed on the retina
for any head rotations and translations. We therefore derived an equation for the eye rotation velocity
by requiring that a target remains stationary on the retina. The velocity of the resulting compensatory
eye rotation can be written as (see fig. 1):
$$\vec{\omega} \;=\; -\vec{\Omega}_c \;+\; \frac{\hat{g}}{|\vec{g}|} \times \left[\vec{D}_{ej} \times \vec{\Omega}_c - \vec{T}_{oj}\right] \qquad (1)$$
where Ωc is the head rotation velocity sensed by the semicircular canals, Toj is the head translation
velocity sensed by the otoliths, Dej ≡ (e − oj), e is a constant vector specifying the location of an
eye in the head, oj is the position of either the left or right otolith, ĝ and |g| are the unit vector and
amplitude of the gaze vector: ĝ gives the eye position (orientation of the eye relative to the head),
and |g| gives the distance from the eye to the object, and the symbol × indicates the cross-product
between two vectors. ω and Ωc are rotation vectors which describe the instantaneous angular velocity
of the eye and head, respectively. A rotation vector lies along the instantaneous axis of rotation;
its magnitude indicates the speed of rotation around the axis, and its direction is given by the right-hand screw rule. A motion of the head combining rotation (Ω) and translation (T) is sensed as the
combination of a rotation velocity Ωc measured by the semicircular canals and a translation velocity
To sensed by the otoliths. The rotation vectors are equal (Ω = Ωc), and the translation velocity vector
as measured by the otoliths is given by: Toj = Aoj × Ω + T, where Aoj ≡ (a − oj), and a is the
position vector of the axis of rotation.
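The vector bookkeeping of Equation (1) can be checked with a few lines of numpy; the particular head rotation, head translation, eye-to-otolith offset, and gaze vector below are arbitrary illustrative numbers, not values from the paper.

```python
import numpy as np

def compensatory_eye_velocity(omega_c, t_oj, d_ej, g_vec):
    """Equation (1): ideal eye rotation velocity from canal (omega_c) and otolith
    (t_oj) signals, the eye-to-otolith offset d_ej, and the gaze vector g_vec."""
    g_norm = np.linalg.norm(g_vec)
    g_hat = g_vec / g_norm
    return -omega_c + np.cross(g_hat / g_norm, np.cross(d_ej, omega_c) - t_oj)

# Illustrative numbers (metres, rad/s): horizontal head rotation about the vertical axis,
# a small forward translation, and a gaze point 0.5 m straight ahead.
omega_c = np.array([0.0, 0.0, 1.0])     # head rotation sensed by the canals
t_oj    = np.array([0.05, 0.0, 0.0])    # head translation sensed by one otolith
d_ej    = np.array([0.03, 0.0, 0.0])    # eye position relative to that otolith
g_vec   = np.array([0.0, 0.5, 0.0])     # gaze vector from the eye to the target
print(compensatory_eye_velocity(omega_c, t_oj, d_ej, g_vec))
```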
The special case where the gaze is horizontal and the rotation vector is vertical (horizontal head rotation) has been studied extensively in the literature. We used this special case in the simulations.
In that case the equation for ω may be simplified by writing it with dot products. Since ĝ and Ωc are then
perpendicular (ĝ · Ωc = 0), the first term of the following expression in brackets is zero:
$$\vec{\omega} \;=\; -\vec{\Omega}_c \;+\; \frac{1}{|\vec{g}|}\left[\vec{D}_{ej}\,(\hat{g}\cdot\vec{\Omega}_c) - \vec{\Omega}_c\,(\hat{g}\cdot\vec{D}_{ej}) - \hat{g}\times\vec{T}_{oj}\right] \qquad (2)$$
The semicircular canals decompose and report acceleration and velocity of head rotation Ω by its
components along the three canals on each side of the head, Ωc: horizontal, anterior and posterior.
The two otolith organs on each side report the dynamical inertial forces generated during linear motion (translation) in two perpendicular planes, one vertical and the other horizontal relative to the head.
Here we assume that a translation velocity signal (To) derived from or reported by the otolith afferents is available. The otoliths also encode the head orientation relative to the gravity force vector,
but this was not included in this study.
To complete the correspondence between the equation and a neural correlate, we need to determine
a physiological source for ĝ and |g|. The eye position ĝ is assumed to be given by the output of the
velocity-to-position transformation or so-called "neural integrator", which provides eye position information and which is necessary for the activation of the motoneurons to sustain the eye in a fixed
position. The integrator for horizontal eye position appears to be located in the nucleus prepositus
hypoglossi in the pons, and the vertical integrator in the midbrain interstitial nucleus of Cajal (Crawford, Cadera and Vilis, 1991; Cannon and Robinson, 1987). We assume that the eye position is given
as the coordinates of the unit vector ĝ along the coordinate axes of fig. 1. The eye position depends on the
eye velocity according to dĝ/dt = ĝ × ω. For the special case ω(t) = ω(t) ẑ, i.e. for horizontal head
rotation, the eye position coordinates are given by:
$$g_1(t) = g_1(0) + \int_0^t g_2(\tau)\,\omega(\tau)\,d\tau, \qquad
g_2(t) = g_2(0) - \int_0^t g_1(\tau)\,\omega(\tau)\,d\tau \qquad (3)$$
This is a set of two negatively coupled integrators. The "neural integrator" therefore does not
integrate the eye velocity directly but a product of eye position and eye velocity. The distance from eye
to target |g| can be written using the gaze angles in the horizontal plane of the head:
$$\text{Right eye:}\quad \frac{1}{|\vec{g}_R|} = \frac{\sin(\theta_R-\theta_L)}{I\,\cos\theta_L} \qquad (4)$$
$$\text{Left eye:}\quad \frac{1}{|\vec{g}_L|} = \frac{\sin(\theta_R-\theta_L)}{I\,\cos\theta_R} \qquad (5)$$
where (θR − θL) is the vergence angle, and I is the interocular distance; the angles are measured from
a straight-ahead gaze, and take on negative values when the eyes are turned towards the right. Within
the oculomotor system, the vergence angle and speed are encoded by the mesencephalic reticular
formation neurons (Judge and Cumming, 1986; Mays, 1984). The nucleus reticularis tegmenti pontis,
with reciprocal connections to the flocculus, oculomotor vermis, and paravermis of the cerebellum, also
contains neurons whose activity varies linearly with vergence angle (Gamlin and Clarke, 1995).
We conclude that it is possible to perform the computations needed to obtain an ideal VOR with signals known to be available physiologically.
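A discrete-time reading of the coupled integrators in Equation (3) is a two-line update, sketched below with forward Euler; the eye-velocity profile and the time step are illustrative assumptions.

```python
import numpy as np

def integrate_eye_position(omega, dt, g1_0=1.0, g2_0=0.0):
    """Equation (3): the 'neural integrator' integrates the product of eye position
    and eye velocity, not the velocity alone (horizontal rotation, omega about z)."""
    g1, g2 = g1_0, g2_0
    g1_hist, g2_hist = [g1], [g2]
    for w in omega:
        g1, g2 = g1 + dt * g2 * w, g2 - dt * g1 * w
        g1_hist.append(g1)
        g2_hist.append(g2)
    return np.array(g1_hist), np.array(g2_hist)

# Illustrative eye velocity profile: a brief 20 deg/s rotation sampled at 1 ms.
dt = 0.001                                  # seconds
omega = np.deg2rad(20.0) * np.ones(500)     # rad/s for 0.5 s
g1, g2 = integrate_eye_position(omega, dt)
print(g1[-1], g2[-1], np.hypot(g1[-1], g2[-1]))   # the unit gaze vector stays ~1
```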
[Figure 2 diagram: Dynamical Model Overview.]
Figure 2: Anatomical connections considered in the dynamical model. Only the left side is shown, the right
side is identical and connected to the left side only for the calculation of vergence angle. The nucleus prepositus
hypoglossi and the nucleus reticularis tegmenti pontis are meant to be representative of a class of nuclei in the
brain stem carrying eye position or vergence signal. All connections are known to exist except the connection
between the prepositus nucleus to the reticularis nucleus which has not been verified. Details of the cerebellum
are in fig. 3 and of the vestibular nucleus in fig. 4.
3 Dynamical Model
Snyder & King (1992) studied the effect of viewing distance and location of the axis of rotation on
the VOR in monkeys; their main results are reproduced in fig. 5. In an attempt to reproduce their
data and to understand how the signals that we have described in section 2 may be combined in time,
we constructed a dynamical model based on the kinematic model. Its basic anatomical structure is
shown in fig. 2. Details of the model are shown in fig. 3, and fig . 4 where all constants are written
using a millisecond time scale. The results are presented in fig. 5. The dynamical variables represent
the change of average firing rate from resting level of activity. The firing rate of the afferents has a
tonic component proportional to the velocity and a phasic component proportional to the acceleration
of movement. Physiologically, the afferents have a wide range of phasic and tonic amplitudes. This
is reflected by a wide selection of parameters in the numerators in the boxes of fig. 3 and fig. 4. The
Laplace transform of the integration operator in equation (3) of the eye position coordinates is 1/s.
Following Robinson (1981), we modeled the neural integrator with a gain and a time constant of
20 seconds. We therefore replaced the pure integrator 1/s with 20000/(20000s + 1) in the calculations of eye
position. The term 1/|g| in fig. 3 is calculated by using equations (4) and (5), and by using the integrator
20000/(20000s + 1) on the eye velocity motor command to find the angles θL and θR.
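The leaky "neural integrator" that replaces 1/s can be written as a first-order low-pass filter; the sketch below is a generic forward-Euler version with the 20-second time constant mentioned in the text (expressed in milliseconds to match the model's time scale), while the unit gain, the input pulse, and the sampling step are assumptions for illustration.

```python
import numpy as np

def leaky_integrator(x, dt_ms, tau_ms=20000.0, gain=1.0):
    """First-order leaky integrator: tau * dy/dt + y = gain * tau * x,
    i.e. a pure integrator 1/s with an added 20 s leak (Robinson, 1981)."""
    y = 0.0
    out = []
    for xi in x:
        y += dt_ms * (gain * xi - y / tau_ms)
        out.append(y)
    return np.array(out)

# Illustrative input: a 100 ms pulse of eye-velocity command, sampled at 1 ms.
dt_ms = 1.0
x = np.zeros(1000)
x[:100] = 1.0
y = leaky_integrator(x, dt_ms)
print(y[99], y[-1])   # position builds up during the pulse, then decays slowly
```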
The dynamical model is based on the assumption that the cerebellum is required for context modulation, and that because of its architecture, the cerebellum is more likely to implement complex functions of multiple signals than other relevant nuclei. The major contributions of vergence and eye
position modulation on the VOR are therefore mediated by the cerebellum. Smaller and more transient contributions from eye position are assumed to be mediated through the vestibular nucleus as
shown in fig. 4. The motivation for combining eye position as in fig . 4 are, first, the evidence for eye
response oscillations; second, the theoretical consideration that linear movement information (To) is
useless without eye position information for proper VOR.
The parameters in the dynamical model were adjusted by hand after observing the behavior of the different components of the model and noting how these combine to produce the oscillations observed
Figure 3: Contribution of the cerebellum to the dynamical model. Filtered velocity inputs from the canals and
otoliths are combined with eye position according to equation (2). These calculations could be performed either
outside the cerebellum in one or multiple brain stem nuclei (as shown) or possibly inside the cerebellum. The
only output is to the vestibular nucleus. The Laplace notation is used in each box to represent a leaky integrator
with a time constant, input derivative and input gain. The terms Ωc are the coordinates of the vector Ωc shown
in fig. 1. The × indicates a multiplication. The term 1/|g| multiplies each input individually. The open arrows
indicate inhibitory (negative) connections.
Figure 4: Contribution of the vestibular nucleus to the dynamical model. Three pathways in the vestibular nucleus process the canal and otolith inputs to drive the eye. The first pathway is modulated by the output of the
cerebellum through an FTN (flocculus target neuron). The second and third pathways report transient information from the inputs which are combined with eye position in a manner identical to fig. 3. The location of these
calculations is hypothetical.
in the data. Even though the number of parameters in the model is not small, it was not possible to
fit any single response in fig. 5 without affecting most of the other eye responses. This puts severe
limits on the set of parameters allowed in the model.
The dynamical model suggests that the oscillations present in the data reflect: 1) important acceleration components in the neural signals, both rotational and linear, 2) different time delays between the
canal and otolith signal processing, and 3) antagonistic or synergistic action of the canal and otolith
signals with different axes of rotation, as described by the two terms in the bracket of equation (2).
4 Discussion
By fitting the dynamical model to the data, we tested the hypothesis that the VOR has a response
close to ideal taking into account the time constraints imposed by the sensory inputs and the neural
networks performing the computations. The vector computations that we used in the model may not
[Figure 5 plots: Dynamical Model Responses vs Experimental Data — eye velocity versus time (ms), for different target distances (left) and different locations of the rotation axis (right).]
Figure 5: Comparison between the dynamical model and monkey data. The dotted lines show the effect of
viewing distance and location of the axis of rotation on the VOR as recorded by Snyder & King (1992) from
monkeys in the dark. The average eye velocity response (of left and right eye) to a sudden change in head velocity is shown for different target distances (left) and rotational axes (right). On the left, the location of the axis
of rotation was in the midsagittal plane 12.5 cm behind the eyes (-12.5 cm), and the target distance was varied
between 220 cm and 9 cm. On the right, the target distance was kept constant at 9 cm in front of the eye, and the
location of the axis of rotation was varied from 14 cm behind to 4 cm in front of the eyes (−14 cm to 4 cm) in the
midsagittal plane. The solid lines show the model responses. The model replicates many characteristics of the
data. On the left the model captures the eye velocity fluctuations between 20-50 ms, followed by a decrease and
an increase which are both modulated with target distance (50-80 ms). The later phase of the response (80-100
ms) is almost exact for 220 cm, and one peak is seen at the appropriate location for the other distances. On the
right the closest fits were obtained for the 4 cm and 0 cm locations. The mean values are in good agreement and
the waveforms are close, but could be shifted in time for the other locations of the axis of rotations. Finally, the
latest peak (~100 ms) in the data appears in the model for the −14 cm and 9 cm locations.
be the representation used in the oculomotor system. Mathematically, the vector representation is
only one way to describe the computations involved. Other representations exist such as the quaternion representation which has been studied in the context of the saccadic system (Tweed and Vilis,
1987; see also Handzel and Flash, 1996 for a very general representation). Detailed comparisons
between the model and recordings from neurons will be require to settle this issue.
Direct comparison between Purkinje cell recordings (L.H. Snyder & W.M. King, unpublished data)
and predictions of the model could be used to determine more precisely the different inputs to some
Purkinje cells. The model can therefore be an important tool to gain insights difficult to obtain directly with experiments.
The question of how the central nervous system learns the transformations that we described still
remains. The cerebellum may be one site of learning for these transformations, and its output may
modulate the VOR in real time depending on the context. This view is compatible with the results
of Angelaki and Hess (1995) which indicate that the cerebellum is required to correctly perform an
otolith transformation. It is also consistent with adaptation results in the VOR. To test this hypothesis,
we have been working on a model of the cerebellum which learns to anticipate sensory inputs and
feedbacks, and use these signals to modulate the VOR. The learning in the cerebellum and vestibular
nuclei is mediated by the climbing fibers which report a reinforcement signal of the prediction error
(Coenen and Sejnowski, in preparation).
5 Conclusion
Most research on the VOR has assumed forward gaze focussed at infinity. The kinematics of off-center gaze and fixation at finite distance necessitates nonlinear corrections that require the integration of a variety of sensory inputs. The dynamical model studied here is a working hypothesis for
how these corrections could be computed and is generally consistent with what is known about the
cerebellum and brain stem nuclei. We are, however, far from knowing the mechanisms underlying
these computations, or how they are learned through experience.
6 Acknowledgments
The first author was supported by a McDonnell-Pew Graduate Fellowship during this research. We
would like to thank Paul Viola for helpful discussions.
References
Angelaki, D. E. and Hess, B. J. (1995). Inertial representation of angular motion in the vestibular system of rhesus monkeys. II. Otolith-controlled transformation that depends on an intact cerebellar nodulus. Journal
of Neurophysiology, 73(5):1729-1751.
Cannon, S. C. and Robinson, D. A. (1987). Loss of the neural integrator of the oculomotor system from brain
stem lesions in monkey. Journal of Neurophysiology, 57(5):1383-1409.
Crawford, J. D., Cadera, W., and Vilis, T. (1991). Generation of torsional and vertical eye position signals by
the interstitial nucleus of Cajal. Science, 252:1551-1553.
Gamlin, P. D. R. and Clarke, R. J. (1995). Single-unit activity in the primate nucleus reticularis tegmenti pontis
related to vergence and ocular accomodation. Journal of Neurophysiology, 73(5):2115-2119.
Grossman, G. E., Leigh, R. J., Bruce, E. N., Huebner, W. P., and Lanska, D. J. (1989). Performance of the human
vestibuloocular reflex during locomotion. Journal of Neurophysiology, 62(1):264-272.
Handzel, A. A. and Flash, T. (1996). The geometry of eye rotations and Listing's law. In Touretzky, D., Mozer,
M., and Hasselmo, M., editors, Advances in Neural Information Processing Systems 8, Cambridge, MA.
MIT Press.
Judge, S. J. and Cumming, B. G. (1986). Neurons in the monkey midbrain with activity related to vergence eye
movement and accomodation. Journal of Neurophysiology, 55:915-930.
Mays, L. E. (1984). Neural control of vergence eye movements: Convergence and divergence neurons in midbrain. Journal of Neurophysiology, 51:1091-1108.
Robinson, D. A. (1981). The use of control systems analysis in the neurophysiology of eye movements. Ann.
Rev. Neurosci., 4:463-503.
Shelhamer, M., Robinson, D. A., and Tan, H. S. (1992). Context-specific adaptation of the gain of the vestibuloocular reflex in humans. Journal of Vestibular Research, 2:89-96.
Snyder, L. H. and King, W. M. (1992). Effect of viewing distance and location of the axis of head rotation on the
monkey's vestibuloocular reflex. I. Eye movement response. Journal of Neurophysiology, 67(4):861-874.
Tweed, D. and Vilis, T. (1987). Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849.
Viire, E., Tweed, D., Milner, K., and Vilis, T. (1986). A reexamination of the gain of the vestibuloocular reflex.
Journal of Neurophysiology, 56(2):439-450.
42 | 1,036 | Improved Gaussian Mixture Density
Estimates Using Bayesian Penalty Terms
and Network Averaging
Dirk Ormoneit
Institut fur Informatik (H2)
Technische Universitat Munchen
80290 Munchen, Germany
ormoneit@inJormatik.tu-muenchen.de
Volker Tresp
Siemens AG
Central Research
81730 Munchen, Germany
Volker. Tresp@zJe.siemens.de
Abstract
We compare two regularization methods which can be used to improve the generalization capabilities of Gaussian mixture density
estimates. The first method uses a Bayesian prior on the parameter space. We derive EM (Expectation Maximization) update rules
which maximize the a posterior parameter probability. In the second approach we apply ensemble averaging to density estimation.
This includes Breiman's "bagging" , which recently has been found
to produce impressive results for classification networks.
1 Introduction
Gaussian mixture models have recently attracted wide attention in the neural network community. Important examples of their application include the training of
radial basis function classifiers, learning from patterns with missing features, and
active learning. The appeal of Gaussian mixtures is based to a high degree on the
applicability of the EM (Expectation Maximization) learning algorithm, which may
be implemented as a fast neural network learning rule ([Now91], [Orm93]). Severe
problems arise, however, due to singularities and local maxima in the log-likelihood
function. Particularly in high-dimensional spaces these problems frequently cause
the computed density estimates to possess only relatively limited generalization capabilities in terms of predicting the densities of new data points. As shown in this
paper, considerably better generalization can be achieved using regularization.
We will compare two regularization methods. The first one uses a Bayesian prior
on the parameters. By using conjugate priors we can derive EM learning rules
for finding the MAP (maximum a posteriori probability) parameter estimate. The
second approach consists of averaging the outputs of ensembles of Gaussian mixture
density estimators trained on identical or resampled data sets. The latter is a form
of "bagging" which was introduced by Breiman ([Bre94]) and which has recently
been found to produce impressive results for classification networks. By using the
regularized density estimators in a Bayes classifier ([THA93], [HT94], [KL95]) , we
demonstrate that both methods lead to density estimates which are superior to the
unregularized Gaussian mixture estimate.
2 Gaussian Mixtures and the EM Algorithm
Consider the problem of estimating the probability density of a continuous random vector x \in R^d based on a set x^* = \{x^k \mid 1 \le k \le m\} of iid. realizations of x. As a density model we choose the class of Gaussian mixtures p(x|\Theta) = \sum_{i=1}^{n} \kappa_i \, p(x|i, \mu_i, \Sigma_i), where the restrictions \kappa_i \ge 0 and \sum_{i=1}^{n} \kappa_i = 1 apply. \Theta denotes the parameter vector (\kappa_i, \mu_i, \Sigma_i)_{i=1}^{n}. The p(x|i, \mu_i, \Sigma_i) are multivariate normal densities:

    p(x|i, \mu_i, \Sigma_i) = (2\pi)^{-d/2} |\Sigma_i|^{-1/2} \exp\big[ -\tfrac{1}{2} (x - \mu_i)^t \Sigma_i^{-1} (x - \mu_i) \big].
probability densities. Based on the model and given the data x*, we may formulate
the log-likelihood as
    l(\Theta) = \log\Big[ \prod_{k=1}^{m} p(x^k|\Theta) \Big] = \sum_{k=1}^{m} \log \sum_{i=1}^{n} \kappa_i \, p(x^k|i, \mu_i, \Sigma_i).
Maximum likelihood parameter estimates may efficiently be computed with the
EM (Expectation Maximization) algorithm ([DLR77]) . It consists of the iterative
application of the following two steps:
1. In the E-step, based on the current parameter estimates, the posterior probability that unit i is responsible for the generation of pattern x^k is estimated as

    h_i^k = \frac{\kappa_i \, p(x^k|i, \mu_i, \Sigma_i)}{\sum_{j=1}^{n} \kappa_j \, p(x^k|j, \mu_j, \Sigma_j)}.    (1)

2. In the M-step, we obtain new parameter estimates (denoted by the prime):

    \kappa_i' = \frac{1}{m} \sum_{k=1}^{m} h_i^k    (2)

    \mu_i' = \frac{\sum_{k=1}^{m} h_i^k x^k}{\sum_{l=1}^{m} h_i^l}    (3)

    \Sigma_i' = \frac{\sum_{k=1}^{m} h_i^k (x^k - \mu_i')(x^k - \mu_i')^t}{\sum_{l=1}^{m} h_i^l}    (4)

Note that \kappa_i' is a scalar, whereas \mu_i' denotes a d-dimensional vector and \Sigma_i'
is a d x d matrix.
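To make these two steps concrete, here is a minimal numpy sketch of one EM pass for the mixture model above; the function name em_step and the array layout are illustrative choices rather than anything from the original text, and no safeguards against the singularities discussed below are included.

    import numpy as np

    def em_step(X, kappa, mu, Sigma):
        # X: (m, d) data; kappa: (n,) weights; mu: (n, d) centers; Sigma: (n, d, d) covariances.
        m, d = X.shape
        n = kappa.shape[0]
        # E-step: responsibilities h[k, i], equation (1).
        h = np.zeros((m, n))
        for i in range(n):
            diff = X - mu[i]
            inv = np.linalg.inv(Sigma[i])
            norm = (2 * np.pi) ** (-d / 2) * np.linalg.det(Sigma[i]) ** (-0.5)
            h[:, i] = kappa[i] * norm * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1))
        h /= h.sum(axis=1, keepdims=True)
        # M-step: equations (2)-(4).
        Nk = h.sum(axis=0)
        kappa_new = Nk / m
        mu_new = (h.T @ X) / Nk[:, None]
        Sigma_new = np.empty_like(Sigma)
        for i in range(n):
            diff = X - mu_new[i]
            Sigma_new[i] = (h[:, i, None] * diff).T @ diff / Nk[i]
        return kappa_new, mu_new, Sigma_new

A full maximum likelihood fit would simply repeat em_step until the log-likelihood l(\Theta) stops increasing.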
It is well known that training neural networks as predictors using the maximum
likelihood parameter estimate leads to overfitting. The problem of overfitting is
even more severe in density estimation due to singularities in the log-likelihood
function. Obviously, the model likelihood becomes infinite in a trivial way if we
concentrate all the probability mass on one or several samples of the training set.
In a Gaussian mixture this is just the case if the center of a unit coincides with
one of the data points and E approaches the zero matrix. Figure 1 compares the
true and the estimated probability density in a toy problem. As may be seen,
the contraction of the Gaussians results in (possibly infinitely) high peaks in the
Gaussian mixture density estimate. A simple way to achieve numerical stability
is to artificially enforce a lower bound on the diagonal elements of E. This is a
very rude way of regularization, however, and usually results in low generalization
capabilities. The problem becomes even more severe in high-dimensional spaces.
To yield reasonable approximations, we will apply two methods of regularization,
which will be discussed in the following two sections.
Figure 1: True density (left) and unregularized density estimation (right).
3 Bayesian Regularization
In this section we propose a Bayesian prior distribution on the Gaussian mixture
parameters, which leads to a numerically stable version of the EM algorithm. We
first select a family of prior distributions on the parameters which is conjugate*.
Selecting a conjugate prior has a number of advantages. In particular, we obtain
analytic solutions for the posterior density and the predictive density. In our case,
the posterior density is a complex mixture of densities t . It is possible, however, to
derive EM-update rules to obtain the MAP parameter estimates.
A conjugate prior of a single multivariate normal density is a product of a normal density N(\mu_i|\hat\mu, \eta^{-1}\Sigma_i) and a Wishart density Wi(\Sigma_i^{-1}|\alpha, \beta) ([Bun94]). A proper conjugate prior for the mixture weightings \kappa = (\kappa_1, ..., \kappa_n) is a Dirichlet density D(\kappa|\gamma). Consequently, the prior of the overall Gaussian mixture is the product D(\kappa|\gamma) \prod_{i=1}^{n} N(\mu_i|\hat\mu, \eta^{-1}\Sigma_i) \, Wi(\Sigma_i^{-1}|\alpha, \beta). Our goal is to find the MAP parameter estimate, that is, parameters which assume the maximum of the log-posterior

    l_p(\Theta) = \sum_{k=1}^{m} \log \sum_{i=1}^{n} \kappa_i \, p(x^k|i, \mu_i, \Sigma_i) + \log D(\kappa|\gamma)
                + \sum_{i=1}^{n} \big[ \log N(\mu_i|\hat\mu, \eta^{-1}\Sigma_i) + \log Wi(\Sigma_i^{-1}|\alpha, \beta) \big].
As in the unregularized case, we may use the EM-algorithm to find a local maximum
* A family F of probability distributions on \Theta is said to be conjugate if, for every \pi \in F, the posterior \pi(\Theta|x) also belongs to F ([Rob94]).
† The posterior distribution can be written as a sum of n^m simple terms.
‡ Those densities are defined as follows (b and c are normalizing constants):

    D(\kappa|\gamma) = b \prod_{i=1}^{n} \kappa_i^{\gamma_i - 1}, with \kappa_i \ge 0 and \sum_{i=1}^{n} \kappa_i = 1
    N(\mu_i|\hat\mu, \eta^{-1}\Sigma_i) = (2\pi)^{-d/2} |\eta^{-1}\Sigma_i|^{-1/2} \exp\big[ -\tfrac{\eta}{2} (\mu_i - \hat\mu)^t \Sigma_i^{-1} (\mu_i - \hat\mu) \big]
    Wi(\Sigma_i^{-1}|\alpha, \beta) = c \, |\Sigma_i^{-1}|^{\alpha - (d+1)/2} \exp\big[ -\mathrm{tr}(\beta \Sigma_i^{-1}) \big]
of l_p(\Theta). The E-step is identical to (1). The M-step becomes

    \kappa_i' = \frac{\sum_{k=1}^{m} h_i^k + \gamma_i - 1}{m + \sum_{j=1}^{n} \gamma_j - n}    (5)

    \mu_i' = \frac{\sum_{k=1}^{m} h_i^k x^k + \eta \hat\mu}{\sum_{l=1}^{m} h_i^l + \eta}    (6)

    \Sigma_i' = \frac{\sum_{k=1}^{m} h_i^k (x^k - \mu_i')(x^k - \mu_i')^t + \eta (\mu_i' - \hat\mu)(\mu_i' - \hat\mu)^t + 2\beta}{\sum_{l=1}^{m} h_i^l + 2\alpha - d}    (7)
As typical for conjugate priors, prior knowledge corresponds to a set of artificial training data which is also reflected in the EM-update equations. In our experiments, we focus on a prior on the variances which is implemented by \beta \neq 0, where 0 denotes the d x d zero matrix. All other parameters we set to "neutral" values:

    \gamma_i = 1 \ \forall i: 1 \le i \le n,    \alpha = (d+1)/2,    \eta = 0,    \beta = \bar\beta I_d.

I_d is the d x d unity matrix. The choice of \alpha introduces a bias which favors large variances*. The effect of various values of the scalar \bar\beta on the density estimate is illustrated in figure 2. Note that if \bar\beta is chosen too small, overfitting still occurs. If it is chosen too large, on the other hand, the model is too constrained to recognize the underlying structure.
Figure 2: Regularized density estimates (left: \bar\beta = 0.05, right: \bar\beta = 0.1).
Typically, the optimal value for \bar\beta is not known a priori. The simplest procedure consists of using that \bar\beta which leads to the best performance on a validation set, analogous to the determination of the optimal weight decay parameter in neural network training. Alternatively, \bar\beta might be determined according to appropriate Bayesian methods ([Mac91]). Either way, only few additional computations are
required for this method if compared with standard EM.
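As a rough illustration of how the penalized M-step differs from the standard one, the sketch below plugs the "neutral" settings above (\gamma_i = 1, \eta = 0, \alpha = (d+1)/2, \beta = \bar\beta I_d) into equations (5)-(7); the helper name map_m_step is hypothetical and the code is only a sketch under those settings, not the authors' implementation.

    import numpy as np

    def map_m_step(X, h, beta_bar):
        # X: (m, d) data; h: (m, n) responsibilities from the E-step; beta_bar: scalar prior strength.
        # With gamma_i = 1 and eta = 0, only the covariance update differs from standard EM.
        m, d = X.shape
        Nk = h.sum(axis=0)
        kappa = Nk / m                        # equation (5) with gamma_i = 1
        mu = (h.T @ X) / Nk[:, None]          # equation (6) with eta = 0
        Sigma = np.empty((h.shape[1], d, d))
        for i in range(h.shape[1]):
            diff = X - mu[i]
            S = (h[:, i, None] * diff).T @ diff
            # equation (7): numerator gains + 2*beta_bar*I, denominator gains + 2*alpha - d = + 1
            Sigma[i] = (S + 2.0 * beta_bar * np.eye(d)) / (Nk[i] + 1.0)
        return kappa, mu, Sigma

The added 2\bar\beta I_d term keeps every covariance bounded away from the zero matrix, which is exactly what prevents the likelihood singularities.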
4 Averaging Gaussian Mixtures
In this section we discuss the averaging of several Gaussian mixtures to yield improved probability density estimation. The averaging over neural network ensembles
has been applied previously to regression and classification tasks ([PC93]) .
There are several different variants on the simple averaging idea. First, one may
train all networks on the complete set of training data. The only source of disagreement between the individual predictions consists in different local solutions
found by the likelihood maximization procedure due to different starting points.
Disagreement is essential to yield an improvement by averaging, however, so that
this proceeding only seems advantageous in cases where the relation between training data and weights is extremely non-deterministic in the sense that in training,
* If A is distributed according to Wi(A|\alpha, \beta), then E[A^{-1}] = (\alpha - (d+1)/2)^{-1} \beta. In our
case A is \Sigma_i^{-1}, so that E[\Sigma_i] \to \infty \cdot \beta for \alpha \to (d+1)/2.
different solutions are found from different random starting points. A straightforward way to increase the disagreement is to train each network on a resampled
version of the original data set. If we resample the data without replacement, the
size of each training set is reduced, in our experiments to 70% of the original. The
averaging of neural network predictions based on resampling with replacement has
recently been proposed under the notation "bagging" by Breiman ([Bre94]), who
has achieved dramatically improved results in several classification tasks. He also
notes, however, that an actual improvement of the prediction can only result if the
estimation procedure is relatively unstable. As discussed, this is particularly the
case for Gaussian mixture training. We therefore expect bagging to be well suited
for our task.
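One possible implementation of this resampling-and-averaging scheme is sketched below; fit_gmm stands for any training routine that returns a density function (for instance repeated calls to the EM sketch above) and is an assumption of this illustration rather than something defined in the text.

    import numpy as np

    def averaged_density(X_train, x_query, fit_gmm, n_models=10, subsample=0.7, bootstrap=False, rng=None):
        # Trains n_models mixtures on resampled data and averages the estimated densities.
        # subsample=0.7 with bootstrap=False mimics the 70% subsets; bootstrap=True with
        # subsample=1.0 corresponds to Breiman-style bagging.
        rng = np.random.default_rng() if rng is None else rng
        m = X_train.shape[0]
        preds = []
        for _ in range(n_models):
            idx = rng.choice(m, size=int(subsample * m), replace=bootstrap)
            density = fit_gmm(X_train[idx])      # assumed to return a callable p(x)
            preds.append(density(x_query))
        return np.mean(preds, axis=0)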
5 Experiments and Results
To assess the practical advantage resulting from regularization, we used the density
estimates to construct classifiers and compared the resulting prediction accuracies
using a toy problem and a real-world problem. The reason is that the generalization error of density estimates in terms of the likelihood based on the test data
is rather unintuitive whereas performance on a classification problem provides a
good impression of the degree of improvement. Assume we have a set of N labeled
data z^* = \{(x^k, l^k) \mid k = 1, ..., N\}, where l^k \in Y = \{1, ..., C\} denotes the class label of each input x^k. A classifier of new inputs x is yielded by choosing the class l with the maximum posterior class-probability p(l|x). The posterior probabilities may be derived from the class-conditional data likelihood p(x|l) via Bayes' theorem: p(l|x) = p(x|l)p(l)/p(x) \propto p(x|l)p(l). The resulting partitions of the input space are optimal for the true p(l|x). A viable way to approximate the posterior p(l|x) is to estimate p(x|l) and p(l) from the sample data.
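The resulting classifier takes only a few lines; the sketch below assumes the class-conditional density estimators and the class priors have already been obtained, and the function name bayes_classify is illustrative.

    import numpy as np

    def bayes_classify(x, class_densities, class_priors):
        # class_densities: list of callables p(x | l); class_priors: estimates of p(l).
        # Returns the label maximizing p(l | x), which is proportional to p(x | l) p(l).
        scores = np.array([p(x) * prior for p, prior in zip(class_densities, class_priors)])
        return int(np.argmax(scores))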
5.1 Toy Problem
In the toy classification problem the task is to discriminate the two classes of circularly arranged data shown in figure 3. We generated 200 data points for each class
and subdivided them into two sets of 100 data points. The first was used for training, the second to test the generalization performance. As a network architecture
we chose a Gaussian mixture with 20 units. Table 1 summarizes the results, beginning with the unregularized Gaussian mixture which is followed by the averaging
and the Bayesian penalty approaches. The three rows for averaging correspond to
the results yielded without applying resampling (local max.), with resampling without replacement (70% subsets), and with resampling with replacement (bagging).

Figure 3: Toy Classification Task.
The performances on training and test set are measured in terms of the model log-likelihood. Larger values indicate a better performance. We report separate results for class A and B, since the densities of both were estimated separately. The final column shows the prediction accuracy in terms of the percentage of correctly classified data in the test set. We report the average results from 20 experiments. The numbers in brackets denote the standard deviations \sigma of the results. Multiplying \sigma with t_{19;95%}/\sqrt{20} = 0.4680 yields 95% confidence intervals. The best result in each
category is underlined.
Algorithm        | Log-Likelihood (Training)      | Log-Likelihood (Test)          | Accuracy
                 | A              B               | A              B               |
unreg.           | -120.8 (13.3)  -120.4 (10.8)   | -224.9 (32.6)  -241.9 (34.1)   | 80.6% (2.8)
Averaging:
  local max.     | -115.6 (6.0)   -112.6 (6.6)    | -200.9 (13.9)  -209.1 (16.3)   | 81.8% (3.1)
  70% subset     | -106.8 (5.8)   -105.1 (6.7)    | -188.8 (9.5)   -196.4 (11.3)   | 83.2% (2.9)
  bagging        | -83.8 (4.9)    -83.1 (7.1)     | -194.2 (7.3)   -200.1 (11.3)   | 82.6% (3.4)
Penalty:
  β = 0.01       | -149.3 (18.5)  -146.5 (5.9)    | -186.2 (13.9)  -182.9 (11.6)   | 83.1% (2.9)
  β = 0.02       | -156.0 (16.5)  -153.0 (4.8)    | -177.1 (11.8)  -174.9 (7.0)    | 84.4% (6.3)
  β = 0.05       | -173.9 (24.3)  -167.0 (15.8)   | -182.0 (20.1)  -173.9 (14.3)   | 81.5% (5.9)
  β = 0.1        | -183.0 (21.9)  -181.9 (21.1)   | -184.6 (21.0)  -182.5 (21.1)   | 78.5% (5.1)

Table 1: Performances in the toy classification problem.
As expected, all regularization methods outperform the maximum likelihood approach in terms of correct classification. The performance of the Bayesian regularization is hereby very sensitive to the appropriate choice of the regularization
parameter β. Optimality of β with respect to the density prediction and optimality with respect to prediction accuracy on the test set roughly coincide (for β = 0.02). Averaging is inferior to the Bayesian approach if an optimal β is chosen.
5.2 BUPA Liver Disorder Classification
As a second task we applied our methods to a real-world decision problem from
the medical environment. The problem is to detect liver disorders which might
arise from excessive alcohol consumption. Available information consists of five
blood tests as well as a measure of the patients' daily alcohol consumption. We
subdivided the 345 available samples into a training set of 200 and a test set of 145
samples. Due to the relatively few data we did not try to determine the optimal
regularization parameter using a validation process and will report results on the
test set for different parameter values.
Algorithm                        Accuracy
unregularized                    64.8 %
Bayesian penalty (β = 0.05)      65.5 %
Bayesian penalty (β = 0.10)      66.9 %
Bayesian penalty (β = 0.20)      61.4 %
averaging (local maxima)         65.5 %
averaging (70% subset)           72.4 %
averaging (bagging)              71.0 %

Table 2: Performances in the liver disorder classification problem.
The results of our experiments are shown in table 2. Again, both regularization
methods led to an improvement in prediction accuracy. In contrast to the toy problem, the averaged predictor was superior to the Bayesian approach here. Note that
the resampling led to an improvement of more than five percent points compared
to unresampled averaging.
6 Conclusion
We proposed a Bayesian and an averaging approach to regularize Gaussian mixture
density estimates. In comparison with the maximum likelihood solution both approaches led to considerably improved results as demonstrated using a toy problem
and a real-world classification task. Interestingly, none of the methods outperformed
the other in both tasks. This might be explained with the fact that Gaussian mixture density estimates are particularly unstable in high-dimensional spaces with
relatively few data. The benefit of averaging might thus be greater in this case.
Averaging proved to be particularly effective if applied in connection with resampling of the training data, which agrees with results in regression and classification tasks. If compared to Bayesian regularization, averaging is computationally expensive. On the other hand, Bayesian approaches typically require the determination of hyperparameters (in our case \bar\beta), which is not the case for averaging approaches.
References
[Bre94]
L. Breiman. Bagging predictors. Technical report , UC Berkeley, 1994.
[Bun94]
W . Buntine. Operations for learning with graphical models. Journal of Artificial
Intelligence Research, 2:159-225, 1994.
[DLR77] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from
incomplete data via the EM algorithm. J. Royal Statistical Society B, 1977.
[HT94]
T. Hastie and R. Tibshirani. Discriminant analysis by gaussian mixtures. Technical report , AT&T Bell Labs and University of Toronto, 1994.
[KL95]
N. Kambhatla and T. K. Leen. Classifying with gaussian mixtures and clusters.
In Advances in Neural Information Processing Systems 7. Morgan Kaufman,
1995.
[Mac91]
D. MacKay. Bayesian Modelling and Neural Networks. PhD thesis, California
Institute of Technology, Pasadena, 1991.
[Now91] S. J. Nowlan. Soft Competitive Adaption: Neural Network Learning Algorithms
based on Fitting Statistical Mixtures. PhD thesis, School of Computer Science,
Carnegie Mellon University, Pittsburgh, 1991.
[Orm93] D. Ormoneit. Estimation of probability densities using neural networks. Master's
thesis, Technische Universitiit Munchen, 1993.
[PC93]
M. P. Perrone and L. N. Cooper. When networks disagree: Ensemble methods for
hybrid Neural networks. In Neural Networks for Speech and Image Processing.
Chapman Hall, 1993.
[Rob94]
C. P. Robert. The Bayesian Choice. Springer-Verlag, 1994.
[THA93] V. Tresp, J. Hollatz, and S. Ahmad. Network structuring and training using
rule-based knowledge. In Advances in Neural Information Processing Systems 5.
Morgan Kaufman, 1993.
43 | 1,037 | Quadratic-Type Lyapunov Functions for
Competitive Neural Networks with
Different Time-Scales
Anke Meyer-Base
Institute of Technical Informatics
Technical University of Darmstadt
Darmstadt, Germany 64283
Abstract
The dynamics of complex neural networks modelling the selforganization process in cortical maps must include the aspects of
long and short-term memory. The behaviour of the network is such
characterized by an equation of neural activity as a fast phenomenon and an equation of synaptic modification as a slow part of the
neural system. We present a quadratic-type Lyapunov function for
the flow of a competitive neural system with fast and slow dynamic
variables. We also show the consequences of the stability analysis
on the neural net parameters.
1 INTRODUCTION
This paper investigates a special class of laterally inhibited neural networks. In
particular, we have examined the dynamics of a restricted class of laterally inhibited
neural networks from a rigorous analytic standpoint.
The network models for retinotopic and somatotopic cortical maps are usually composed of several layers of neurons from sensory receptors to cortical units, with
feedforward excitations between the layers and lateral (or recurrent) connection
within the layer. Standard techniques include (1) Hebbian rule and its variations
for modifying synaptic efficacies, (2) lateral inhibition for establishing topographical
organization of the cortex, and (3) adiabatic approximation in decoupling the dynamics of relaxation (which is on the fast time scale) and the dynamics of learning
(which is on the slow time scale) of the network . However, in most cases, only computer simulation results were obtained and therefore provided limited mathematical
understanding of the self-organizating neural response fields.
The networks under study model the dynamics of both the neural activity levels,
the short-term memory (STM), and the dynamics of synaptic modifications, the
long-term memory (LTM). The actual network models under consideration may be
considered extensions of Grossberg's shunting network [Gro76] or Amari's model
for primitive neuronal competition [Ama82]. These earlier networks are considered
pools of mutually inhibitory neurons with fixed synaptic connections. Our results
extended these earlier studies to systems where the synapses can be modified by
external stimuli. The dynamics of competitive systems may be extremely complex,
exhibiting convergence to point attractors and periodic attractors. For networks
which model only the dynamic of the neural activity levels Cohen and Grossberg
[CG83] found a Lyapunov function as a necessary condition for the convergence
behavior to point attractors.
In this paper we apply the results of the theory of Lyapunov functions for singularly
perturbed systems on large-scale neural networks, which have two types of state
variables (LTM and STM) describing the slow and the fast dynamics of the system.
So we can find a Lyapunov function for the neural system with different time-scales
and give a design concept of storing desired pattern as stable equilibrium points.
2 THE CLASS OF NEURAL NETWORKS WITH DIFFERENT TIME-SCALES
This section defines the network of differential equations characterizing laterally
inhibited neural networks. We consider a laterally inhibited network with a deterministic signal Hebbian learning law [Heb49] and is similar to the spatiotemporal
system of Amari [Ama83] .
The general neural network equations describe the temporal evolution of the STM
(activity modification) and LTM states (synaptic modification). For the jth neuron
of an N-neuron network these equations are:
    \dot{x}_j = -a_j x_j + \sum_{i=1}^{N} D_{ij} f(x_i) + B_j S_j    (1)

    (2)
where x_j is the current activity level, a_j is the time constant of the neuron, B_j is the contribution of the external stimulus term, f(x_i) is the neuron's output, D_{ij} is the lateral inhibition term and y_i is the external stimulus. The dynamic variable S_j represents the synaptic modification state and |y|^2 is defined as |y|^2 = y^T y. We will assume that the input stimuli are normalized vectors of unit magnitude |y|^2 = 1. These systems will be subject to our analysis considerations regarding the
stability of their equilibrium points.
3 ASYMPTOTIC STABILITY OF NEURAL NETWORKS WITH DIFFERENT TIME-SCALES
We show in this section that it is possible to determine the asymptotic stability of
this class of neural networks interpreting them as nonlinear singularly perturbed
systems. While singular perturbation theory, a traditional tool of fluid dynamics
and nonlinear mechanics, embraces a wide variety of dynamic phenomena possessing
slow and fast modes, we show that singular perturbations are present in many
neurodynamical problems. In this sense we apply in this paper the results of this
valuable analysis tool on the dynamics of laterally inhibited networks.
In [SK84] it is shown that a quadratic-type Lyapunov function for a singularly perturbed system is obtained as a weighted sum of quadratic-type Lyapunov functions
of two lower order systems: the so-called reduced and the boundary-layer systems.
Assuming that each of the two systems is asymptotically stable and has a Lyapunov
function, conditions are derived to guarantee that, for a sufficiently small perturbation parameter, asymptotic stability of the singularly perturbed system can be
established by means of a Lyapunov function which is composed as a weighted sum
of the Lyapunov functions of the reduced and boundary-layer systems.
Adopting the notations from [SK84] we will consider the singularly perturbed system²

    \dot{x} = f(x, y),    x \in B_x \subset R^n    (3)
    \epsilon \dot{y} = g(x, y, \epsilon),    y \in B_y \subset R^m    (4)
We assume that, in B_x and B_y, the origin (x = y = 0) is the unique equilibrium point and that (3) and (4) have a unique solution. A reduced system is defined by setting \epsilon = 0 in (3) and (4) to obtain

    \dot{x} = f(x, y)    (5)
    0 = g(x, y, 0)    (6)
Assuming that in B_x and B_y, (6) has a unique root y = h(x), the reduced system is rewritten as

    \dot{x} = f(x, h(x)) = f_r(x)    (7)
A boundary-layer system is defined as

    \frac{\partial y}{\partial \tau} = g(x, y(\tau), 0)    (8)

where \tau = t/\epsilon is a stretching time scale. In (8) the vector x \in R^n is treated as a fixed
unknown parameter that takes values in B_x. The aim is to establish the stability properties of the singularly perturbed system (3) and (4), for small \epsilon, from those of the reduced system (7) and the boundary-layer system (8). The Lyapunov functions for systems (7) and (8) are of quadratic-type. In [SK84] it is shown that under mild assumptions, for sufficiently small \epsilon, any weighted sum of the Lyapunov functions of the reduced and boundary-layer system is a quadratic-type Lyapunov function for the singularly perturbed system (3) and (4).
The necessary assumptions are stated now [SK84]:
1. The reduced system (7) has a Lyapunov function V : R^n \to R_+ such that for all x \in B_x

    (\nabla V(x))^T f_r(x) \le -\alpha_1 \psi^2(x),    \alpha_1 > 0    (9)

where \psi(x) is a scalar-valued function of x that vanishes at x = 0 and is different from zero for all other x \in B_x. This condition guarantees that x = 0 is an asymptotically stable equilibrium point of the reduced system (7).
² The symbol B_x indicates a closed sphere centered at x = 0; B_y is defined in the same way.
2. The boundary-layer system (8) has a Lyapunov function W(x, y) : R^n \times R^m \to R_+ such that for all x \in B_x and y \in B_y

    (\nabla_y W(x, y))^T g(x, y, 0) \le -\alpha_2 \phi^2(y - h(x)),    \alpha_2 > 0    (10)

where \phi(y - h(x)) is a scalar-valued function of (y - h(x)) \in R^m that vanishes at y = h(x) and is different from zero for all other x \in B_x and y \in B_y. This condition guarantees that y = h(x) is an asymptotically stable equilibrium point of the boundary-layer system (8).
3. The following three inequalities hold for all x \in B_x and all y \in B_y:

a.)    (\nabla_x W(x, y))^T f(x, y) \le c_1 \phi^2(y - h(x)) + c_2 \psi(x) \phi(y - h(x))    (11)

b.)    (\nabla V(x))^T [f(x, y) - f(x, h(x))] \le \beta_1 \psi(x) \phi(y - h(x))    (12)

c.)    (\nabla_y W(x, y))^T [g(x, y, \epsilon) - g(x, y, 0)] \le \epsilon K_1 \phi^2(y - h(x)) + \epsilon K_2 \psi(x) \phi(y - h(x))    (13)

The constants c_1, c_2, \beta_1, K_1 and K_2 are nonnegative. The inequalities above determine the permissible interaction between the slow and fast variables. They are basically smoothness requirements of f and g.
After these introductory remarks the stability criterion is now stated:
Theorem: Suppose that conditions 1-3 hold; let d be a positive number such that 0 < d < 1, and let \epsilon^*(d) be the positive number given by

    (14)

where \beta_2 = K_2 + c_2 and \gamma = K_1 + c_1; then for all \epsilon < \epsilon^*(d), the origin (x = y = 0) is an asymptotically stable equilibrium point of (3) and (4) and

    v(x, y) = (1 - d) V(x) + d W(x, y)    (15)
is a Lyapunov function of (3) and (4).
If we introduce \epsilon as a global neural time constant in equation (1) then we have
to determine two Lyapunov functions: one for the boundary-layer system and the
other for the reduced-order system.
In [CG83] a global Lyapunov function is mentioned for a competitive neural network with only an activation dynamics:

    (16)

under the constraints m_{ij} = m_{ji}, a_i(x_i) \ge 0, f_j'(x_j) \ge 0.
This Lyapunov function can be taken as one for the boundary-layer system (STM equation), if the LTM contribution S_i is considered as a fixed unknown parameter:

    W(x, S) = \sum_{j=1}^{N} \int_0^{x_j} a_j(\zeta_j) f_j'(\zeta_j) \, d\zeta_j - \sum_{j=1}^{N} B_j S_j \int_0^{x_j} f_j'(\zeta_j) \, d\zeta_j - \frac{1}{2} \sum_{j,k=1}^{N} D_{jk} f_j(x_j) f_k(x_k)    (17)
For the reduced-order system (LTM equation) we can take as a Lyapunov function:

    V(S) = \frac{1}{2} S^T S = \frac{1}{2} \sum_{i=1}^{N} S_i^2    (18)
The Lyapunov function for the coupled STM and LTM dynamics is the weighted sum of the two Lyapunov functions:

    v(x, S) = (1 - d) V(S) + d W(x, S)    (19)

4 DESIGN OF STABLE COMPETITIVE NEURAL NETWORKS
Competitive neural networks with learning rules have moving equilibria during the
learning process. The concept of asymptotic stability derived from matrix perturbation theory can capture this phenomenon.
We design in this section a competitive neural network that is able to store a desired
pattern as a stable equilibrium.
The theoretical implications are illustrated in an example of a two-neuron network.
Example: Let N = 2, a_i = A, B_j = B, D_{ii} = a > 0, D_{ij} = -\beta < 0 (i \ne j), and let the nonlinearity be a linear function f(x_j) = x_j in equations (1) and (2).
We get for the boundary-layer system:
    \dot{x}_j = -A x_j + \sum_{i=1}^{N} D_{ij} f(x_i) + B S_j    (20)
and for the reduced-order system:
    \dot{S}_j = S_j \Big[ \frac{B}{A-a} - 1 \Big] - \frac{C}{A-a}    (21)
Then we get for the Lyapunov-functions:
(22)
and
(23)
[Figure 1 plot area: STM state trajectories plotted against time in msec.]
Figure 1: Time histories of the neural network with the origin as an equilibrium
point: STM states.
For the nonnegative constants we get: \alpha_1 = 1 - \frac{B}{A-a}, \alpha_2 = (A-a)^2, c_1 = \gamma = -B with B < 0, and c_2 = \beta_1 = \beta_2 = 1 and K_1 = K_2 = 0.

We get some interesting implications from the above results: A - a > B, A - a > 0 and B < 0.
The above implications can be interpreted as follows: To achieve a stable equilibrium
point (0,0) we should have a negative contribution of the external stimulus term
and the sum of the excitatory and inhibitory contribution of the neurons should
be less than the time constant of a neuron. An evolution of the trajectories of the
STM and LTM states for a two-neuron system is shown in figures 1 and 2. The
STM states exhibit first an oscillation from the expected equilibrium point, while
the LTM states reach monotonically the equilibrium point. We can see from the
pictures that the equilibrium point (0,0) is reached after 5 msec by the STM- and
LTM-states.
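The qualitative behaviour shown in figures 1 and 2 can be reproduced with a simple forward-Euler integration of the two-neuron example. In the sketch below the fast STM equation is divided by a small parameter eps purely to emphasize the time-scale separation, and the slow LTM right-hand side is a plain linear relaxation used as a stand-in because equation (2) is not reproduced above; the code therefore only illustrates the fast/slow convergence to the origin, not the paper's exact system.

    import numpy as np

    def simulate_two_neurons(A=1.0, a=0.5, beta=0.2, B=-5.0, eps=0.1, dt=1e-3, T=10.0):
        D = np.array([[a, -beta], [-beta, a]])      # lateral connections of the example
        x = np.array([-1.0, -0.5])                  # initial STM states (arbitrary)
        S = np.array([0.5, 0.3])                    # initial LTM states (arbitrary)
        xs, Ss = [x.copy()], [S.copy()]
        for _ in range(int(T / dt)):
            dx = (-A * x + D @ x + B * S) / eps     # fast dynamics, cf. equation (20)
            dS = -S                                 # slow stand-in dynamics (assumption)
            x = x + dt * dx
            S = S + dt * dS
            xs.append(x.copy())
            Ss.append(S.copy())
        return np.array(xs), np.array(Ss)

With the parameter choice of the example (A - a > 0, B < 0) both trajectories decay to the origin, the STM states much faster than the LTM states.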
Choosing B = -5, A = 1 and a = 0.5 we obtain an explicit expression for \epsilon^*(d). From this formula we can see that \epsilon^*(d) has a maximum at d = d^* = 0.5.

5 CONCLUSIONS
We presented in this paper a quadratic-type Lyapunov function for analyzing the
stability of equilibrium points of competitive neural networks with fast and slow
dynamics. This global stability analysis method is interpreting neural networks
as nonlinear singularly perturbed systems. The equilibrium point is constrained
to a neighborhood of (0,0). This technique supposes a monotonically increasing
non-linearity and a symmetric lateral inhibition matrix. The learning rule is a
deterministic Hebbian. This method gives an upper bound on the perturbation
[Figure 2 plot area: LTM state trajectories plotted against time in msec.]
Figure 2: Time histories of the neural network with the origin as an equilibrium
point: LTM states.
parameter and such an estimation of a maximal positive neural time-constant. The
practical implication of the theoretical problem is the design of a competitive neural
network that is able to store a desired pattern as a stable equilibrium.
References
[Ama82] S. Amari. Competitive and cooperative aspects in dynamics of neural excitation and self-organization. Competition and cooperation in neural networks, 20:1-28, 7 1982.
[Ama83] S. Amari. Field theory of self-organizing neural nets. IEEE Transactions
on systems, machines and communication, SMC-13:741-748, 7 1983.
[CG83] M. A. Cohen and S. Grossberg. Absolute Stability of Global Pattern Formation and Parallel Memory Storage by Competitive Neural Networks. IEEE Transactions on Systems, Man and Cybernetics, SMC-13:815-826, 9 1983.
[Gro76] S. Grossberg. Adaptive Pattern Classification and Universal Recoding. Biological Cybernetics, 23:121-134, 1 1976.
[Heb49] D. O. Hebb. The Organization of Behavior. J. Wiley Verlag, 1949.
[SK84] Ali Saberi and Hassan Khalil. Quadratic-Type Lyapunov Functions for Singularly Perturbed Systems. IEEE Transactions on Automatic Control, pp. 542-550, June 1984.
44 | 1,038 | Stable Linear Approximations to
Dynamic Programming for Stochastic
Control Problems with Local Transitions
Benjamin Van Roy and John N. Tsitsiklis
Laboratory for Information and Decision Systems
Massachusetts Institute of Technology
Cambridge, MA 02139
e-mail: bvr@mit.edu, jnt@mit.edu
Abstract
We consider the solution to large stochastic control problems by
means of methods that rely on compact representations and a variant of the value iteration algorithm to compute approximate cost-to-go functions. While such methods are known to be unstable in
general, we identify a new class of problems for which convergence,
as well as graceful error bounds, are guaranteed. This class involves linear parameterizations of the cost-to- go function together
with an assumption that the dynamic programming operator is a
contraction with respect to the Euclidean norm when applied to
functions in the parameterized class. We provide a special case
where this assumption is satisfied, which relies on the locality of
transitions in a state space. Other cases will be discussed in a full
length version of this paper.
1 INTRODUCTION
Neural networks are well established in the domains of pattern recognition and
function approximation, where their properties and training algorithms have been
well studied. Recently, however, there have been some successful applications of
neural networks in a totally different context - that of sequential decision making
under uncertainty (stochastic control).
Stochastic control problems have been studied extensively in the operations research
and control theory literature for a long time, using the methodology of dynamic
programming [Bertsekas, 1995]. In dynamic programming, the most important
object is the cost-to-go (or value) junction, which evaluates the expected future
cost to be incurred, as a function of the current state of a system. Such functions
can be used to guide control decisions.
Dynamic programming provides a variety of methods for computing cost-to- go
functions. Unfortunately, dynamic programming is computationally intractable in
the context of many stochastic control problems that arise in practice. This is
because a cost-to-go value is computed and stored for each state, and due to the
curse of dimensionality, the number of states grows exponentially with the number
of variables involved.
Due to the limited applicability of dynamic programming, practitioners often rely
on ad hoc heuristic strategies when dealing with stochastic control problems. Several recent success stories - most notably, the celebrated Backgammon player of
Tesauro (1992) - suggest that neural networks can help in overcoming this limitation. In these applications, neural networks are used as compact representations
that approximate cost- to-go functions using far fewer parameters than states. This
approach offers the possibility of a systematic and practical methodology for addressing complex stochastic control problems.
Despite the success of neural networks in dynamic programming, the algorithms
used to tune parameters are poorly understood. Even when used to tune the parameters of linear approximators, algorithms employed in practice can be unstable
[Boyan and Moore, 1995; Gordon, 1995; Tsitsiklis and Van Roy, 1994].
Some recent research has focused on establishing classes of algorithms and compact
representation that guarantee stability and graceful error bounds. Tsitsiklis and Van
Roy (1994) prove results involving algorithms that employ feature extraction and interpolative architectures. Gordon (1995) proves similar results concerning a closely
related class of compact representations called averagers. However, there remains
a huge gap between these simple approximation schemes that guarantee reasonable
behavior and the complex neural network architectures employed in practice.
In this paper, we motivate an algorithm for tuning the parameters of linear compact representations, prove its convergence when used in conjunction with a class
of approximation architectures, and establish error bounds. Such architectures are
not captured by previous results. However, the results in this paper rely on additional assumptions. In particular, we restrict attention to Markov decision problems
for which the dynamic programming operator is a contraction with respect to the
Euclidean norm when applied to functions in the parameterized class. Though
this assumption on the combination of compact representation and Markov decision problem appears restrictive, it is actually satisfied by several cases of practical
interest. In this paper, we discuss one special case which employs affine approximations over a state space, and relies on the locality of transitions. Other cases will
be discussed in a full length version of this paper.
2 MARKOV DECISION PROBLEMS
We consider infinite horizon, discounted Markov decision problems defined on a
finite state space S = {1, ..., n} [Bertsekas, 1995]. For every state i \in S, there is
a finite set U(i) of possible control actions, and for each pair i,j E S of states and
control action u E U (i) there is a probability Pij (u) of a transition from state i to
state j given that action u is applied. Furthermore, for every state i and control
action u E U (i), there is a random variable Ciu which represents the one-stage cost
if action u is applied at state i.
Let \beta \in [0, 1) be a discount factor. Since the state spaces we consider in this paper
are finite, we choose to think of cost-to-go functions mapping states to cost-to-go values in terms of cost-to-go vectors whose components are the cost-to-go values of various states. The optimal cost-to-go vector V^* \in R^n is the unique solution to Bellman's equation:

    V_i^* = \min_{u \in U(i)} \Big( E[c_{iu}] + \beta \sum_{j \in S} p_{ij}(u) V_j^* \Big),    \forall i \in S.    (1)

If the optimal cost-to-go vector is known, optimal decisions can be made at any state i as follows:

    u^* = \arg\min_{u \in U(i)} \Big( E[c_{iu}] + \beta \sum_{j \in S} p_{ij}(u) V_j^* \Big),    \forall i \in S.
There are several algorithms for computing V* but we only discuss the value iteration algorithm which forms the basis of the approximation algorithm to be considered later on. We start with some notation. We define the dynamic programming
operator as the mapping T : R^n \to R^n with components T_i : R^n \to R defined by

    T_i(V) = \min_{u \in U(i)} \Big( E[c_{iu}] + \beta \sum_{j \in S} p_{ij}(u) V_j \Big),    \forall i \in S.    (2)

It is well known and easy to prove that T is a maximum norm contraction. In particular,

    \| T(V) - T(V') \|_\infty \le \beta \| V - V' \|_\infty    for all V, V' \in R^n.
The value iteration algorithm is described by
V(t + 1) = T(V(t)),
where V(0) is an arbitrary vector in R^n used to initialize the algorithm. It is easy
to see that the sequence {V(t)} converges to V*, since T is a contraction.
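For concreteness, a minimal value iteration sketch is given below; it assumes, purely for simplicity, that every action is admissible in every state (the text allows state-dependent sets U(i)), and the dictionary-based interface is an illustrative choice.

    import numpy as np

    def value_iteration(P, C, beta, tol=1e-8):
        # P[u]: (n, n) transition matrix for action u; C[u]: (n,) expected one-stage costs.
        n = next(iter(P.values())).shape[0]
        V = np.zeros(n)
        while True:
            # one application of the operator T of equation (2)
            Q = np.stack([C[u] + beta * P[u] @ V for u in P])
            V_new = Q.min(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new

Because T is a maximum norm contraction with modulus beta, the loop terminates for any tolerance.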
3 APPROXIMATIONS TO DYNAMIC PROGRAMMING
Classical dynamic programming algorithms such as value iteration require that we
maintain and update a vector V of dimension n. This is essentially impossible when
n is extremely large, as is the norm in practical applications. We set out to overcome
this limitation by using compact representations to approximate cost-to-go vectors.
In this section, we develop a formal framework for compact representations, describe
an algorithm for tuning the parameters of linear compact representations, and prove
a theorem concerning the convergence properties of this algorithm.
3.1 COMPACT REPRESENTATIONS
A compact representation (or approximation architecture) can be thought of as a
scheme for recording a high-dimensional cost-to-go vector V E !R n using a lowerdimensional parameter vector wE !Rm (m ?n). Such a scheme can be described by
a mapping V : !Rm r-t !R n which to any given parameter vector w E !R m associates
a cost-to-go vector V(w). In particular, each component Vi (w) of the mapping is
the ith component of a cost-to-go vector represented by the parameter vector w.
Note that, although we may wish to represent an arbitrary vector V ∈ R^n, such a
scheme allows for exact representation only of those vectors V which happen to lie
in the range of V.
In this paper, we are concerned exclusively with linear compact representations of
the form V(w) = Mw, where M ∈ R^{n×m} is a fixed matrix representing our choice
of approximation architecture. In particular, we have Vi(w) = Miw, where Mi (a
row vector) is the ith row of the matrix M.
3.2
A STOCHASTIC APPROXIMATION SCHEME
Once an appropriate compact representation is chosen, the next step is to generate
a parameter vector w such that V(w) approximates V*. One possible objective is
to minimize squared error of the form ||Mw - V*||_2^2. If we were given a fixed set
of N samples {(i_1, V*_{i_1}), (i_2, V*_{i_2}), ..., (i_N, V*_{i_N})} of an optimal cost-to-go vector V*, it
seems natural to choose a parameter vector w that minimizes Σ_{j=1}^N (M_{i_j} w - V*_{i_j})^2.
On the other hand, if we can actively sample as many data pairs as we want, one
at a time, we might consider an iterative algorithm which generates a sequence of
parameter vectors {w(t)} that converges to the desired parameter vector. One such
algorithm works as follows: choose an initial guess w(0), then for each t ∈ {0, 1, ...}
sample a state i(t) from a uniform distribution over the state space and apply the
iteration
    w(t+1) = w(t) - α(t) (M_{i(t)} w(t) - V*_{i(t)}) M_{i(t)}^T,    (3)
where {α(t)} is a sequence of diminishing step sizes and the superscript T denotes
a transpose. Such an approximation scheme conforms to the spirit of traditional
function approximation - the algorithm is the common stochastic gradient descent
method. However, as discussed in the introduction, we do not have access to such
samples of the optimal cost-to-go vector. We therefore need more sophisticated
methods for tuning parameters.
One possibility involves the use of an algorithm similar to that of Equation 3,
replacing samples of V*_{i(t)} with T_{i(t)}(V(t)). This might be justified by the fact that
T(V) can be viewed as an improved approximation to V*, relative to V. The
modified algorithm takes on the form
    w(t+1) = w(t) - α(t) (M_{i(t)} w(t) - T_{i(t)}(M w(t))) M_{i(t)}^T.    (4)
Intuitively, at each time t this algorithm treats T(Mw(t)) as a "target" and takes
a steepest descent step as if the goal were to find a w that would minimize ||Mw - T(Mw(t))||_2^2. Such an algorithm is closely related to the TD(0) algorithm of Sutton
(1988). Unfortunately, as pointed out in Tsitsiklis and Van Roy (1994), such a
scheme can produce a diverging sequence {w(t)} of weight vectors even when there
exists a parameter vector w* that makes the approximation error V* - Mw* zero at
every state. However, as we will show in the remainder of this paper, under certain
assumptions, such an algorithm converges.
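To make the iteration of Equation 4 concrete, here is a hedged NumPy sketch that samples states uniformly and uses T_i(Mw(t)) as the regression target. The same simplifying assumption as in the earlier sketch (identical action sets across states) applies, and the step-size schedule is just one admissible choice satisfying the conditions below.

    import numpy as np

    def fit_linear_approximation(M, P, c, beta, num_steps=100000, a0=0.1):
        n, m = M.shape
        w = np.zeros(m)                              # initial guess w(0)
        for t in range(num_steps):
            i = np.random.randint(n)                 # i(t) sampled uniformly over the state space
            V = M @ w                                # current approximation Mw(t)
            target = min(c[u][i] + beta * (P[u][i] @ V) for u in range(len(P)))  # T_i(Mw(t))
            alpha = a0 / (1.0 + t)                   # diminishing step sizes
            w = w - alpha * (M[i] @ w - target) * M[i]    # Equation 4
        return w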
3.3
MAIN CONVERGENCE RESULT
Our first assumption concerning the step size sequence {α(t)} is standard to stochastic approximation and is required for the upcoming theorem.
Assumption 1 Each step size α(t) is chosen prior to the generation of i(t), and
the sequence satisfies Σ_{t=0}^∞ α(t) = ∞ and Σ_{t=0}^∞ α(t)^2 < ∞.
Our second assumption requires that T : R^n → R^n be a contraction with respect
to the Euclidean norm, at least when it operates on value functions that can be
represented in the form Mw, for some w. This assumption is not always satisfied,
but it appears to hold in some situations of interest, one of which is to be discussed
in Section 4.
Assumption 2 There exists some β' ∈ [0, 1) such that

    ||T(Mw) - T(Mw')||_2 ≤ β' ||Mw - Mw'||_2,   for all w, w' ∈ R^m.
The following theorem characterizes the stability and error bounds associated with
the algorithm when the Markov decision problem satisfies the necessary criteria.
Theorem 1 Let Assumptions 1 and 2 hold, and assume that M has full column
rank. Let Π = M(M^T M)^{-1} M^T denote the projection matrix onto the subspace
X = {Mw | w ∈ R^m}. Then,
(a) With probability 1, the sequence w(t) converges to w*, the unique vector that
solves
    Mw* = ΠT(Mw*).
(b) Let V* be the optimal cost-to-go vector. The following error bound holds:
    ||Mw* - V*||_2 ≤ ((1 + β)√n / (1 - β')) ||ΠV* - V*||_∞.
3.4
OVERVIEW OF PROOF
Due to space limitations, we only provide an overview of the proof of Theorem 1.
Let s : R^m → R^m be defined by

    s(w) = E[(M_i w - T_i(Mw)) M_i^T],

where the expectation is taken over i uniformly distributed among {1, ..., n}.
Hence,
    E[w(t+1) | w(t), α(t)] = w(t) - α(t) s(w(t)),
where the expectation is taken over i(t). We can rewrite s as
    s(w) = (1/n) (M^T M w - M^T T(Mw)),
and it can be thought of as a vector field over R^m. If the sequence {w(t)} converges
to some w, then s(w) must be zero, and we have

    M^T M w = M^T T(Mw),
    Mw = ΠT(Mw).
Note that
    ||ΠT(Mw) - ΠT(Mw')||_2 ≤ β' ||Mw - Mw'||_2,   for all w, w' ∈ R^m,
due to Assumption 2 and the fact that projection is a nonexpansion of the Euclidean
norm. It follows that ΠT(·) has a unique fixed point w* ∈ R^m, and this point
uniquely satisfies
    Mw* = ΠT(Mw*).
We can further establish the desired error bound:
    ||Mw* - V*||_2 ≤ ||Mw* - ΠT(ΠV*)||_2 + ||ΠT(ΠV*) - ΠV*||_2 + ||ΠV* - V*||_2
                  ≤ β' ||Mw* - V*||_2 + ||T(ΠV*) - V*||_2 + ||ΠV* - V*||_2
                  ≤ β' ||Mw* - V*||_2 + (1 + β)√n ||ΠV* - V*||_∞,
and it follows that

    ||Mw* - V*||_2 ≤ ((1 + β)√n / (1 - β')) ||ΠV* - V*||_∞.
Consider the potential function U(w) = (1/2)||w - w*||_2^2. We will establish that
(∇U(w))^T s(w) ≥ γ U(w) for some γ > 0, and we are therefore dealing with a
"pseudogradient algorithm" whose convergence follows from standard results on
stochastic approximation [Polyak and Tsypkin, 1972]. This is done as follows:
    (∇U(w))^T s(w) = (1/n) (w - w*)^T M^T (Mw - T(Mw))
                   = (1/n) (w - w*)^T M^T (Mw - ΠT(Mw) - (I - Π)T(Mw))
                   = (1/n) (Mw - Mw*)^T (Mw - ΠT(Mw)),
where the last equality follows because M^T Π = M^T. Using the contraction assumption on T and the nonexpansion property of projection mappings, we have
    ||ΠT(Mw) - Mw*||_2 = ||ΠT(Mw) - ΠT(Mw*)||_2 ≤ β' ||Mw - Mw*||_2,
and applying the Cauchy-Schwarz inequality, we obtain
    (∇U(w))^T s(w) ≥ (1/n) (||Mw - Mw*||_2^2 - ||Mw - Mw*||_2 ||Mw* - ΠT(Mw)||_2)
                   ≥ (1/n) (1 - β') ||Mw - Mw*||_2^2.
Since M has full column rank, it follows that (∇U(w))^T s(w) ≥ γ U(w) for some
fixed γ > 0, and the proof is complete.
4
EXAMPLE: LOCAL TRANSITIONS ON GRIDS
Theorem 1 leads us to the next question: are there some interesting cases for which
Assumption 2 is satisfied? We describe a particular example here that relies on
properties of Markov decision problems that naturally arise in some practical situations.
When we encounter real Markov decision problems we often interpret the states
in some meaningful way, associating more information with a state than an index
value. For example, in the context of a queuing network, where each state is one
possible queue configuration, we might think of the state as a vector in which each
component records the current length of a particular queue in the network. Hence,
if there are d queues and each queue can hold up to k customers, our state space is
a finite grid Z_k^d (i.e., the set of vectors with integer components each in the range
{0, ..., k-1}).
Consider a state space where each state i ∈ {1, ..., n} is associated to a point
x^i ∈ Z_k^d (n = k^d), as in the queuing example. We might expect that individual
transitions between states in such a state space are local. That is, if we are at
a state x^i the next visited state x^j is probably close to x^i in terms of Euclidean
distance. For instance, we would not expect the configuration of a queuing network
to change drastically in a second. This is because one customer is served at a time,
so a queue that is full cannot suddenly become empty.
Note that the number of states in a state space of the form Z_k^d grows exponentially
with d. Consequently, classical dynamic programming algorithms such as value
iteration quickly become impractical. To efficiently generate an approximation to
the cost-to-go vector, we might consider tuning the parameters w ∈ R^d and a ∈ R
of an affine approximation V_i(w, a) = w^T x^i + a using the algorithm presented in
the previous section. It is possible to show that, under the following assumption
concerning the state space topology and locality of transitions, Assumption 2 holds
with β' = √(β² + 3/(k-3)), and thus Theorem 1 characterizes convergence properties of
the algorithm.
Assumption 3 The Markov decision problem has state space S = {1, ..., k^d}, and
each state i is uniquely associated with a vector x^i ∈ Z_k^d, with k ≥ 6(1 - β²)^{-1} + 3.
Any pair x^i, x^j ∈ Z_k^d of consecutively visited states either are identical or have
exactly one unequal component, which differs by one.
While this assumption may seem restrictive, it is only one example. There are many
more candidate examples, involving other approximation architectures and particular classes of Markov decision problems, which are currently under investigation.
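For concreteness, the sketch below builds the feature matrix M for the affine architecture V_i(w, a) = w^T x^i + a over the grid Z_k^d, so that the update of Section 3 can be reused with the stacked parameter vector (w, a). The construction and names are ours for illustration and are not taken from the paper's experiments.

    import numpy as np
    from itertools import product

    def affine_grid_features(k, d):
        # One row per state i: the grid coordinates x^i in {0, ..., k-1}^d followed
        # by a constant 1, so that M @ [w; a] = w^T x^i + a.
        coords = np.array(list(product(range(k), repeat=d)), dtype=float)   # (k^d, d)
        ones = np.ones((coords.shape[0], 1))
        return np.hstack([coords, ones])                                    # (k^d, d + 1)

    # Example: d = 2 queues, each holding up to k = 10 customers, gives 100 states
    # but only d + 1 = 3 tunable parameters.
    M = affine_grid_features(10, 2)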
5
CONCLUSIONS
We have proven a new theorem that establishes convergence properties of an algorithm for generating linear approximations to cost-to-go functions for dynamic
programming. This theorem applies whenever the dynamic programming operator
for a Markov decision problem is a contraction with respect to the Euclidean norm
when applied to vectors in the parameterized class. In this paper, we have described
one example in which such a condition holds. More examples of practical interest
will be discussed in a forthcoming full length version of this paper.
Acknowledgments
This research was supported by the NSF under grant ECS 9216531, by EPRI under
contract 8030-10, and by the ARO.
References
Bertsekas, D. P. (1995) Dynamic Programming and Optimal Control. Athena Scientific, Belmont, MA.
Boyan, J. A. & Moore, A. W. (1995) Generalization in Reinforcement Learning:
Safely Approximating the Value Function. In J. D. Cowan, G. Tesauro, and D.
Touretzky, editors, Advances in Neural Information Processing Systems 7. Morgan
Kaufmann.
Gordon, G. J. (1995) Stable Function Approximation in Dynamic Programming.
Technical Report: CMU-CS-95-103, Carnegie Mellon University.
Polyak, B. T. & Tsypkin, Y. Z., (1972) Pseudogradient Adaptation and Training
Algorithms. Avtomatika i Telemekhanika, 3:45-68.
Sutton, R. S. (1988) Learning to Predict by the Method of Temporal Differences.
Machine Learning, 3:9-44.
Tesauro, G. (1992) Practical Issues in Temporal Difference Learning. Machine Learning, 8:257-277.
Tsitsiklis, J. & Van Roy, B. (1994) Feature-Based Methods for Large Scale Dynamic
Programming. Technical Report: LIDS-P-2277, Laboratory for Information and
Decision Systems, Massachusetts Institute of Technology. Also to appear in Machine
Learning.
45 | 1,039 | Context-Dependent Classes in a Hybrid
Recurrent Network-HMM Speech
Recognition System
Dan Kershaw
Tony Robinson
Mike Hochberg ?
Cambridge University Engineering Department,
Trumpington Street, Cambridge CB2 1PZ, England.
Tel: [+44]1223332800, Fax: [+44]1223332662.
Email: djk.ajr@eng.cam.ac.uk
Abstract
A method for incorporating context-dependent phone classes in
a connectionist-HMM hybrid speech recognition system is introduced. A modular approach is adopted, where single-layer networks
discriminate between different context classes given the phone class
and the acoustic data. The context networks are combined with a
context-independent (CI) network to generate context-dependent
(CD) phone probability estimates. Experiments show an average
reduction in word error rate of 16% and 13% from the CI system
on ARPA 5,000 word and SQALE 20,000 word tasks respectively.
Due to improved modelling, the decoding speed of the CD system
is more than twice as fast as the CI system.
INTRODUCTION
The ABBOT hybrid connectionist-HMM system performed competitively with many
conventional hidden Markov model (HMM) systems in the 1994 ARPA evaluations
of speech recognition systems (Hochberg, Cook, Renals, Robinson & Schechtman
1995). This hybrid framework is attractive because it is compact, having far fewer
parameters than conventional HMM systems, whilst also providing the discriminative powers of a connectionist architecture.
It is well established that particular phones vary acoustically when they occur in
different phonetic contexts. For example a vowel may become nasalized when following a nasal sound. The short-term contextual influence of co-articulation is
?Mike Hochberg is now at Nuance Communications, 333 Ravenswood Avenue, Building
110, Menlo Park, CA 94025, USA. Tel: [+1] 415 6148260.
handled in HMMs by creating a model for all sufficiently differing phonetic contexts with enough acoustic evidence. This modelling of phones in their particular
phonetic contexts produces sharper probability density functions . This approach
vastly improves HMM recognition accuracy over equivalent context-independent
systems (Lee 1989). Although the recurrent neural network (RNN) model acoustic
context internally (within the state vector) , it does not model phonetic context.
This paper presents an approach to improving the ABBOT system through phonetic
context-dependent modelling.
In Cohen, Franco, Morgan, Rumelhart & Abrash (1992) separate sets of context-dependent output layers are used to model context effects in different states of HMM
phone models. A set of networks discriminate between phones in 8 different broad-class left and right contexts. Training time is reduced by initialising from a CI multilayer perceptron (MLP) and only changing the hidden-to-output weights during
context-dependent training. This system performs well on the DARPA Resource
Management Task. The work presented in Zhoa, Schwartz, Sroka & Makhoul (1995)
followed along similar lines to Cohen et al. (1992). A context-dependent mixture
of experts (ME) system (Jordan & Jacobs 1994) based on the structure of the
context-independent ME was built. For each state, the whole training data was
divided into 46 parts according to its left or right context. Then, a separate ME
model was built for each context.
Another approach to phonetic context-dependent modelling with MLPs was proposed by Bourlard & Morgan (1993) . It was based on factoring the conditional
probability of a phone-in-context given the data in terms of the phone given the
data , and its context given the data and the phone. The approach taken in this
paper is a mixture of the above work. However, this work augments a recurrent network (rather than an MLP) and concentrates on building a more compact system,
which is more suited to our requirements. As a result, the context training scheme is
fast and is implemented on a workstation (rather than a parallel processing machine
as is used for training the RNN) .
OVERVIEW OF THE ABBOT HYBRID SYSTEM
The basic framework of the ABBOT system is similar to the one described in Bourlard
& Morgan (1994) except that a recurrent network is used as the acoustic model
for the within the HMM framework. A more detailed description of the recurrent
network for phone probability estimation is given in Robinson (1994). At each 16ms
time frame , the acoustic vector u(t) is mapped to an output vector y(t), which
represents an estimate of the posterior probability of each of the phone classes
Yi(t) ~ Pr(qi(t)lui H ) ,
(1)
where qi(t) is phone class i at time t , and ul = {u(l) , .. . , u(t)} is the input from
time 1 to t . Left (past) acoustic context is modelled internally by a 256 dimensional
state vector x(t) , which can be envisaged as "storing" the information that has
been presented at the input. Right (future) acoustic context is given by delaying
the posterior probability estimation until four frames of input have been seen by the
network . The network is trained using a modified version of error back-propagation
through time (Robinson 1994) .
Decoding with the hybrid connectionist-HMM approach is equivalent to conventional HMM decoding, with the difference being that the RNN models the state
observations. Like typical HMM systems, the recognition process is expressed as
finding the maximum a posteriori state sequence for the utterance . The decoding
criterion specified above requires the computation of the likelihood of the acoustic
data given a phone (state) sequence,
    p(u(t) | q_i(t)) = Pr(q_i(t) | u(t)) p(u(t)) / Pr(q_i),    (2)
where p(u(t)) is the same for all phones, and hence drops out of the decoding
process. Hence, the network outputs are mapped to scaled likelihoods by,
    p(u(t) | q_i(t)) ∝ y_i(t) / Pr(q_i),    (3)
where the priors Pr(qi) are estimated from the training data. Decoding uses the
NOWAY decoder (Renals & Hochberg 1995) to compute the utterance model that is
most likely to have generated the observed speech signal.
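A small sketch of the bookkeeping behind Equation 3: the frame-by-frame network outputs are divided by the phone priors to give scaled likelihoods for the decoder. This is our illustration with invented array names, not code from the ABBOT system.

    import numpy as np

    def scaled_likelihoods(posteriors, priors, floor=1e-8):
        # posteriors: (num_frames, num_phones) array of RNN outputs y_i(t)
        # priors:     (num_phones,) relative phone frequencies Pr(q_i) from training data
        # Returns y_i(t) / Pr(q_i); p(u(t)) is common to all phones and is dropped.
        return posteriors / np.maximum(priors, floor)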
CONTEXT-DEPENDENT PROBABILITY ESTIMATION
The approach taken by this work is to augment the CI RNN, in a similar vein
to Bourlard & Morgan (1993). The context-dependent likelihood, p(U_t | C_t, Q_t),
can be factored as

    p(U_t | C_t, Q_t) = Pr(C_t | U_t, Q_t) p(U_t | Q_t) / Pr(C_t | Q_t),    (4)
where C is a set of context classes and Q is a set of context-independent phones or
monophones. Substituting for the context independent probability density function,
p(U_t | Q_t), using (2), this becomes

    p(U_t | C_t, Q_t) = [Pr(C_t | U_t, Q_t) / Pr(C_t | Q_t)] [Pr(Q_t | U_t) / Pr(Q_t)] p(U_t).    (5)
The term p(U_t) is constant for all frames, so this drops out of the decoding process
and is ignored for all further purposes. This format is extremely appealing since
Pr(C_t | Q_t) and Pr(Q_t) are estimated from the training data and the CI RNN estimates Pr(Q_t | U_t). All that is then needed is an estimate of Pr(C_t | U_t, Q_t). The
approach taken in this paper uses a set of context experts or modules for each monophone class to augment the existing CI RNN.
TRAINING ON THE STATE VECTOR
An estimate of Pr(C_t | U_t, Q_t) can be obtained by training a recurrent network to
discriminate between contexts Cj(t) for phone class qi(t), such that
    y_{j|i}(t) ≈ Pr(c_j(t) | u_1^{t+4}, q_i(t)),    (6)
where y_{j|i}(t) is an estimate of the posterior probability of context class j given
phone class i. However, training recurrent neural networks in this format would
be expensive and difficult. For a recurrent format, the network must contain no
discontinuities in the frame-by-frame acoustic input vectors. This implies all recurrent networks for all the phone classes i must be "shown" all the data. Instead, the
assumption is made that since the state vector x = f(u), then
x(t+4) is a good representation for u_1^{t+4}.
Hence, a single-layer perceptron is trained on the state vectors corresponding to
each monophone, qi, to classify the different phonetic context classes. Finally,
the likelihood estimates for the phonetic context class j for phone class i used
in decoding are given by,
    Pr(q_i(t) | u_1^{t+4}) Pr(c_j(t) | x(t+4), q_i(t)) / [Pr(c_j(t) | q_i(t)) Pr(q_i(t))]
        = y_i(t) y_{j|i}(t) / [Pr(c_j | q_i) Pr(q_i)].    (7)
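The modular combination of Equation 7 can be sketched as follows: the CI posteriors y_i(t) are multiplied by the outputs y_{j|i}(t) of per-phone single-layer softmax networks applied to the state vector x(t+4), and divided by the context and phone priors. The array names and the use of a softmax output layer are assumptions made for illustration rather than a description of the system's implementation.

    import numpy as np

    def softmax(z):
        z = z - np.max(z)
        e = np.exp(z)
        return e / e.sum()

    def cd_scaled_likelihoods(y_ci, x_state, ctx_weights, ctx_priors, phone_priors):
        # y_ci:         (num_phones,) CI posteriors y_i(t) from the recurrent network
        # x_state:      state vector x(t+4) seen by the context modules
        # ctx_weights:  list; ctx_weights[i] is the (num_contexts_i, dim) weight matrix
        # ctx_priors:   list; ctx_priors[i][j] = Pr(c_j | q_i)
        # phone_priors: (num_phones,) priors Pr(q_i)
        out = []
        for i, W in enumerate(ctx_weights):
            y_ji = softmax(W @ x_state)                                      # y_{j|i}(t), Equation 6
            out.append(y_ci[i] * y_ji / (ctx_priors[i] * phone_priors[i]))   # Equation 7
        return out   # one vector of context-dependent scaled likelihoods per phone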
Embedded training is used to estimate the parameters of the CD networks and
the training data is aligned using a Viterbi segmentation. Each context network
is trained on a non-overlapping subset of the state vectors generated from all the
Viterbi aligned training data. The context networks were trained using the RProp
training procedure (Robinson 1994).
Figure 1: The Phonetic Context-Dependent RNN Modular System.
The frame-by-frame phonetic context posterior probabilities are required as input to
the NOWAY decoder, i.e. all the outputs from the context modules on the right hand
side of Figure 1. These posterior probabilities are calculated from the numerator
of (7). The CI RNN stage operates in its normal fashion, generating frame-by-frame
monophone posterior probabilities. At the same time the CD modules take the state
vector generated by the RNN as input, in order to classify into a context class. The
RNN posterior probability outputs are multiplied by the module outputs to form
context-dependent posterior probability estimates.
RELATIONSHIP WITH MIXTURE OF EXPERTS
This architecture has similarities with mixture of experts (Jordan & Jacobs 1994).
During training, rather than making a "soft" split of the data as in the mixture of
experts case, the Viterbi segmentation selects one expert at every exemplar. This
means only one expert is responsible for each example in the data. This assumes that
the Viterbi segmentation is a good approximation to the segmentation/selection
process. Hence, each expert is trained on a small subset of the training data,
avoiding the computationally expensive requirement for each expert to "see" all
the data. During decoding, the RNN is treated as a gating network, smoothing the
predictions of the experts, in an analogous manner to a standard mixture of experts
gating network . For further description of the system see Kershaw, Hochberg &
Robinson (1995) .
CLUSTERING CONTEXT CLASSES
One of the problems faced by having a context-dependent system is to decide which
context classes are to be included in the CD system. A method for overcoming
this problem is a decision-tree based approach to cluster the context classes. This
guarantees a full coverage of all phones in any context with the context classes
being chosen using the acoustic evidence available. The tree clustering framework
also allows for the building of a small number of context-dependent phones, keeping
the new context-dependent connectionist system architecture compact. The tree
building algorithm was based on Young, Odell & Woodland (1994), and further
details can be found in Kershaw et al. (1995). Once the trees were built, they were
used to relabel the training data and the pronunciation lexicon.
EVALUATION OF THE CONTEXT SYSTEM
The context-independent networks were trained on the ARPA Wall Street Journal SI84 Corpus. The phonetic context-dependent classes were clustered on the
acoustic data according to the decision tree algorithm. Running the data through a
recurrent network in a feed-forward fashion to obtain three million frames with 256
dimensional state vectors took approximately 8 hours on an HP735 workstation.
Training all the context-dependent networks on all the training data takes between
4- 6 hours (in total) on an HP735 workstation. The context-dependent modules
were cross-validated on a development set at the word level.
Results for two context-dependent systems, compared with the context-independent
baseline are shown in Table 1, where the 1993 spoke 5 test is used for cross-validation
and development purposes.
The context-dependent systems were also applied to larger tasks such as the recent
1995 SQALE (a European multi-language speech recognition evaluation) 20,000
word development and evaluation sets. The American English context-dependent
system (CD527) was extended to include a set of modules trained backwards in
time (which were log-merged with the forward context), to augment a four way logmerged context-independent system (Hochberg, Cook, Renals & Robinson 1994).
Table 1: Comparison Of The CI System With The CD205 And CD527 Systems,
For 5,000 Word, Bigram Language Model Tasks.

1993 Test Sets    CI System    CD205 System            CD527 System
                  WER          WER     % Redn. WER     WER     % Redn. WER
Spoke 5           16.0         14.0    12.7            13.6    14.9
Spoke 6           14.6         12.2    16.3            11.7    19.8
Eval.             15.7         14.3    8.4             13.7    12.6
Table 2: Comparison Of The Merged CI Systems With The CD527US And
CD465UK Systems, For 20,000 Word Tasks. All Tests Use A Trigram Language
Model. The CD527US And CD465UK Evaluation Results Have Been Officially
Adjudicated.

1995 Test Sets           CI System    CD System
                         WER          WER       % Redn. WER
US English dev_test      12.8         11.3      12.2
US English evl_test      14.5         12.9 †    9.8
UK English dev_test      15.6         12.7      18.9
UK English evl_test      16.4         13.8 †    15.7
Table 3: Comparison Of Average Utterance Decode Speed Of The CI Systems With
The CD527US And CD465UK Systems On An HP735, For 20,000 Word Tasks. All
Tests Use A Trigram Language Model, And The Same Pruning Levels.
Tests               CI Av. Utterance      CD Av. Utterance      Speedup
                    Decode Speed (s)      Decode Speed (s)
American English    67                    31                    2.16
British English     131                   48                    2.73
Table 4: The Number Of Parameters Used For The CI Systems As Compared With
The CD527US And CD465UK Systems.
System              # CI Parameters    # CD Parameters    % Increase In Parameters
American English    341,000            612,000            79.0
British English     331,000            570,000            72.2
A similar system was built for British English (CD465). Table 2 shows the improvement gained by using context models. The daggers indicate the official entries for
the 1995 SQALE evaluation. These figures represent the lowest reported word error
rate for both the US and UK English tasks.
As a result of improved phonetic modelling and class discrimination, the search
space was reduced. This meant that decoding with the context-dependent system was over
twice as fast as with the context-independent baseline (Table 3), even though there were roughly ten times as
many context-dependent phones compared to the monophones.
The increase in the number of parameters due to the introduction of the context
models for the SQALE evaluation system is shown in Table 4. Although this
seems a large increase in the number of system parameters, it is still an order of
magnitude less than any equivalent HMM system built for this task.
CONCLUSIONS
This paper has discussed a successful way of integrating phonetic context-dependent
classes into the current ABBOT hybrid system. The architecture followed a modular
approach which could be used to augment any current RNN-HMM hybrid system.
Fast training of the context-dependent modules was achieved. Training on all of the
SI84 corpus took between 4 and 6 hours. Utterance decoding was performed using
the standard NOWAY decoder. The word error was significantly reduced, whilst the
decoding speed of the context system was over twice as fast as the baseline system
(for 20,000 word tasks).
References
Bourlard, H. & Morgan, N. (1993), 'Continuous Speech Recognition by Connectionist Statistical Methods', IEEE Transactions on Neural Networks 4(6), 893- 909.
Bourlard, H. & Morgan, N. (1994), Connectionist Speech Recognition: A Hybrid
Approach, Kluwer Academic Publishers.
Cohen, M., Franco, H., Morgan, N., Rumelhart, D. & Abrash, V. (1992), ContextDependent Multiple Distribution Phonetic Modeling with MLPs, in 'NIPS 5'.
Hochberg, M., Cook, G., Renals, S. & Robinson, A. (1994), Connectionist Model
Combination for Large Vocabulary Speech Recognition, in 'Neural Networks
for Signal Processing', Vol. IV, pp. 269-278.
Hochberg, M., Cook, G., Renals, S., Robinson, A. & Schechtman, R. (1995), The
1994 ABBOT Hybrid Connectionist-HMM Large-Vocabulary Recognition System, in 'Spoken Language Systems Technology Workshop', ARPA, pp. 170-6.
Jordan, M. & Jacobs, R. (1994), 'Hierarchical Mixtures of Experts and the EM
Algorithm', Neural Computation 6, 181-214.
Kershaw, D., Hochberg, M. & Robinson, A. (1995), Incorporating ContextDependent Classes in a Hybrid Recurrent Network-HMM Speech Recognition
System, F-INFENG TR217, Cambridge University Engineering Department.
Lee, K.-F. (1989), Automatic Speech Recognition; The Development of the SPHINX
System, Kluwer Academic Publishers.
Renals, S. & Hochberg, M. (1995), Efficient Search Using Posterior Phone Probability Estimates, in 'ICASSP', Vol. 1, pp. 596-9.
Robinson, A. (1994), 'An Application of Recurrent Nets to Phone Probability Estimation.', IEEE Transactions on Neural Networks 5(2),298-305.
Young, S., Odell, J. & Woodland, P. (1994), 'Tree-Based State Tying for High Accuracy Acoustic Modelling', Spoken Language Systems Technology Workshop.
Zhoa, Y., Schwartz, R., Sroka, J . & Makhoul, J. (1995), Hierarchical Mixtures of
Experts Methodology Applied to Continuous Speech Recognition, in 'NIPS 7'.
46 | 104 | 728
DIGITAL REALISATION OF SELF-ORGANISING MAPS
Nigel M. Allinson
Martin J. Johnson
Department of Electronics
University of York
York
YO1 5DD
England
Kevin J. Moon
ABSTRACT
A digital realisation of two-dimensional self-organising feature
maps is presented.
The method is based on subspace
classification using an n-tuple technique. Weight vector
approximation and orthogonal projections to produce a winner-takes-all network are also discussed. Over one million effective
binary weights can be applied in 25ms using a conventional
microcomputer. Details of a number of image recognition tasks,
including character recognition and object centring, are
described.
INTRODUCTION
Background
The overall aim of our work is to develop fast and flexible systems for image
recognition, usually for commercial inspection tasks. There is an urgent need for
automatic learning systems in such applications, since at present most systems
employ heuristic classification techniques. This approach requires an extensive
development effort for each new application, which exaggerates implementation
costs; and for many tasks, there are no clearly defined features which can be
employed for classification. Enquiring of a human expert will often only produce
"good" and "bad" examples of each class and not the underlying strategies which
he may employ. Our approach is to model in a quite abstract way the perceptual
networks found in the mammalian brain for vision. A back-propagation network
could be employed to generalise about the input pattern space, and it would find
some useful representations. However, there are many difficulties with this
approach, since the network structure assumes nothing about the input space and
it can be difficult to bound complicated feature clusters using hyperplanes. The
mammalian brain is a layered structure, and so another model may be proposed
which involves the application of many two-dimensional feature maps. Each map
takes information from the output of the preceding one and performs some type of
clustering analysis in order to reduce the dimensionality of the input information.
For successful recognition, similar patterns must be topologically close so that
novel patterns are in the same general area of the feature map as the class they
are most like. There is therefore a need for both global and local ordering
processes within the feature map. The process of global ordering in a topological
map is termed, by Kohonen (1984), as self-organisation.
It is important to realize that all feedforward networks perform only one function,
namely the labelling of areas in a pattern space. This paper concentrates on a
technique for realising large, fast, two-dimensional feature maps using a purely
digital implementation.
Figure 1. Unbounded Feature Map of Local Edges
Self Organisation
Global ordering needs to adapt the entire neural map, but local ordering needs
only local information. Once the optimum global organisation has been found,
then only more localised ordering can improve the topological organisation. This
process is the basis of the Kohonen clustering algorithm, where the specified area
of adaption decreases with time to give an increasing local ordering. It has been
shown that this approach gives optimal ordering at global and local levels (Oja,
1983). It may be considered as a dimensionality reduction algorithm, and can be
used as a vector quantiser.
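For reference, here is a minimal sketch of a self-organising update of this kind, using the normalised dot product as the similarity measure and a neighbourhood that shrinks over training. It is a generic Kohonen-style routine written for illustration, not the digital implementation described later in this paper.

    import numpy as np

    def train_som(data, rows=8, cols=8, epochs=10, lr=0.25):
        dim = data.shape[1]
        W = np.random.rand(rows, cols, dim)
        W /= np.linalg.norm(W, axis=2, keepdims=True)       # unit-length weight vectors
        grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij'))
        for epoch in range(epochs):
            radius = max(1.0, (rows / 2.0) * (1.0 - epoch / epochs))   # shrinking neighbourhood
            for x in data:
                x = x / (np.linalg.norm(x) + 1e-12)
                response = (W * x).sum(axis=2)              # normalised dot products
                winner = np.unravel_index(np.argmax(response), response.shape)
                dist2 = ((grid - np.array(winner)) ** 2).sum(axis=2)
                h = np.exp(-dist2 / (2.0 * radius ** 2))[..., None]
                W += lr * h * (x - W)                       # pull neighbours towards the input
                W /= np.linalg.norm(W, axis=2, keepdims=True)
        return W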
Although Kohonen's self-organising feature maps have been successfully applied
to speech recognition (Kohonen, 1988; Tattersall et aI., 1988), there has been little
Investigation in their application for image recognition. Such feature maps can be
used to extract various image primitives, such as textures, localised edges and
terminations, at various scales of representations (Johnson and Allinson, 1988).
As a simple example, a test image of concentric circles is employed to construct a
small feature map of localised edges (Figure 1). The distance measure used is the
normalised dot product since in general magnitude information is unimportant.
Under these conditions, each neuron output can be considered a similarity
measure of the directions between the input pattern and the synaptic weight
vector. This map shows that similar edges have been grouped together and that
inverses are as far from each other as possible.
DIGITAL IMPLEMENTATION
Sub-Space Classification
Although a conventional serial computer is normally thought of as only performing
one operation at a time, there is a task which it can successfully perform involving
parallel computation. The action of addressing memory can be thought of as a
highly parallel process, since it involves the comparison of a word, W, with a set of
2^N others, where N is the number of bits in W. It is, in effect, performing 2^N
parallel computations - each being a single match. This can be exploited to speed
up the simulation of a network by using a conversion between conventional
pattern space labelling and binary addressing.
Figure 2 shows how the labelling of two-dimensional pattern space is equivalent to
the partitioning of the same space by the decision regions of a multiple layer
perceptron. If each quantised part of the space is labelled with a number for each
class then all that is necessary is for the pattern to be used as an address to give
the stored label (i.e. the response) for each class. These labels may form a cluster
of any shape and so multiple layers are not required to combine regions.
The apparent flaw in the above suggestion is that for anything other than a trivial
problem, the labelling of every part of pattern space is impractical. For example a
32 x 32 input vector would require a memory of 2^1024 words per unit! What is
needed is a coding system which uses some basic assumptions about patterns in
order to reduce the memory requirements. One assumption which can be made
is that patterns will cluster together into various classes. As early as 1959, a
method known as the n-tuple technique was used for pattern recognition (Bledsoe
and Browning, 1959). This technique takes a number of subspaces of the pattern
Figure 2. Comparison of Perceptron and Sub-Space Classification. [Figure annotation: the labeling of a quantized subspace is equivalent to the partitioning of pattern space by the multi-layer perceptron; legend: Class 1, Class 2.]
space and uses the sum of the resultant labels as the overall response. This gives
a set of much smaller memories and inherent in the coding method is that similar
patterns will have identical labels.
For example, assume a 16 bit pattern - 0101101001010100. Taking a four-bit
sample from this, say bits 0-3, giving 0100. This can be used to address a 16 word
memory to produce a single bit. If this bit is set to 1, then it is in effect labelling all
patterns with 0100 as their first four bits; that is 4096 patterns of the form
xxxxxxxxxxxx0100. Taking a second sample, namely bits 4-7 (0101). This labels
xxxxxxxx0101xxxx patterns, but when added to the first sample there will be 256
patterns labelled twice (namely, xxxxxxxx01010100) and 7936 (i.e. 8192-256)
labelled once.
The third four-bit sample produces 16 patterns (namely,
xxxx101001010100) labelled three times. The fourth sample produces only one
pattem 0101101001010100, which has been labelled four times. If an input pattern
is applied which differs from this by one bit, then this will now be labelled three
times by the samples; if it differs by two bits, it will either be labelled two or three
times depending on whether the changes were in the same four-bit sample or not.
Thus a distance measure is implicit in the coding method and reflects the
assumed clustering of patterns. Applying this approach to the earlier problem of a
32 x 32 binary input vector and taking 128 eight-bit samples results in a distance
measure between 0 and 128 and uses 32K bits of memory per unit.
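The n-tuple scheme just described can be sketched as follows: each unit has a fixed random assignment of input bits to samples and one small bit-table per sample, and its response is the number of samples whose addressed bit is set. The layout and names are ours; note that, unlike the refinement described in the following section, this simple version just sets bits during training rather than redistributing a fixed number of them.

    import numpy as np

    def make_unit(num_samples, bits_per_sample, rng):
        # Fixed random assignment of input bits to samples, plus one table per sample.
        mapping = rng.permutation(num_samples * bits_per_sample).reshape(num_samples, bits_per_sample)
        tables = np.zeros((num_samples, 2 ** bits_per_sample), dtype=np.uint8)
        return mapping, tables

    def addresses(pattern, mapping):
        # Pack each sample's bits into an integer address into its table.
        bits = pattern[mapping]                              # (num_samples, bits_per_sample)
        return bits @ (1 << np.arange(mapping.shape[1]))

    def train_on(pattern, mapping, tables):
        tables[np.arange(tables.shape[0]), addresses(pattern, mapping)] = 1

    def response(pattern, mapping, tables):
        # A similarity score between 0 and num_samples (0..128 for 128 eight-bit samples).
        return int(tables[np.arange(tables.shape[0]), addresses(pattern, mapping)].sum())

    # Example for the text's 32 x 32 binary window: 128 eight-bit samples per unit.
    rng = np.random.default_rng(0)
    mapping, tables = make_unit(128, 8, rng)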
Weight Vector Approximation
It is possible to make an estimate of the approximate weight vector for a particular
sample from the bit table. For simplicity, consider a binary image from which t
samples are taken to form a word, w, where
    w = x_0 + 2 x_1 + ... + 2^{t-1} x_{t-1}.
This word can be used to address a vector W. Every bit in W[b] which is 1 either
increases the weight vector probability where the respective bit in the address is
set, or decreases if it is clear. Hence, if BIT(w, i) is the ith bit of w and A[i] is the
contents of the memory {0, 1}, then
    W[b] = Σ_{i=0}^{2^t - 1} A[i] (2 BIT(b, i) - 1)
This represents an approximate measure of the weight element. Table 1
demonstrates the principle for a four-bit sample memory. Given randomly
distributed inputs this binary vector is equivalent to the weight vector [2, 4, 0, -2].
If there is a large number of set bits in the memory for a particular unit then that
will always give a high response - that is, it will become saturated. However, if
there are too few bits set, this unit will not respond strongly to a general set of
patterns. The number of bits must, therefore, be fixed at the start of training,
distributed randomly within the memory and only redistribution of these bits
allowed. Set bits could be taken from any other sample, but some samples will be
more important than others. The proportion of 1's in an image should not be used
as a measure, otherwise large uniform regions will be more significant than the
pattern detail. This is a form of magnitude independent operation similar to the
use of the normalised dot product applied in the analogue approach and so bits
may only be moved from addresses with the same number of set bits as the
current address.
TABLE 1. Weight Vector Approximation
Address (x3 x2 x1 x0)   A   Weight change (W3 W2 W1 W0)
[The table lists all sixteen four-bit addresses, the stored bit A at each address, and the resulting + / - contribution to each weight element; the individual entries are not recoverable from this copy.]
Equivalent weight vector: W3 = 2, W2 = 4, W1 = 0, W0 = -2
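The weight-vector approximation above is easy to compute directly from a sample's bit table, as in the following hedged sketch; with a suitably chosen table it reproduces the four-bit example's equivalent weight vector [W3, W2, W1, W0] = [2, 4, 0, -2]. The function name and layout are ours.

    import numpy as np

    def approx_weights(table, num_bits):
        # table[b] in {0, 1} is the stored bit A at address b (0 <= b < 2**num_bits).
        W = np.zeros(num_bits)
        for address, a in enumerate(table):
            if a:
                bits = (address >> np.arange(num_bits)) & 1   # x_0 ... x_{num_bits-1}
                W += 2 * bits - 1                             # +1 where the bit is set, -1 where clear
        return W      # W[i]: approximate weight on input bit x_i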
Orthogonal Projections
In order to speed up the simulation further, instead of representing each unit by a
single bit in memory, each unit can be represented by a combination of bits.
Hence many calculations can be effectively computed in parallel. The number of
units which require a 1 for a particular sample will always be relatively small, and
hence these can be coded. The coding method employed is to split the binary
word, W, into x and y fields. These projection fields address a two dimensional
map and so provide a fast technique of approximating the true content of the
memory. The x bits are summed separately to the y bits, and together they give a
good estimate of the unit co-ordinates with the most bits set in x and in y. This
map becomes, in effect, a winner-takes-all network. The reducing neighbourhood
of adaption employed in the Kohonen algorithm can also be readily incorporated
by applying an overall mask to this map during the training phase.
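A sketch of the winner-takes-all step using the projection fields: each stored word decodes to an (x, y) pair, the x parts and y parts are accumulated in two separate histograms, and the winning unit's co-ordinates are read off as the two peaks. The function and argument names are assumptions made for illustration.

    import numpy as np

    def winner_from_projections(votes_xy, rows, cols):
        # votes_xy: iterable of (x, y) co-ordinates decoded from the set bits addressed
        #           by the current input; rows x cols is the size of the output map.
        x_hist = np.zeros(cols, dtype=int)
        y_hist = np.zeros(rows, dtype=int)
        for x, y in votes_xy:
            x_hist[x] += 1        # x bits summed separately ...
            y_hist[y] += 1        # ... from the y bits
        return int(np.argmax(x_hist)), int(np.argmax(y_hist))   # estimated winner co-ordinates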
Though only this output map is required during normal application of the system
to image recognition tasks, it is possible to reconstruct the distribution of the two-dimensional weight vectors. Figure 3, using the technique illustrated in Table 1,
shows this weight vector map for the concentric circle test image applied
Figure 3. Reconstructed Feature Map of Local Edges
previously in the conventional analogue approach. This is a small digitised map
containing 32 x 32 elements each with 16 x 16 input units and can be applied,
using a general purpose desktop microcomputer running at 4 mips, in a few
milliseconds.
APPLICATION EXAMPLES
Character Recognition
Though a long term objective remains the development of general purpose
computer vision systems, with many layers of interacting feature maps together
with suitable pre- and post-processing, many commercial tasks require decisions
based on a constricted range of objects - that is their perceptual set is severely
limited. However, ease of training and speed of application are paramount. An
example of such an application involves the recognition of characters.
Figures 4 and 5 show an input pattern of hand-drawn A's and B's. The network,
using the above digital technique, was given no information concerning the input
image and the input window of 32 x 32 pixels was placed randomly on the image.
The network took less than one minute to adapt and can be applied in 25 ms. This
network is a 32 x 32 feature map of 32 x 32 elements, thus giving over one million
effective weights. The output map forms two distinct clusters, one for A's in the
top right corner of the map (Figure 4), and one for B's in the bottom left corner
(Figure 5). If further characters are introduced in the input image then the output
map will, during the training phase, self-organise to incorporate them.
Figure 4. Trained Network Response for 'A' in Input Window
Figure 5. Trained Network Response for 'B' in Input Window
Corrupted Images
Once the maximum response from the map is known, then the parts of the input
window which caused it can be reconstructed to provide a form of ideal input
pattern. The reconstructed input pattern is shown in the figures beneath the input
image. This reconstruction can be employed to recognise occluded patterns or
to eliminate noise in subsequent input images.
Figure 6. Trained Network Response for Corrupted 'A' in Input Window.
Reconstructed Input Pattern Shown Below Test Image
Figure 6 shows the response of the network, trained on the input image of Figures
4 and 5, to a corrupted image of A's and B's. It has still managed to recognise the
input character as an A, but the reconstructed version shows that the extra noise
has been eliminated.
Object Centring
The centering of an object within the input window permits the application of
conformal mapping strategies, such as polar exponential grids, to be applied
which yields scale and rotation invariant recognition. The same network as
employed in the previous example was used, but a target position for the
maximum network response was specified and the network was adapted half-way
between this and the actual maximum response location.
Figure 7. Trained Network Response for Off-Centred Character. Input Window is
Low-Pass Filtered as shown.
Figure 7 shows such a network. When the response is in the centre of the output
map then an input object (character) is centred in the recognition window. In the
example shown, there is an off-centred response of the trained network for an off-centred character. This deviation is used to change the position of the input
window. Once centering has been achieved, object recognition can occur.
CONCLUSIONS
The application of unsupervised feature maps for image recognition has been
demonstrated. The digital realisation technique permits the application of large
maps, which can be applied in real time using conventional microcomputers. The
use of orthogonal projections to give a winner-take-all network reduces memory
requirements by approximately 30-fold and gives a computational cost of O(n^{1/2}),
where n is the number of elements in the map. The general approach can be
applied in any form of feedforward neural network.
Acknowledgements
This work has been supported by the Innovation and Research Priming Fund of
the University of York.
References
W. W. Bledsoe and I. Browning. Pattern Recognition and Reading by Machine.
Proc. East. Joint Comp. Conf., 225-232 (1959).
M. J. Johnson and N. M. Allinson. An Advanced Neural Network for Visual Pattern
Recognition. Proc. UKIT 88, Swansea, 296-299 (1988).
T. Kohonen. Self Organization and Associative Memory. Springer-Verlag, Berlin
(1984).
T. Kohonen. The 'Neural' Phonetic Typewriter. Computer 21, 11-22 (1988).
E. Oja. Subspace Methods of Pattern Recognition. Research Studies Press, Letchworth (1983).
G. D. Tattersall, P. W. Linford and R. Linggard. Neural Arrays for Speech
Recognition. Br. Telecom Technol. J. 6, 140-163 (1988).
47 | 1,040 | Empirical Entropy Manipulation for
Real-World Problems
Paul Viola: Nicol N. Schraudolph, Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
10010 North Torrey Pines Road
La Jolla, CA 92037-1099
viola@salk.edu
Abstract
No finite sample is sufficient to determine the density, and therefore
the entropy, of a signal directly. Some assumption about either the
functional form of the density or about its smoothness is necessary.
Both amount to a prior over the space of possible density functions.
By far the most common approach is to assume that the density
has a parametric form.
By contrast we derive a differential learning rule called EMMA
that optimizes entropy by way of kernel density estimation. Entropy and its derivative can then be calculated by sampling from
this density estimate. The resulting parameter update rule is surprisingly simple and efficient.
We will show how EMMA can be used to detect and correct corruption in magnetic resonance images (MRI). This application is
beyond the scope of existing parametric entropy models.
1 Introduction
Information theory is playing an increasing role in unsupervised learning and visual
processing. For example, Linsker has used the concept of information maximization
to produce theories of development in the visual cortex (Linsker, 1988). Becker and
Hinton have used information theory to motivate algorithms for visual processing
(Becker and Hinton, 1992). Bell and Sejnowski have used information maximization
* Author to whom correspondence should be addressed. Current address: M.I.T., 545 Technology Square, Cambridge, MA 02139.
to solve the "cocktail party" or signal separation problem (Bell and Sejnowski,
1995). In order to simplify analysis and implementation, each of these techniques
makes specific assumptions about the nature of the signals used, typically that the
signals are drawn from some parametric density. In practice, such assumptions are
very inflexible.
In this paper we will derive a procedure that can effectively estimate and manipulate the entropy of a wide variety of signals using non-parametric densities. Our
technique is distinguished by its simplicity, flexibility and efficiency.
We will begin with a discussion of principal components analysis (PCA) as an example of a simple parametric entropy manipulation technique. After pointing out some of PCA's limitations, we will then derive a more powerful non-parametric entropy
manipulation procedure. Finally, we will show that the same entropy estimation
procedure can be used to tackle a difficult visual processing problem.
1.1 Parametric Entropy Estimation
Typically parametric entropy estimation is a two step process. We are given a
parametric model for the density of a signal and a sample. First, from the space
of possible density functions the most probable is selected. This often requires a
search through parameter space. Second, the entropy of the most likely density
function is evaluated.
Parametric techniques can work well when the assumed form of the density matches
the actual data. Conversely, when the parametric assumption is violated the resulting algorithms are incorrect. The most common assumption, that the data follow the
Gaussian density, is especially restrictive. An entropy maximization technique that
assumes that data is Gaussian, but operates on data drawn from a non-Gaussian
density, may in fact end up minimizing entropy.
1.2 Example: Principal Components Analysis
There are a number of signal processing and learning problems that can be formulated as entropy maximization problems. One prominent example is principal component analysis (PCA). Given a random variable X, a vector v can be used to define a new random variable, Y_v = X · v, with variance Var(Y_v) = E[(X · v - E[X · v])²]. The principal component v is the unit vector for which Var(Y_v) is maximized.
In practice neither the density of X nor Y_v is known. The projection variance is computed from a finite sample, A, of points from X,

Var(Y_v) ≈ Var_A(Y_v) ≡ E_A[(X · v - E_A[X · v])²],   (1)

where Var_A(Y_v) and E_A[·] are shorthand for the empirical variance and mean evaluated over A. Oja has derived an elegant on-line rule for learning v when presented with a sample of X (Oja, 1982).
Under the assumption that X is Gaussian it is easily proven that Y_v has maximum entropy. Moreover, in the absence of noise, Y_v contains maximal information about X. However, when X is not Gaussian Y_v is generally not the most informative projection.
2 Estimating Entropy with Parzen Densities
We will now derive a general procedure for manipulating and estimating the entropy
of a random variable from a sample. Given a sample of a random variable X, we can
Empirical Entropy Manipulation for Real-world Problems
853
construct another random variable Y = F(X, v). The entropy, h(Y), is a function of v and can be manipulated by changing v. The following derivation assumes that Y is a vector random variable. The joint entropy of two random variables, h(w_1, w_2), can be evaluated by constructing the vector random variable Y = [w_1, w_2]^T and evaluating h(Y).
Rather than assume that the density has a parametric form, whose parameters are
selected using maximum likelihood estimation, we will instead use Parzen window
density estimation (Duda and Hart, 1973). In the context of entropy estimation, the
Parzen density estimate has three significant advantages over maximum likelihood
parametric density estimates: (1) it can model the density of any signal provided
the density function is smooth; (2) since the Parzen estimate is computed directly
from the sample, there is no search for parameters; (3) the derivative of the entropy
of the Parzen estimate is simple to compute.
The form of the Parzen estimate constructed from a sample A is
P*(y, A) = (1/N_A) Σ_{y_A ∈ A} R(y - y_A) = E_A[R(y - y_A)],   (2)

where the Parzen estimator is constructed with the window function R(·), which
integrates to 1. We will assume that the Parzen window function is a Gaussian
density function. This will simplify some analysis, but it is not necessary. Any
differentiable function could be used. Another good choice is the Cauchy density.
Unfortunately evaluating the entropy integral
h(Y) ≈ -E[log P*(Y, A)] = -∫ p(y) log P*(y, A) dy

is inordinately difficult. This integral can however be approximated as a sample mean:

h(Y) ≈ h*(Y) ≡ -E_B[log P*(Y, A)],   (3)
where E_B[·] is the sample mean taken over the sample B. The sample mean converges toward the true expectation at a rate proportional to 1/√N_B (N_B is the size of B). To reiterate, two samples can be used to estimate the entropy of a density: the first is used to estimate the density, the second is used to estimate the entropy¹. We call h*(Y) the EMMA estimate of entropy².
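To make Equations (2) and (3) concrete, the following sketch (not from the paper; plain NumPy, with a Gaussian window whose width sigma is a hand-picked assumption) computes the EMMA entropy estimate of a one-dimensional signal from the two samples A and B.

import numpy as np

def emma_entropy(sample_a, sample_b, sigma):
    """EMMA estimate h*(Y) = -E_B[log E_A[g_sigma(y_B - y_A)]] for 1-D data."""
    a = np.asarray(sample_a, dtype=float)          # sample used for the Parzen density
    b = np.asarray(sample_b, dtype=float)          # sample used for the outer mean
    diffs = b[:, None] - a[None, :]                # all pairwise differences y_B - y_A
    g = np.exp(-0.5 * (diffs / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    parzen = g.mean(axis=1)                        # E_A[g(y_B - y_A)] for each y_B
    return -np.mean(np.log(parzen))                # -E_B[log(.)]

# Quick check against the closed form for a unit Gaussian, 0.5*log(2*pi*e) ~ 1.419
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
print(emma_entropy(y[:1000], y[1000:], sigma=0.25))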
One way to extremize entropy is to use the derivative of entropy with respect to v.
This may be expressed as
(d/dv) h(Y) ≈ (d/dv) h*(Y) = -(1/N_B) Σ_{y_B ∈ B} [ Σ_{y_A ∈ A} (d/dv) g_ψ(y_B - y_A) ] / [ Σ_{y_A ∈ A} g_ψ(y_B - y_A) ]   (4)

= (1/N_B) Σ_{y_B ∈ B} Σ_{y_A ∈ A} W_Y(y_B, y_A) (d/dv) (1/2) D_ψ(y_B - y_A),   (5)

where

W_Y(y_1, y_2) ≡ g_ψ(y_1 - y_2) / Σ_{y_A ∈ A} g_ψ(y_1 - y_A),   (6)

D_ψ(y) ≡ y^T ψ^{-1} y, and g_ψ(y) is a multi-dimensional Gaussian with covariance ψ.
W_Y(y_1, y_2) is an indicator of the degree of match between its arguments, in a "soft"
¹Using a procedure akin to leave-one-out cross-validation a single sample can be used for both purposes.
²EMMA is a random but pronounceable subset of the letters in the words "Empirical entropy Manipulation and Analysis".
sense. It will approach one if y_1 is significantly closer to y_2 than any element of A. To reduce entropy the parameters v are adjusted such that there is a reduction in the average squared distance between points which W_Y indicates are nearby.
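The "soft matching" role of W_Y can be illustrated directly. The sketch below (an illustration, not the authors' code; the isotropic window variance psi is an assumption) computes the W_Y weights of Eq. (6) and the weighted average squared distance whose reduction the update drives.

import numpy as np

def soft_match_weights(yb, ya, psi):
    """W_Y(y_B, y_A) of Eq. (6): for each y_B, a soft assignment over the sample A."""
    d2 = np.sum((yb[:, None, :] - ya[None, :, :]) ** 2, axis=-1)   # squared distances
    g = np.exp(-0.5 * d2 / psi)                                    # isotropic Gaussian window
    return g / g.sum(axis=1, keepdims=True)                        # each row sums to one

def weighted_sq_distance(yb, ya, psi):
    """The quantity whose reduction lowers the EMMA entropy estimate."""
    w = soft_match_weights(yb, ya, psi)
    d2 = np.sum((yb[:, None, :] - ya[None, :, :]) ** 2, axis=-1)
    return (w * d2).sum() / len(yb)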
2.1 Stochastic Maximization Algorithm
Both the calculation of the EMMA entropy estimate and its derivative involve a
double summation. As a result the cost of evaluation is quadratic in sample size:
O(N_A N_B). While an accurate estimate of empirical entropy could be obtained by
using all of the available data (at great cost), a stochastic estimate of the entropy
can be obtained by using a random subset of the available data (at quadratically
lower cost). This is especially critical in entropy manipulation problems, where the
derivative of entropy is evaluated many hundreds or thousands of times. Without
the quadratic savings that arise from using smaller samples entropy manipulation
would be impossible (see (Viola, 1995) for a discussion of these issues).
2.2 Estimating the Covariance
In addition to the learning rate λ, the covariance matrices of the Parzen window functions, g_ψ, are important parameters of EMMA. These parameters may be chosen so that they are optimal in the maximum likelihood sense. For simplicity, we assume that the covariance matrices are diagonal, ψ = DIAG(σ_1², σ_2², ...). Following a derivation almost identical to the one described in Section 2 we can derive an equation analogous to (4),

(d/dσ_k) h*(Y) = -(1/N_B) Σ_{y_B ∈ B} Σ_{y_A ∈ A} W_Y(y_B, y_A) (1/σ_k) ( [y]_k²/σ_k² - 1 ),   (7)
where [y]_k is the kth component of the vector y. The optimal, or most likely, ψ minimizes h*(Y). In practice both v and ψ are adjusted simultaneously; for example, while v is adjusted to maximize h*(Y_v), ψ is adjusted to minimize h*(Y_v).
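In practice one also has to choose the window widths. Rather than coding the closed form (7), a simple alternative consistent with the text is to adjust log σ_k by a numerical gradient of the EMMA estimate itself; the sketch below does this for a diagonal ψ (step size and finite-difference epsilon are assumptions, not values from the paper).

import numpy as np

def emma_entropy_nd(b, a, sigmas):
    """h*(Y) with a diagonal Gaussian window, psi = diag(sigmas**2)."""
    z = (b[:, None, :] - a[None, :, :]) / sigmas
    log_g = (-0.5 * np.sum(z ** 2, axis=-1) - np.sum(np.log(sigmas))
             - 0.5 * len(sigmas) * np.log(2 * np.pi))
    m = log_g.max(axis=1, keepdims=True)                    # log-sum-exp for stability
    log_parzen = m[:, 0] + np.log(np.exp(log_g - m).mean(axis=1))
    return -log_parzen.mean()

def adapt_log_sigma(b, a, log_sig, lr=0.05, eps=1e-4):
    """One numerical-gradient step on log(sigma), minimising h*(Y) as in Section 2.2."""
    grad = np.zeros_like(log_sig)
    for k in range(len(log_sig)):
        e = np.zeros_like(log_sig); e[k] = eps
        grad[k] = (emma_entropy_nd(b, a, np.exp(log_sig + e)) -
                   emma_entropy_nd(b, a, np.exp(log_sig - e))) / (2 * eps)
    return log_sig - lr * grad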
3 Principal Components Analysis and Information
As a demonstration, we can derive a parameter estimation rule akin to principal
components analysis that truly maximizes information. This new EMMA based
component analysis (ECA) manipulates the entropy of the random variable Y_v = X · v under the constraint that |v| = 1. For any given value of v the entropy of Y_v can be estimated from two samples of X as: h*(Y_v) = -E_B[log E_A[g_ψ(x_B · v - x_A · v)]], where ψ is the variance of the Parzen window function. Moreover we can estimate the derivative of entropy:
(d/dv) h*(Y_v) = (1/N_B) Σ_{y_B ∈ B} Σ_{y_A ∈ A} W_Y(y_B, y_A) ψ^{-1} (y_B - y_A)(x_B - x_A),

where y_A = x_A · v and y_B = x_B · v. The derivative may be decomposed into parts which can be understood more easily. Ignoring the weighting function W_Y ψ^{-1} we are left with the derivative of some unknown function f(Y_v):

(d/dv) f(Y_v) = (1/(N_A N_B)) Σ_{y_B ∈ B} Σ_{y_A ∈ A} (y_B - y_A)(x_B - x_A).   (8)

What then is f(Y_v)? The derivative of the squared difference between samples is (d/dv)(y_B - y_A)² = 2(y_B - y_A)(x_B - x_A). So we can see that

f(Y_v) = (1/(2 N_A N_B)) Σ_{y_B ∈ B} Σ_{y_A ∈ A} (y_B - y_A)²

is one half the expectation of the squared difference between pairs of trials of Y_v.

[Figure 1: See text for description. The plot shows the data sample together with the projection axes labelled ECA-MIN, ECA-MAX, BCM, BINGO, and PCA.]
Recall that PCA searches for the projection, Y_v, that has the largest sample variance. Interestingly, f(Y_v) is precisely the sample variance. Without the weighting term W_Y ψ^{-1}, ECA would find exactly the same vector that PCA does: the maximum variance projection vector. However because of W_Y, the derivative of ECA does not act on all points of A and B equally. Pairs of points that are far apart are forced no further apart. Another way of interpreting ECA is as a type of robust variance maximization. Points that might best be interpreted as outliers, because they are very far from the body of other points, play a very small role in the minimization. This robust nature stands in contrast to PCA which is very sensitive to outliers.
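A minimal sketch of an on-line ECA step for the projection Y_v = X · v follows, using the derivative written above; the window variance, learning rate, and subsample sizes are assumptions, and v is renormalised after each step to keep |v| = 1. Ascending the gradient gives ECA-MAX, descending gives ECA-MIN.

import numpy as np

def eca_step(xa, xb, v, psi=0.1, lr=0.05, maximize=True):
    """One ECA update of the projection vector v (Section 3), with |v| kept at 1."""
    ya, yb = xa @ v, xb @ v
    d = yb[:, None] - ya[None, :]                      # y_B - y_A
    g = np.exp(-0.5 * d ** 2 / psi)
    w = g / g.sum(axis=1, keepdims=True)               # W_Y weights, Eq. (6)
    dx = xb[:, None, :] - xa[None, :, :]               # x_B - x_A
    grad = (w[:, :, None] * d[:, :, None] * dx).sum(axis=(0, 1)) / (psi * len(yb))
    v = v + lr * grad if maximize else v - lr * grad   # ECA-MAX ascends entropy
    return v / np.linalg.norm(v)

# Toy usage: stochastic steps on small random subsamples, as in Section 2.1
rng = np.random.default_rng(1)
x = rng.standard_normal((1000, 2)); v = np.array([1.0, 0.0])
for _ in range(200):
    idx = rng.choice(len(x), size=40, replace=False)
    v = eca_step(x[idx[:20]], x[idx[20:]], v)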
For densities that are Gaussian, the maximum entropy projection is the first principal component. In simulations ECA effectively finds the same projection as PCA,
and it does so with speeds that are comparable to Oja's rule. ECA can be used both
to find the entropy maximizing (ECA-MAX) and minimizing (ECA-MIN) axes. For
more complex densities the PCA axis is very different from the entropy maximizing
axis. To provide some intuition regarding the behavior of ECA we have run ECAMAX, ECA-MIN, Oja's rule, and two related procedures, BCM and BINGO, on
the same density. BCM is a learning rule that was originally proposed to explain
development of receptive fields patterns in visual cortex (Bienenstock, Cooper and
Munro, 1982). More recently it has been argued that the rule finds projections
that are far from Gaussian (Intrator and Cooper, 1992). Under a limited set of
conditions this is equivalent to finding the minimum entropy projection. BINGO
was proposed to find axes along which there is a bimodal distribution (Schraudolph
and Sejnowski, 1993).
Figure 1 displays a 400 point sample and the projection axes discussed above. The
density is a mixture of two clusters. Each cluster has high kurtosis in the horizontal
direction. The oblique axis projects the data so that it is most uniform and hence
has the highest entropy; ECA-MAX finds this axis. Along the vertical axis the
data is clustered and has low entropy; ECA-MIN finds this axis. The vertical axis
also has the highest variance. Contrary to published accounts, the first principal
component can in fact correspond to the minimum entropy projection. BCM, while
it may find minimum entropy projections for some densities, is attracted to the
kurtosis along the horizontal axis. For this distribution BCM neither minimizes nor
maximizes entropy. Finally, BINGO successfully discovers that the vertical axis is
very bimodal.
[Figure 2: At left: A slice from an MRI scan of a head. Center: The scan after correction. Right: The density of pixel values in the MRI scan before and after correction (the "Corrupted" and "Corrected" curves).]
4 Applications
EMMA has proven useful in a number of applications. In object recognition EMMA has been used to align 3D shape models with video images (Viola and Wells III, 1995).
In the area of medical imaging EMMA has been used to register data that arises
from differing medical modalities such as magnetic resonance images, computed
tomography images, and positron emission tomography (Wells, Viola and Kikinis,
1995).
4.1 MRI Processing
In addition, EMMA can be used to process magnetic resonance images (MRI).
An MRI is a 2 or 3 dimensional image that records the density of tissues inside the
body. In the head, as in other parts of the body, there are a number of distinct tissue
classes including: bone, water, white matter, grey matter, and fat. In principle the
density of pixel values in an MRI should be clustered, with one cluster for each
tissue class. In reality MRI signals are corrupted by a bias field, a multiplicative
offset that varies slowly in space. The bias field results from unavoidable variations
in magnetic field (see (Wells III et al., 1994) for an overview of this problem).
Because the densities of each tissue type cluster together tightly, an uncorrupted
MRI should have relatively low entropy. Corruption from the bias field perturbs
the MRI image, increasing the values of some pixels and decreasing others. The
bias field acts like noise, adding entropy to the pixel density. We use EMMA to find
a low-frequency correction field that when applied to the image, makes the pixel
density have a lower entropy. The resulting corrected image will have a tighter
clustering than the original density.
Call the uncorrupted scan s(x); it is a function of a spatial random variable x. The corrupted scan, c(x) = s(x) + b(x), is a sum of the true scan and the bias field. There are physical reasons to believe b(x) is a low order polynomial in the components of x. EMMA is used to minimize the entropy of the corrected signal, h(c(x) - b(x, v)), where b(x, v), a third order polynomial with coefficients v, is an estimate for the bias corruption.
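A schematic version of this correction procedure on a synthetic one-dimensional "scan" is sketched below; it is not the authors' implementation. The bias is modelled as a third order polynomial in position, its coefficients v are adjusted by a numerical gradient of an EMMA-style entropy of the corrected pixel values, and the window width, subsample sizes and step sizes are all assumptions.

import numpy as np

def pixel_entropy(vals, sigma=0.02):
    """EMMA-style entropy of pixel values, from two fixed random subsamples."""
    rng = np.random.default_rng(0)                     # same subsamples on every call
    idx = rng.permutation(len(vals))
    a, b = vals[idx[:100]], vals[idx[100:200]]
    g = np.exp(-0.5 * ((b[:, None] - a[None, :]) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return -np.mean(np.log(g.mean(axis=1)))

def correct_bias(c, xs, order=3, lr=1e-2, steps=200, eps=1e-3):
    """Fit polynomial bias coefficients v so that h(c(x) - b(x, v)) is minimised."""
    basis = np.vander(xs, order + 1, increasing=True)  # 1, x, x^2, x^3
    v = np.zeros(order + 1)
    for _ in range(steps):
        grad = np.zeros_like(v)
        for k in range(len(v)):                        # numerical gradient on each coefficient
            e = np.zeros_like(v); e[k] = eps
            grad[k] = (pixel_entropy(c - basis @ (v + e)) -
                       pixel_entropy(c - basis @ (v - e))) / (2 * eps)
        v -= lr * grad
    return c - basis @ v, v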
Figure 2 shows an MRI scan and a histogram of pixel intensity before and after
correction. The difference between the two scans is quite subtle: the uncorrected
scan is brighter at top right and dimmer at bottom left. This non-homogeneity
makes constructing automatic tissue classifiers difficult. In the histogram of the
original scan white and grey matter tissue classes are confounded into a single peak
ranging from about 0.4 to 0.6. The histogram of the corrected scan shows much
better separation between these two classes. For images like this the correction field
takes between 20 and 200 seconds to compute on a Sparc 10.
5 Conclusion
We have demonstrated a novel entropy manipulation technique working on problems
of significant complexity and practical importance. Because it is based on nonparametric density estimation it is quite flexible, requiring no strong assumptions
about the nature of signals. The technique is widely applicable to problems in
signal processing, vision and unsupervised learning. The resulting algorithms are
computationally efficient.
Acknowledgements
This research was supported by the Howard Hughes Medical Institute.
References
Becker, S. and Hinton, G. E. (1992). A self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355:161-163.
Bell, A. J. and Sejnowski, T. J. (1995). An information-maximisation approach to blind separation. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural Information Processing, volume 7, Denver 1994. MIT Press, Cambridge.
Bienenstock, E., Cooper, L., and Munro, P. (1982). Theory for the development of neuron selectivity: Orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience, 2.
Duda, R. and Hart, P. (1973). Pattern Classification and Scene Analysis. Wiley, New York.
Intrator, N. and Cooper, L. N. (1992). Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neural Networks, 5:3-17.
Linsker, R. (1988). Self-organization in a perceptual network. IEEE Computer, pages 105-117.
Oja, E. (1982). A simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15:267-273.
Schraudolph, N. N. and Sejnowski, T. J. (1993). Unsupervised discrimination of clustered data via optimization of binary information gain. In Hanson, S. J., Cowan, J. D., and Giles, C. L., editors, Advances in Neural Information Processing, volume 5, pages 499-506, Denver 1992. Morgan Kaufmann, San Mateo.
Viola, P. A. (1995). Alignment by Maximization of Mutual Information. PhD thesis, Massachusetts Institute of Technology. MIT AI Laboratory TR 1548.
Viola, P. A. and Wells III, W. M. (1995). Alignment by maximization of mutual information. In Fifth Intl. Conf. on Computer Vision, pages 16-23, Cambridge, MA. IEEE.
Wells, W., Viola, P., and Kikinis, R. (1995). Multi-modal volume registration by maximization of mutual information. In Proceedings of the Second International Symposium on Medical Robotics and Computer Assisted Surgery, pages 55-62. Wiley.
Wells III, W., Grimson, W., Kikinis, R., and Jolesz, F. (1994). Statistical Gain Correction and Segmentation of MRI Data. In Proceedings of the Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, Wash. IEEE, Submitted.
48 | 1,041 | The Geometry of Eye Rotations
and Listing's Law
Amir A. Handzel*
Tamar Flash t
Department of Applied Mathematics and Computer Science
Weizmann Institute of Science
Rehovot, 76100 Israel
Abstract
We analyse the geometry of eye rotations, and in particular
saccades, using basic Lie group theory and differential geometry. Various parameterizations of rotations are related through
a unifying mathematical treatment, and transformations between
co-ordinate systems are computed using the Campbell-Baker-Hausdorff formula. Next, we describe Listing's law by means of
the Lie algebra so(3). This enables us to demonstrate a direct
connection to Donders' law, by showing that eye orientations are
restricted to the quotient space SO(3)/SO(2). The latter is equivalent to the sphere S², which is exactly the space of gaze directions.
Our analysis provides a mathematical framework for studying the
oculomotor system and could also be extended to investigate the
geometry of multi-joint arm movements.
1 INTRODUCTION
1.1 SACCADES AND LISTING'S LAW
Saccades are fast eye movements, bringing objects of interest into the center of
the visual field. It is known that eye positions are restricted to a subset of those
which are anatomically possible, both during saccades and fixation (Tweed & Vilis ,
1990). According to Donders' law, the eye's gaze direction determines its orientation
uniquely, and moreover, the orientation does not depend on the history of eye motion
which has led to the given gaze direction . A precise specification of the "allowed"
subspace of position is given by Listing's law: the observed orientations of the eye
are those which can be reached from the distinguished orientation called primary
*hand@wisdom.weizmann.ac.il
t tamar@wisdom.weizmann.ac.il
position through a single rotation about an axis which lies in the plane perpendicular
to the gaze direction at the primary position (Listing's plane). We say then that
the orientation of the eye has zero torsion. Recently, the domain of validity of
Listing's law has been extended to include eye vergence by employing a suitable
mathematical treatment (Van Rijn & Van Den Berg, 1993).
Tweed and Vilis used quaternion calculus to demonstrate, in addition, that in order
to move from one allowed position to another in a single rotation, the rotation axis
itself lies outside Listing's plane (Tweed & Vilis, 1987). Indeed, normal saccades are
performed approximately about a single axis. However, the validity of Listing's law
does not depend on the rotation having a single axis, as was shown in double-step
target displacement experiments (Minken, Van Opstal & Van Gisbergen, 1993):
even when the axis of rotation itself changes during the saccade, Listing's law is
obeyed at each and every point along the trajectory which is traced by the eye.
Previous analyses of eye rotations (and in particular of Listing's law) have been
based on various representations of rotations: quaternions (Westheimer, 1957), rotation vectors (Hepp, 1990), spinors (Hestenes, 1994) and 3 x 3 rotation matrices;
however, they are all related through the same underlying mathematical object: the three dimensional (3D) rotation group. In this work we analyse the geometry of
saccades using the Lie algebra of the rotation group and the group structure. Next,
we briefly describe the basic mathematical notions which will be needed later. This
is followed by Section 2 in which we analyse various parameterizations of rotations
from the point of view of group theory; Section 3 contains a detailed mathematical
analysis of Listing's law and its connection to Donders' law based on the group
structure; in Section 4 we briefly discuss the issue of angular velocity vectors or
axes of rotation ending with a short conclusion.
1.2 THE ROTATION GROUP AND ITS LIE ALGEBRA
The group of rotations in three dimensions, G = SO(3), (where 'SO' stands for
special orthogonal transformations) is used both to describe actual rotations and
to denote eye positions by means of a unique virtual rotation from the primary
position. The identity operation leaves the eye at the primary position, therefore,
we identify this position with the unit element of the group e ∈ SO(3). A rotation
can be parameterized by a 3D axis and the angle of rotation about it. Each axis
"generates" a continuous set of rotations through increasing angles . Formally, if n
is a unit axis of rotation, then
EXP(θ · n)   (1)
is a continuous one-parameter subgroup (in G) of rotations through angles θ in the plane that is perpendicular to n. Such a subgroup is denoted as SO(2) ⊂ SO(3).
We can take an explicit representation of n as a matrix and the exponent can
be calculated as a Taylor series expansion. Let us look, for example, at the one
parameter subgroup of rotations in the y- z plane, i.e. rotations about the x axis
which is represented in this case by the matrix
L_x = ( 0  0  0
        0  0 -1
        0  1  0 ).   (2)
A direct computation of this rotation by an angle () gives
EXP(θ L_x) = I + L_x sin θ + L_x² (1 - cos θ) = ( 1     0       0
                                                  0   cos θ  -sin θ
                                                  0   sin θ   cos θ ),   (3)
where I is the identity matrix. Thus, the rotation matrix R(θ) can be constructed from the axis and angle of rotation. The same rotation, however, could also be achieved using λL_x instead of L_x, where λ is any scalar, while rescaling the angle to θ/λ. The collection of matrices λL_x is a one dimensional linear space whose elements are the generators of rotations in the y-z plane.
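A quick numerical check of the relation between (1)-(3), under the sign conventions used in the reconstruction above: exponentiating θL_x (here with scipy) reproduces the closed-form rotation in the y-z plane.

import numpy as np
from scipy.linalg import expm

Lx = np.array([[0., 0., 0.],
               [0., 0., -1.],
               [0., 1., 0.]])          # generator of rotations about the x axis, Eq. (2)

theta = 0.7
R_exp = expm(theta * Lx)               # EXP(theta * Lx), Eq. (1)
R_closed = np.eye(3) + np.sin(theta) * Lx + (1 - np.cos(theta)) * (Lx @ Lx)   # Eq. (3)
print(np.allclose(R_exp, R_closed))    # True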
The set of all the generators constitutes the Lie algebra of a group. For the full
space of 3D rotations, the Lie algebra is the three dimensional vector space that is
spanned by the standard orthonormal basis comprising the three direction vectors
of the principal axes:
{e_x, e_y, e_z}.   (4)
Every axis n can be expressed as a linear combination of this basis. Elements of
the Lie algebra can also be represented in matrix form and the corresponding basis
for the matrix space is
L_x = ( 0  0  0        L_y = ( 0  0  1        L_z = ( 0 -1  0
        0  0 -1                0  0  0                1  0  0
        0  1  0 ),            -1  0  0 ),             0  0  0 );   (5)
hence we have the isomorphism
(  0   -θ_z   θ_y
  θ_z    0   -θ_x       ←→      ( θ_x, θ_y, θ_z )   (6)
 -θ_y   θ_x    0 )
Thanks to its linear structure, the Lie algebra is often more convenient for analysis
than the group itself. In addition to the linear structure, the Lie algebra has a
bilinear antisymmetric operation defined between its elements which is called the
bracket or commutator. The bracket operation between vectors in g is the usual
vector cross product . When the elements of the Lie algebra are written as matrices ,
the bracket operation becomes a commutation relation, i.e.
[A, B] ≡ AB - BA.   (7)
As expected, the commutation relations of the basis matrices of the Lie algebra (of
the 3D rotation group) are equivalent to the vector product:
[L_x, L_y] = L_z,   [L_y, L_z] = L_x,   [L_z, L_x] = L_y.   (8)
Finally, in accordance with (1), every rotation matrix is obtained by exponentiation:
R(θ) = EXP(θ_x L_x + θ_y L_y + θ_z L_z),   (9)
where θ stands for the three component angles.
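Equation (9) is easy to exercise numerically. The sketch below (conventions as assumed above) maps canonical co-ordinates to an element of so(3) and exponentiates it; for a single axis this reduces to Eq. (1).

import numpy as np
from scipy.linalg import expm

def hat(theta):
    """Map canonical co-ordinates (theta_x, theta_y, theta_z) to so(3), cf. Eq. (6)."""
    tx, ty, tz = theta
    return np.array([[0., -tz,  ty],
                     [tz,  0., -tx],
                     [-ty, tx,  0.]])

def rot(theta):
    """R(theta) = EXP(theta_x Lx + theta_y Ly + theta_z Lz), Eq. (9)."""
    return expm(hat(np.asarray(theta, dtype=float)))

# A single-axis rotation: angle |theta| about the unit axis theta/|theta|
print(rot([0.0, 0.3, 0.4]))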
2 CO-ORDINATE SYSTEMS FOR ROTATIONS
In linear spaces the "position" of a point is simply parameterized by the co-ordinates
w.r.t. the principal axes (a chosen orthonormal basis). For a non-linear space (such
as the rotation group) we define local co-ordinate charts that look like pieces of
a vector space ℝⁿ. Several co-ordinate systems for rotations are based on the fact that group elements can be written as exponents of elements of the Lie algebra (1). The angles θ appearing in the exponent serve as the co-ordinates. The underlying property which is essential for comparing these systems is the non-commutativity of rotations. For usual real numbers, e.g. c_1 and c_2, commutativity implies exp c_1 exp c_2 = exp(c_1 + c_2). A corresponding equation for non-commuting
elements is the Campbell-Baker-Hausdorff formula (CBH) which is a Taylor series
A. A. HANDZEL. T. FLASH
120
expansion using repeated commutators between the elements of the Lie algebra.
The expansion to third order is (Choquet-Bruhat et al., 1982):
EXP(X_1) EXP(X_2) = EXP( X_1 + X_2 + (1/2)[X_1, X_2] + (1/12)[X_1 - X_2, [X_1, X_2]] ),   (10)
where X_1, X_2 are variables that stand for elements of the Lie algebra.
One natural parameterization uses the representation of a rotation by the axis and
the angle of rotation. The angles which appear in (9) are then called canonical
co-ordinates of the first kind (Varadarajan, 1974). Gimbal systems constitute a
second type of parameterization where the overall rotation is obtained by a series
of consecutive rotations about the principal axes. The component angles are then
called canonical co-ordinates of the second kind. In the present context, the first
type of co-ordinates are advantageous because they correspond to single axis rotations which in turn represent natural eye movements. For convenience, we will use
the name canonical co-ordinates for those of the first kind, whereas those of the
second type will simply be called gimbals. The gimbals of Fick and Helmholtz are
commonly used in the study of oculomotor control (Van Opstal, 1993). A rotation
matrix in Fick gimbals is
R_F(θ_x, θ_y, θ_z) = EXP(θ_z L_z) · EXP(θ_y L_y) · EXP(θ_x L_x),   (11)
and in Helmholtz gimbals the order of rotations is different:
R_H(θ_x, θ_y, θ_z) = EXP(θ_y L_y) · EXP(θ_z L_z) · EXP(θ_x L_x).   (12)
The CBH formula (10) can be used as a general tool for obtaining transformations
between various co-ordinate systems (Gilmore, 1974) such as (9,11,12). In particular, we apply (10) to the product of the two right-most terms in (11) and then again
to the product of the result with the third term. We thus arrive at an expression
whose form is the same as the right hand side of (10). By equating it with the
expression for canonical angles (9) and then taking the log of the exponents on
both sides of the equation, we obtain the transformation formula from Fick angles
to canonical angles. Repeating this calculation for (12) gives the equivalent formula
for Helmholtz angles¹. Both transformations are given by the following three equations, where θ^{F,H} stands for an angle either in Fick or in Helmholtz co-ordinates; for Helmholtz angles there is a plus sign in front of the last term of the first equation and a minus sign in the case of Fick angles:

θ_x^c = θ_x^{F,H} ( 1 - (1/12)((θ_y^{F,H})² + (θ_z^{F,H})²) ) ± (1/2) θ_y^{F,H} θ_z^{F,H}
θ_y^c = θ_y^{F,H} ( 1 - (1/12)((θ_x^{F,H})² + (θ_z^{F,H})²) ) + (1/2) θ_x^{F,H} θ_z^{F,H}      (13)
θ_z^c = θ_z^{F,H} ( 1 - (1/12)((θ_x^{F,H})² + (θ_y^{F,H})²) ) - (1/2) θ_x^{F,H} θ_y^{F,H}
The error caused by the above approximation is smaller than 0.1 degree within most
of the oculomotor range.
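One way to check a series approximation such as (13) is to compute the exact canonical co-ordinates directly: compose the gimbal rotations of Eq. (11) and take the matrix logarithm. The sketch below does this for Fick angles (same hat-map convention as above; scipy's logm is used for the log map).

import numpy as np
from scipy.linalg import expm, logm

def hat(t):
    tx, ty, tz = t
    return np.array([[0., -tz, ty], [tz, 0., -tx], [-ty, tx, 0.]])

def fick_to_canonical(tx, ty, tz):
    """Exact canonical angles for Fick gimbal angles, R_F = exp(tz Lz) exp(ty Ly) exp(tx Lx)."""
    R = expm(hat([0, 0, tz])) @ expm(hat([0, ty, 0])) @ expm(hat([tx, 0, 0]))
    W = np.real(logm(R))                           # element of so(3)
    return np.array([W[2, 1], W[0, 2], W[1, 0]])   # invert the hat map

print(fick_to_canonical(0.1, 0.2, 0.3))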
We mention in closing two additional parameterizations, namely quaternions and
rotation vectors. Unit quaternions lie on the 3D sphere S³ (embedded in ℝ⁴) which
constitutes the same manifold as the group of unitary rotations SU(2). The latter
is the double covering group of SO(3) having the same local structure. This enables
to use quaternions to parameterize rotations. The popular rotation vectors (written as tan(θ/2)·n, n being the axis of rotation and θ its angle) are closely related to
1 In contrast to this third order expansion, second order approximations usually appear
in the literature; see for example equation B2 in (Van Rijn & Van Den Berg, 1993).
quaternions because they are central (gnomonic) projections of a hemisphere of S³ onto the 3D affine space tangent to the quaternion q_e = (1,0,0,0) ∈ ℝ⁴.²
3 LISTING'S LAW AND DONDERS' LAW
A customary choice of a head fixed coordinate system is the following: e_x is in the straight ahead direction in the horizontal plane, e_y is in the lateral direction and e_z points upwards in the vertical direction. e_x and e_z thus define the midsagittal plane; e_y and e_z define the coronal plane.
(Lx, Ly, Lz) are set parallel to the head fixed co-ordinate system. A reference eye orientation called the primary position is chosen with the gaze direction being (1,0,0)
in the above co-ordinates. How is Listing's law expressed in terms of the Lie algebra
of SO(3)? The allowed positions are generated by linear combinations of Lz and
Ly only. This 2D subspace of the Lie algebra,
l = Span{L_y, L_z},   (14)
is Listing's plane. Denoting Span{L_x} by h, we have a decomposition of the Lie algebra so(3) into a direct sum of two linear subspaces:
g = l ⊕ h.   (15)
Every vector v ∈ g can be projected onto its component which is in l:
v = v_l + v_h   →(proj.)   v_l.   (16)
Until now, only the linear structure has been considered. In addition, h is closed
under the bracket operation:
[L_x, L_x] = 0 ∈ h,   (17)
and because h is closed both under vector addition and the Lie bracket, it is a subalgebra of g. In contrast, l is not a subalgebra because it is not closed under commutation (8). The fact that h stands as an algebra on its own implies that it has a corresponding group H, just as g = so(3) corresponds to G = SO(3). The
subalgebra h generates rotations about the x axis, and therefore H is SO(2), the
group of rotations in a plane.
The group G = SO(3) does not have a linear structure. We may still ask whether
some kind of decomposition and projection can be achieved in G in analogy to
(15,16). The answer is positive and the projection is performed as follows: take any
element of the group, a ∈ G, and multiply it by all the elements of the subgroup H. This gives a subset in G which is considered as a single object ā called a coset:
ā = {ab | b ∈ H}.   (18)
The set of all cosets constitutes the quotient space. It is written as
S ≡ G/H = SO(3)/SO(2)   (19)
because mapping the group to the quotient space can be understood as dividing G
by H. The quotient space is not a group , and this corresponds to the fact that the
subspace l above (14) is not a subalgebra. The quotient space has been constructed
algebraically but is difficult to visualize; however, it is mathematically equivalent
2 Geometrically, each point q E S3 can be connected to the center of the sphere by a
line. Another line runs from qe in the direction parallel to the vector part of q within the
tangent space. The intersection of the two lines is the projected point. Numerically, one
simply takes the vector part of q divided by its scalar part.
Table 1: Summary table of biological notions and the corresponding mathematical
representation, both in terms of the rotation group and its Lie algebra.
Biological notion          Lie Algebra                             Rotation Group
general eye position       g = so(3) = h ⊕ l                       G = SO(3)
primary position           0 ∈ g                                   e ∈ G
eye torsion                h = Span{L_x}                           H = SO(2)
"allowed" eye positions    l = Span{L_y, L_z} (Listing's plane)    S ≡ G/H = SO(3)/SO(2) ≅ S² (Donders' sphere of gaze directions)
to another space, the unit sphere S² (embedded in ℝ³). This equivalence can be seen in the following way: a unit vector in ℝ³, e.g. e = (1,0,0), can be rotated so
that its head reaches every point on the unit sphere S2; however, for any such point
there are infinitely many rotations by which the point can be reached. Moreover,
all the rotations around the x axis leave the vector e above invariant. We therefore
have to "factor out" these rotations (of H = SO(2)) in order to eliminate the above
degeneracy and to obtain a one-to-one correspondence between the required subset
of rotations and the sphere. This is achieved by going to the quotient space.
The matrix of a torsionless rotation (generated by elements in Listing's plane) is obtained by setting θ_x = 0 in (9):

R = (  cos θ                      sin θ sin ψ                   sin θ cos ψ
      -sin θ sin ψ    cos θ + (1 - cos θ) cos² ψ      cos ψ sin ψ (1 - cos θ)
      -sin θ cos ψ    cos ψ sin ψ (1 - cos θ)         cos θ + (1 - cos θ) sin² ψ ),   (20)

where θ = √(θ_y² + θ_z²) is the total angle of rotation and ψ is the angle between the rotation vector and the y axis in the θ_y-θ_z plane, i.e. (θ, ψ) are polar co-ordinates in Listing's plane. Notice that the first column on the left constitutes the Cartesian co-ordinates of a point on a sphere of unit radius (Gilmore, 1974).
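Listing's law is easy to state computationally: the generator must lie in Listing's plane, i.e. the rotation axis has no x component. A sketch (conventions as assumed above) constructs such a rotation from the polar co-ordinates (θ, ψ) and reads off the gaze direction as the image of e_x.

import numpy as np
from scipy.linalg import expm

def hat(t):
    tx, ty, tz = t
    return np.array([[0., -tz, ty], [tz, 0., -tx], [-ty, tx, 0.]])

def listing_rotation(theta, psi):
    """Rotation by theta about an axis in Listing's plane at angle psi from the y axis."""
    axis = np.array([0.0, np.cos(psi), np.sin(psi)])   # zero torsional component
    return expm(hat(theta * axis))

R = listing_rotation(0.5, 0.3)
gaze = R @ np.array([1.0, 0.0, 0.0])    # gaze direction = first column of R, cf. Eq. (20)
print(gaze, np.linalg.norm(gaze))       # a unit vector on Donders' sphere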
As we have just seen, there is an exact correspondence between the group level and
the Lie algebra level. In fact, the two describe the same reality, the former in a
global manner and the latter in an infinitesimal one. Table 1 summarizes the important biological notions concerning Listing's law together with their corresponding
mathematical representations. The connection between Donders' law and Listing's
law can now be seen in a clear and intuitive way. The sphere, which was obtained
by eliminating torsion, is the space of gaze directions. Recall that Donders' law
states that the orientation of the eye is determined uniquely by its gaze direction.
Listing's law implies that we need only take into consideration the gaze direction
and disregard torsion. In order to emphasize this point, we use the fact that locally,
SO(3) looks like a product of topological spaces:³
P = U × SO(2),   (21)
where U parameterizes gaze direction and SO(2), torsion. Donders' law restricts eye
orientation to an unknown 2D submanifold of the product space P. Listing's law
shows that the submanifold is U, a piece of the sphere. This representation is
advantageous for biological modelling, because it mathematically sets apart the
degrees of freedom of gaze orientation from torsion, which also differ functionally.
³SO(3) is a principal bundle over S² with fiber SO(2).
4 AXES OF ROTATION FOR LISTING'S LAW
As mentioned in the introduction, moving between two (non-primary) positions
requires a rotation whose axis (i.e. angular velocity vector) lies outside Listing's
plane. This is a result of the group structure of SO(3). Had the axis of rotation
been contained within Listing's plane, the matrices of the quotient space (20) should
have been closed under multiplication so as to form a subgroup of SO(3). In other
words, if r_i and r_j are matrices representing the current and target orientations of the eye corresponding to axes in Listing's plane, then r_j · r_i^{-1} should have been a
matrix of the same form (20); however, as explained in Section 3, this condition is
not fulfilled.
Finally, since normal saccades involve rotations about a single axis, they are oneparameter subgroups generated by a single element of the Lie algebra (1). In addition, they have the property of being geodesic curves in the group manifold under
the natural metric which is given by the bilinear Cartan-Killing form of the group
(Choquet-Bruhat et al., 1982).
5 CONCLUSION
We have analysed the geometry of eye rotations using basic Lie group theory and
differential geometry. The unifying view presented here can serve to improve the
understanding of the oculomotor system. It may also be extended to study the
three dimensional rotations of the joints of the upper limb.
Acknowledgements
We would like to thank Stephen Gelbart, Dragana Todoric and Yosef Yomdin for
instructive conversations on the mathematical background and Dario Liebermann
for fruitful discussions. Special thanks go to Stan Gielen for conversations which
initiated this work.
References
Choquet-Bruhat Y., De Witt-Morette C. & Dillard-Bleick M., Analysis, Manifolds
and Physics, North-Holland (1982).
Gilmore R., Lie Groups, Lie Algebras, and Some of Their Applications, Wiley (1974).
Hepp K., Commun. Math. Phys. 132 (1990) 285-292.
Hestenes D., Neural Networks 7, No.1 (1994) 65-77.
Minken A.W.H. Van Opstal A.J. & Van Gisbergen J.A .M., Exp. Brain Research
93 (1993) 521-533.
Tweed, D. & Vilis T., J. Neurophysiology 58 (1987) 832-849.
Tweed D. & Vilis T., Vision Research 30 (1990) 111-127.
Van Opstal J., "Representations of Eye Positions in Three Dimensions", in Multisensory Control of Movement, ed. Berthoz A., (1993) 27-41.
Van Rijn L.J. & Van Den Berg A.V., Vision Research 33, No. 5/6 (1993) 691-708.
Varadarajan V.S., Lie Groups, Lie Algebras, and Their Reps., Prentice-Hall (1974).
Westheimer G., Journal of the Optical Society of America 47 (1957) 967-974.
49 | 1,042 | Reinforcement Learning by Probability
Matching
Philip N. Sabes
Michael I. Jordan
sabes@psyche.mit.edu
jordan@psyche.mit.edu
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
We present a new algorithm for associative reinforcement learning. The algorithm is based upon the idea of matching a network's
output probability with a probability distribution derived from the
environment's reward signal. This Probability Matching algorithm
is shown to perform faster and be less susceptible to local minima
than previously existing algorithms. We use Probability Matching to train mixture of experts networks, an architecture for which
other reinforcement learning rules fail to converge reliably on even
simple problems. This architecture is particularly well suited for
our algorithm as it can compute arbitrarily complex functions yet
calculation of the output probability is simple.
1 INTRODUCTION
The problem of learning associative networks from scalar reinforcement signals is
notoriously difficult . Although general purpose algorithms such as REINFORCE
(Williams, 1992) and Generalized Learning Automata (Phansalkar, 1991) exist, they
are generally slow and have trouble with local minima. As an example, when we
attempted to apply these algorithms to mixture of experts networks (Jacobs et al. ,
1991), the algorithms typically converged to the local minimum which places the
entire burden of the task on one expert.
Here we present a new reinforcement learning algorithm which has faster and more
reliable convergence properties than previous algorithms. The next section describes
the algorithm and draws comparisons between it and existing algorithms. The
following section details its application to Gaussian units and mixtures of Gaussian
experts. Finally, we present empirical results.
2 REINFORCEMENT PROBABILITY MATCHING
We begin by formalizing the learning problem. Given an input x ∈ X from the environment, the network must select an output y ∈ Y. The network then receives a scalar reward signal r, with a mean r̄ and distribution that depend on x and
y. The goal of the learner is to choose an output which maximizes the expected
reward. Due to the lack of an explicit error signal, the learner must choose its
output stochastically, exploring for better rewards. Typically the learner starts with
a parameterized form for the conditional output density p_θ(y|x), and the learning problem becomes one of finding the parameters θ which maximize the expected reward:

J_r(θ) = ∫_{X,Y} p(x) p_θ(y|x) r̄(x, y) dy dx.
We present an alternative route to the maximum expected reward cost function,
and in doing so derive a novel learning rule for updating the network's parameters.
The learner's task is to choose from a set of conditional output distributions based
on the reward it receives from the environment. These rewards can be thought of as
inverse energies; input/output pairs that receive high rewards are low energy and
are preferred by the environment. Energies can always be converted into probabilities through the Boltzmann distribution, and so we can define the environment's
conditional distribution on Y given x,
p*(y|x) = exp(-T^{-1} E(x, y)) / Z_T(x) = exp(T^{-1} r̄(x, y)) / Z_T(x),
where T is a temperature parameter and ZT(X) is a normalization constant which
depends on T . This distribution can be thought of as representing the environment's
ideal input-output mapping, high reward input-output pairs being more typical or
likely than low reward pairs. The temperature parameter determines the strength of
this preference: when T is infinity all outputs are equally likely; when T is zero only
the highest reward output is chosen. This new distribution is a purely theoretical
construct, but it can be used as a target distribution for the learner. If the 0 are
adjusted so that P8(ylx) is nearly equal to p*(ylx), then the network's output will
typically result in high rewards.
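For a discrete set of candidate outputs the environment's target distribution is simply a softmax of the rewards at temperature T. A toy sketch (illustrative reward values only):

import numpy as np

def boltzmann_target(rewards, T):
    """p*(y|x) proportional to exp(r(x, y)/T) over a discrete set of outputs y."""
    r = np.asarray(rewards, dtype=float)
    z = np.exp((r - r.max()) / T)          # subtract the max for numerical stability
    return z / z.sum()

r = np.array([0.1, 0.9, 0.5])
print(boltzmann_target(r, T=1.0))   # fairly flat: favours exploration
print(boltzmann_target(r, T=0.05))  # nearly all mass on the best output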
The agreement between the network and environment conditional output densities
can be measured with the Kullback-Leibler (KL) divergence:
KL(p_θ || p*) = -∫_{X,Y} p(x) p_θ(y|x) [log p*(y|x) - log p_θ(y|x)] dy dx
            = -(1/T) ∫_{X,Y} p(x) p_θ(y|x) [r̄(x, y) - T r_θ(x, y)] dy dx + ∫_X p(x) log Z_T(x) dx,   (1)
where r_θ(x, y) is defined as the logarithm of the conditional output probability and
can be thought of as the network's estimate of the mean reward. This cost function
is always greater than or equal to zero, with equality only when the two probability
distributions are identical.
Keeping only the part of Equation 1 which depends on θ, we define the Probability Matching (PM) cost function:

J_PM(θ) = -∫_{X,Y} p(x) p_θ(y|x) [r̄(x, y) - T r_θ(x, y)] dy dx = -J_r(θ) - T S(p_θ).
The PM cost function is analogous to a free energy, balancing the energy, in the form of the negative of the average reward, and the entropy S(p_θ) of the output distribution.

[Figure 1: p*'s (dashed) and PM optimal Gaussians (solid) for the same bimodal reward function and various temperatures (T = 1, .5, .2, .05). Note the differences in scale.]

A higher T corresponds to a smoother target distribution and tilts the
balance of the cost function in favor of the entropy term, making diffuse output distributions more favorable. Likewise, a small T results in a sharp target distribution
placing most of the weight on the reward dependent term of cost function, which is
always optimized by the singular solution of a spike at the highest reward output.
Although minimizing the PM cost function will result in sampling most often at
high reward outputs, it will not optimize the overall expected reward if T > 0. There are two reasons for this. First, the output y which maximizes r_θ(x, y) may not maximize r̄(x, y). Such an example is seen in the first panel of Figure 1:
the network's conditional output density is a Gaussian with adjustable mean and
variance, and the environment has a bimodal reward function and T = 1. Even in
the realizable case, however, the network will choose outputs which are suboptimal
with respect to its own predicted reward, with the probability of choosing output y
falling off exponentially with r_θ(x, y). The key point here is that early in learning
this non-optimality is exactly what is desired. The PM cost function forces the
learner to maintain output density everywhere the reward, as measured by p*^{1/T}, is
not much smaller than its maximum. When T is high, the rewards are effectively
flattened and even fairly small rewards look big. This means that a high temperature
ensures that the learner will explore the output space.
Once the network is nearly PM optimal, it would be advantageous to "sharpen up"
the conditional output distribution, sampling more often at outputs with higher
predicted rewards. This translates to decreasing the entropy of the output distribution or lowering T. Figure 1 shows how the PM optimal Gaussian changes as the
temperature is lowered in the example discussed above; at very low temperatures
the output is almost always near the mode of the target distribution. In the limit
of T = 0, J PM becomes original reward maximization criterion Jr. The idea of the
Probability Matching algorithm is to begin training with a large T, say unity, and
gradually decrease it as the performance improves, effectively shifting the bias of
the learner from exploration to exploitation.
We now must find an update rule for 0 which minimizes JpM(O). We proceed by
looking for a stochastic gradient descent step. Differentiating the cost function gives
    ∇_θ J_PM(θ) = −∫_{X,Y} p(x) p_θ(y|x) [r(x, y) − T r_θ(x, y)] ∇_θ r_θ(x, y) dy dx.

Thus, if after every action the parameters are updated by the step

    Δθ = α [r − T r_θ(x, y)] ∇_θ r_θ(x, y),     (2)
where α is a constant which can vary over time, then the parameters will on
average move down the gradient of the PM cost function. Note that any quantity
which does not depend on Y or r can be added to the difference in the update rule,
and the expected step will still point along the direction of the gradient.
The form of Equation 2 is similar to the REINFORCE algorithm (Williams, 1992),
whose update rule is

    Δθ = α (r − b) ∇_θ log p_θ(y|x),

where b, the reinforcement baseline, is a quantity which does not depend on y or r.
Note that these two update rules are identical when T is zero.¹ The advantage of the
PM rule is that it allows for an early training phase which encourages exploration
without forcing the output distribution to converge on suboptimal outputs. This
will lead to striking qualitative differences in the performance of the algorithm for
training mixtures of Gaussian experts.
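As an illustration of the relation between the two rules, the following short sketch (a minimal NumPy example of ours, not code from the paper) computes the Probability Matching step of Equation 2 and the REINFORCE step for a one-dimensional Gaussian policy with fixed variance; the two coincide when T = 0 and b = 0. The function names and the toy reward are assumptions made for illustration only.

```python
import numpy as np

def gaussian_log_density_grad(theta, x, y, sigma=1.0):
    """Gradient of log N(y; theta*x, sigma^2) w.r.t. the scalar parameter theta."""
    mu = theta * x
    return (y - mu) * x / sigma**2

def r_hat(theta, x, y, sigma=1.0):
    """log p_theta(y|x): the network's estimate of the mean reward."""
    mu = theta * x
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2)

def pm_step(theta, x, y, r, T, alpha=0.01):
    """Probability Matching step (Equation 2)."""
    return alpha * (r - T * r_hat(theta, x, y)) * gaussian_log_density_grad(theta, x, y)

def reinforce_step(theta, x, y, r, b=0.0, alpha=0.01):
    """REINFORCE step with reinforcement baseline b."""
    return alpha * (r - b) * gaussian_log_density_grad(theta, x, y)

theta, x = 0.3, 1.0
y = theta * x + np.random.randn()          # sample an output
r = -0.5 * (y - 2.0)**2                    # toy reward: prefer outputs near 2
print(pm_step(theta, x, y, r, T=0.0), reinforce_step(theta, x, y, r))  # identical at T=0, b=0
```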
3
UPDATE RULES FOR GAUSSIAN UNITS AND
MIXTURES OF GAUSSIAN EXPERTS
We employ Gaussian units with mean μ = wᵀx and covariance σ²I. The learner
must select the matrix w and scalar σ which minimize J_PM(w, σ). Applying the
update rule in Equation 2, we get

    Δw = α [r − T r̂(x, y)] (1/σ²) (y − μ) xᵀ
    Δσ = α [r − T r̂(x, y)] (1/σ) (‖y − μ‖²/σ² − 1).

In practice, for both single Gaussian units and the mixtures presented below we
avoid the issue of constraining σ > 0 by updating log σ directly.
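A minimal sketch of these updates for a single Gaussian unit follows (our illustration, not the paper's code). The learning rate, the outer-product ordering of the weight update, and the use of a gradient with respect to log σ (which differs from the displayed Δσ by a factor of σ) are assumptions made here.

```python
import numpy as np

class GaussianUnit:
    """Gaussian unit with mean mu = W^T x and isotropic covariance sigma^2 I."""
    def __init__(self, n_in, n_out, alpha=0.01):
        self.W = np.zeros((n_in, n_out))
        self.log_sigma = 0.0
        self.alpha = alpha

    def sample(self, x):
        mu = self.W.T @ x
        return mu + np.exp(self.log_sigma) * np.random.randn(len(mu))

    def r_hat(self, x, y):
        """log p(y|x): the unit's estimate of the mean reward."""
        mu, s2 = self.W.T @ x, np.exp(2 * self.log_sigma)
        return -0.5 * len(mu) * np.log(2 * np.pi * s2) - np.sum((y - mu)**2) / (2 * s2)

    def pm_update(self, x, y, r, T):
        """PM update; sigma is adapted through log sigma as noted in the text."""
        mu, s2 = self.W.T @ x, np.exp(2 * self.log_sigma)
        scale = self.alpha * (r - T * self.r_hat(x, y))
        self.W += scale * np.outer(x, y - mu) / s2                      # Delta w
        self.log_sigma += scale * (np.sum((y - mu)**2) / s2 - len(mu))  # Delta log sigma
```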
We can generalize the linear model by considering a conditional output distribution
in the form of a mixture of Gaussian experts (Jacobs et al., 1991),

    p(y|x) = Σ_{i=1}^{N} g_i(x) (2πσ_i²)^{−n/2} exp(−‖y − μ_i‖² / (2σ_i²)).

Expert i has mean μ_i = w_iᵀx and covariance σ_i²I. The prior probability given x of
choosing expert i, g_i(x), is determined by a single layer gating network with weight
matrix v and softmax output units. The gating network learns a soft partitioning
of the input space into regions for which each expert is responsible.
Again, we can apply Equation 2 to get the PM update rules:

    Δv_i = α [r − T r̂(x, y)] (h_i − g_i) x
    Δw_i = α [r − T r̂(x, y)] h_i (1/σ_i²) (y − μ_i) xᵀ
    Δσ_i = α [r − T r̂(x, y)] h_i (1/σ_i) (‖y − μ_i‖²/σ_i² − 1),

where h_i = g_i p_i(y|x)/p(y|x) is the posterior probability of choosing expert i given
y. We note that the PM update rules are equivalent to the supervised learning
gradient descent update rules in (Jacobs et al., 1991) modulated by the difference
between the actual and expected rewards.
¹This fact implies that the REINFORCE step is in the direction of the gradient of J_r(θ),
as shown by (Williams, 1992). See Williams and Peng, 1991, for a similar REINFORCE
plus entropy update rule.
Table 1: Convergence times and gate entropies for the linear example (standard errors
in parentheses). Convergence times: An experiment consisting of 50 runs was conducted
for each algorithm, with a wide range of learning rates and both reward functions. Best
results for each algorithm are reported. Entropy: Values are averages over the last 5,000
time steps of each run. 20 runs of 50,000 time steps were conducted.

    Algorithm      Convergence Time   Entropy
    PM, T = 1      1088 (43)          .993 (.0011)
    PM, T = .5     --                 .97 (.02)
    PM, T = .1     --                 .48 (.04)
    REINFORCE      2998 (183)         .21 (.03)
    REINF-COMP     1622 (46)          .21 (.03)
Both the h_i and r̂ depend on the overall conditional probability p(y|x), which in
turn depends on each p_i(y|x). This adds an extra step to the training procedure.
After receiving the input x, the network chooses an expert based on the priors g_i(x)
and an output y from the selected expert's output distribution. The output is then
sent back to each of the experts in order to compute the likelihood of their having
generated it. Given the set of p_i's, the network can update its parameters as above.
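The interaction step just described can be sketched as follows (our reading of the procedure, not code from the paper); array shapes, the learning rate, and the log σ parametrization of the variance update are assumptions.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def pm_mixture_step(x, V, W, log_sigma, reward_fn, T, alpha=0.01):
    """One interaction and PM update for a mixture of Gaussian experts (sketch).

    V: gating weights (n_in, N); W[i]: expert i's weights (n_in, n_out);
    log_sigma: array of shape (N,); reward_fn(x, y) returns the scalar reward r.
    """
    N = V.shape[1]
    g = softmax(V.T @ x)                                   # gating priors g_i(x)
    mu = np.stack([W[i].T @ x for i in range(N)])          # expert means, (N, n_out)
    s2 = np.exp(2 * log_sigma)                             # expert variances sigma_i^2
    k = mu.shape[1]

    i = np.random.choice(N, p=g)                           # choose an expert from the priors
    y = mu[i] + np.sqrt(s2[i]) * np.random.randn(k)        # sample from that expert

    # send y back to all experts: likelihoods p_i(y|x) and posteriors h_i
    lik = (2 * np.pi * s2) ** (-k / 2) * np.exp(-np.sum((y - mu) ** 2, axis=1) / (2 * s2))
    p_y = g @ lik
    h = g * lik / p_y

    r = reward_fn(x, y)
    r_hat = np.log(p_y)                                    # network's estimate of the mean reward
    scale = alpha * (r - T * r_hat)

    V += scale * np.outer(x, h - g)                        # Delta v_i for all i
    for j in range(N):
        W[j] += scale * h[j] * np.outer(x, y - mu[j]) / s2[j]
        log_sigma[j] += scale * h[j] * (np.sum((y - mu[j]) ** 2) / s2[j] - k)
    return y, r
```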
4
SIMULATIONS
We present three examples designed to explore the behavior of the Probability
Matching algorithm. In each case, networks were trained using Probability Matching, REINFORCE, and REINFORCE with reinforcement comparison (REINF-COMP), where a running average of the reward is used as a reinforcement baseline (Sutton, 1984). In the first two examples an optimal output function y*(x)
was chosen and used to calculate a noisy error, ε = ‖y − y*(x) − z‖, where z was
i.i.d. zero-mean Gaussian with σ = .1. The error signal determined the reward
by one of two functions, r = −ε²/2 or r = exp(−ε²/2). When the RMSE between the
network mean and the optimal output was less than .05 the network was said to
have converged.
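For concreteness, the error and reward signals used in these examples can be written as the short sketch below (our reading of the text; the noise level and function names are illustrative).

```python
import numpy as np

def reward(y, y_star, kind="quadratic", noise_sd=0.1):
    """Noisy error and the two reward functions used in Examples 4.1 and 4.2."""
    z = noise_sd * np.random.randn(*np.shape(y))
    err = np.linalg.norm(y - y_star - z)
    return -err**2 / 2 if kind == "quadratic" else np.exp(-err**2 / 2)
```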
4.1
A Linear Example
In this example x was chosen uniformly from [-1,1]2 x {I}, and the optimal output
was y* = Ax, for a 2 x 3 matrix A. A mixture of three Gaussian experts was trained.
The details of the simulation and results for each algorithm are shown in Table 1.
Probability Matching with constant T = 1 shows almost a threefold reduction
in training time compared to REINFORCE and about a 50% improvement over
REINF-COMP.
The important point of this example is the manner in which the extra Gaussian
units were employed. We calculated the entropy of the gating network, normalized
so that a value of one means that each expert has equal probability of being chosen
and a value of zero means that only one expert is ever chosen. The values after
50,000 time steps are shown in the second column of Table 1. When T ~ 1, the
Probability Matching algorithm gives the three experts roughly equal priors. This
is due to the fact that small differences in the experts' parameters lead to increased
output entropy if all experts are used. REINFORCE on the other hand always
converges to a solution which employs only one expert. This difference in the
behavior of the algorithms will have a large qualitative effect in the next example.
Table 2: Results for absolute value. The percentage of trials that converged and the
average time to convergence for those trials. Standard errors are in parentheses. 50 trials
were conducted for a range of learning rates and with both reward functions; the best
results for each algorithm are shown.

    Algorithm      Successful Trials   Convergence Time
    PM             100%                6,052 (313)
    REINFORCE      48%                 76,775 (3,329)
    REINF-COMP     38%                 42,105 (3,869)
Figure 2: Example 4.3. The environment's probability distribution for T = 1: (a) density
plot of p* vs. y/x, (b) cross-sectional view with y_2 = 0. Locally weighted mean and
variance of y_2/x over representative runs: (c) T = 1, (d) T = 0 (i.e. REINFORCE).
4.2
Absolute Value
We used a mixture of two Gaussian units to learn the absolute value function.
The details of the simulation and the best results for each algorithm are shown in
Table 2. Probability Matching with constant T = 1 converged to criterion on every
trial, in marked contrast to the REINFORCE algorithm. With no reinforcement
baseline, REINFORCE converged to criterion in only about half of the cases, less
with reinforcement comparison. In almost all of the trials that didn't converge, only
one expert was active on the domain of the input. Neither version of REINFORCE
ever converged to criterion in less than 14,000 time steps.
This example highlights the advantage of the Probability Matching algorithm. During training, all three algorithms initially use both experts to capture the overall
mean of the data. REINFORCE converges on this local minimum, cutting one
expert off before it has a chance to explore the rest of the parameter space. The
Probability Matching algorithm keeps both experts in use. Here, the more conservative approach leads to a stark improvement in performance.
4.3
An Example with Many Local Maxima
In this example, the learner's conditional output distribution was a bivariate Gaussian with μ = [w_1, w_2]ᵀ x, and the environment's rewards were a function of y/x.
The optimal output distribution p*(y/x) is shown in Figures 2(a,b). These figures
can also be interpreted as the expected value of p* for a given w. The weight vector
is initially chosen from a uniform distribution over [−.2, .2]², depicted as the very
small white dot in Figure 2(a). There are a series of larger and larger local maxima
off to the right, with a peak of height 2ⁿ at w_1 = 2ⁿ.
The results are shown in Table 3. REINFORCE, both with and without reinforcement comparison, never got past the third peak; the variance of the Gaussian unit would
Table 3: Results for Example 4.3. These values represent 20 runs for 50,000 time steps
each. The first and second columns correspond to the number of the peak the learner reached.

    Algorithm      Mean Final log2 w1   Range of Final log2 w1's   Mean Final sigma
    PM, T = 2      28.8                 [19.1, 51.0]               > 101
    PM, T = 1      6.34                 [5.09, 8.08]               13.1
    PM, T = .5     3.06                 [3.04, 3.07]               .40
    REINFORCE      2.17                 [2.00, 2.90]               .019
    REINF-COMP     2.05                 [2.05, 2.06]               .18
very quickly close down to a small value making further exploration of the output
space impossible. Probability Matching, on the other hand, was able to find greater
and greater maxima, with the variance growing adaptively to match the local scale
of the reward function. These differences can be clearly seen in Figures 2(c,d),
which show typical behavior for the Probability Matching algorithm with T = 1
and T = 0.
5
CONCLUSION
We have presented a new reinforcement learning algorithm for associative networks
which converges faster and more reliably than existing algorithms. The strength of
the Probability Matching algorithm is that it allows for a better balance between
exploration of the output space and exploitation of good outputs. The parameter T can be adjusted during learning to allow broader output distributions early
in training and then to force the network to sharpen up its distribution once nearly
optimal parameters have been found.
Although the applications in this paper were restricted to networks with Gaussian units, the Probability Matching algorithm can be applied to any reinforcement
learning task and any conditional output distribution. It could easily be employed,
for example, on classification problems using logistic or multinomial (softmax) output units or mixtures of such units. Finally, the simulations presented in this paper
are of simple examples. Preliminary results indicate that the advantages of the
Probability Matching algorithm scale up to larger, more interesting problems.
References
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive
mixtures of local experts. Neural Computation, 3:79-87.
Phansalkar, V. V. (1991). Learning automata algorithms for connectionist systems
- local and global convergence. PhD Thesis, Dept. of Electrical Engineering,
Indian Institute of Science, Bangalore.
Sutton, R. S. (1984). Temporal credit assignment in reinforcement learning.
PhD Thesis, Dept. of Computer and Information Science, University of Massachusetts, Amherst, MA.
Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256.
Williams, R. J. and Peng, J. (1991). Function optimization using connectionist
reinforcement learning algorithms. Connection Science, 3:241-268.
| 1042 |@word trial:6 exploitation:2 version:1 advantageous:1 simulation:4 jacob:4 covariance:2 tr:2 solid:1 reduction:1 series:1 past:1 existing:3 nowlan:1 yet:1 dx:1 must:4 zll:1 dydx:5 designed:1 plot:1 update:11 v:1 half:1 selected:1 preference:1 height:1 along:1 qualitative:2 manner:1 peng:2 p8:8 expected:7 roughly:1 behavior:3 growing:1 brain:1 decreasing:1 actual:1 considering:1 becomes:2 begin:2 formalizing:1 panel:1 maximizes:2 didn:1 what:1 interpreted:1 minimizes:1 finding:1 temporal:1 every:2 exactly:1 ro:2 partitioning:1 unit:11 before:1 engineering:1 local:9 limit:1 sutton:2 plus:1 range:3 responsible:1 practice:1 procedure:1 empirical:1 thought:3 got:1 matching:22 get:2 close:1 applying:1 impossible:1 optimize:1 equivalent:1 williams:6 automaton:2 rule:13 analogous:1 updated:1 target:4 agreement:1 particularly:1 updating:2 electrical:1 capture:1 calculate:1 region:1 ensures:1 decrease:1 highest:2 environment:10 reward:34 trained:2 depend:4 purely:1 upon:1 learner:11 po:1 easily:1 various:1 train:1 choosing:3 whose:1 larger:3 say:1 favor:1 gi:3 noisy:1 final:3 associative:3 advantage:3 convergence:7 converges:3 derive:1 measured:1 predicted:2 implies:1 indicate:1 direction:2 stochastic:1 exploration:4 preliminary:1 adjusted:2 exploring:1 credit:1 exp:4 mapping:1 oro:2 vary:1 jx:2 early:3 purpose:1 favorable:1 iw:1 wl:4 tf:3 weighted:1 mit:2 clearly:1 gaussian:17 always:5 avoid:1 broader:1 derived:1 ax:1 improvement:2 ily:1 likelihood:1 contrast:1 baseline:3 realizable:1 dependent:1 typically:3 entire:1 initially:2 overall:3 issue:1 classification:1 softmax:2 fairly:1 equal:4 construct:1 once:2 having:1 never:1 sampling:2 identical:2 placing:1 look:1 nearly:3 connectionist:3 bangalore:1 employ:2 divergence:1 phase:1 consisting:1 maintain:1 mixture:11 logarithm:1 desired:1 theoretical:1 increased:1 column:2 soft:1 logp:1 assignment:1 maximization:1 cost:10 uniform:1 successful:1 conducted:3 rex:2 reported:1 chooses:1 adaptively:1 density:5 peak:3 amherst:1 off:3 receiving:1 michael:1 quickly:1 again:1 thesis:2 choose:4 cognitive:1 stochastically:1 expert:26 stark:1 converted:1 depends:3 vi:1 view:1 doing:1 reached:1 start:1 rmse:1 minimize:1 variance:4 likewise:1 correspond:1 generalize:1 lgi:1 notoriously:1 comp:4 converged:6 liebler:1 energy:5 massachusetts:2 improves:1 back:1 higher:2 supervised:1 hand:2 receives:2 lack:1 mode:1 logistic:1 effect:1 normalized:1 y2:2 equality:1 during:2 encourages:1 criterion:4 generalized:1 temperature:6 tro:2 novel:1 multinomial:1 tilt:1 exponentially:1 jp:1 discussed:1 cambridge:1 pm:19 sharpen:2 dot:1 lowered:1 add:1 posterior:1 own:1 forcing:1 route:1 arbitrarily:1 seen:2 minimum:4 greater:3 employed:2 r0:1 converge:3 riw:1 maximize:2 signal:5 ii:1 smoother:1 dashed:1 faster:3 match:1 calculation:1 cross:1 equally:1 parenthesis:2 normalization:1 represent:1 bimodal:2 receive:1 singular:1 extra:2 rest:1 w2:1 sent:1 jordan:6 near:1 ideal:1 constraining:1 architecture:2 suboptimal:2 idea:2 translates:1 proceed:1 action:1 generally:1 ylx:16 locally:1 exist:1 percentage:1 threefold:1 key:1 falling:1 neither:1 lowering:1 wand:1 run:5 inverse:1 parameterized:1 everywhere:1 striking:1 place:1 almost:3 draw:1 layer:1 hi:4 strength:2 infinity:1 diffuse:1 optimality:1 department:1 jr:4 describes:1 smaller:1 psyche:2 unity:1 wi:1 making:2 gradually:1 restricted:1 equation:4 previously:1 turn:1 fail:1 gaussians:1 apply:2 alternative:1 gate:1 original:1 running:1 trouble:1 log2:2 reinf:4 move:1 added:1 quantity:2 spike:1 said:1 gradient:6 reinforce:17 
philip:1 reason:1 balance:2 minimizing:1 difficult:1 susceptible:1 negative:1 reliably:2 zt:3 boltzmann:1 adjustable:1 perform:1 descent:2 t:1 hinton:1 looking:1 ever:2 sharp:1 pair:3 kl:1 optimized:1 connection:1 able:1 below:1 reliable:1 shifting:1 force:2 representing:1 technology:1 sabes:5 prior:3 highlight:1 interesting:1 pi:2 balancing:1 last:1 keeping:1 free:1 bias:1 allow:1 institute:2 wide:1 india:1 differentiating:1 absolute:3 calculated:1 reinforcement:20 adaptive:1 alpha:1 preferred:1 kullback:1 cutting:1 keep:1 global:1 active:1 lthis:1 table:7 learn:1 pyx:1 complex:1 domain:1 big:1 representative:1 slow:1 explicit:1 third:1 learns:1 down:2 gating:3 r8:1 bivariate:1 burden:1 effectively:2 flattened:1 phd:2 suited:1 entropy:9 depicted:1 likely:2 explore:3 sectional:1 scalar:3 corresponds:1 determines:1 chance:1 ma:2 conditional:11 goal:1 marked:1 change:1 typical:2 determined:2 uniformly:1 conservative:1 attempted:1 select:2 modulated:1 dept:2 |
50 | 1,043 | Neural Control for Nonlinear Dynamic Systems
Ssu-Hsin Yu
Department of Mechanical Engineering
Massachusetts Institute of Technology
Cambridge, MA 02139
Email: hsin@mit.edu
Anuradha M. Annaswamy
Department of Mechanical Engineering
Massachusetts Institute of Technology
Cambridge, MA 02139
Email: aanna@mit.edu
Abstract
A neural network based approach is presented for controlling two distinct
types of nonlinear systems. The first corresponds to nonlinear systems
with parametric uncertainties where the parameters occur nonlinearly.
The second corresponds to systems for which stabilizing control structures cannot be determined. The proposed neural controllers are shown
to result in closed-loop system stability under certain conditions.
1
INTRODUCTION
The problem that we address here is the control of general nonlinear dynamic systems
in the presence of uncertainties. Suppose the nonlinear dynamic system is described as
ẋ = f(x, u, θ), y = h(x, u, θ), where u denotes an external input, y is the output, x is the
state, and θ is the parameter which represents constant quantities in the system. The control
objectives are to stabilize the system in the presence of disturbances and to ensure that
reference trajectories can be tracked accurately, with minimum delay. While uncertainties
can be classified in many different ways, we focus here on two scenarios. One occurs
because the changes in the environment and operating conditions introduce uncertainties
in the system parameter θ. As a result, control objectives such as regulation and tracking,
which may be realizable using a continuous function u = γ(x, θ), cannot be achieved since
θ is unknown. Another class of problems arises due to the complexity of the nonlinear
function f. Even if θ, f and h can be precisely determined, the selection of an appropriate
γ that leads to stabilization and tracking cannot be made in general. In this paper, we
present two methods based on neural networks which are shown to be applicable to both
the above classes of problems. In both cases, we clearly outline the assumptions made,
the requirements for adequate training of the neural network, and the class of engineering
problems where the proposed methods are applicable. The proposed approach significantly
expands the scope of neural controllers in relation to those proposed in (Narendra and
Parthasarathy, 1990; Levin and Narendra, 1993; Sanner and Slotine, 1992; Jordan and
Rumelhart, 1992).
The first class of problems we shall consider includes nonlinear systems with parametric
uncertainties. The field of adaptive control has addressed such a problem, and over the
past thirty years, many results have been derived pertaining to the control of both linear
and nonlinear dynamic systems (Narendra and Annaswamy, 1989). A common assumption
in almost all of the published work in this field is that the uncertain parameters occur
linearly. In this paper, we consider the control of nonlinear dynamic systems with nonlinear
parametrizations. We design a neural network based controller that adapts to the parameter θ
and show that closed-loop system stability can be achieved under certain conditions. Such
a controller will be referred to as a θ-adaptive neural controller. Pertinent results to this
class are discussed in section 2.
The second class of problems includes nonlinear systems, which despite being completely
known, cannot be stabilized by conventional analytical techniques. The obvious method for
stabilizing nonlinear systems is to resort to linearization and use linear control design methods. This limits the scope of operation of the stabilizing controller. Feedback linearization
is another method by which nonlinear systems can be stably controlled (lsidori, 1989). This
however requires fairly stringent set of conditions to be satisfied by the functions! and h.
Even after these conditions are satisfied, one cannot always find a closed-form solution to
stabilize the system since it is equivalent to solving a set of partial differential equations.
We consider in this paper, nonlinear systems, where system models as well as parameters
are known, but controlIer structures are unknown. A neural network based controller is
shown to exist and trained so that a stable closed-loop system is achieved. We denote this
class of controllers as a stable neural controller. Pertinent results to this class are discussed
in section 3.
2
θ-ADAPTIVE NEURAL CONTROLLER
The focus of the nonlinear adaptive controller to be developed in this paper is on dynamic
systems that can be written in the d-step ahead predictor form as follows:
    y_{t+d} = f_r(w_t, u_t, θ),     (1)

where w_tᵀ = [y_t, …, y_{t−n+1}, u_{t−1}, …, u_{t−m−d+1}], n ≥ 1, m ≥ 0, d ≥ 1, m + d = n,
Y₁, U₁ ⊂ ℝ containing the origin and Θ₁ ⊂ ℝ^k are open, f_r : Y₁ⁿ × U₁^{m+d} × Θ₁ → ℝ, y_t
and u_t are the output and the input of the system at time t respectively, and θ is an unknown
parameter and occurs nonlinearly in f_r.¹ The goal is to choose a control input u_t such that
the system in (1) is stabilized and the plant output is regulated around zero.

Let x_tᵀ ≜ [y_{t+d−1}, …, y_{t+1}, w_tᵀ], A_m = [e₂, …, e_{n+d−1}, 0, e_{n+d+1}, …, e_{n+m+2d−2},
0], B_m = [e₁, e_{n+d}], where e_i is a unit vector with the i-th term equal to 1. The following
assumptions are made regarding the system in Eq. (1).
(A1) For every θ ∈ Θ₁, f_r(0, 0, θ) = 0.

(A2) There exist open and convex neighborhoods of the origin Y₂ ⊂ Y₁ and U₂ ⊂ U₁, an
open and convex set Θ₂ ⊂ Θ₁, and a function K : Ω₂ × Y₂ × Θ₂ → U₁ such that for
every w_t ∈ Ω₂, y_{t+d} ∈ Y₂ and θ ∈ Θ₂, Eq. (1) can be written as u_t = K(w_t, y_{t+d}, θ),
where Ω₂ ≜ Y₂ⁿ × U₂^{m+d−1}.

(A3) K is twice differentiable and has bounded first and second derivatives on E₁ ≜ Ω₂ ×
Y₂ × Θ₂, while f_r is differentiable and has a bounded derivative on Ω₂ × K(E₁) × Θ₂.
(A4) There exists b₀ > 0 such that for every y₁ ∈ f_r(Ω₂, K(Ω₂, 0, Θ₂), Θ₂), w ∈ Ω₂ and
θ, θ̂ ∈ Θ₂,

    |1 − (∂K(w, y, θ)/∂y − ∂K(w, y, θ̂)/∂y)|_{y=y₁} · ∂f_r(w, u, θ)/∂u|_{u=u₁}| ≥ b₀.

¹Here, as well as in the following sections, Aⁿ denotes the n-th product space of the set A.
(A5) There exist positive definite matrices P and Q of dimensions (n + m + 2d − 2) such
that

    x_tᵀ(A_mᵀ P A_m − P)x_t + K̄ᵀ B_mᵀ P B_m K̄ + 2x_tᵀ A_mᵀ P B_m K̄ ≤ −x_tᵀ Q x_t,

where K̄ = [0, K(w_t, 0, θ)]ᵀ.

Since the objective is to control the system in (1) where θ is unknown, in order to stabilize
the output y at the origin with an estimate θ̂_t of θ, we choose the control input as

    u_t = K(w_t, 0, θ̂_t).     (2)
2.1
PARAMETER ESTIMATION SCHEME
Suppose the estimation algorithm for updating θ̂_t is defined recursively as Δθ̂_t ≜ θ̂_t −
θ̂_{t−1} = R(y_t, w_{t−d}, u_{t−d}, θ̂_{t−1}); the problem is to determine the function R such that θ̂_t
converges to θ asymptotically. In general, R is chosen to depend on y_t, w_{t−d}, u_{t−d} and
θ̂_{t−1} since they are measurable and contain information regarding θ. For example, in the
case of linear systems which can be cast in the input predictor form, u_t = φ_tᵀθ, a well-known
linear parameter estimation method is to adjust Δθ̂ as (Goodwin and Sin, 1984)

    Δθ̂_t = φ_{t−d} / (1 + φ_{t−d}ᵀφ_{t−d}) · [u_{t−d} − φ_{t−d}ᵀθ̂_{t−d}].

In other words, the mechanism for carrying out parameter estimation is realized by R. In
the case of general nonlinear systems, the task of determining such a function R is quite
difficult, especially when the parameters occur nonlinearly. Hence, we propose the use of a
neural network parameter estimation algorithm denoted θ-adaptive neural network (TANN)
(Annaswamy and Yu, 1996). That is, we adjust θ̂_t as

    Δθ̂_t = N(y_t, w_{t−d}, u_{t−d}, θ̂_{t−1})   if ΔV_{d,t} < −ε,
    Δθ̂_t = 0                                    otherwise,     (3)

where the inputs of the neural network are y_t, w_{t−d}, u_{t−d} and θ̂_{t−1}, the output is Δθ̂_t, and
ε defines a dead-zone where parameter adaptation stops.
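A minimal sketch of the dead-zone logic in Eq. (3) is given below (our illustration). Here `tann` stands in for the trained network and `delta_V_d` for a routine computing ΔV_d from the same measurements; both are placeholders and not part of the paper.

```python
import numpy as np

def tann_update(theta_hat, y_t, w_tmd, u_tmd, tann, delta_V_d, eps=0.01):
    """Dead-zone parameter update in the spirit of Eq. (3)."""
    if delta_V_d(y_t, w_tmd, u_tmd, theta_hat) < -eps:
        return theta_hat + tann(y_t, w_tmd, u_tmd, theta_hat)
    return theta_hat  # inside the dead zone: no adaptation
```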
The neural network is to be trained so that the resulting network can improve the parameter estimation over time for any possible θ in a compact set. In addition, the
trained network must ensure that the overall system in Eqs. (1), (2) and (3) is stable.
Toward this end, N in the TANN algorithm is required to satisfy the following two properties:

    (P1)  |N(y_t, w_{t−d}, u_{t−d}, θ̂_{t−1})|² ≤ a |C(φ_{t−d})|² / (1 + |C(φ_{t−d})|²)² · ũ²_{t−d},   and
    (P2)  ΔV_t − ΔV_{d,t} < ε₁,

where ΔV_t = |θ̃_t|² − |θ̃_{t−1}|² with θ̃_t = θ̂_t − θ,

    ΔV_{d,t} = −a (2 + |C(φ_{t−d})|²) / (1 + |C(φ_{t−d})|²)² · ũ²_{t−d},

C(φ_t) = (∂K/∂θ (w_t, y_{t+d}, θ)|_{θ=θ₀})ᵀ, ũ_t = u_t − K(w_t, y_{t+d}, θ̂_{t+d−1}), φ_t = [w_tᵀ, y_{t+d}]ᵀ,
a ∈ (0, 1), and θ₀ is the point where K is linearized and often chosen to be the mean value
of parameter variation.
of parameter variation.
2.2
TRAINING OF TANN FOR CONTROL
In the previous section, we proposed an algorithm using a neural network for adjusting
the control parameters. We introduced two properties (PI) and (P2) of the identification
algorithm that the neural network needs to possess in order to maintain stability of the
closed-loop system. In this section, we discuss the training procedure by which the weights
of the neural network are to be adjusted so that the network retains these properties.
The training set is constructed off-line and should be composed of the data needed in the
training phase. If we want the algorithm in Eq. (3) to be valid on the specified sets Y₃ and
U₃ for various θ and θ̂ in Θ₃, the training set should cover those variables appearing in
Eq. (3) in their respective ranges. Hence, we first sample w in the set Y₃ⁿ × U₃^{m+d−1},
and θ, θ̂ in the set Θ₃. Their values are, say, w₁, θ₁ and θ̂₁ respectively. For the
particular w₁ and θ₁ we sample θ̂ again in the set {θ̂ ∈ Θ₃ : |θ̂ − θ₁| ≤ |θ̂₁ − θ₁|},
and its value is, say, θ̂₁⁺. Once w₁, θ₁, θ̂₁ and θ̂₁⁺ are sampled, other data can then
be calculated, such as u₁ = K(w₁, 0, θ̂₁) and y₁ = f_r(w₁, u₁, θ₁). We can also obtain
the corresponding C(φ₁), with φ₁ = [w₁ᵀ, y₁]ᵀ and ũ₁ = K(w₁, y₁, θ̂₁), and

    ΔV_{d,1} = −a (2 + |C(φ₁)|²) / (1 + |C(φ₁)|²)² (u₁ − ũ₁)²,
    L₁ = a |C(φ₁)|² / (1 + |C(φ₁)|²)² (u₁ − ũ₁)².

A data element can then be formed as (y₁, w₁, u₁, θ̂₁⁺, θ₁, ΔV_{d,1}, L₁). Proceeding in the same manner, by choosing various w_s, θ_s, θ̂_s and θ̂_s⁺ in their respective ranges, we form a typical
training set T_train = {(y_s, w_s, u_s, θ̂_s⁺, θ_s, ΔV_{d,s}, L_s) | 1 ≤ s ≤ M}, where M denotes the
total number of patterns in the training set. If the quadratic penalty function method (Bertsekas, 1995) is used, properties (P1) and (P2) can be satisfied by training the network on
the training set to minimize the following cost function:

    min_W J ≜ min_W Σ_{s=1}^{M} { (max{0, ΔV_s − ΔV_{d,s}})² + (max{0, |N_s(W)|² − L_s})² }.     (4)

To find a W which minimizes the above unconstrained cost function J, we can apply
algorithms such as the gradient method and the Gauss-Newton method.
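The structure of the penalty cost can be sketched as below (our reading of Eq. (4), not the paper's code). The penalty weight, the exact form of ΔV computed from the stored parameter values, and the tuple layout of the training set are assumptions.

```python
import numpy as np

def tann_training_cost(net, train_set, penalty=1.0):
    """Quadratic-penalty cost in the spirit of Eq. (4) (a sketch).

    Each element of train_set is (inputs, theta, theta_hat, dV_d, L): `inputs` are
    the network inputs, dV_d is the desired decrease of the squared parameter
    error, and L bounds the squared network output.
    """
    J = 0.0
    for inputs, theta, theta_hat, dV_d, L in train_set:
        step = net(inputs)                                   # candidate Delta theta_hat
        dV = np.sum((theta_hat + step - theta) ** 2) - np.sum((theta_hat - theta) ** 2)
        J += max(0.0, dV - dV_d) ** 2 + penalty * max(0.0, np.sum(step ** 2) - L) ** 2
    return J
```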
2.3
STABILITY RESULT
With the plant given by Eq. (1), the controller by Eq. (2), and the TANN parameter
estimation algorithm by Eq. (3), it can be shown that the stability of the closed-loop system
is guaranteed.
Based on the assumptions of the system in (1) and properties (PI) and (P2) that TANN
satisfies, the stability result of the closed-loop system can be concluded in the following
theorem. We refer the reader to (Yu and Annaswamy, 1996) for further detail.
Theorem 1 Given the compact sets Y₃^{n+1} × U₃^{m+d} × Θ₃ where the neural network in Eq. (3)
is trained. There exist ε₁, ε > 0 such that for any interior point θ of Θ₃, there exist open
sets Y₄ ⊂ Y₃, U₄ ⊂ U₃ and a neighborhood Θ₄ of θ such that if y₀, …, y_{n+d−2} ∈ Y₄,
u₀, …, u_{n−2} ∈ U₄, and θ̂_{n−1}, …, θ̂_{n+d−2} ∈ Θ₄, then all the signals in the closed-loop
system remain bounded and y_t converges to a neighborhood of the origin.
2.4
SIMULATION RESULTS
In this section, we present a simulation example of the TANN controller proposed in this
section. The system is of the form

    y_{t+1} = θ y_t (1 − y_t) / (1 + e^{0.05 y_t}) + u_t,

where θ is the parameter to be determined on-line. Prior information regarding the system is that θ ∈ [4, 10]. Based on
Eq. (2), the controller was chosen to be

    u_t = −θ̂_t y_t (1 − y_t) / (1 + e^{0.05 y_t}),

where θ̂_t denotes the parameter estimate at time t. According to Eq. (3), θ was estimated using the TANN algorithm with
inputs y_{t+1}, y_t, u_t and θ̂_t, and ε = 0.01. N is a Gaussian network with 700 centers. The
training set and the testing set were composed of 6,040 and 720 data elements respectively.
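The closed-loop structure of this example can be sketched as follows; the plant and controller expressions follow our reading of the (partly garbled) equations above, and `estimate_theta` stands in for the trained TANN update, so the snippet is illustrative rather than a reproduction of the experiment.

```python
import numpy as np

def plant(y, u, theta):
    """y_{t+1} = theta*y*(1-y)/(1+exp(0.05*y)) + u, as read from the example."""
    return theta * y * (1.0 - y) / (1.0 + np.exp(0.05 * y)) + u

def controller(y, theta_hat):
    """Certainty-equivalence control of Eq. (2): cancel the estimated nonlinearity."""
    return -theta_hat * y * (1.0 - y) / (1.0 + np.exp(0.05 * y))

def run(theta=6.5, theta_hat=7.0, y0=-0.9, steps=100, estimate_theta=None):
    y, ys = y0, [y0]
    for _ in range(steps):
        u = controller(y, theta_hat)
        y_next = plant(y, u, theta)
        if estimate_theta is not None:            # e.g. the TANN update of Eq. (3)
            theta_hat = estimate_theta(y_next, y, u, theta_hat)
        y = y_next
        ys.append(y)
    return np.array(ys)

print(run()[-5:])   # with theta_hat fixed at 7, the small mismatch still lets y settle
```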
After the training was completed, we tested the TANN controller on the system with six
different values of θ: 4.5, 5.5, 6.5, 7.5, 8.5 and 9.5, while the initial parameter estimate
and the initial output were chosen as θ̂₁ = 7 and y₀ = −0.9 respectively. The results are
plotted in Figure 1. It can be seen that y_t can be stabilized at the origin for all these values
of θ. For comparison, we also simulated the system under the same conditions but with θ
Figure 1: y_t (TANN Controller)
Figure 2: y_t (Extended Kalman Filter)
estimated using the extended Kalman filter (Goodwin and Sin, 1984). Figure 2 shows the
output responses. It is not surprising that for some values of θ, especially when the initial
estimation error is large, the responses either diverge or exhibit steady state error.
3
3.1
STABLE NEURAL CONTROLLER
STATEMENT OF THE PROBLEM
Consider the following nonlinear dynamical system
    ẋ = f(x, u),     y = h(x),     (5)

where x ∈ ℝⁿ and u ∈ ℝᵐ. Our goal is to construct a stabilizing neural controller as
u = N(y; W) where N is a neural network with weights W, and establish the conditions
under which the closed-loop system is stable.

The nonlinear system in (5) is expressed as a combination of a higher-order linear part and
a nonlinear part as ẋ = Ax + Bu + R₁(x, u) and y = Cx + R₂(x), where f(0, 0) = 0
and h(0) = 0. We make the following assumptions: (A1) f, h are twice continuously
differentiable and are completely known. (A2) There exists a K such that (A − BKC) is
asymptotically stable.
3.2 TRAINING OF THE STABLE NEURAL CONTROLLER
In order for the neural controller in Section 3.1 to result in an asymptotically stable closed-loop system, it is sufficient to establish that a continuous positive definite function of the state
variables decreases monotonically through output feedback. In other words, if we can find a
scalar definite function with a negative definite derivative at all points in the state space, we
can guarantee stability of the overall system. Here, we limit our choices of the Lyapunov
function candidates to the quadratic form, i.e. V = xᵀPx, where P is positive definite,
and the goal is to choose the controller so that V̇ < 0, where V̇ = 2xᵀP f(x, N(h(x), W)).
Based on the above idea, we define a "desired" time-derivative V̇_d as V̇_d = −xᵀQx, where
Q = Qᵀ > 0. We choose P and Q matrices as follows. First, according to (A2), we can
find a matrix K to make (A − BKC) asymptotically stable. We can then find a (P, Q)
pair by choosing an arbitrary positive definite matrix Q and solving the Lyapunov equation
(A − BKC)ᵀP + P(A − BKC) = −Q to obtain a positive definite P.
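This (P, Q) construction is a standard Lyapunov-equation solve; a minimal sketch using SciPy follows. The matrices A, B, C and K below are illustrative placeholders (not from the paper) chosen so that A − BKC is Hurwitz.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative matrices only: A - B K C must be Hurwitz for a solution to exist.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
K = np.array([[1.0, 1.0]])

Acl = A - B @ K @ C
Q = np.eye(2)                                   # any positive definite Q
P = solve_continuous_lyapunov(Acl.T, -Q)        # solves Acl^T P + P Acl = -Q
assert np.all(np.linalg.eigvalsh(P) > 0)        # P is positive definite
```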
With the controller of the form in Section 3.1, the goal is to find W in the neural network
which yields V̇ ≤ V̇_d along the trajectories in a neighborhood X ⊂ ℝⁿ of the origin in the
state space. Let x_i denote the value of a sample point, where i is an index to the sample
variable x ∈ X in the state space. To establish V̇ ≤ V̇_d, it is necessary that for every x_i in
a neighborhood X ⊂ ℝⁿ of the origin, V̇_i ≤ V̇_{d,i}, where V̇_i = 2x_iᵀP f(x_i, N(h(x_i), W))
and V̇_{d,i} = −x_iᵀQx_i. That is, the goal is to find a W such that the inequality constraints
ΔV̇_{e,i} ≤ 0, i = 1, …, M, are satisfied, where ΔV̇_{e,i} = V̇_i − V̇_{d,i} and M denotes the
total number of sample points in X. As in the training of the TANN controller, this can also
be posed as an optimization problem. If the same quadratic penalty function method is
used, the problem is to find W to minimize the following cost function over the training
set, which is described as T_train = {(x_i, y_i, V̇_{d,i}) | 1 ≤ i ≤ M}:

    min_W J ≜ min_W (1/2) Σ_{i=1}^{M} (max{0, ΔV̇_{e,i}})².     (6)
3.3 STABILITY OF THE CLOSED-LOOP SYSTEM
Assumptions (A1) and (A2) imply that a stabilizing controller u = −Ky exists so that
V = xᵀPx is a candidate Lyapunov function. More generally, suppose a continuous but
unknown function γ(y) exists such that for V = xᵀPx, a control input u = γ(y) leads to
V̇ ≤ −xᵀQx; then we can find a neural network N(y) which approximates γ(y) arbitrarily
closely in a compact set, leading to closed-loop stability. This is summarized in Theorem 2
(Yu and Annaswamy, 1995).

Theorem 2 Let there be a continuous function γ(h(x)) such that 2xᵀP f(x, γ(h(x))) +
xᵀQx ≤ 0 for every x ∈ X, where X is a compact set containing the origin as an interior
point. Then, given a neighborhood Ω ⊂ X of the origin, there exists a neural controller u =
N(h(x); W) and a compact set Y ⊂ X such that the solutions of ẋ = f(x, N(h(x); W))
converge to Ω, for every initial condition x(t₀) ∈ Y.
3.4
SIMULATION RESULTS
In this section, we show simulation results for a discrete-time nonlinear system using the
proposed neural network controller in Section 3, and compare it with a linear controller to
illustrate the difference. The system we considered is a second-order nonlinear system x_t =
f(x_{t−1}, u_{t−1}), where f = [f₁, f₂]ᵀ, f₁ = x_{1,t−1}(1 + x_{2,t−1}) + x_{2,t−1}(1 − u_{t−1} + u²_{t−1}) and
f₂ = x²_{1,t−1} + 2x_{2,t−1} + u_{t−1}(1 + x_{2,t−1}). It was assumed that x is measurable, and we wished
to stabilize the system around the origin. The controller is of the form u_t = N(x_{1,t}, x_{2,t}).
The neural network N used is a Gaussian network with 120 centers. The training set and
the testing set were composed of 441 and 121 data elements respectively.
After the training was done, we plotted the actual change of the Lyapunov function, ΔV,
using the linear controller u = −Kx and the neural network controller in Figures 3 and 4
respectively. It can be observed from the two figures that if the neural network controller is
used, ΔV is negative definite except in a small neighborhood of the origin, which assures
that the closed-loop system would converge to a vicinity of the origin; whereas, if the linear
controller is used, ΔV becomes positive in some region away from the origin, which implies
that the system can be unstable for some initial conditions. Simulation results confirmed
our observation.
Figure 3: ΔV (u = −Kx)
Figure 4: ΔV (u = N(x))
Acknowledgments
This work is supported in part by Electrical Power Research Institute under contract No.
8060-13 and in part by National Science Foundation under grant No. ECS-9296070.
References
[1] A. M. Annaswamy and S. Yu.
θ-adaptive neural networks: A new approach to
parameter estimation. IEEE Transactions on Neural Networks, (to appear) 1996.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[3] G. C. Goodwin and K. S. Sin. Adaptive Filtering Prediction and Control. PrenticeHall, Inc., 1984.
[4] A. Isidori. Nonlinear Control Systems. Springer-Verlag, New York, NY, 1989.
[5] M. I. Jordan and D. E. Rumelhart. Forward models: Supervised learning with a distal
teacher. Cognitive Science , 16:307-354, 1992.
[6] A. U. Levin and K. S. Narendra. Control of nonlinear dynamical systems using neural
networks: Controllability and stabilization. IEEE Transactions on Neural Networks,
4(2): 192-206, March 1993.
[7] K. S. Narendra and A . M. Annaswamy. Stable Adaptive Systems. Prentice-Hall, Inc.,
1989.
[8] K. S. Narendra and K. Parthasarathy. Identification and control of dynamical systems
using neural networks. IEEE Transactions on Neural Networks, 1(1):4-26, March
1990.
[9] R. M. Sanner and J.-J. E. Slotine. Gaussian networks for direct adaptive control. IEEE
Transactions on Neural Networks, 3(6):837-863, November 1992.
[10] S. Yu and A. M. Annaswamy. Adaptive control of nonlinear dynamic systems using
θ-adaptive neural networks. Technical Report 9601, Adaptive Control Laboratory,
Department of Mechanical Engineering, M.I.T., 1996.
[11] S.-H. Yu and A. M. Annaswamy. Control of nonlinear dynamic systems using a
stability based neural network approach. In Technical report 9501, Adaptive Control
Laboratory, MIT, Submitted to Proceedings of the 34th IEEE Conference on Decision
and Control, New Orleans, LA, 1995.
| 1043 |@word mjp:1 open:4 simulation:5 linearized:1 recursively:1 ld:1 initial:5 past:1 surprising:1 written:2 must:1 bd:1 belmont:1 pertinent:2 along:1 constructed:1 direct:1 differential:1 compose:1 introduce:1 actual:1 pf:1 becomes:1 bounded:3 ttl:1 minimizes:1 developed:1 guarantee:1 every:6 y3:2 expands:1 control:28 unit:1 grant:1 uo:1 yn:1 assum:1 bertsekas:2 appear:1 positive:6 ner:1 engineering:4 annaswamy:12 limit:2 despite:1 twice:2 au:1 genus:1 range:2 bi:6 acknowledgment:1 thirty:1 yj:1 testing:2 orleans:1 definite:8 procedure:1 significantly:1 word:2 orm:1 cannot:5 interior:2 selection:1 prentice:1 conventional:1 equivalent:1 measurable:2 yt:21 center:2 l:1 convex:2 stabilizing:5 stability:10 variation:1 controlling:1 suppose:3 programming:1 origin:13 element:3 rumelhart:2 updating:1 u4:2 observed:1 electrical:1 region:1 decrease:1 environment:1 complexity:1 wil:1 ui:8 dynamic:11 trained:4 depend:1 solving:2 carrying:1 completely:2 various:2 distinct:1 pertaining:1 neighborhood:7 choosing:2 quite:1 posed:1 say:2 otherwise:1 differentiable:3 mg:1 analytical:1 propose:1 product:1 adaptation:1 fr:1 loop:11 parametrizations:1 adapts:1 requirement:1 converges:2 tions:1 oo:1 illustrate:1 qt:1 wished:1 eq:11 p2:4 implies:1 lyapunov:4 closely:1 owing:1 filter:2 stabilization:2 stringent:1 ao:1 adjusted:1 around:2 considered:1 ic:6 hall:1 scope:2 narendra:6 u3:2 a2:2 fh:1 estimation:9 applicable:2 iw:1 tf:1 mit:3 clearly:1 always:1 gaussian:3 derived:1 focus:2 yo:2 uli:1 realizable:1 am:1 el:1 bt:1 relation:1 overall:1 denoted:1 fairly:1 field:2 equal:1 once:1 construct:1 represents:1 yu:10 report:2 composed:2 national:1 phase:1 maintain:1 a5:1 adjust:2 partial:1 necessary:1 respective:2 desired:1 plotted:2 uncertain:1 cover:1 retains:1 cost:3 predictor:2 delay:1 uld:1 levin:2 teacher:1 bu:1 contract:1 off:1 yl:1 diverge:1 continuously:1 prenticehall:1 again:1 satisfied:4 containing:2 choose:4 wan:1 external:1 dead:1 resort:1 derivative:4 tam:1 leading:1 cognitive:1 li:1 summarized:1 stabilize:4 includes:2 inc:2 satisfy:1 bg:2 vi:2 hsin:2 closed:12 liy:1 minimize:2 formed:1 ir:5 yield:1 identification:2 accurately:1 trajectory:2 confirmed:1 published:1 classified:1 submitted:1 fo:1 email:2 slotine:2 obvious:1 e2:1 di:1 stop:1 sampled:1 adjusting:1 massachusetts:2 ut:15 higher:1 supervised:1 response:2 done:1 ei:2 nonlinear:29 defines:1 stably:1 scientific:1 contain:1 y2:4 hence:2 vicinity:1 laboratory:2 i2:2 distal:1 sin:3 steady:1 djt:1 outline:1 ay:2 fj:1 common:1 rl:1 tracked:1 discussed:2 approximates:1 refer:1 cambridge:2 ai:3 unconstrained:1 stable:10 operating:1 scenario:1 certain:2 verlag:1 inequality:1 arbitrarily:1 vej:1 vt:2 yi:8 seen:1 minimum:1 determine:1 converge:2 monotonically:1 signal:1 ii:4 technical:2 y:1 controlled:1 prediction:1 controller:27 achieved:3 addition:1 whereas:1 addressed:1 concluded:1 ot:8 posse:1 jordan:2 presence:2 fit:1 regarding:3 idea:1 six:1 penalty:2 york:1 adequate:1 exist:5 stabilized:3 estimated:2 discrete:1 shall:1 ht:1 asymptotically:3 year:1 uncertainty:5 ca2:1 almost:1 reader:1 ob:1 decision:1 wel:1 fl:2 guaranteed:1 quadratic:3 occur:3 ahead:1 precisely:1 constraint:1 ri:1 x2:4 px:3 department:3 according:2 combination:1 march:2 remain:1 ush:1 wi:9 b:2 equation:2 assures:1 discus:1 mechanism:1 x2t:1 needed:1 end:1 operation:1 apply:1 away:1 appropriate:1 appearing:1 bii:1 denotes:5 ensure:2 completed:1 a4:1 newton:1 bkc:4 especially:1 establish:3 objective:3 quantity:1 occurs:2 realized:1 parametric:2 exhibit:1 regulated:1 gradient:1 
simulated:1 vd:7 athena:1 unstable:1 toward:1 kalman:2 index:1 y4:2 regulation:1 difficult:1 statement:1 negative:2 design:2 unknown:5 av:1 observation:1 november:1 contro:4 controllability:1 extended:2 y1:1 ssu:1 rn:1 arbitrary:1 introduced:1 bk:1 nonlinearly:3 mechanical:3 cast:1 goodwin:3 required:1 specified:1 pair:1 address:1 dynamical:3 pattern:1 max:3 power:1 disturbance:1 qxt:1 sanner:2 scheme:1 improve:1 technology:2 imply:1 rtn:1 parthasarathy:2 prior:1 determining:1 plant:2 filtering:1 especial:1 foundation:1 sufficient:1 pi:4 lo:1 supported:1 institute:3 feedback:2 dimension:1 calculated:1 valid:1 forward:1 made:3 adaptive:13 bm:1 ec:1 qx:3 transaction:4 compact:5 assumed:1 xi:1 continuous:4 mj:1 linearly:1 referred:1 en:3 tl:2 ny:1 xl:3 candidate:2 ib:1 theorem:4 xt:7 er:4 a3:1 exists:5 linearization:2 yhi:1 cx:1 lt:1 expressed:1 tracking:2 scalar:1 u2:1 springer:1 corresponds:2 satisfies:1 ma:3 goal:5 change:2 determined:3 typical:1 except:1 wt:11 total:2 gauss:1 la:1 zone:1 arises:1 tested:1 |
51 | 1,044 | Learning with ensembles: How
over-fitting can be useful
Peter Sollich
Department of Physics
University of Edinburgh, U.K.
P.Sollich@ed.ac.uk
Anders Krogh*
NORDITA, Blegdamsvej 17
2100 Copenhagen, Denmark
krogh@sanger.ac.uk
Abstract
We study the characteristics of learning with ensembles. Solving
exactly the simple model of an ensemble of linear students, we
find surprisingly rich behaviour. For learning in large ensembles,
it is advantageous to use under-regularized students, which actually over-fit the training data. Globally optimal performance can
be obtained by choosing the training set sizes of the students appropriately. For smaller ensembles, optimization of the ensemble
weights can yield significant improvements in ensemble generalization performance, in particular if the individual students are subject to noise in the training process. Choosing students with a wide
range of regularization parameters makes this improvement robust
against changes in the unknown level of noise in the training data.
1
INTRODUCTION
An ensemble is a collection of a (finite) number of neural networks or other types
of predictors that are trained for the same task. A combination of many different predictors can often improve predictions, and in statistics this idea has been
investigated extensively, see e.g. [1, 2, 3]. In the neural networks community, ensembles of neural networks have been investigated by several groups, see for instance
[4, 5, 6, 7]. Usually the networks in the ensemble are trained independently and
then their predictions are combined.
In this paper we study an ensemble of linear networks trained on different but
overlapping training sets. The limit in which all the networks are trained on the
full data set and the one where all the data sets are different has been treated in
[8] . In this paper we treat the case of intermediate training set sizes and overlaps
*Present address: The Sanger Centre, Hinxton, Cambs CB10 1RQ, UK.
Learning with Ensembles: How Overfitting Can Be Useful
191
exactly, yielding novel insights into ensemble learning. Our analysis also allows us to
study the effect of regularization and of having different predictors in an ensemble.
2
GENERAL FEATURES OF ENSEMBLE LEARNING
We consider the task of approximating a target function fo from RN to R. It
will be assumed that we can only obtain noisy samples of the function, and the
(now stochastic) target function will be denoted y(x) . The inputs x are taken
to be drawn from some distribution P(x). Assume now that an ensemble of K
independent predictors fk(X) of y(x) is available. A weighted ensemble average is
denoted by a bar, like
(1)
lex) = L,wkfk(X),
k
which is the final output of the ensemble. One can think of the weight Wk as the
belief in predictor k and we therefore constrain the weights to be positive and sum
to one. For an input x we define the error of the ensemble c(x), the error of the
kth predictor ck(X), and its ambiguity ak(x)
c(x)
ck(X)
(y(x) -lex)?
(y(x) - fk(X)?
(fk(X) -1(x?2.
=
(2)
(3)
(4)
=
The ensemble error can be written as ε(x) = ε̄(x) − ā(x) [7], where ε̄(x) =
Σ_k w_k ε_k(x) is the average error over the individual predictors and ā(x) =
Σ_k w_k a_k(x) is the average of their ambiguities, which is the variance of the output
over the ensemble. By averaging over the input distribution P(x) (and implicitly
over the target outputs y(x)), one obtains the ensemble generalization error

    ε = ε̄ − ā,     (5)

where ε(x) averaged over P(x) is simply denoted ε, and similarly for ε̄ and ā. The
first term on the right is the weighted average of the generalization errors of the individual predictors, and the second is the weighted average of the ambiguities, which
we refer to as the ensemble ambiguity. An important feature of equation (5) is that
it separates the generalization error into a term that depends on the generalization
errors of the individual students and another term that contains all correlations between the students. The latter can be estimated entirely from unlabeled data, i. e.,
without any knowledge of the target function to be approximated. The relation (5)
also shows that the more the predictors differ, the lower the error will be, provided
the individual errors remain constant.
In this paper we assume that the predictors are trained on a sample of p examples
of the target function, (x^μ, y^μ), where y^μ = f₀(x^μ) + η^μ and η^μ is some additive
noise (μ = 1, …, p). The predictors, to which we refer as students in this context
because they learn the target function from the training examples, need not be
trained on all the available data. In fact, since training on different data sets will
generally increase the ambiguity, it is possible that training on subsets of the data
will improve generalization. An additional advantage is that, by holding out for
each student a different part of the total data set for the purpose of testing, one
can use the whole data set for training the ensemble while still getting an unbiased
estimate of the ensemble generalization error. Denoting this estimate by f, one has
(6)
where Ctest = L,k WkCtest,k is the average of the students' test errors. As already
pointed out, the estimate ft of the ensemble ambiguity can be found from unlabeled
data.
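The estimate (6) is easy to compute in practice; the sketch below (our illustration, with assumed data layouts) averages each student's held-out test error and subtracts the ambiguity measured on unlabeled inputs.

```python
import numpy as np

def ensemble_error_estimate(students, weights, held_out, x_unlabeled):
    """Estimate of Eq. (6): weighted test error minus ambiguity on unlabeled data.

    students[k] is a callable f_k(x); held_out[k] is (X_k, y_k), the examples
    student k was *not* trained on; weights are positive and sum to one.
    """
    w = np.asarray(weights)
    eps_test = np.array([np.mean((y - np.array([f(x) for x in X]))**2)
                         for f, (X, y) in zip(students, held_out)])
    preds = np.array([[f(x) for x in x_unlabeled] for f in students])  # K x n
    fbar = w @ preds                                                   # ensemble output
    ambiguity = np.mean(w @ (preds - fbar)**2)                         # a-bar
    return w @ eps_test - ambiguity
```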
So far, we have not mentioned how to find the weights Wk. Often uniform weights
are used, but optimization of the weights in some way is tempting. In [5, 6] the
training set was used to perform the optimization, i.e., the weights were chosen to
minimize the ensemble training error. This can easily lead to over-fitting, and in [7]
it was suggested to minimize the estimated generalization error (6) instead. If this
is done, the estimate (6) acquires a bias; intuitively, however, we expect this effect
to be small for large ensembles.
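Minimizing the estimated generalization error (6) over the ensemble weights is a small constrained optimization; the sketch below (ours, not from the paper) uses a softmax parametrization so the weights stay positive and sum to one, and a derivative-free SciPy optimizer for simplicity.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_weights(eps_test, preds_unlabeled):
    """Choose ensemble weights minimizing the estimate (6) (a sketch).

    eps_test[k]: test error of student k; preds_unlabeled: K x n array of
    student outputs on unlabeled inputs.
    """
    K = len(eps_test)

    def estimate(z):
        w = np.exp(z - z.max()); w /= w.sum()
        fbar = w @ preds_unlabeled
        ambiguity = np.mean(w @ (preds_unlabeled - fbar)**2)
        return w @ eps_test - ambiguity

    res = minimize(estimate, np.zeros(K), method="Nelder-Mead")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()
```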
3
ENSEMBLES OF LINEAR STUDENTS
In preparation for our analysis of learning with ensembles of linear students we now
briefly review the case of a single linear student, sometimes referred to as 'linear
perceptron learning'. A linear student implements the input-output mapping
    f(x) = (1/√N) wᵀx,

parameterized in terms of an N-dimensional parameter vector w with real components; the scaling factor 1/√N is introduced here for convenience, and ᵀ denotes
the transpose of a vector. The student parameter vector w should not be confused with the ensemble weights w_k. The most common method for training such
a linear student (or parametric inference models in general) is minimization of the
sum-of-squares training error

    E = Σ_μ (y^μ − f(x^μ))² + λw²,
/J
where J.L = 1, ... ,p numbers the training examples. To prevent the student from
fitting noise in the training data, a weight decay term Aw2 has been added. The size
of the weight decay parameter A determines how strongly large parameter vectors
are penalized; large A corresponds to a stronger regularization of the student.
For a linear student, the global minimum of E can easily be found. However,
in practical applications using non-linear networks, this is generally not true, and
training can be thought of as a stochastic process yielding a different solution each
time. We crudely model this by considering white noise added to gradient descent
updates of the parameter vector w. This yields a limiting distribution of parameter
vectors P(w) ex: exp(-E/2T), where the 'temperature' T measures the amount of
noise in the training process.
We focus our analysis on the 'thermodynamic limit' N → ∞ at constant normalized
number of training examples, α = p/N. In this limit, quantities such as the training
or generalization error become self-averaging, i.e., their averages over all training
sets become identical to their typical values for a particular training set. Assume
now that the training inputs x^μ are chosen randomly and independently from a
Gaussian distribution P(x) ∝ exp(−x²/2), and that training outputs are generated
by a linear target function corrupted by additive noise, i.e., y^μ = w₀ᵀx^μ/√N + η^μ,
where the η^μ are zero mean noise variables with variance σ². Fixing the length of the
parameter vector of the target function to w₀² = N for simplicity, the generalization
error of a linear student with weight decay λ and learning noise T becomes [9]
    ε = (σ² + T) G + λ(σ² − λ) ∂G/∂λ.     (7)
On the r.h.s. of this equation we have dropped the term arising from the noise on
the target function alone, which is simply σ², and we shall follow this convention
throughout. The 'response function' G is [10, 11]

    G = G(α, λ) = (1 − α − λ + √((1 − α − λ)² + 4λ)) / (2λ).     (8)
For zero training noise, T = 0, and for any α, the generalization error (7) is minimized when the weight decay is set to λ = σ²; its value is then σ²G(α, σ²), which
is the minimum achievable generalization error [9].
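A small numerical check of Equations 7 and 8 confirms this optimum; the sketch below (ours, with a finite-difference derivative) locates the minimizing weight decay for α = 1 and σ² = 0.2.

```python
import numpy as np

def G(alpha, lam):
    """Response function of Eq. (8)."""
    return (1 - alpha - lam + np.sqrt((1 - alpha - lam)**2 + 4 * lam)) / (2 * lam)

def eps_single(alpha, lam, sigma2, T=0.0, d=1e-6):
    """Generalization error of Eq. (7); dG/dlambda by a central difference."""
    dG = (G(alpha, lam + d) - G(alpha, lam - d)) / (2 * d)
    return (sigma2 + T) * G(alpha, lam) + lam * (sigma2 - lam) * dG

alpha, sigma2 = 1.0, 0.2
lams = np.linspace(0.01, 1.0, 200)
errs = [eps_single(alpha, l, sigma2) for l in lams]
print(lams[int(np.argmin(errs))])   # close to sigma^2 = 0.2
```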
3.1
ENSEMBLE GENERALIZATION ERROR
We now consider an ensemble of K linear students with weight decays λ_k and
learning noises T_k (k = 1 … K). Each student has an ensemble weight w_k and
is trained on Nα_k training examples, with students k and l sharing Nα_kl training
examples (of course, α_kk = α_k). As above, we consider noisy training data generated
by a linear target function. The resulting ensemble generalization error can be
calculated by diagrammatic [10] or response function [11] methods. We refer the
reader to a forthcoming publication for details and only state the result:
(9)
where
(10)
Here ρ_k is defined as ρ_k = λ_k G(α_k, λ_k). The Kronecker delta in the last term
of (10) arises because the training noises of different students are uncorrelated. The
generalization errors and ambiguities of the individual students are

    ε_k = ε_kk,     a_k = ε_kk − 2 Σ_l w_l ε_kl + Σ_{lm} w_l w_m ε_lm;

the result for the ε_k can be shown to agree with the single student result (7). In
the following sections, we shall explore the consequences of the general result (9) .
We will concentrate on the case where the training set of each student is sampled
randomly from the total available data set of size Nα. For the overlap of the training
sets of students k and l (k ≠ l) one then has α_kl/α = (α_k/α)(α_l/α) and hence

    α_kl = α_k α_l / α     (11)

up to fluctuations which vanish in the thermodynamic limit. For finite ensembles
one can construct training sets for which α_kl < α_k α_l / α. This is an advantage,
because it results in a smaller generalization error, but for simplicity we use (11).
4
LARGE ENSEMBLE LIMIT
We now use our main result (9) to analyse the generalization performance of an ensemble with a large number K of students, in particular when the size of the training
sets for the individual students are chosen optimally. If the ensemble weights Wk
are approximately uniform (Wk ~ 1/ K) the off-diagonal elements of the matrix
(ckl) dominate the generalization error for large K, and the contributions from the
training noises
are suppressed. For the special case where all students are identical and are trained on training sets of identical size, ak = (1 - c)a, the ensemble
generalization error is shown in Figure 1(left). The minimum at a nonzero value
of c, which is the fraction of the total data set held out for testing each student,
can clearly be seen. This confirms our intuition: when the students are trained
on smaller, less overlapping training sets, the increase in error of the individual
students can be more than offset by the corresponding increase in ambiguity.
The optimal training set sizes α_k can be calculated analytically:

    c_k = 1 − α_k/α = (1 − λ_k/σ²) / (1 + G(α, σ²)).     (12)
Figure 1: Generalization error and ambiguity for an infinite ensemble of identical
students. Solid line: ensemble generalization error, ε; dotted line: average generalization error of the individual students, ε̄; dashed line: ensemble ambiguity, ā.
For both plots α = 1 and σ² = 0.2. The left plot corresponds to under-regularized
students with λ = 0.05 < σ². Here the generalization error of the ensemble has
a minimum at a nonzero value of c. This minimum exists whenever λ < σ². The
right plot shows the case of over-regularized students (λ = 0.3 > σ²), where the
generalization error is minimal at c = 0.
The resulting generalization error is ε = σ²G(α, σ²) + O(1/K), which is the globally
minimal generalization error that can be obtained using all available training data,
as explained in Section 3. Thus, a large ensemble with optimally chosen training
set sizes can achieve globally optimal generalization performance. However, we see
from (12) that a valid solution c_k > 0 exists only for λ_k < σ², i.e., if the ensemble
is under-regularized. This is exemplified, again for an ensemble of identical students, in Figure 1 (right), which shows that for an over-regularized ensemble the
generalization error is a monotonic function of c and thus minimal at c = 0.
We conclude this section by discussing how the adaptation of the training set sizes
could be performed in practice, for simplicity confining ourselves to an ensemble of
identical students, where only one parameter c = Ck = 1- ak/a has to be adapted.
If the ensemble is under-regularized one expects a minimum of the generalization
error for some nonzero c as in Figure 1. One could, therefore, start by training
all students on a large fraction of the total data set (corresponding to c ~ 0), and
then gradually and randomly remove training examples from the students' training
sets. Using (6), the generalization error of each student could be estimated by
their performance on the examples on which they were not trained, and one would
stop removing training examples when the estimate stops decreasing. The resulting
estimate of the generalization error will be slightly biased; however, for a large
enough ensemble the risk of a strongly biased estimate from systematically testing
all students on too 'easy' training examples seems small, due to the random selection
of examples.
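A toy sketch of this procedure is given below (entirely our illustration: the data generation, ridge-regression students, step size, and stopping rule are all assumed for the example). The held-out estimate of Eq. (6) is recomputed as the test-set fraction c is gradually increased, and the search stops when the estimate stops decreasing.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha, sigma2, lam, K = 50, 1.0, 0.2, 0.05, 20
p = int(alpha * N)
w0 = rng.standard_normal(N); w0 *= np.sqrt(N) / np.linalg.norm(w0)   # target with w0^2 = N
X = rng.standard_normal((p, N))
y = X @ w0 / np.sqrt(N) + np.sqrt(sigma2) * rng.standard_normal(p)

def train(idx):
    """Ridge-regression linear student on the examples indexed by idx."""
    Xs = X[idx] / np.sqrt(N)
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(N), Xs.T @ y[idx])

def estimate(c):
    """Held-out estimate of the ensemble error (Eq. 6) for test fraction c."""
    eps_test, preds = [], []
    x_new = rng.standard_normal((200, N))          # unlabeled inputs for the ambiguity
    for _ in range(K):
        idx = rng.permutation(p)
        n_train = int((1 - c) * p)
        w = train(idx[:n_train])
        held = idx[n_train:]
        if len(held):
            eps_test.append(np.mean((y[held] - X[held] @ w / np.sqrt(N))**2))
        preds.append(x_new @ w / np.sqrt(N))
    amb = np.mean(np.var(np.array(preds), axis=0))  # uniform-weight ambiguity
    return np.mean(eps_test) - amb if eps_test else np.inf

best_c, best = 0.05, estimate(0.05)
for c in np.arange(0.10, 0.65, 0.05):              # gradually hold out more data
    e = estimate(c)
    if e > best:                                   # stop when the estimate stops decreasing
        break
    best_c, best = c, e
print(best_c, best)
```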
5
REALISTIC ENSEMBLE SIZES
We now discuss some effects that occur in learning with ensembles of 'realistic' sizes.
In an over-regularized ensemble nothing can be gained by making the students more
diverse by training them on smaller, less overlapping training sets. One would also
Figure 2: The generalization error of
an ensemble with 10 identical students as a function of the test set
fraction c. From bottom to top the
curves correspond to training noise
T = 0,0.1,0.2, ... ,1.0. The star on
each curve shows the error of the optimal single perceptron (i. e., with optimal weight decay for the given T)
trained on all examples, which is independent of c. The parameters for
this example are: α = 1, λ = 0.05, σ² = 0.2.
expect this kind of 'diversification' to be unnecessary or even counterproductive
when the training noise is high enough to provide sufficient 'inherent' diversity of
students. In the large ensemble limit, we saw that this effect is suppressed, but
it does indeed occur in finite ensembles. Figure 2 shows the dependence of the
generalization error on c for an ensemble of 10 identical, under-regularized students
with identical training noises T_k = T. For small T, the minimum of ε at nonzero c
persists. For larger T, ε is monotonically increasing with c, implying that further
diversification of students beyond that caused by the learning noise is wasteful. The
plot also shows the performance of the optimal single student (with A chosen to
minimize the generalization error at the given T), demonstrating that the ensemble
can perform significantly better by effectively averaging out learning noise.
For realistic ensemble sizes the presence of learning noise generally reduces the
potential for performance improvement by choosing optimal training set sizes. In
such cases one can still adapt the ensemble weights to optimize performance, again
on the basis of the estimate of the ensemble generalization error (6). An example is
[Figure 3 plots: generalization error versus noise level σ² on a logarithmic axis (0.001 to 1), left and right panels; see caption.]
Figure 3: The generalization error of an ensemble of 10 students with different
weight decays (marked by stars on the σ²-axis) as a function of the noise level σ².
Left: training noise T = 0; right: T = 0.1. The dashed lines are for the ensemble
with uniform weights, and the solid line is for optimized ensemble weights. The
dotted lines are for the optimal single perceptron trained on all data. The
parameters for this example are: α = 1, c = 0.2.
shown in Figure 3 for an ensemble of size K = 10 with the weight decays λ_k equally
spaced on a logarithmic axis between 10⁻³ and 1. For both of the temperatures T
shown, the ensemble with uniform weights performs worse than the optimal single
student. With weight optimization, the generalization performance approaches that
of the optimal single student for T = 0, and is actually better at T = 0.1 over
the whole range of noise levels σ² shown. Even the best single student from the
ensemble can never perform better than the optimal single student, so combining the
student outputs in a weighted ensemble average is superior to simply choosing the
best member of the ensemble by cross-validation, i.e., on the basis of its estimated
generalization error. The reason is that the ensemble average suppresses the learning
noise on the individual students.
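A minimal sketch of this kind of weight optimization follows, under stated assumptions: student predictions are simulated as the target plus individual noise (a stand-in for differently regularized students), and the weights minimizing validation squared error subject to summing to one are found in closed form. The paper instead adapts the weights using the unlabeled-data estimate (6), and does not force the weights to be nonnegative either; everything named below is illustrative.

import numpy as np

rng = np.random.default_rng(1)

K, n_val, n_test = 10, 300, 2000
y_val = rng.standard_normal(n_val)
y_tst = rng.standard_normal(n_test)
spread = np.linspace(0.5, 2.0, K)[:, None]            # students have different noise levels
P_val = y_val[None, :] + 0.5 * spread * rng.standard_normal((K, n_val))
P_tst = y_tst[None, :] + 0.5 * spread * rng.standard_normal((K, n_test))

def optimize_weights(P, y):
    """Weights minimizing ||w @ P - y||^2 subject to sum(w) = 1 (Lagrange closed form)."""
    G = P @ P.T
    b = P @ y
    ones = np.ones(P.shape[0])
    G_inv_b = np.linalg.solve(G, b)
    G_inv_1 = np.linalg.solve(G, ones)
    mu = (1.0 - ones @ G_inv_b) / (ones @ G_inv_1)
    return G_inv_b + mu * G_inv_1

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

w = optimize_weights(P_val, y_val)
uniform = np.full(K, 1.0 / K)
print("best single student :", min(mse(P_tst[k], y_tst) for k in range(K)))
print("uniform ensemble    :", mse(uniform @ P_tst, y_tst))
print("optimized ensemble  :", mse(w @ P_tst, y_tst))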
6 CONCLUSIONS
We have studied ensemble learning in the simple, analytically solvable scenario of
an ensemble of linear students. Our main findings are: In large ensembles, one
should use under-regularized students in order to maximize the benefits of the
variance-reducing effects of ensemble learning. In this way, the globally optimal
generalization error on the basis of all the available data can be reached by optimizing the training set sizes of the individual students. At the same time an estimate
of the generalization error can be obtained. For ensembles of more realistic size, we
found that for students subjected to a large amount of noise in the training process
it is unnecessary to increase the diversity of students by training them on smaller,
less overlapping training sets. In this case, optimizing the ensemble weights can
still yield substantially better generalization performance than an optimally chosen
single student trained on all data with the same amount of training noise. This
improvement is most insensitive to changes in the unknown noise levels σ² if the
weight decays of the individual students cover a wide range. We expect most of these
conclusions to carryover, at least qualitatively, to ensemble learning with nonlinear
models, and this correlates well with experimental results presented in [7].
References
[1] C. Granger, Journal of Forecasting 8, 231 (1989).
[2] D. Wolpert, Neural Networks 5, 241 (1992).
[3] L. Breimann, Tutorial at NIPS 7 and personal communication.
[4] L. Hansen and P. Salamon, IEEE Trans. Pattern Anal. and Mach. Intell. 12, 993 (1990).
[5] M. P. Perrone and L. N. Cooper, in Neural Networks for Speech and Image Processing, ed. R. J. Mammone (Chapman-Hall, 1993).
[6] S. Hashem, Optimal Linear Combinations of Neural Networks. Tech. Rep. PNL-SA-25166, submitted to Neural Networks (1995).
[7] A. Krogh and J. Vedelsby, in NIPS 7, ed. G. Tesauro et al., p. 231 (MIT Press, 1995).
[8] R. Meir, in NIPS 7, ed. G. Tesauro et al., p. 295 (MIT Press, 1995).
[9] A. Krogh and J. A. Hertz, J. Phys. A 25, 1135 (1992).
[10] J. A. Hertz, A. Krogh, and G. I. Thorbergsson, J. Phys. A 22, 2133 (1989).
[11] P. Sollich, J. Phys. A 27, 7771 (1994).
52 | 1,045 | SEEMORE: A View-Based Approach to
3-D Object Recognition Using Multiple
Visual Cues
Bartlett W. Mel
Department of Biomedical Engineering
University of Southern California
Los Angeles, CA 90089
mel@quake.usc.edu
Abstract
A neurally-inspired visual object recognition system is described
called SEEMORE, whose goal is to identify common objects from
a large known set, independent of 3-D viewing angle, distance,
and non-rigid distortion. SEEMORE's database consists of 100 objects that are rigid (shovel), non-rigid (telephone cord), articulated (book), statistical (shrubbery), and complex (photographs of
scenes). Recognition results were obtained using a set of 102 color
and shape feature channels within a simple feedforward network architecture. In response to a test set of 600 novel test views (6 of
each object) presented individually in color video images, SEEMORE
identified the object correctly 97% of the time (chance is 1%) using
a nearest neighbor classifier. Similar levels of performance were
obtained for the subset of 15 non-rigid objects. Generalization behavior reveals emergence of striking natural category structure not
explicit in the input feature dimensions.
1 INTRODUCTION
In natural contexts, visual object recognition in humans is remarkably fast, reliable,
and viewpoint invariant. The present approach to object recognition is "view-based"
(e.g. see [Edelman and Bulthoff, 1992]), and has been guided by three main dogmas.
First, the "natural" object recognition problem faced by visual animals involves a
large number of objects and scenes, extensive visual experience, and no artificial
distinctions among object classes, such as rigid, non-rigid, articulated, etc.
Second, when an object is recognized in the brain, the "heavy lifting" is done by
the first wave of action potentials coursing from the retina to the inferotemporal
cortex (IT) over a period of 100 ms [Oram and Perrett, 1992]. The computations
carried out during this time can be modeled as a shallow but very wide feedforward
network of simple image filtering operations. Shallow means few processing levels,
wide means a sparse, high-dimensional representation combining cues from multiple
visual submodalities, such as color, texture, and contour [Tanaka et al., 1991].
Third, more complicated processing mechanisms, such as those involving focal attention, segmentation, binding, normalization, mental rotation, dynamic links, parts
recognition, etc., may exist and may enhance recognition performance but are not
necessary to explain rapid, robust recognition with objects in normal visual situations.
In this vein, the main goal of this project has been to explore the limits of performance of a shallow (but very wide) feedforward network of simple filtering operations
for viewpoint-invariant 3-D object recognition, where the filter "channels" themselves have been loosely modeled after the shape- and color-sensitive visual response
properties seen in the higher levels of the primate visual system [Tanaka et al., 1991].
Architecturally similar approaches to vision have been most often applied in the domain of optical character recognition [Fukushima et al., 1983, Le Cun et al., 1990].
SEEMORE'S architecture is also similar in spirit to the color histogramming approach
of [Swain and Ballard, 1991], but includes spatially-structured features that provide
also for shape-based generalization.
Figure 1: The database includes 100 objects of many different types, including rigid
(soup can), non-rigid (necktie), statistical (bunch of grapes), and photographs of
complex indoor and outdoor scenes.
2 SEEMORE'S VISUAL WORLD
SEEMORE's database contains 100 common 3-D objects and photographs of scenes,
each represented by a set of pre-segmented color video images (fig. 1). The training
set consisted of 12-36 views of each object as follows. For rigid objects, 12 training
views were chosen at roughly 60° intervals in depth around the viewing sphere, and
each view was then scaled to yield a total of three images at 67%, 100%, and 150%.
Image plane orientation was allowed to vary arbitrarily. For non-rigid objects, 12
training views were chosen in random poses.
During a recognition trial, SEEMORE was required to identify novel test images of
the database objects. For rigid objects, test images were drawn from the viewpoint
interstices of the training set, excluding highly foreshortened views (e.g. bottom of
can). Each test view could therefore be presumed to be correctly recognizable, but
never closer than roughly 30° in orientation in depth or 22% in scale to the nearest
training view of the object, while position and orientation in the image plane could
vary arbitrarily. For non-rigid objects, test images consisted of novel random poses.
Each test view depicted the isolated object on a smooth background.
2.1 FEATURE CHANNELS
SEEMORE's internal representation of a view of an object is encoded by a set
of feature channels. The ith channel is based on an elemental nonlinear filter
f_i(x, y, θ₁, θ₂, …), parameterized by position in the visual field and zero or more
internal degrees of freedom. Each channel is by design relatively sensitive to changes
in the image that are strongly related to object identity, such as the object's shape,
color, or texture, while remaining relatively insensitive to changes in the image that
are unrelated to object identity, such as are caused by changes in the object's pose.
In practice, this invariance is achieved in a straightforward way for each channel by
subsampling and summing the output of the elemental channel filter over the entire
visual field and one or more of its internal degrees of freedom, giving a channel
output F_i = Σ_{x,y,θ₁,…} f_i(·). For example, a particular shape-sensitive channel might
"look" for the image-plane projections of right-angle corners, over the entire visual
field, 360° of rotation in the image plane, 30° of rotation in depth, one octave in
scale, and tolerating partial occlusion and/or slight misorientation of the elemental
contours that define the right angle. In general, then, Fi may be viewed as a "cell"
with a large receptive field whose output is an estimate of the number of occurrences
of distal feature i in the workspace over a large range of viewing parameters.
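A rough sketch of this pooling idea is given below, assuming a toy bank of oriented-line kernels as the elemental filters; the kernels, threshold, and counting rule are invented here for illustration and are much cruder than SEEMORE's actual channels.

import numpy as np

def oriented_line_kernels(size=5, n_orient=4):
    """Tiny bank of oriented line detectors, a stand-in for elemental filters f_i."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        dist = np.abs(-np.sin(theta) * xs + np.cos(theta) * ys)
        kern = np.where(dist < 0.7, 1.0, 0.0)
        kernels.append(kern - kern.mean())      # zero-mean so blank regions give no response
    return kernels

def channel_output(image, kernels, threshold=2.0):
    """Pooled channel value: sum over positions and orientations of the thresholded
    elemental response, i.e., a crude count of feature occurrences anywhere in view."""
    H, W = image.shape
    k = kernels[0].shape[0]
    F = 0.0
    for kern in kernels:                        # pool over the internal parameter (orientation)
        for i in range(H - k + 1):              # pool over position (x, y)
            for j in range(W - k + 1):
                resp = float(np.sum(image[i:i + k, j:j + k] * kern))
                F += resp > threshold
    return F

img = np.zeros((16, 16))
img[3, 2:14] = 1.0                              # horizontal stroke
img[3:13, 12] = 1.0                             # vertical stroke
print("pooled channel output:", channel_output(img, oriented_line_kernels()))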
SEEMORE'S architecture consists of 102 feature channels, whose outputs form an
input vector to a nearest-neighbor classifier. Following the design of the individual
channels, the channel vector F = {F_1, …, F_102} is (1) insensitive to changes in image
plane position and orientation of the object, (2) modestly sensitive to changes in
object scale, orientation in depth, or non-rigid deformation, but (3) highly sensitive
to object "quality" as pertains to object identity. Within this representation, total
memory storage for all views of an object ranged from 1,224 to 3,672 integers.
As shown in fig. 2, SEEMORE's channels fall into five groups: (1) 23 color channels, each of which responds to a small blob of color parameterized by "best" hue
and saturation, (2) 11 coarse-scale intensity corner channels parameterized by open
angle, (3) 12 "blob" features, parameterized by the shape (round and elongated) and
size (small, medium, and large) of bright and dark intensity blobs, (4) 24 contour
shape features, including straight angles, curve segments of varying radius, and parallel and oblique line combinations, and (5) 16 shape/texture-related features based
on the outputs of Gabor functions at 5 scales and 8 orientations. The implementations of the channel groups were crude, in the interests of achieving a working,
multiple-cue system with minimal development time. Images were grabbed using an
off-the-shelf Sony S-Video Camcorder and SunVideo digitizing board.
[Figure 2 graphic: five channel-group panels labeled Colors, Blobs, Angles, Contours, and Gabor-Based Features (oriented energy and energy variance at each scale); see caption.]
Figure 2: SEEMORE's 102 channels fall into 5 groups, sensitive to (1) colors, (2) intensity corners, (3) circular and elongated intensity blobs, (4) contour shape features,
and (5) 16 oriented-energy and relative-orientation features based on the outputs of
Gabor functions at several scales and orientations.
3 RECOGNITION
SEEMORE's recognition performance was assessed quantitatively as follows. A test
set consisting of 600 novel views (100 objects x 6 views) was culled from the database, and presented to SEEMORE for identification. It was noted empirically that
a compressive transform on the feature dimensions (histogram values) led to improved classification performance; prior to all learning and recognition operations,
Figure 3: Generalization using only shape-related channels. In each row, a novel
test view is shown at far left. The sequence of best matching training views (one
per object) is shown to right, in order of decreasing similarity.
therefore, each feature value was replaced by its natural logarithm (0 values were
first replaced with a small positive constant to prevent the logarithm from blowing
up). For each test view, the city-block distance was computed to every training view
in the database and the nearest neighbor was chosen as the best match. The log
transform of the feature dimensions thus tied this distance to the ratios of individual
feature values in two images rather than their differences.
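The matching step described above reduces to a few lines. The sketch below assumes a small constant of 1e-3 for the zero replacement (the paper does not state the value it used) and made-up channel vectors; only the log transform, city-block distance, and nearest-neighbor rule come from the text.

import numpy as np

EPS = 1e-3   # assumed small positive constant substituted for zero counts before the log

def preprocess(F):
    """Compressive transform: replace zeros, then take the natural log of each value."""
    F = np.asarray(F, dtype=float)
    return np.log(np.where(F > 0, F, EPS))

def classify(test_vec, train_vecs, train_labels):
    """Nearest neighbor under the city-block (L1) distance in log-feature space."""
    q = preprocess(test_vec)
    dists = np.abs(preprocess(train_vecs) - q).sum(axis=1)
    order = np.argsort(dists)
    return train_labels[order[0]], order        # best match and the full similarity ranking

# Toy example with 102-dimensional channel vectors (values are invented).
rng = np.random.default_rng(0)
train = rng.integers(0, 50, size=(30, 102)).astype(float)   # 30 stored training views
labels = np.repeat(np.arange(10), 3)                         # 10 objects x 3 views each
test = train[7] * rng.uniform(0.8, 1.25, size=102)           # perturbed view of object 2
pred, ranking = classify(test, train, labels)
print("predicted object:", pred)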
4 RESULTS
Recognition time on a Sparc-20 was 1-2 minutes per view; the bulk of the time was
devoted to shape processing, with under 2 seconds required for matching.
Recognition results are reported as the proportion of test views that were correctly
classified. Performance using all 102 channels for the 600 novel object views in the
intact test set was 96.7%; the chance rate of correct classification was 1%. Across
recognition conditions, second-best matches usually accounted for approximately
half the errors. Results were broken down in terms of the separate contributions
to recognition performance of color-related vs. shape-related feature channels. Performance using only the 23 color-related channels was 87.3%, and using only the
79 shape-related channels was 79.7%. Remarkably, very similar performance figures
were obtained for the subset of 90 test views of the non-rigid objects, which included
several scarves, a bike chain, necklace, belt, sock, necktie, maple-leaf cluster, bunch
of grapes, knit bag, and telephone cord. Thus, a novel random configuration of a
telephone cord was as easily recognized as a novel view of a shovel.
5 GENERALIZATION BEHAVIOR
Numerical indices of recognition performance are useful, but do not explicitly convey
the similarity structure of the underlying feature space. A more qualitative but
extremely informative representation of system performance lies in the sequence of
images in order of increasing distance from a test view. Records of this kind are
shown in fig. 3 for trials in which only shape-related channels were used. In each, a
test view is shown at the far left, and the ordered set of nearest neighbors is shown
to the right. When a test view's nearest neighbor (second image from left) was not
the correct match, the trial was classified as an error.
As shown in row (1), a view of a book is judged most similar to a series of other books
(or the bottom of a rectangular cardboard box)---each a view of a rectangular object
with high-frequency surface markings. A similar sequence can be seen in subsequent
rows for (2) a series of cans, each a right cylinder with detailed surface markings, (3)
a series of smooth, not-quite-round objects, (4) a series of photographs of complex
scenes, and (5) a series of dinosaurs (followed by a teddy bear). In certain cases,
SEEMORE'S shape-related similarity metric was more difficult to visually interpret
or verbalize (last two rows), or was different from that of a human observer.
6 DISCUSSION
The ecology of natural object vision gives rise to an apparent contradiction: (i)
generalization in shape-space must in some cases permit an object whose global
shape has been grossly perturbed to be matched to itself, such as the various tangled
forms of a telephone cord, but (ii) quasi-rigid basic-level shape categories (e.g. chair,
shoe, tree) must be preserved as well, and distinguished from each other.
A partial solution to this conundrum lies in the observation that locally-computed
shape statistics are in large part preserved under the global shape deformations that
non-rigid common objects (e.g. scarf, bike-chain) typically undergo. A feature-space
representation with an emphasis on locally-derived shape channels will therefore
exhibit a significant degree of invariance to global nonrigid shape deformations. The
definition of shape similarity embodied in the present approach is that two objects
are similar if they contain similar profiles (histograms) of their shape measures,
which emphasize locality. One way of understanding the emergence of global shape
categories, then, such as "book", "can", "dinosaur", etc., is to view each as a set of
instances of a single canonical object whose local shape statistics remain quasi-stable
as it is warped into various global forms. In many cases, particularly within rigid
object categories, exemplars may share longer-range shape statistics as well.
It is useful to consider one further aspect of SEEMORE'S shape representation, pertaining to an apparent mismatch between the simplicity of the shape-related feature channels and the complexity of the shape categories that can emerge from
them. Specifically, the order of binding of spatial relations within SEEMORE's shape
channels is relatively low, i.e. consisting of single simple open or closed curves,
or conjunctions of two oriented contours or Gabor patches. The fact that shape
categories, such as "photographs of rooms", or "smooth lumpy objects", cluster
together in a feature space of such low binding order would therefore at first seem
surprising. This phenomenon relates closely to the notion of "wickelfeatures" (see
[Rumelhart and McClelland, 1986], ch. 18), in which features (relating to phonemes)
that bind spatial information only locally are nonetheless used to represent global
patterns (words) with little or no residual ambiguity.
The pre-segmentation of objects is a simplifying assumption that is clearly invalid in
the real world. The advantage of the assumption from a methodological perspective
is that the object similarity structure induced by the feature dimensions can be
studied independently from the problem of segmenting or indexing objects imbedded
in complex scenes. In continuing work, we are pursuing a leap to sparse very-highdimensional space (e.g. 10,000 dimensions), whose advantages for classification in
the presence of noise (or clutter) have been discussed elsewhere [Kanerva, 1988,
Califano and Mohan, 1994].
Acknowledgements
Thanks to József Fiser for useful discussions and for development of the Gabor-based
channel set, to Dan Lipofsky and Scott Dewinter for helping in the construction of
the image database, and to Christof Koch for providing support at Caltech where
this work was initiated. This work was funded by the Office of Naval Research, and
the McDonnell-Pew Foundation.
References
[Califano and Mohan, 1994] Califano, A. and Mohan, R. (1994). Multidimensional
indexing for recognizing visual shapes. IEEE Trans. on PAMI, 16:373-392.
[Edelman and Bulthoff, 1992] Edelman, S. and Bulthoff, H. (1992). Orientation dependence in the recognition of familiar and novel views of three-dimensional objects. Vision Res., 32:2385-2400.
[Fukushima et al., 1983] Fukushima, K., Miyake, S., and Ito, T. (1983). Neocognitron: A neural network model for a mechanism of visual pattern recognition.
IEEE Trans. Sys. Man & Cybernetics, SMC-13:826-834.
[Kanerva, 1988] Kanerva, P. (1988). Sparse distributed memory. MIT Press, Cambridge, MA.
[Le Cun et al., 1990] Le Cun, Y., Matan, 0., Boser, B., Denker, J., Henderson, D.,
Howard, R., Hubbard, W., Jackel, L., and Baird, H. (1990). Handwritten zip
code recognition with multilayer networks. In Proc. of the 10th Int. Conf. on
Patt. Rec. IEEE Computer Science Press.
[Oram and Perrett, 1992] Oram, M. and Perrett, D. (1992). Time course of neural
responses discriminating different views of the face and head. J. Neurophysiol.,
68(1) :70-84.
[Rumelhart and McClelland, 1986] Rumelhart, D. and McClelland, J. (1986). Parallel distributed processing. MIT Press, Cambridge, Massachusetts.
[Swain and Ballard, 1991] Swain, M. and Ballard, D. (1991). Color indexing. Int.
J. Computer Vision, 7:11-32.
[Tanaka et al., 1991] Tanaka, K., Saito, H., Fukada, Y., and Moriya, M. (1991).
Coding visual images of objects in the inferotemporal cortex of the macaque
monkey. J. Neurophysiol., 66:170-189.
PART VIII
APPLICATIONS
53 | 1,046 | Analog VLSI Processor Implementing the
Continuous Wavelet Transform
R. Timothy Edwards and Gert Cauwenberghs
Department of Electrical and Computer Engineering
Johns Hopkins University
3400 North Charles Street
Baltimore, MD 21218-2686
{tim,gert}@bach.ece.jhu.edu
Abstract
We present an integrated analog processor for real-time wavelet decomposition and reconstruction of continuous temporal signals covering the
audio frequency range. The processor performs complex harmonic modulation and Gaussian lowpass filtering in 16 parallel channels, each clocked
at a different rate, producing a multiresolution mapping on a logarithmic
frequency scale. Our implementation uses mixed-mode analog and digital circuits, oversampling techniques, and switched-capacitor filters to
achieve a wide linear dynamic range while maintaining compact circuit
size and low power consumption. We include experimental results on the
processor and characterize its components separately from measurements
on a single-channel test chip.
1 Introduction
An effective mathematical tool for multiresolution analysis [Kais94], the wavelet transform
has found widespread use in various signal processing applications involving characteristic
patterns that cover multiple scales of resolution, such as representations of speech and vision.
Wavelets offer suitable representations for temporal data that contain pertinent features both
in the time and frequency domains; consequently, wavelet decompositions appear to be
effective in representing wide-bandwidth signals interfacing with neural systems [Szu92].
The present system performs a continuous wavelet transform on temporal one-dimensional
analog signals such as speech, and is in that regard somewhat related to silicon models
of the cochlea implementing cochlear transforms [Lyon88], [Liu92] , [Watt92], [Lin94].
The multiresolution processor we implemented expands on the architecture developed
in [Edwa93], which differs from the other analog auditory processors in the way signal
components in each frequency band are encoded. The signal is modulated with the center
[Figure 1 diagrams: (a) multiplier-based demodulator, x(t) into a multiplier driven by a sine reference s(t), followed by a lowpass filter h(t) giving y(t); (b) multiplexer-based demodulator, x(t) into a prefilter g(t), then a multiplexer, then a lowpass filter h(t) giving y(t); see caption.]
Figure 1: Demodulation systems, (a) using multiplication, and (b) multiplexing.
frequency of each channel and subsequently lowpass filtered, translating signal components
taken around the center frequency towards zero frequency. In particular, we consider wavelet
decomposition and reconstruction of analog continuous-time temporal data with a complex
Gaussian kernel according to the following formulae:
y_k(t) = ∫_{−∞}^{t} x(ξ) exp(jω_k ξ − Q(ω_k(t − ξ))²) dξ        (decomposition)
                                                                  (1)
x'(t) = C Σ_k y_k(t) exp(−jω_k t)                                 (reconstruction)
where the center frequencies ω_k are spaced on a logarithmic scale. The constant Q sets the
relative width of the frequency bins in the decomposition, and can be adjusted (together
with C) to alter the shape of the wavelet kernel. Successive decomposition and reconstruction
transforms yield an approximate identity operation; it cannot be exact as no continuous
orthonormal basis function exists for the CWT [Kais94].
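A brute-force discrete-time evaluation of Eq. (1) is sketched below for a handful of channels. The sampling rate, Q, center frequencies, and truncation of the Gaussian kernel are all illustrative choices, not the chip's parameters, and the real part is taken at reconstruction because the test input is real (the chip keeps separate sine and cosine channels instead).

import numpy as np

def cwt_channels(x, fs, f_centers, Q=0.5):
    """Direct evaluation of the decomposition in Eq. (1):
    y_k(t) = sum_xi x(xi) * exp(j*w_k*xi - Q*(w_k*(t - xi))^2) * dt."""
    t = np.arange(len(x)) / fs
    dt = 1.0 / fs
    Y = []
    for fc in f_centers:
        wk = 2 * np.pi * fc
        modulated = x * np.exp(1j * wk * t)              # harmonic modulation
        half = int(np.ceil(3.0 / (np.sqrt(2 * Q) * wk) * fs)) + 1
        tau = np.arange(-half, half + 1) * dt
        g = np.exp(-Q * (wk * tau) ** 2)                 # Gaussian kernel, truncated at ~3 sigma
        Y.append(np.convolve(modulated, g, mode="same") * dt)
    return np.array(Y)

def reconstruct(Y, fs, f_centers, C=1.0):
    """Reconstruction x'(t) = C * sum_k y_k(t) * exp(-j*w_k*t), Eq. (1)."""
    t = np.arange(Y.shape[1]) / fs
    phases = np.exp(-1j * 2 * np.pi * np.array(f_centers)[:, None] * t[None, :])
    return C * np.real(np.sum(Y * phases, axis=0))

fs = 16000.0
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 800 * t)                          # test tone
f_centers = 250.0 * 2 ** np.arange(6)                    # 6 logarithmically spaced channels
Y = cwt_channels(x, fs, f_centers)
x_rec = reconstruct(Y, fs, f_centers)
print("channel energies:", np.round(np.mean(np.abs(Y) ** 2, axis=1), 4))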
2 Architecture
The above operations are implemented in [Edwa93] using two demodulator systems per
channel, one for the real component of (1), and another for the imaginary component, 90°
out of phase with the first. Each takes the form of a sinusoidal modulator oscillating at
the channel center frequency, followed by a Gaussian-shaped lowpass filter, as shown in
Figure 1 (a). This arrangement requires a precise analog sine wave generator and an accurate
linear analog multiplier. In the present implementation, we circumvent both requirements
by using an oversampled binary representation of the modulation reference signal.
2.1 Multiplexing vs. Multiplying
Multiplication of an analog signal x(t) with a binary (±1) sequence is naturally implemented
with high precision using a mUltiplexer, which alternates between presenting either the
input or its inverse -x(t) to the output. This principle is applied to simplify harmonic
modulation, and is illustrated in Figure 1 (b). The multiplier has been replaced by an analog
inverter followed by a multiplexer, where the multiplexer is controlled by an oversampled
binary periodic sequence representing the sine wave reference. The oversampled binary
sequence is chosen to approximate the analog sine wave as closely as possible, disregarding
components at high frequency which are removed by the subsequent lowpass filter. The
assumption made is that no high frequency components are present in the input signal
[Figure 2 diagram: signal input and reconstruction input (sample-and-hold), premultiplication lowpass filter, inverter and multiplexer (wavelet multiplier), postmultiplication lowpass filter, Gaussian filter, reconstruction multiplier, and output multiplexing, with clocks CLK1-CLK5 and test points A-E marked; see caption.]
Figure 2: Block diagram of a single channel in the wavelet processor, showing test points
A through E.
under modulation, which otherwise would convolve with corresponding high frequency
components in the binary sequence to produce low frequency distortion components at the
output. To that purpose, an additional lowpass filter is added in front of the multiplexer.
Residual low-frequency distortion at the output is minimized by maximizing roll-off of the
filters, placing proper constraints on their cutoff frequencies, and optimally choosing the bit
sequence in the oversampled reference [Edwa95]. Clearly, the signal accuracy that can be
achieved improves as the length N of the sequence is extended. Constraints on the length
N are given by the implied overhead in required signal bandwidth, power dissipation, and
complexity of implementation.
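The multiplex-and-filter chain can be imitated numerically, as in the minimal sketch below. The discrete-time single-pole sections, clock rate, and stage counts are assumptions standing in for the switched-capacitor filters, and the sign-of-sine reference used here is a naive substitute for the chip's optimized 64-bit base sequence, so its residual harmonics are larger than on the chip.

import numpy as np

def lowpass_cascade(x, a, stages=1):
    """Cascade of single-pole sections y[n] = y[n-1] + (x[n] - y[n-1]) / a
    (a plays the role of the capacitor ratio)."""
    y = np.asarray(x, dtype=float)
    for _ in range(stages):
        out = np.empty_like(y)
        acc = 0.0
        for n, v in enumerate(y):
            acc += (v - acc) / a
            out[n] = acc
        y = out
    return y

fs = 1_000_000.0                                  # ~1 MHz multiplier clock
N = 256                                           # oversampled sequence length -> ~3.9 kHz modulator
n = np.arange(30_000)
t = n / fs
ref = np.where(np.sin(2 * np.pi * n / N) >= 0.0, 1.0, -1.0)   # naive +/-1 reference
x = np.sin(2 * np.pi * 800.0 * t)                 # low-frequency input

pre = lowpass_cascade(x, a=15.0, stages=3)        # premultiplication lowpass (PML)
mux = np.where(ref > 0, pre, -pre)                # multiplexer: signal or its inverse
demod = lowpass_cascade(mux, a=15.0, stages=3)    # postmultiplication lowpass

ideal = lowpass_cascade(pre * np.sin(2 * np.pi * (fs / N) * t), a=15.0, stages=3)
print("rms deviation from a true multiplier:", np.sqrt(np.mean((demod - ideal) ** 2)))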
2.2 Wavelet Gaussian Function
The reason for choosing a Gaussian kernel in (1) is to ensure optimal support in both
time and frequency [Gros89]. A key requirement in implementing the Gaussian filter
is linear phase, to avoid spectral distortion due to non-uniform group delays. A worry-free architecture would be an analog FIR filter; however, the number of taps required to
accommodate the narrow bandwidth required would be prohibitively large for our purpose.
Instead, we approximate a Gaussian filter by cascading several first-order lowpass filters .
From probabilistic arguments, the obtained lowpass filter approximates a Gaussian filter
increasingly well as the number of stages increases [Edwa93] .
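This cascade-approaches-Gaussian argument can be checked numerically. The sketch below compares the magnitude response of n identical first-order sections, rescaled to a common -3 dB point, with a Gaussian of the same -3 dB point; the normalization choice is an assumption made here for the comparison, not the paper's.

import numpy as np

def cascade_magnitude(f, f_c, n_stages):
    """|H(f)| of n identical first-order lowpass sections with cutoff f_c."""
    single = 1.0 / np.sqrt(1.0 + (f / f_c) ** 2)
    return single ** n_stages

f = np.linspace(0.0, 5.0, 101)                    # frequency in units of the cascade's -3 dB point
gauss = 2.0 ** (-(f ** 2) / 2.0)                  # Gaussian with amplitude 1/sqrt(2) at f = 1
for n_stages in (1, 2, 4, 8):
    f3db = np.sqrt(2.0 ** (1.0 / n_stages) - 1.0) # -3 dB frequency of the cascade (unit cutoff)
    H = cascade_magnitude(f * f3db, 1.0, n_stages)
    print(f"{n_stages} stages: max |H - Gaussian| = {np.max(np.abs(H - gauss)):.3f}")

The maximum deviation shrinks as the number of stages grows, which is the behaviour used above to justify the 8-section approximation.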
3 Implementation
Two sections of a wavelet processor, each containing 8 parallel channels, were integrated
onto a single 4 mm x 6 mm die in 2 µm CMOS technology. Both sections can be configured
to perform wavelet decomposition as well as reconstruction. The block diagram for one
of the channels is shown in Figure 2. In addition, a separate test chip was designed which
performs one channel of the wavelet function . Test points were made available at various
points for either input or output, as indicated in boldface capitals, A through E, in Figure 2.
Each channel performs complex harmonic modulation and Gaussian lowpass filtering, as
defined above. At the front end of the chip is a sample-and-hold section to sample time-multiplexed wavelet signals for reconstruction. In cases of both signal decomposition
and reconstruction, each channel removes the input DC component, filters the
result through the premultiplication lowpass (PML) filter, inverts the result, and passes
both non-inverted and inverted signals onto the multiplexer. The multiplexer output is
passed through a postmultiplication lowpass filter (PML, same architecture) to remove high
frequency components of the oversampled sequence, and then passed through the Gaussianshaped lowpass filter. The cutoff frequencies of all filters are controlled by the clock rates
(CLK1 to CLK4 in Figure 2). The remainder of the system is for reconstruction and for
time-multiplexing the output.
3.1 Multiplier
The multiplier is implemented by use of the above multiplexing scheme, driven by an
oversampled binary sequence representing a sine wave. The sequence we used was 256
samples in length, created from a 64-sample base sequence by reversal and inversion. The
sequence length of 256 generates a modulator wave of 4 kHz (useful for speech applications)
from a clock of about 1 MHz.
We derived a sequence which, after postfiltering through a 3rd-order lowpass filter of the
form of the PML prefilter (see below), produces a sine wave in which all harmonics are
more than 60 dB down from the primary [Edwa95]. The optimized 64-bit base sequence
consists of 11 zeros and 53 ones, allowing a very simple implementation in which an address
decoder decodes the "zero" bits. The binary sequence is shown in Figure 4. The magnitude
of the prime harmonic of the sequence is approximately 1.02, within 2% of unity.
The process of reversing and inverting the sequence is simplified by using a gray code
counter to produce the addresses for the sequence, with only a small amount of combinatorial
logic needed to achieve the desired result [Edwa95]. It is also straightforward to generate
the addresses for the cosine channel, which is 90° out of phase with the original.
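The reversal-and-inversion construction can be sketched as below. The 64-bit base used here is a random stand-in (the paper's optimized sequence is not reproduced in the text), the quarter-wave ordering is an assumption consistent with the description, and the gray-code counter is shown only to illustrate the single-bit-change property that simplifies the address logic.

import numpy as np

# Placeholder 64-bit base sequence with 11 zeros and 53 ones (not the chip's optimized one).
rng = np.random.default_rng(3)
base = np.ones(64, dtype=int)
base[rng.choice(64, size=11, replace=False)] = 0

def full_sequence(base):
    """Assumed quarter-wave construction: base, its reversal, and the inversions of both,
    mirroring sine symmetry, giving a 256-sample modulator sequence."""
    b = 2 * base - 1                     # map {0,1} -> {-1,+1}
    return np.concatenate([b, b[::-1], -b, -b[::-1]])

seq = full_sequence(base)
print(len(seq), "samples; mean =", seq.mean())

def gray(n):
    """Gray-code counter output; successive addresses differ in exactly one bit."""
    return n ^ (n >> 1)

print("first gray-code addresses:", [gray(n) & 0x3F for n in range(8)])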
3.2 Linear Filtering
All filters used are implemented as linear cascades of first-order, single-pole filter sections.
The number of first-order sections for the PML filters is 3. The number of sections for the
"Gaussian" filter is 8, producing a suitable approximation to a Gaussian filter response for
all frequencies of interest (Figure 5).
Figure 3 shows one first-order lowpass section of the filters as implemented. This standard
Figure 3: Single discrete-time lowpass filter section.
switched-capacitor circuit implements a transfer function containing a single pole, approximately located in the Laplace domain at s = f_s/a for large values of the parameter a, with
f_s being the sampling frequency. The value for this parameter a is fixed at the design stage
as the ratio of two capacitors in Figure 3, and was set to be 15 for the PML filters and
12 for the Gaussian filters.
4 Measured Results
4.1 Sine wave modulator
We tested the accuracy of the sine wave modulation signal by applying two constant voltages
at test points A and B, such that the sine wave modulation signal is effectively multiplied
[Figure 4 plot, "Sine sequence and filtered sine wave output": binary sine sequence, simulated filtered output, and measured output versus time (0 to 250 µs); see caption.]
Figure 4: Filtered sine wave output.
by a constant. The output of the multiplier is filtered and the output taken at test point D,
before the Gaussian filter. Figure 4 shows the (idealized) multiplexer output at test point
C, which accurately creates the desired binary sequence. Figure 4 also shows the measured
sine wave after filtering with the PML filter and the expected output from the simulation
model, using a deviating value of 8.0 for the capacitor ratio a, as justified below. FFT
analysis of Figure 4 has shown that the resulting sine wave has all harmonics below about
-49 dB . This is in good agreement with the simulation model, provided a correction is made
for the value of the capacitor ratio a to account for fringe and (large) parasitic capacitances.
The best fit for the measured data from the postmultiplication filter is a = 8.0, compared to
the desired value of a = 15.0. The transform of the simulated output shown in the figure
takes into account the smaller value of a. Because the postmultiplication filter is followed
by the Gaussian filter, the bandwidth of the output can be directly controlled by proper
clocking of the Gaussian filter, so the distortion in the sine wave is ultimately much smaller
than that measured at the output of the postmultiplication filter.
4.2 Gaussian filter
The Gaussian filter was tested by applying a signal at test point D and measuring the
response at test point E. Figure 5 shows the response of the Gaussian filter as compared to
expected responses. There are two sets of curves, one for a filter clocked at 64 kHz, and the
other clocked at 128 kHz; these curves are normalized by plotting frequency in units of the
clock frequency f_s. The solid line indicates the best match for an 8th-order lowpass filter, using
the capacitor ratio, a, as a fitting parameter. The best-fit value of a is approximately 6.8.
This is again much lower than the capacitor area ratio of 12 on the chip. The dotted line is
the response of the ideal Gaussian characteristic exp(−ω²/(2aω_s²)) approximated by the
cascade of first-order sections with capacitor ratio a.
Figure 5 (b) shows the measured phase response of the Gaussian filter for the 128 kHz
clock. The phase response is approximately linear throughout the passband region.
[Figure 5 plots: (a) normalized magnitude of the Gaussian filter response, chip data at 64 kHz and 128 kHz clocks compared with the ideal 8th-order and ideal Gaussian responses; (b) measured phase compared with the theoretical 8-stage phase; frequency axis in units of f_s; see caption.]
Figure 5: Gaussian filter transfer functions: theoretical and actual. (a) Relative amplitude;
(b) Phase.
4.3 Wavelet decomposition
Figure 6 shows the test chip performing a wavelet transform on a simple sinusoidal input,
illustrating the effects of (oversampled) sinusoidal modulation followed by lowpass filtering
through the Gaussian filter. The chip multiplier system is clocked at 500 kHz. The input
wave is approximately 3.1 kHz, close to the center frequency of the modulator signal,
which is the clock rate divided by 128, or about 3.9 kHz (a typical value for the highestfrequency channel in an auditory application). The top trace in the figure shows the filtered
and inverted input, taken from test point B. The middle trace shows the output of the
multiplexer (test point C), wherein the output is multiplexed between the signal and its
inverse. The bottom trace is taken from the system output (labeled Cosine Out in Figure 2)
and shows the demodulated signal of frequency 800 Hz (= 3.9 kHz - 3.1 kHz). Not shown
is the cosine output, which is 90° out of phase with the one shown. This demonstrates
the proper operation of complex demodulation in a single channel configured for wavelet
decomposition. In addition, we have tested the full 16-channel chip decomposition, and all
individual parts function properly. The total power consumption of the 16-channel wavelet
chip was measured to be less than 50mW, of which a large fraction can be attributed to
external interfacing and buffering circuitry at the periphery of the chip.
5 Conclusions
We have demonstrated the full functionality of an analog chip performing the continuous
wavelet transform (decomposition). The chip is based on mixed analog/digital signal
processing principles, and uses a demodulation scheme which is accurately implemented
using oversampling methods. Advantages of the architecture used in the chip are an
increased dynamic range and a precise control over lateral synchronization of wavelet
components. An additional advantage inherent to the modulation scheme used is the
potential to tune the channel bandwidths over a wide range, down to unusually narrow
bands, since the cutoff frequency of the Gaussian filter and the center frequency of the
modulator are independently adjustable and precisely controllable parameters.
Figure 6: Scope trace of the wavelet transform: filtered input (top), multiplexed signal
(middle), and wavelet output (bottom).
References
G. Kaiser, A Friendly Guide to Wavelets, Boston, MA: Birkhauser, 1994.
T. Edwards and M. Godfrey, "An Analog Wavelet Transform Chip," IEEE Int'l Conf. on
Neural Networks, vol. III, 1993, pp. 1247-1251.
T. Edwards and G. Cauwenberghs, "Oversampling Architecture for Analog Harmonic
Modulation," to appear in Electronics Letters, 1996.
A. Grossmann, R. Kronland-Martinet, and J. Morlet, "Reading and understanding continuous wavelet transforms," Wavelets: Time-Frequency Methods and Phase Space. Springer-Verlag, 1989, pp. 2-20.
W. Liu, A.G. Andreou, and M.G. Goldstein, "Voiced-Speech Representation by an Analog
Silicon Model of the Auditory Periphery," IEEE Trans. Neural Networks, vol. 3 (3), pp. 477-487,
1992.
J. Lin, W.-H. Ki, T. Edwards, and S. Shamma, "Analog VLSI Implementations of Auditory Wavelet Transforms Using Switched-Capacitor Circuits," IEEE Trans. Circuits and
Systems-I, vol.41 (9), pp. 572-583, September 1994.
A. Lu and W. Roberts, "A High-Quality Analog Oscillator Using Oversampling D/A Conversion Techniques," IEEE Trans. Circuits and Systems-II, vol. 41 (7), pp. 437-444, July
1994.
R.F. Lyon and C.A. Mead, "An Analog Electronic Cochlea," IEEE Trans. Acoustics, Speech
and Signal Proc., vol. 36, pp. 1119-1134, 1988.
H.H. Szu, B. Telfer, and S. Kadambe, "Neural Network Adaptive Wavelets for Signal Representation and Classification," Optical Engineering, vol. 31 (9), pp. 1907-1916, September
1992.
L. Watts, D.A. Kerns, and R.F. Lyon, "Improved Implementation of the Silicon Cochlea,"
IEEE Journal of Solid-State Circuits, vol. 27 (5), pp. 692-700, 1992.
54 | 1,047 | Selective Attention for Handwritten
Digit Recognition
Ethem Alpaydin
Department of Computer Engineering
Bogazici University
Istanbul, TR-80815 Turkey
alpaydin@boun.edu.tr
Abstract
Completely parallel object recognition is NP-complete. Achieving
a recognizer with feasible complexity requires a compromise between parallel and sequential processing where a system selectively
focuses on parts of a given image, one after another. Successive
fixations are generated to sample the image and these samples are
processed and abstracted to generate a temporal context in which
results are integrated over time. A computational model based on a
partially recurrent feedforward network is proposed and made credible by testing on the real-world problem of recognition of handwritten digits with encouraging results.
1 INTRODUCTION
For all-parallel bottom-up recognition, allocating one separate unit for each possible
feature combination, i.e., conjunctive encoding, implies combinatorial explosion. It
has been shown that completely parallel, bottom-up visual object recognition is
NP-complete (Tsotsos, 1990). By exchanging space with time, systems with much
less complexity may be designed. For example, to phone someone at the press of a
button, one needs 10⁷ buttons on the phone; the sequential alternative is to have
10 buttons on the phone and press one at a time, seven times.
We propose recognition based on selective attention where we analyze only a small
part of the image in detail at each step, combining results in time. Noton and Stark's
(1971) "scanpath" theory advocates that each object is internally represented as a
feature-ring which is a temporal sequence of features extracted at each fixation and
the positions or the motor commands for the eye movements in between. In this
approach, there is an "eye" that looks at an image but which can really see only a
small part of it. This part of the image that is examined in detail is the fovea. The
[Figure 1 diagram: associative level (class probabilities via softmax over 10 class units, hidden units); attentive level (saliency map n x n with WTA, eye position map p x p via subsample and blur, fovea); pre-attentive level (feature map); bitmap image n x n at the input; see caption.]
Figure 1: The block diagram of the implemented system.
fovea's content is examined by the pre-attentive level where basic feature extraction
takes place. The features thus extracted are fed to an associative part together
with the current eye position. If the accumulated information is not sufficient for
recognition, the eye is moved to another part of the image, making a saccade. To
minimize recognition time, the number of saccades should be minimized. This is
done through defining a criterion of being "interesting" or saliency and by fixating
only at the most interesting. Thus successive fixations are generated to sample the
image and these samples are processed and abstracted to generate a temporal context in which results are integrated over time. There is a large amount of literature
on selective attention in neuroscience and psychology; for reviews see respectively
(Posner and Petersen, 1990) and (Treisman, 1988). The point stressed in this paper
is that the approach is also useful in engineering.
2
AN EXAMPLE SYSTEM FOR OCR
The structure of the implemented system for recognition of handwritten digits is
given in Fig. 1.
We have an n x n binary image in which the fovea is m x m with m < n. To
minimize recognition time, the system should only attend to the parts of the image
that carry discriminative information. We define a criterion of being "interesting"
or saliency which is applied to all image locations in parallel to generate a saliency
map, S. The saliency measure should be chosen to draw attention to parts that
have the highest information content. Here, the saliency criterion is a low-pass filter
which roughly counts the number of on pixels in the corresponding m x m region
of the input image M. As the strokes in handwritten digits are mostly one or two
pixels wide, a count of the on pixels is a good measure of the discontinuity (and
thus information). It is also simple to compute:
S_{ij} = \sum_{k=i-\lfloor m/2\rfloor}^{i+\lfloor m/2\rfloor} \; \sum_{l=j-\lfloor m/2\rfloor}^{j+\lfloor m/2\rfloor} M_{kl}\, N_2\big((i,j),\,(\lfloor m/6\rfloor)^2 I\big), \qquad i,j = 1 \ldots n
where N_2(\mu, \Sigma) is the bivariate normal with mean \mu and covariance \Sigma. Note
that we want the convolution kernel to have effect up to \lfloor m/2\rfloor and also that the
normal is zero after \mu \pm 3\sigma. In our simulations where n is 16 and m is 5 (typical for
digit recognition), \sigma \approx 1. The location that is most salient is the position of the next
fixation and as such defines the new center of the fovea. A location once attended
to is no longer interesting; after each fixation, the saliency of all the locations that
currently are in the scope of the fovea are set to 0 to inhibit another fixation there.
The attentive level thus controls the scope of the pre-attentive level. The maximum
of the saliency map through a winner-take-all gives the eye position (i*, j*) at
fixation t.
(i^*(t), j^*(t)) = \arg\max_{i,j} S_{ij}

By thus following the salient regions, we get an input-dependent emergent sequence
in time.
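For illustration, a minimal Python sketch of this attentive step (the array names, the zero padding at the image borders, and the explicit Gaussian kernel are assumptions of the sketch, not details taken from the original implementation):

```python
import numpy as np

def saliency_map(M, m):
    """Gaussian-weighted count of 'on' pixels in each m x m neighbourhood of M."""
    n = M.shape[0]
    half = m // 2
    sigma = max(m // 6, 1)
    ax = np.arange(-half, half + 1)
    gx, gy = np.meshgrid(ax, ax)
    kernel = np.exp(-(gx**2 + gy**2) / (2.0 * sigma**2))
    Mp = np.pad(M, half)                       # zero-pad the borders
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = np.sum(Mp[i:i + m, j:j + m] * kernel)
    return S

def next_fixation(S, prev_fixations, m):
    """Winner-take-all with inhibition of return around earlier fixations."""
    S = S.copy()
    half = m // 2
    for (i, j) in prev_fixations:
        S[max(i - half, 0):i + half + 1, max(j - half, 0):j + half + 1] = 0.0
    return np.unravel_index(np.argmax(S), S.shape)

# usage: a random 16 x 16 binary "digit" image, fovea size m = 5
M = (np.random.rand(16, 16) > 0.8).astype(float)
S = saliency_map(M, m=5)
i_star, j_star = next_fixation(S, prev_fixations=[], m=5)
```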
Eye-Position Map
The eye position map, P, stores the position of the eye in the current fixation. It is
p x p. p is chosen to be smaller than n for dimensionality reduction for decreasing
complexity and introducing an effect of regularization (giving invariance to small
translations). When p is a factor of n, computations are also simpler. We also blur
the immediate neighbors for a smoother representation:
P(t) = blur(subsample(winner-take-all(S)))
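A small sketch of this subsample-and-blur step, assuming p divides n and an illustrative blur value of 0.5 for the immediate neighbours:

```python
import numpy as np

def eye_position_map(i_star, j_star, n=16, p=4):
    """Subsample the winner-take-all location to p x p and blur its neighbours."""
    P = np.zeros((p, p))
    scale = n // p
    pi, pj = i_star // scale, j_star // scale      # subsample
    P[pi, pj] = 1.0
    for di in (-1, 0, 1):                          # blur immediate neighbours
        for dj in (-1, 0, 1):
            ii, jj = pi + di, pj + dj
            if 0 <= ii < p and 0 <= jj < p and (di, dj) != (0, 0):
                P[ii, jj] = 0.5
    return P
```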
Pre-Attentive Level: Feature Extraction
The pre-attentive level extracts detailed features from the fovea to generate a feature
map. This information and the current eye position is passed to the associative
system for recognition. There is a trade-off between the fovea size and the number
of saccades required for recognition: As the operation in the pre-attentive level is
carried out in parallel, to minimize complexity the features extracted there should
not be many and the fovea should not be large: Fovea is where the expensive
computation takes place. On the other hand, the fovea should be large enough to
extract discriminative features and thus complete recognition in a small amount of
time. The features to be extracted can be learned through a supervised method
when feedback is available .
The m x m region symmetrically around (i*, j*) is extracted as the fovea I and is
fed to the feature extractors. The r features extracted there are passed on to the
associative level as the feature map, F. r is typically 4 to 8. U_g denote the weights
of feature g and F_g is the value of feature g that is found by convolving the fovea
input with the feature weight vector (f(\cdot) is the sigmoid function):

I_{ij}(t) = M_{i^*(t)-\lfloor m/2\rfloor+i,\; j^*(t)-\lfloor m/2\rfloor+j}, \quad i,j = 1 \ldots m
F_g(t) = f\Big(\sum_i \sum_j U_{gij}\, I_{ij}(t)\Big), \quad g = 1 \ldots r
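A possible reading of these two equations in code (the weight array U and the zero padding at the image borders are assumptions of the sketch):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def fovea_features(M, i_star, j_star, U):
    """Cut out the m x m fovea around (i*, j*) and apply the r feature detectors."""
    r, m, _ = U.shape
    half = m // 2
    Mp = np.pad(M, half)                            # zero-pad so border fixations work
    I = Mp[i_star:i_star + m, j_star:j_star + m]    # fovea content
    F = sigmoid(np.tensordot(U, I, axes=([1, 2], [0, 1])))  # F_g = f(sum_ij U_gij I_ij)
    return I, F

# usage with random weights: r = 8 features, m = 5
M = (np.random.rand(16, 16) > 0.8).astype(float)
U = np.random.randn(8, 5, 5) * 0.1
I, F = fovea_features(M, i_star=7, j_star=9, U=U)
```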
Associative Level: Classification
At each fixation, the associative level is fed the feature map from the pre-attentive
level and the eye position map from the attentive level. As a number of fixations
may be necessary to recognize an image, the associative system should have a shortterm memory able to accumulate inputs coming through time. Learning similarly
should be through time. When used for classification, the class units are organized
so as to compete and during recognition the activations of the class units evolve
till one class gets sufficiently active and suppresses the others. When a training
set is available, a temporal supervised method can be used to train the associative
level. Note that there may be more than one scanpath for each object and learning
one sequence for each object fails. We see it is a task of accumulating two types of
information through time: the "what" (features extracted) and the "where" (eye
position).
The fovea map, F, and the eye position map, P, are concatenated to make a
r + p \times p dimensional input that is fed to the associative level. Here we use an
artificial neural network with one hidden layer of s units. We have experimented
with various architectures and noticed that recurrency at the output layer is the
best. There are 10 output units.
H_h(t) = f\Big(\sum_g V_{hg} F_g(t) + \sum_a \sum_b W_{hab} P_{ab}(t)\Big), \quad h = 1 \ldots s
O_c(t) = \sum_h T_{ch} H_h(t) + \sum_k R_{ck} P_k(t-1), \quad c = 1 \ldots 10
P_c(t) = \frac{\exp[O_c(t)]}{\sum_k \exp[O_k(t)]}
where P denotes the "softmax"ed output probabilities (Bridle, 1990) and P(t - 1)
are the values in the preceding fixation (initially 0). We use the cross-entropy as
the goodness measure:
C = \sum_t \frac{1}{t} \sum_c D_c \log P_c(t), \quad t \geq 1
D_c is the required output for class c. Learning is gradient-ascent on this goodness
measure. The fraction 1/t is to give more weight to initial fixations than later ones.
Connections to the output units are updated as follows (\eta is the learning factor):
Note that we assume \partial P_k(t-1)/\partial R_{ck} = 0. For the connections to the hidden units
we have:
We can back-propagate one step more to train the feature extractors. Thus the
update equations for the connections to feature units are:
C_g(t) = \sum_h C_h(t)\, V_{hg}
A series of fixations are made until one of the class units is sufficiently active:
\exists c,\ P_c > \theta (typically 0.99), or when the most salient point has a saliency less than a
certain threshold (this condition is rarely met after the first few epochs). Then the
computed changes are summed up and the updates are made like the example below:
Backpropagation through time where the recurrent connections are unfolded in time
did not work well in this task because as explained before, for the same class, there is
more than one scanpath. The above-mentioned approach is like real-time recurrent
learning (Williams and Zipser, 1989) where the partial derivatives in the previous
time step is 0, thus ignoring this temporal dependence.
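To make the forward pass concrete, here is a sketch of the associative level run over a sequence of fixations, with randomly initialised weights and the shapes used later in the paper (r = 8, p = 4, s = 16); the stopping rule is the certainty threshold described above. This is one possible reading of the equations, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
r, p, s, n_classes = 8, 4, 16, 10

V = rng.normal(0, 0.1, (s, r))                   # feature map -> hidden
W = rng.normal(0, 0.1, (s, p, p))                # eye position map -> hidden
T = rng.normal(0, 0.1, (n_classes, s))           # hidden -> class units
R = rng.normal(0, 0.1, (n_classes, n_classes))   # previous class probs -> class units

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def recognise(fixations, theta=0.99):
    """Accumulate evidence over successive fixations until one class is certain."""
    probs = np.zeros(n_classes)                        # P(t-1), initially 0
    for t, (F, P) in enumerate(fixations, start=1):
        H = sigmoid(V @ F + np.tensordot(W, P))        # hidden units
        O = T @ H + R @ probs                          # class units with output recurrency
        probs = softmax(O)
        if probs.max() > theta:
            break
    return probs, t

# usage with random fixation samples (feature map F, eye position map P)
fixations = [(rng.random(r), rng.random((p, p))) for _ in range(5)]
probs, n_fix = recognise(fixations)
```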
3
RESULTS AND DISCUSSION
We have experimented with various parameter settings and finally chose the architecture given above: When input is 16 x 16 and there are 10 classes, the fovea is
5 x 5 with 8 features and there are 16 hidden units. There are 1,934 images for
training, 946 for cross-validation and 943 for testing. Results are given in Table
1. It can be seen that by scanning less than half of the image, we get 80% generalization. Additional to the local high-resolution image provided by the fovea, a
low-resolution image of the surrounding parafovea can be given to the associative
level for better recognition. For example we low-pass filtered and undersampled the
original image to get a 4 x 4 image which we fed to the class units additional to
the attention-based hidden units. Success went up quite high and fewer fixations
were necessary; compare rows 1 and 2 of the Table. The information provided by
the 4 x 4 map is actually not much as can be seen from row 3 of the table where
only that is given as input. Thus the idea is that when we have a coarse input,
looking only at a quarter of the image in detail is sufficient to get 93% accuracy.
Both features (what) and eye positions (where) are necessary for good recognition.
When only one is used without the other, success is quite low as can be seen in rows
4 and 5. In the last row, we see the performance of a multilayer perceptron with
10 hidden units that does all-parallel recognition.
Beyond a certain network size, increasing the number of features do not help much.
Decreasing \theta, the certainty threshold, decreases the number of fixations necessary
Table 1: Results of handwritten digit recognition with selective attention. Values
given are average and standard deviation of 10 independent runs. See text for
comments.
METHOD             NO OF PARAMS   TEST SUCCESS   TRAINING EPOCHS   NO OF FIXATIONS
SA system                   878      79.7, 1.8        74.5, 17.1          6.5, 0.2
SA+parafovea              1,038      92.5, 0.8        54.2, 10.2          3.9, 0.3
Only parafovea              170      86.9, 0.2         52.3, 8.2          1.0, 0.0
Only what info              622     49.0, 21.0        66.6, 30.6          7.5, 0.1
Only where info             440      54.2, 1.4         92.9, 6.5          7.6, 0.0
MLP, 10 hiddens           2,680      95.1, 0.6         13.5, 4.1          1.0, 0.0
which we want, but decreases success too which we don't. Smaller foveas decrease
the number of free parameters but decrease success and require a larger number
of fixations. Similarly larger foveas decrease the number of fixations but increase
complexity.
The simple low-pass filter used here as a saliency measure is the simplest measure.
Previously it has been used by Fukushima and Imagawa (1993) for finding the next
character, i.e., segmentation, and also by Olshausen et al. (1992) for translation
invariance. More robust measures at the expense of more computations, are possible; see (Rimey and Brown, 1990; Milanese et al., 1993). Salient regions are those
that are conspicuous, i.e., different from their surrounding where there is a change
in X where X can be brightness or color (edges), orientation (corners), time (motion), etc. It is also possible that top-down, task-dependent saliency measures be
integrated to minimize further recognition time implying a remembered explicit
sequence analogous to skilled motor behaviour (probably gained after many repetitions).
Here a partially recurrent network is used for temporal processing. Hidden Markov
Models like used in speech recognition are another possibility (Rimey and Brown,
1990; Haclsalihzade et al., 1992). They are probabilistic finite automata which can
be trained to classify sequences and one can have more than one model for an object.
It should be noted here that better approaches for the same problem exist (Le Cun
et al., 1989). Here we advocate a computational model and make it plausible by
testing it on a real-world problem. It is necessary for more complicated problems
where an all-parallel approach would not work. For example Le Cun et al. 's model
for the same type of inputs has 2,578 free parameters. Here there are
(m \cdot m + 1) \times r \;+\; (r + p \cdot p + 1) \times s \;+\; (s + 1) \times 10 \;+\; 10 \times 10
        U                      V, W                      T                R

free parameters which make 878 when m = 5, r = 8, s = 16. This is the main
advantage of selective attention which is that the complexity of the system is heavily
reduced at the expense of slower recognition, both in overt form of attention through
foveation and in its covert form, for binding features - For this latter type of
attention not discussed here, see (Ahmad, 1992). Also note that low-level feature
extraction operations like carried out in the pre-attentive level are local convolutions
Selective Attention for Handwritten Digit Recognition
777
and are appropriate for parallel processing, e.g., on a SIMD machine. Higherlevel operations require larger connectivity and are better carried out sequentially.
Nature also seems to have taken this direction.
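The parameter count quoted above can be verified with a few lines (p = 4 is implied by the total of 878; the grouping follows the formula in the previous section):

```python
def n_params(m=5, r=8, p=4, s=16, n_classes=10):
    U = (m * m + 1) * r            # fovea -> feature detectors
    VW = (r + p * p + 1) * s       # feature map + eye position map -> hidden
    T = (s + 1) * n_classes        # hidden -> class units
    R = n_classes * n_classes      # recurrent class -> class connections
    return U + VW + T + R

assert n_params() == 878
```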
Acknowledgements
This work is supported by Tubitak Grant EEEAG-143 and Bogazici University
Research Funds 95HA108. Cenk Kaynak prepared the handwritten digit database
based on the programs provided by NIST (Garris et al., 1994).
References
S. Ahmad. (1992) VISIT: A Neural Model of Covert Visual Attention. In J. Moody,
S. Hanson, R. Lippman (Eds.) Advances in Neural Information Processing Systems
4,420-427. San Mateo, CA: Morgan Kaufmann.
J.S. Bridle. (1990) Probabilistic Interpretation of Feedforward Classification Network Outputs with Relationships to Statistical Pattern Recognition. In Neurocomputing, F. Fogelman-Soulie, J. Herault, Eds. Springer, Berlin, 227-236.
K. Fukushima, T. Imagawa. (1993) Recognition and Segmentation of Connected
Characters with Selective Attention, Neural Networks, 6: 33-41.
M.D. Garris et al. (1994) NIST Form-Based Handprint Recognition System, NISTIR 5469, NIST Computer Systems Laboratory.
S.S. Hacisalihzade, L.W. Stark, J.S. Allen. (1992) Visual Perception and Sequences
of Eye Movement Fixations: A Stochastic Modeling Approach, IEEE SMC, 22,
474-481.
Y. Le Cun et al. (1991) Handwritten Digit Recognition with a Back-Propagation
Network. In D.S. Touretzky (ed.) Advances in Neural Information Processing
Systems 2, 396-404. San Mateo, CA: Morgan Kaufmann.
R. Milanese et al. (1994) Integration of Bottom-U p and Top- Down Cues for Visual
Attention using Non-Linear Relaxation IEEE Int'l Conf on CVPR, Seattle, WA,
USA.
D. Noton and L. Stark. (1971) Eye Movements and Visual Perception, Scientific
American, 224: 34-43.
B. Olshausen, C. Anderson, D. Van Essen. (1992) A Neural Model of Visual Attention and Invariant Pattern Recognition, CNS Memo 18, CalTech.
M.L Posner, S.E. Petersen. (1990) The Attention System of the Human Brain,
Ann. Rev. Neurosci., 13:25-42.
R.D. Rimey, C.M. Brown. (1990) Selective Attention as Sequential Behaviour: Modelling Eye Movements with an Augmented Hidden Markov Model, TR-327, Computer Science, Univ of Rochester.
A. Treisman. (1988) Features and Objects, Quarterly Journal of Exp. Psych., 40:
201-237.
J.K. Tsotsos. (1990) Analyzing Vision at the Complexity Level, Behav. and Brain
Sci. 13: 423-469.
R.J. Williams, D. Zipser. (1989) A Learning Algorithm for Continually Running
Fully Recurrent Neural Networks Neural Computation, 1, 270-280.
55 | 1,048 | Gaussian Processes for Regression
Christopher K. I. Williams
Neural Computing Research Group
Aston University
Birmingham B4 7ET, UK
Carl Edward Rasmussen
Department of Computer Science
University of Toronto
Toronto, ONT, M5S 1A4, Canada
c.k.i.williams@aston.ac.uk
carl@cs.toronto.edu
Abstract
The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over
functions . In this paper we investigate the use of Gaussian process
priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly
using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been
tested on a number of challenging problems and have produced
excellent results.
1
INTRODUCTION
In the Bayesian approach to neural networks a prior distribution over the weights
induces a prior distribution over functions. This prior is combined with a noise
model, which specifies the probability of observing the targets t given function
values y, to yield a posterior over functions which can then be used for predictions.
For neural networks the prior over functions has a complex form which means
that implementations must either make approximations (e.g. MacKay, 1992) or use
Monte Carlo approaches to evaluating integrals (Neal , 1993) .
As Neal (1995) has argued , there is no reason to believe that, for real-world problems, neural network models should be limited to nets containing only a "small"
number of hidden units . He has shown that it is sensible to consider a limit where
the number of hidden units in a net tends to infinity, and that good predictions can
be obtained from such models using the Bayesian machinery. He has also shown
that a large class of neural network models will converge to a Gaussian process prior
over functions in the limit of an infinite number of hidden units.
In this paper we use Gaussian processes specified parametrically for regression problems. The advantage of the Gaussian process formulation is that the combination of
515
Gaussian Processes for Regression
the prior and noise models can be carried out exactly using matrix operations. We
also show how the hyperparameters which control the form of the Gaussian process
can be estimated from the data, using either a maximum likelihood or Bayesian
approach, and that this leads to a form of "Automatic Relevance Determination"
(Mackay 1993j Neal 1995).
2
PREDICTION WITH GAUSSIAN PROCESSES
A stochastic process is a collection of random variables \{Y(x) \mid x \in X\} indexed by a
set X. In our case X will be the input space with dimension d, the number of inputs.
The stochastic process is specified by giving the probability distribution for every
finite subset of variables Y(x(1)), . .. , Y(x(k)) in a consistent manner. A Gaussian
process is a stochastic process which can be fully specified by its mean function
\mu(x) = E[Y(x)] and its covariance function C(x, x') = E[(Y(x) - \mu(x))(Y(x') - \mu(x'))]; any finite set of points will have a joint multivariate Gaussian distribution.
Below we consider Gaussian processes which have \mu(x) \equiv 0.
In section 2.1 we will show how to parameterise covariances using hyperparameters;
consists of n pairs of inputs and targets \{(x^{(i)}, t^{(i)}),\ i = 1 \ldots n\}. The input vector
for a test case is denoted x (with no superscript). The inputs are d-dimensional
x_1, \ldots, x_d and the targets are scalar.
The predictive distribution for a test case x is obtained from the n + 1 dimensional
joint Gaussian distribution for the outputs of the n training cases and the test
case, by conditioning on the observed targets in the training set. This procedure is
illustrated in Figure 1, for the case where there is one training point and one test
point. In general, the predictive distribution is Gaussian with mean and variance
k^T(x)\, K^{-1} t \qquad (1)
C(x, x) - k^T(x)\, K^{-1} k(x), \qquad (2)

where k(x) = (C(x, x^{(1)}), \ldots, C(x, x^{(n)}))^T, K is the covariance matrix for the
training cases, K_{ij} = C(x^{(i)}, x^{(j)}), and t = (t^{(1)}, \ldots, t^{(n)})^T.
The matrix inversion step in equations (1) and (2) implies that the algorithm has
O( n 3 ) time complexity (if standard methods of matrix inversion are employed) ;
for a few hundred data points this is certainly feasible on workstation computers,
although for larger problems some iterative methods or approximations may be
needed.
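As an illustration of equations (1) and (2), a minimal numpy sketch of the prediction step (a bare squared-exponential covariance is used here only as a stand-in; the covariance function actually used is given in section 2.1):

```python
import numpy as np

def cov(xa, xb, v0=1.0, w=1.0):
    """Stand-in covariance: v0 * exp(-(w/2) * |xa - xb|^2)."""
    sq = np.sum((xa[:, None, :] - xb[None, :, :]) ** 2, axis=-1)
    return v0 * np.exp(-0.5 * w * sq)

def gp_predict(X, t, xstar, v1=0.01):
    """Predictive mean and variance of equations (1) and (2); v1 is the noise variance."""
    n = X.shape[0]
    K = cov(X, X) + v1 * np.eye(n)         # covariance matrix of the training cases
    k = cov(X, xstar[None, :])[:, 0]       # k(x): covariances with the test case
    alpha = np.linalg.solve(K, t)          # K^{-1} t
    mean = k @ alpha                       # equation (1)
    var = cov(xstar[None, :], xstar[None, :])[0, 0] - k @ np.linalg.solve(K, k)  # equation (2)
    return mean, var

# usage on toy 1-d data
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 20)[:, None]
t = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
print(gp_predict(X, t, np.array([0.5])))
```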
2.1
PARAMETERIZING THE COVARIANCE FUNCTION
There are many choices of covariance functions which may be reasonable. Formally,
we are required to specify functions which will generate a non-negative definite
covariance matrix for any set of points (x(1 ), ... , x(k )). From a modelling point of
view we wish to specify covariances so that points with nearby inputs will give rise
to similar predictions. We find that the following covariance function works well:
C(x^{(i)}, x^{(j)}) = v_0 \exp\Big\{-\frac{1}{2}\sum_{l=1}^{d} w_l\,(x_l^{(i)} - x_l^{(j)})^2\Big\} + a_0 + a_1 \sum_{l=1}^{d} x_l^{(i)} x_l^{(j)} + v_1\,\delta(i,j) \qquad (3)
Figure 1: An illustration of prediction using a Gaussian process. There is one training
case (x^{(1)}, t^{(1)}) and one test case for which we wish to predict y. The ellipse in the left-hand plot is the one standard deviation contour plot of the joint distribution of y_1 and
y. The dotted line represents an observation y_1 = t^{(1)}. In the right-hand plot we see
the distribution of the output for the test case, obtained by conditioning on the observed
target. The y axes have the same scale in both plots.
where \theta = \log(v_0, v_1, w_1, \ldots, w_d, a_0, a_1) plays the role of hyperparameters^1. We
define the hyperparameters to be the log of the variables in equation (4) since these
are positive scale-parameters.
The covariance function is made up of three parts: the first term, a linear regression
term (involving a_0 and a_1) and a noise term v_1\delta(i,j). The first term expresses the
idea that cases with nearby inputs will have highly correlated outputs; the w_l parameters allow a different distance measure for each input dimension. For irrelevant
inputs, the corresponding w_l will become small, and the model will ignore that input. This is closely related to the Automatic Relevance Determination (ARD) idea
of MacKay and Neal (MacKay, 1993; Neal 1995). The v_0 variable gives the overall
scale of the local correlations. This covariance function is valid for all input dimensionalities as compared to splines, where the integrated squared mth derivative is
only a valid regularizer for 2m > d (see Wahba, 1990). a_0 and a_1 are variables
controlling the scale of the bias and linear contributions to the covariance. The last
term accounts for the noise on the data; v_1 is the variance of the noise.
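A direct transcription of covariance function (3) as code (a sketch; the hyperparameters are passed on the log scale, giving the d + 4 parameters referred to in section 3.1):

```python
import numpy as np

def covariance(theta, X1, X2, noise=False):
    """Equation (3). theta = log(v0, v1, w_1..w_d, a0, a1); X1 is (n1, d), X2 is (n2, d)."""
    d = X1.shape[1]
    v0, v1 = np.exp(theta[0]), np.exp(theta[1])
    w = np.exp(theta[2:2 + d])
    a0, a1 = np.exp(theta[2 + d]), np.exp(theta[3 + d])
    diff = X1[:, None, :] - X2[None, :, :]
    C = v0 * np.exp(-0.5 * np.sum(w * diff ** 2, axis=-1))   # local, ARD-weighted term
    C += a0 + a1 * (X1 @ X2.T)                               # bias and linear term
    if noise:                                                # v1 * delta(i, j): training cases only
        C += v1 * np.eye(X1.shape[0])
    return C

# usage: d = 2 inputs, all hyperparameters at exp(0) = 1 except the noise variance
theta = np.zeros(2 + 2 + 2); theta[1] = np.log(0.01)
X = np.random.randn(5, 2)
K = covariance(theta, X, X, noise=True)
```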
Given a covariance function, the log likelihood of the training data is given by

l = -\frac{1}{2}\log\det K - \frac{1}{2}\, t^T K^{-1} t - \frac{n}{2}\log 2\pi. \qquad (4)

In section 3 we will discuss how the hyperparameters \theta can be adapted, in
response to the training data.

2.2 RELATIONSHIP TO PREVIOUS WORK
The Gaussian process view provides a unifying framework for many regression methods . ARMA models used in time series analysis and spline smoothing (e.g. Wahba,
1990 and earlier references therein) correspond to Gaussian process prediction with
a particular choice of covariance function^2. Gaussian processes have also been used
in the geostatistics field (e.g. Cressie, 1993), and are known there as "kriging", but
this literature has concentrated on the case where the input space is two or three
dimensional, rather than considering more general input spaces.

^1 We call \theta the hyperparameters as they correspond closely to hyperparameters in neural
networks; in effect the weights have been integrated out exactly.
This work is similar to Regularization Networks (Poggio and Girosi, 1990; Girosi,
Jones and Poggio, 1995), except that their derivation uses a smoothness functional
rather than the equivalent covariance function. Poggio et al suggested that the
hyperparameters be set by cross-validation. The main contributions of this paper
are to emphasize that a maximum likelihood solution for \theta is possible, to recognize
the connections to ARD and to use the Hybrid Monte Carlo method in the Bayesian
treatment (see section 3).
3
TRAINING A GAUSSIAN PROCESS
The partial derivative of the log likelihood of the training data I with respect to
all the hyperparameters can be computed using matrix operations, and takes time
O( n 3 ) . In this section we present two methods which can be used to adapt the
hyperparameters using these derivatives.
3.1
MAXIMUM LIKELIHOOD
In a maximum likelihood framework, we adjust the hyperparameters so as to maximize that likelihood of the training data. We initialize the hyperparameters to
random values (in a reasonable range) and then use an iterative method, for example conjugate gradient, to search for optimal values of the hyperparameters. Since
there are only a small number of hyperparameters (d + 4) a relatively small number
of iterations are usually sufficient for convergence. However, we have found that
this approach is sometimes susceptible to local minima, so it is advisable to try a
number of random starting positions in hyperparameter space.
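A sketch of this maximum likelihood route, using a generic conjugate-gradient optimizer with numerical gradients and random restarts (the paper evaluates the exact derivatives; the linear and bias terms of the covariance are omitted here for brevity):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, X, t):
    """-l from equation (4), with the ARD part of covariance (3) plus noise."""
    n, d = X.shape
    v0, v1 = np.exp(theta[0]), np.exp(theta[1])
    w = np.exp(theta[2:2 + d])
    diff = X[:, None, :] - X[None, :, :]
    K = v0 * np.exp(-0.5 * np.sum(w * diff ** 2, axis=-1)) + v1 * np.eye(n)
    sign, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, t)
    return 0.5 * logdet + 0.5 * t @ alpha + 0.5 * n * np.log(2 * np.pi)

# usage: random restarts of a gradient-based optimizer, as suggested in the text
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
t = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(50)
best = None
for _ in range(3):
    theta0 = rng.uniform(-3, 0, size=2 + X.shape[1])
    res = minimize(neg_log_likelihood, theta0, args=(X, t), method="CG")
    if best is None or res.fun < best.fun:
        best = res
print(best.x)   # fitted log-hyperparameters (log v0, log v1, log w_1..w_d)
```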
3.2
INTEGRATION VIA HYBRID MONTE CARLO
According to the Bayesian formalism, we should start with a prior distribution P(\theta)
over the hyperparameters which is modified using the training data D to produce
a posterior distribution P(\theta|D). To make predictions we then integrate over the
posterior; for example, the predicted mean \hat{y}(x) for test input x is given by

\hat{y}(x) = \int \hat{y}_\theta(x)\, P(\theta|D)\, d\theta \qquad (5)

where \hat{y}_\theta(x) is the predicted mean (as given by equation 1) for a particular value of
\theta. It is not feasible to do this integration analytically, but the Markov chain Monte
Carlo method of Hybrid Monte Carlo (HMC) (Duane et ai, 1987) seems promising
for this application. We assign broad Gaussian priors to the hyperparameters, and
use Hybrid Monte Carlo to give us samples from the posterior.
HMC works by creating a fictitious dynamical system in which the hyperparameters
are regarded as position variables, and augmenting these with momentum variables
p. The purpose of the dynamical system is to give the hyperparameters "inertia"
so that random-walk behaviour in 8-space can be avoided. The total energy, H, of
the system is the sum of the kinetic energy, K (a function of the momenta), and the
potential energy, E. The potential energy is defined such that p(\theta|D) \propto \exp(-E).
We sample from the joint distribution for \theta and p given by p(\theta, p) \propto \exp(-E - K);
the marginal of this distribution for \theta is the required posterior. A sample of
hyperparameters from the posterior can therefore be obtained by simply ignoring
the momenta.

^2 Technically splines require generalized covariance functions.
Sampling from the joint distribution is achieved by two steps: (i) finding new points
in phase space with near-identical energies H by simulating the dynamical system
using a discretised approximation to Hamiltonian dynamics, and (ii) changing the
energy H by doing Gibbs sampling for the momentum variables.
Hamiltonian Dynamics
Hamilton's first order differential equations for H are approximated by a discrete
step (specifically using the leapfrog method). The derivatives of the likelihood
(equation 4) enter through the derivative of the potential energy. This proposed
state is then accepted or rejected using the Metropolis rule depending on the final
energy H* (which is not necessarily equal to the initial energy H because of the
discretization). The same step size c is used for all hyperparameters , and should be
as large as possible while keeping the rejection rate low .
Gibbs Sampling for Momentum Variables
The momentum variables are updated using a modified version of Gibbs sampling,
thereby allowing the energy H to change. A "persistence" of 0.95 is used; the new
value of the momentum is a weighted sum of the previous value (with weight 0.95)
and the value obtained by Gibbs sampling (weight (1 - 0.95 2)1/ 2). With this form
of persistence, the momenta change approximately twenty times more slowly, thus
increasing the "inertia" of the hyperparameters, so as to further help in avoiding
random walks. Larger values of the persistence will further increase the inertia, but
reduce the rate of exploration of H .
Practical Details
The priors over hyperparameters are set to be Gaussian with a mean of -3 and a
standard deviation of 3. In all our simulations a step size c = 0.05 produced a very
low rejection rate (< 1%). The hyperparameters corresponding to v_1 and to the
w_l's were initialised to -2 and the rest to 0.
To apply the method we first rescale the inputs and outputs so that they have mean
of zero and a variance of one on the training set. The sampling procedure is run
for the desired amount of time, saving the values of the hyperparameters 200 times
during the last two-thirds of the run . The first third of the run is discarded; this
"burn-in" is intended to give the hyperparameters time to come close to their equilibrium distribution. The predictive distribution is then a mixture of 200 Gaussians.
For a squared error loss, we use the mean of this distribution as a point estimate.
The width of the predictive distribution tells us the uncertainty of the prediction.
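The sampler itself can be sketched generically as follows; E(\theta) stands for the potential energy, and the step size and persistence values are the ones quoted above. The momentum negation on rejection is a standard device for persistent-momentum schemes and is an implementation choice of this sketch, not a detail taken from the paper:

```python
import numpy as np

def hmc_step(theta, p, grad_E, E, eps=0.05, n_leapfrog=1, persistence=0.95, rng=np.random):
    """One HMC update: partial momentum refreshment, leapfrog step(s), Metropolis rule."""
    # Gibbs-like momentum update with persistence
    p = persistence * p + np.sqrt(1 - persistence ** 2) * rng.standard_normal(p.shape)
    theta_new, p_new = theta.copy(), p.copy()
    # leapfrog discretisation of the Hamiltonian dynamics
    p_new -= 0.5 * eps * grad_E(theta_new)
    for _ in range(n_leapfrog):
        theta_new += eps * p_new
        p_new -= eps * grad_E(theta_new)
    p_new += 0.5 * eps * grad_E(theta_new)
    # accept or reject on the total energy H = E + K
    H_old = E(theta) + 0.5 * p @ p
    H_new = E(theta_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(min(0.0, H_old - H_new)):
        return theta_new, p_new
    return theta, -p          # reject; negate the momentum under persistent refreshment

# usage on a toy quadratic potential (standard normal "posterior")
E = lambda th: 0.5 * th @ th
grad_E = lambda th: th
theta, p = np.zeros(3), np.random.standard_normal(3)
samples = []
for step in range(2000):
    theta, p = hmc_step(theta, p, grad_E, E)
    samples.append(theta.copy())
```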
4
EXPERIMENTAL RESULTS
We report the results of prediction with Gaussian process on (i) a modified version
of MacKay's robot arm problem and (ii) five real-world data sets.
4.1
THE ROBOT ARM PROBLEM
We consider a version of MacKay's robot arm problem introduced by Neal (1995).
The standard robot arm problem is concerned with the mappings
y_1 = r_1 \cos x_1 + r_2 \cos(x_1 + x_2) \qquad y_2 = r_1 \sin x_1 + r_2 \sin(x_1 + x_2) \qquad (6)
Method              No. of inputs   sum squared test error
Gaussian process                2                    1.126
Gaussian process                6                    1.138
MacKay                          2                    1.146
Neal                            2                    1.094
Neal                            6                    1.098
Table 1: Results on the robot arm task. The bottom three lines of data were obtained
from Neal (1995) . The MacKay result is the test error for the net with highest "evidence".
The data was generated by picking x_1 uniformly from [-1.932, -0.453] and [0.453,
1.932] and picking x_2 uniformly from [0.534, 3.142]. Neal added four further inputs,
two of which were copies of x_1 and x_2 corrupted by additive Gaussian noise of
standard deviation 0.02, and two further irrelevant Gaussian-noise inputs with zero
mean and unit variance. Independent zero-mean Gaussian noise of variance 0.0025
was then added to the outputs y_1 and y_2. We used the same datasets as Neal and
MacKay, with 200 examples in the training set and 200 in the test set .
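The six-input data set can be regenerated from this description (a sketch; the arm lengths r_1 = 2.0 and r_2 = 1.3 are the usual values for MacKay's robot arm problem and are an assumption here, since the text does not restate them):

```python
import numpy as np

def robot_arm_data(n=200, r1=2.0, r2=1.3, seed=0):
    rng = np.random.default_rng(seed)
    # x1 from [-1.932, -0.453] U [0.453, 1.932], x2 from [0.534, 3.142]
    sign = rng.choice([-1.0, 1.0], size=n)
    x1 = sign * rng.uniform(0.453, 1.932, size=n)
    x2 = rng.uniform(0.534, 3.142, size=n)
    y1 = r1 * np.cos(x1) + r2 * np.cos(x1 + x2) + rng.normal(0, 0.05, n)
    y2 = r1 * np.sin(x1) + r2 * np.sin(x1 + x2) + rng.normal(0, 0.05, n)
    # Neal's six inputs: the true ones, noisy copies, and two irrelevant inputs
    x3 = x1 + rng.normal(0, 0.02, n)
    x4 = x2 + rng.normal(0, 0.02, n)
    x5, x6 = rng.normal(0, 1, n), rng.normal(0, 1, n)
    X = np.stack([x1, x2, x3, x4, x5, x6], axis=1)
    return X, np.stack([y1, y2], axis=1)

X_train, Y_train = robot_arm_data(200)
X_test, Y_test = robot_arm_data(200, seed=1)
```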
The theory described in section 2 deals only with the prediction of a scalar quantity
Y , so predictors were constructed for the two outputs separately, although a joint
prediction is possible within the Gaussian process framework (see co-kriging, section 3.2.3
in Cressie, 1993).
Two experiments were conducted, the first using only the two "true" inputs, and
the second one using all six inputs. In this section we report results using maximum likelihood training; similar results were obtained with HMC. The log(v)'s
and log(w)'s were all initialized to values chosen uniformly from [-3.0, 0.0], and
were adapted separately for the prediction of y_1 and y_2 (in these early experiments
the linear regression terms in the covariance function involving a_0 and a_1 were not
present) . The conjugate gradient search algorithm was allowed to run for 100 iterations, by which time the likelihood was changing very slowly. Results are reported
for the run which gave the highest likelihood of the training data, although in fact
all runs performed very similarly. The results are shown in Table 1 and are encouraging, as they indicate that the Gaussian process approach is giving very similar
performance to two well-respected techniques. All of the methods obtain a level of
performance which is quite close to the theoretical minimum error level of 1.0. It is
interesting to look at the values of the w's obtained after the optimization; for the
y_2 task the values were 0.243, 0.237, 0.0639, 7.0 x 10^{-4}, 2.32 x 10^{-6}, 1.70 x 10^{-6},
and v_0 and v_1 were 7.5278 and 0.0022 respectively. The w values show nicely that
the first two inputs are the most important, followed by the corrupted inputs and
then the irrelevant inputs. During training the irrelevant inputs are detected quite
quickly, but the w 's for the corrupted inputs shrink more slowly, implying that the
input noise has relatively little effect on the likelihood.
4.2
FIVE REAL-WORLD PROBLEMS
Gaussian Processes as described above were compared to several other regression
algorithms on five real-world data sets in (Rasmussen, 1996; in this volume). The
data sets had between 80 and 256 training examples, and the input dimension
ranged from 6 to 16. The length of the HMC sampling for the Gaussian processes
was from 7.5 minutes for the smallest training set size up to 1 hour for the largest
ones on a R4400 machine. The results rank the methods in the order (lowest error
first) a full-blown Bayesian treatment of neural networks using HMC, Gaussian
processes, ensembles of neural networks trained using cross validation and weight
decay, the Evidence framework for neural networks (MacKay, 1992), and MARS.
We are currently working on assessing the statistical significance of this ordering.
5
DISCUSSION
We have presented the method of regression with Gaussian processes, and shown
that it performs well on a suite of real-world problems.
We have also conducted some experiments on the approximation of neural nets (with
a finite number of hidden units) by Gaussian processes, although space limitations
do not allow these to be described here. Some other directions currently under
investigation include (i) the use of Gaussian processes for classification problems by
softmaxing the outputs of k regression surfaces (for a k-class classification problem),
(ii) using non-stationary covariance functions, so that C(x, x') \neq C(|x - x'|), and
(iii) using a covariance function containing a sum of two or more terms of the form
given in line 1 of equation 3.
We hope to make our code for Gaussian process prediction publicly available in the
near future. Check http://www.cs.utoronto.ca/neuron/delve/delve.html for details.
Acknowledgements
We thank Radford Neal for many useful discussions, David MacKay for generously providing the robot arm data used in this paper, and Chris Bishop, Peter Dayan, Radford Neal
and Huaiyu Zhu for comments on earlier drafts. CW was partially supported by EPSRC
grant GRjJ75425.
References
Cressie, N. A. C. (1993) . Statistics for Spatial Data. Wiley.
Duane, S., Kennedy, A. D., Pendleton, B. J., and Roweth, D. (1987). Hybrid Monte Carlo.
Physics Letters B, 195:216-222.
Girosi, F., Jones, M., and Poggio, T. (1995). Regularization Theory and Neural Networks
Architectures. Neural Computation, 7(2):219-269.
MacKay, D . J. C. (1992). A Practical Bayesian Framework for Backpropagation Networks.
Neural Computation, 4(3):448-472.
MacKay, D. J. C. (1993). Bayesian Methods for Backpropagation Networks. In van
Hemmen, J. L., Domany, E., and Schulten, K., editors, Models of Neural Networks
II. Springer.
Neal, R. M. (1993). Bayesian Learning via Stochastic Dynamics. In Hanson, S. J., Cowan,
J. D., and Giles, C. L., editors, Neural Information Processing Systems, Vol. 5, pages
475-482. Morgan Kaufmann, San Mateo, CA.
Neal, R. M. (1995). Bayesian Learning for Neural Networks. PhD thesis, Dept. of Computer Science, University of Toronto.
Poggio, T. and Girosi, F. (1990). Networks for approximation and learning. Proceedings
of IEEE, 78:1481-1497.
Rasmussen, C. E. (1996). A Practical Monte Carlo Implementation of Bayesian Learning.
In Touretzky, D. S., Mozer, M. C., and Hasselmo, M. E., editors, Advances in Neural
Information Processing Systems 8. MIT Press.
Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Mathematics. CBMS-NSF Regional Conference series in applied mathematics.
56 | 1,049 | Modern Analytic Techniques to Solve the
Dynamics of Recurrent Neural Networks
A.C.C. Coolen
Dept. of Mathematics
King's College London
Strand, London WC2R 2LS, U.K.
S.N. Laughton
Dept. of Physics - Theoretical Physics
University of Oxford
1 Keble Road, Oxford OX1 3NP, U.K.
D. Sherrington*
Center for Non-linear Studies
Los Alamos National Laboratory
Los Alamos, New Mexico 87545
Abstract
We describe the use of modern analytical techniques in solving the
dynamics of symmetric and nonsymmetric recurrent neural networks near saturation. These explicitly take into account the correlations between the post-synaptic potentials, and thereby allow
for a reliable prediction of transients.
1
INTRODUCTION
Recurrent neural networks have been rather popular in the physics community,
because they lend themselves so naturally to analysis with tools from equilibrium
statistical mechanics. This was the main theme of physicists between, say, 1985
and 1990. Less familiar to the neural network community is a subsequent wave of
theoretical physical studies, dealing with the dynamics of symmetric and nonsymmetric recurrent networks. The strategy here is to try to describe the processes
at a reduced level of an appropriate small set of dynamic macroscopic observables.
At first, progress was made in solving the dynamics of extremely diluted models
(Derrida et al, 1987) and of fully connected models away from saturation (for a
review see (Coolen and Sherrington, 1993)). This paper is concerned with more
recent approaches, which take the form of dynamical replica theories, that allow
for a reliable prediction of transients, even near saturation. Transients provide the
link between initial states and final states (equilibrium calculations only provide
information on the possible final states). In view of the technical nature of the
subject, we will describe only basic ideas and results for simple models (full details
and applications to more complicated models can be found elsewhere).

*On leave from Department of Physics - Theoretical Physics, University of Oxford
2
RECURRENT NETWORKS NEAR SATURATION
Let us consider networks of N binary neurons \sigma_i \in \{-1, 1\}, where neuron states
are updated sequentially and stochastically, driven by the values of post-synaptic
potentials h_i. The probability to find the system at time t in state \sigma = (\sigma_1, \ldots, \sigma_N)
is denoted by p_t(\sigma). For the rates w_i(\sigma) of the transitions \sigma_i \to -\sigma_i and for the
potentials h_i(\sigma) we make the usual choice

w_i(\sigma) = \frac{1}{2}\big[1 - \sigma_i \tanh[\beta h_i(\sigma)]\big] \qquad h_i(\sigma) = \sum_{j \neq i} J_{ij}\,\sigma_j
The parameter \beta controls the degree of stochasticity: the \beta = 0 dynamics is completely random, whereas for \beta = \infty we find the deterministic rule \sigma_i \to \mathrm{sgn}[h_i(\sigma)].
The evolution in time of p_t(\sigma) is given by the master equation

\frac{d}{dt} p_t(\sigma) = \sum_{k=1}^{N} \big[p_t(F_k\sigma)\, w_k(F_k\sigma) - p_t(\sigma)\, w_k(\sigma)\big] \qquad (1)

with F_k\Phi(\sigma) = \Phi(\sigma_1, \ldots, -\sigma_k, \ldots, \sigma_N). For symmetric models, where J_{ij} = J_{ji}
for all (ij), the dynamics (1) leads asymptotically to the Boltzmann equilibrium
distribution p_{\mathrm{eq}}(\sigma) \sim \exp[-\beta E(\sigma)], with the energy E(\sigma) = -\sum_{i<j} \sigma_i J_{ij} \sigma_j.
For associative memory models with Hebbian-type synapses, required to store a set
of p random binary patterns \xi^\mu = (\xi_1^\mu, \ldots, \xi_N^\mu), the relevant macroscopic observable
is the overlap m between the current microscopic state \sigma and the pattern to be
retrieved (say, pattern 1): m = \frac{1}{N}\sum_i \xi_i^1 \sigma_i. Each post-synaptic potential can now
be written as the sum of a simple signal term and an interference-noise term, e.g.

J_{ij} = \frac{1}{N}\sum_{\mu=1}^{p=\alpha N} \xi_i^\mu \xi_j^\mu \qquad h_i(\sigma) = m\,\xi_i^1 + \frac{1}{N}\sum_{\mu>1}\sum_{j\neq i} \xi_i^\mu \xi_j^\mu \sigma_j \qquad (2)
All complications arise from the noise terms.
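For orientation, a minimal simulation of this sequential dynamics for the Hebbian model (2), measuring the overlap m with the nominated pattern after each sweep (system size and loading are kept small purely for illustration):

```python
import numpy as np

def simulate_hopfield(N=1000, alpha=0.1, beta=np.inf, m0=0.5, sweeps=10, seed=0):
    rng = np.random.default_rng(seed)
    p = int(alpha * N)
    xi = rng.choice([-1, 1], size=(p, N))          # random binary patterns
    J = (xi.T @ xi) / N                            # Hebbian synapses
    np.fill_diagonal(J, 0.0)                       # no self-interactions
    # initial state with overlap ~ m0 with pattern 1
    sigma = xi[0] * np.where(rng.random(N) < (1 + m0) / 2, 1, -1)
    overlaps = []
    for _ in range(sweeps):
        for i in rng.permutation(N):               # sequential stochastic updates
            h = J[i] @ sigma
            if np.isinf(beta):
                sigma[i] = 1 if h > 0 else -1
            else:
                prob_up = 0.5 * (1 + np.tanh(beta * h))
                sigma[i] = 1 if rng.random() < prob_up else -1
        overlaps.append(np.mean(xi[0] * sigma))    # m = (1/N) sum_i xi_i^1 sigma_i
    return np.array(overlaps)

print(simulate_hopfield())
```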
The 'Local Chaos Hypothesis' (LCH) consists of assuming the noise terms to be
independently distributed Gaussian variables. The macroscopic description then
consists of the overlap m and the width of the noise distribution (Amari and
Maginu, 1987). This, however, works only for states near the nominated pattern,
see also (Nishimori and Ozeki, 1993). In reality the noise components in the potentials have far more complicated statistics^1. Due to the build up of correlations
between the system state and the non-nominated patterns, the noise components
can be highly correlated and described by bi-modal distributions. Another approach
involves a description in terms of correlation- and response functions (with two time-arguments). Here one builds a generating functional, which is a sum over all possible
trajectories in state space, averaged over the distribution of the non-nominated patterns. One finds equations which are exact for N -t 00 , but, unfortunately, also
rather complicated. For the typical neural network models solutions are known
only in equilibrium (Rieger et aI, 1988); information on transients has so far only
been obtained through cumbersome approximation schemes (Horner et aI, 1989).
We now turn to a theory that takes into account the non-trivial statistics of the
post-synaptic potentials, yet involves observables with one time-argument only.
^1 Correlations are negligible only in extremely diluted (asymmetric) networks (Derrida
et al, 1987), and in networks with independently drawn (asymmetric) random synapses.
3 DYNAMICAL REPLICA THEORIES
The evolution of macroscopic observables \Omega(\sigma) = (\Omega_1(\sigma), \ldots, \Omega_K(\sigma)) can be described by the so-called Kramers-Moyal expansion for the corresponding probability
distribution P_t(\Omega) (derived directly from (1)). Under certain conditions on the sensitivity of \Omega to single-neuron transitions \sigma_i \to -\sigma_i, one finds on finite time-scales
and for N \to \infty the macroscopic state \Omega to evolve deterministically according to:

\frac{d}{dt}\Omega = \frac{\sum_\sigma p_t(\sigma)\,\delta[\Omega - \Omega(\sigma)]\sum_i w_i(\sigma)\,[\Omega(F_i\sigma) - \Omega(\sigma)]}{\sum_\sigma p_t(\sigma)\,\delta[\Omega - \Omega(\sigma)]} \qquad (3)
This equation depends explicitly on time through p_t(\sigma). However, there are two natural ways for (3) to become autonomous: (i) by the term \sum_i w_i(\sigma)\,[\Omega(F_i\sigma) - \Omega(\sigma)]
depending on \sigma only through \Omega(\sigma) (as for attractor networks away from saturation), or (ii) by (1) allowing for solutions of the form p_t(\sigma) = f_t[\Omega(\sigma)] (as for
extremely diluted networks). In both cases p_t(\sigma) drops out of (3). Simulations further indicate that for N \to \infty the macroscopic evolution usually depends only on
the statistical properties of the patterns \{\xi^\mu\}, not on their microscopic realisation
('self-averaging'). This leads us to the following closure assumptions:

1. Probability equipartitioning in the \Omega subshells of the ensemble: p_t(\sigma) \sim \delta[\Omega_t - \Omega(\sigma)]. If \Omega indeed obeys closed equations, this assumption is safe.

2. Self-averaging of the \Omega flow with respect to the microscopic details of the non-nominated patterns: \frac{d}{dt}\Omega \to \langle \frac{d}{dt}\Omega \rangle_{\mathrm{patt}}.
Our equations (3) are hereby transformed into the closed set:

\frac{d}{dt}\Omega = \Big\langle \frac{\sum_\sigma \delta[\Omega - \Omega(\sigma)]\sum_i w_i(\sigma)\,[\Omega(F_i\sigma) - \Omega(\sigma)]}{\sum_\sigma \delta[\Omega - \Omega(\sigma)]} \Big\rangle_{\mathrm{patt}}
The final observation is that the tool for averaging fractions is replica theory:

\frac{d}{dt}\Omega = \lim_{n\to 0}\lim_{N\to\infty} \sum_{\sigma^1 \cdots \sigma^n} \Big\langle \sum_i w_i(\sigma^1)\,[\Omega(F_i\sigma^1) - \Omega(\sigma^1)] \prod_{\alpha=1}^{n} \delta[\Omega - \Omega(\sigma^\alpha)] \Big\rangle_{\mathrm{patt}} \qquad (4)
The choice to be made for the observables \Omega(\sigma), crucial for the closure assumptions
to make sense, is constrained by requiring the theory to be exact in specific limits:

exactness for \alpha \to 0:  \Omega = (m, \ldots)
exactness for t \to \infty:  \Omega = (E, \ldots)  (for symmetric models only)

4 SIMPLE VERSION OF THE THEORY

For the Hopfield model (2) the simplest two-parameter theory which is exact for \alpha \to 0 and for t \to \infty is consequently obtained by choosing \Omega = (m, E). Equivalently
we can choose \Omega = (m, r), where r(\sigma) measures the 'interference energy':
m(\sigma) = \frac{1}{N}\sum_i \xi_i^1 \sigma_i \qquad r(\sigma) = \frac{1}{\alpha}\sum_{\mu>1}\Big[\frac{1}{N}\sum_i \xi_i^\mu \sigma_i\Big]^2

The result of working out (4) for \Omega = (m, r) is:

\frac{d}{dt} m = \int\! dz\; D_{m,r}[z]\,\tanh\beta(m+z) - m
\frac{1}{2}\frac{d}{dt} r = \int\! dz\; D_{m,r}[z]\; z\,\tanh\beta(m+z) + 1 - r
Figure 1: Simulations (N = 32000, dots) versus simple RS theory (solid lines), for
\alpha = 0.1 and \beta = \infty. Upper dashed line: upper boundary of the physical region.
Lower dashed line: upper boundary of the RS region (the AT instability).
in which D_{m,r}[z] is the distribution of 'interference-noise' terms in the PSP's, for
which the replica calculation gives the outcome (in so-called RS ansatz):

D_{m,r}[z] = \frac{e^{-(z+\Delta)^2/2\alpha r}}{2\sqrt{2\pi\alpha r}}\Big\{1 - \int\! Dy\,\tanh[\lambda y + \ldots]\Big\} + \frac{e^{-(z-\Delta)^2/2\alpha r}}{2\sqrt{2\pi\alpha r}}\Big\{1 - \int\! Dy\,\tanh[\lambda y - \ldots]\Big\}, \qquad \Delta^2 = \alpha\rho r - \lambda^2/\rho

with Dy = [2\pi]^{-1/2} e^{-y^2/2}\, dy and \lambda = \rho\sqrt{\alpha q}\,[1-\rho(1-q)]^{-1}, and with
the remaining parameters \{q, \mu, \rho\} to be solved from the coupled equations:

m = \int\! Dy\,\tanh[\lambda y + \mu] \qquad q = \int\! Dy\,\tanh^2[\lambda y + \mu] \qquad r = \frac{1-\rho(1-q)^2}{[1-\rho(1-q)]^2}
Here we only give (partly new) results of the calculation; details can be found
in (Coolen and Sherrington, 1994). The noise distribution is not Gaussian (in
agreement with simulations, in contrast to LCH). Our simple two-parameter theory
is found to be exact for t \sim 0, t \to \infty and for \alpha \to 0. Solving numerically the
dynamic equations leads to the results shown in figures 1 and 2. We find a nice
agreement with numerical simulations in terms of the flow in the (m, r) plane.
However, for trajectories leading away from the recall state m \sim 1, the theory
fails to reproduce an overall slowing down. These deviations can be quantified by
comparing cumulants of the noise distributions (Ozeki and Nishimori, 1994), or by
applying the theory to exactly solvable models (Coolen and Franz, 1994). Other
recent applications include spin-glass models (Coolen and Sherrington, 1994) and
more general classes of attractor neural network models (Laughton and Coolen,
1995). The simple two-parameter theory always predicts adequately the location of
the transients in the order parameter plane, but overestimates the relaxation speed.
In fact, figure 2 shows a remarkable resemblance to the results obtained for this
model in (Horner et al, 1989) with the functional integral formalism; the graphs of
m(t) are almost identical, but here they are derived in a much simpler way.
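The structure of the two-parameter flow can be illustrated by integrating it numerically with a zero-mean Gaussian of variance \alpha r substituted for D_{m,r}[z]; this Gaussian stand-in corresponds to an Amari-Maginu-type assumption, not to the RS distribution used for the figures:

```python
import numpy as np

def flow(m, r, alpha, beta, n_quad=40):
    """Right-hand sides of the (m, r) flow, with D_{m,r}[z] replaced by N(0, alpha*r)."""
    y, wts = np.polynomial.hermite_e.hermegauss(n_quad)   # nodes/weights for weight exp(-y^2/2)
    wts = wts / np.sqrt(2 * np.pi)                        # normalise to a probability measure
    z = np.sqrt(alpha * r) * y
    th = np.tanh(beta * (m + z))
    dm = np.sum(wts * th) - m
    dr = 2.0 * (np.sum(wts * z * th) + 1.0 - r)
    return dm, dr

# Euler integration from a retrieval-like initial condition
alpha, beta, dt = 0.1, 20.0, 0.02
m, r = 0.5, 1.0
for step in range(500):
    dm, dr = flow(m, r, alpha, beta)
    m, r = m + dt * dm, r + dt * dr
print(m, r)
```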
Figure 2: Simulations (N = 32000, dots) versus simple RS theory (RS stable: solid
lines, RS unstable: dashed lines), now as functions of time, for \alpha = 0.1 and \beta = \infty.
5 ADVANCED VERSION OF THE THEORY
Improving upon the simple theory means expanding the set \Omega beyond \Omega = (m, E).
Adding a finite number of observables will only have a minor impact; a qualitative
step forward, on the other hand, results from introducing a dynamic order parameter
function. Since the microscopic dynamics (1) is formulated entirely in terms of
neuron states and post-synaptic potentials we choose for \Omega(\sigma) the joint distribution:

D[\zeta, h](\sigma) = \frac{1}{N}\sum_i \delta[\zeta - \sigma_i]\,\delta[h - h_i(\sigma)]
This choice has the advantages that (a) both m and (for symmetric systems) E are
integrals over D[\zeta, h], so the advanced theory automatically inherits the exactness
at t = 0 and t = 00 of the simple one, (b) it applies equally well to symmetric and
nonsymmetric models and (c) as with the simple version, generalisation to models
with continuous neural variables is straightforward. Here we show the result of
applying the theory to a model of the type (1) with synaptic interactions:
Jij =
~ ~i~j +
.iN [cos(~
)Xij +sin(~ )Yij ]
Xij = Xji, Yij = -Yji (independent random Gaussian variables)
(describing a nominated pattern being stored on a 'messy' synaptic background).
The parameter w controls the degree of synaptic symmetry (e.g. w = 0: symmetric,
w = π: anti-symmetric). Equation (4) applied to the observable D[ζ, h](σ) gives:
    ∂/∂t D_t[ζ, h] = J² [1 − ⟨σ tanh(βH)⟩_{D_t}] ∂²/∂h² D_t[ζ, h]
                     + ∂/∂h {D_t[ζ, h] [h − J₀ ⟨tanh(βH)⟩_{D_t}]} + ∂/∂h A[ζ, h; D_t]
                     + ½ [1 + ζ tanh(βh)] D_t[−ζ, h] − ½ [1 − ζ tanh(βh)] D_t[ζ, h]
Figure 3: Comparison of simulations (N = 8000, solid line), simple two-parameter
theory (RS stable: dotted line, RS unstable: dashed line) and advanced theory
(solid line), for the w = 0 (symmetric background) model, with J₀ = 0, β = ∞.
Note that the two solid lines are almost on top of each other at the scale shown.
Figure 4: Advanced theory versus N = 5600 simulations in the w = π/2 (asymmetric
background) model, with β = ∞ and J = 1. Solid: simulations; dotted: solving the
RS diffusion equation.
with ⟨f(σ, H)⟩_D = Σ_σ ∫dH D[σ, H] f(σ, H). All complications are concentrated in
the kernel A[ζ, h; D], which is to be solved from a nontrivial set of equations emerging from the replica formalism. Some results of solving these equations numerically
are shown in figures 3 and 4 (for details of the calculations and more elaborate comparisons with simulations we refer to (Laughton, Coolen and Sherrington, 1995;
Coolen, Laughton and Sherrington, 1995)). It is clear that the advanced theory
quite convincingly describes the transients of the simulation experiments, including
the hitherto unexplained slowing down, for symmetric and nonsymmetric models.
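For comparison with the advanced theory, the joint distribution D[ζ, h] can also be measured directly in a simulation by histogramming the post-synaptic potentials separately for the two neuron states. A minimal sketch, assuming a coupling matrix J and a state vector sigma such as those produced by the simulation sketch given earlier; the binning and normalization choices are arbitrary and not taken from the paper:

import numpy as np

def field_distribution(J, sigma, bins=60):
    """Empirical joint distribution D[zeta, h]: histogram of post-synaptic potentials
    h_i = sum_j J_ij sigma_j, split according to the neuron state zeta = sigma_i."""
    h = J @ sigma
    edges = np.linspace(h.min(), h.max(), bins + 1)
    hist = {}
    for zeta in (-1.0, 1.0):
        counts, _ = np.histogram(h[sigma == zeta], bins=edges)
        hist[zeta] = counts / len(sigma)    # joint (not conditional) frequencies
    return edges, hist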
6 DISCUSSION
In this paper we have described novel techniques for studying the dynamics of recurrent neural networks near saturation. The simplest two-parameter theory (exact
for t = 0, for t → ∞ and for α → 0), which employs as dynamic order parameters
the overlap with a pattern to be recalled and the total 'energy' per neuron, already
describes quite accurately the location of the transients in the order parameter
plane. The price paid for simplicity is that it overestimates the relaxation speed.
A more advanced version of the theory, which describes the evolution of the joint
distribution for neuron states and post-synaptic potentials, is mathematically more
involved, but predicts the dynamical data essentially perfectly, as far as present
applications allow us to conclude. Whether this latter version is either exact, or just
a very good approximation, still remains to be seen.
In this paper we have restricted ourselves to models with binary neural variables,
for reasons of simplicity. The theories generalise in a natural way to models with
analogue neurons (here, however , already the simple version will generally involve
order parameter functions as opposed to a finite number of order parameters).
Ongoing work along these lines includes, for instance, the analysis of analogue and
spherical attractor networks and networks of coupled oscillators near saturation.
References
B. Derrida, E. Gardner and A. Zippelius (1987), Europhys. Lett. 4: 167-173
A.C .C. Coolen and D. Sherrington (1993), in J.G. Taylor (ed.), Mathematical Approaches to Neural Networks, 293-305. Amsterdam: Elsevier.
S. Amari and K. Maginu (1988), Neural Networks 1: 63-73
H. Nishimori and T. Ozeki (1993), J. Phys. A 26: 859-871
H. Rieger, M. Schreckenberg and J. Zittartz (1988), Z. Phys. B 72: 523-533
H. Horner, D. Bormann, M. Frick, H. Kinzelbach and A. Schmidt (1989), Z. Phys.
B 76: 381-398
A.C.C. Coolen and D. Sherrington (1994), Phys. Rev. E 49(3): 1921-1934
H. Nishimori and T. Ozeki (1994), J . Phys. A 27: 7061-7068
A.C.C. Coolen and S. Franz (1994), J. Phys. A 27: 6947-9954
A.C.C. Coolen and D. Sherrington (1994), J. Phys. A 27: 7687-7707
S.N. Laughton and A.C.C. Coolen (1995), Phys. Rev. E 51: 2581-2599
S.N. Laughton, A.C.C. Coolen and D. Sherrington (1995), J. Phys. A (in press)
A.C.C. Coolen, S.N . Laughton and D. Sherrington (1995), Phys. Rev. B (in press)
| 1049 |@word version:6 closure:2 r:9 simulation:10 paid:1 thereby:1 solid:6 initial:1 current:1 comparing:1 nt:1 yet:1 ddc:1 written:1 numerical:1 subsequent:1 analytic:4 drop:1 slowing:2 plane:3 complication:2 location:2 simpler:1 mathematical:1 along:1 become:1 qualitative:1 consists:2 indeed:1 xji:1 themselves:1 mechanic:1 f3h:3 spherical:1 automatically:1 hitherto:1 emerging:1 zippelius:1 exactly:1 control:2 overestimate:2 negligible:1 local:1 limit:1 physicist:1 oxford:3 quantified:1 co:1 bi:1 averaged:1 obeys:1 fko:2 road:1 applying:2 instability:1 deterministic:1 center:1 dz:2 straightforward:1 l:1 independently:2 fdn:1 simplicity:2 rule:1 autonomous:1 updated:1 pt:11 exact:6 hypothesis:1 agreement:2 maginu:2 asymmetric:3 predicts:2 solved:2 region:2 connected:1 subshells:1 messy:1 dynamic:15 solving:5 upon:1 observables:5 completely:1 joint:2 hopfield:1 describe:3 london:2 choosing:1 outcome:1 europhys:1 quite:2 solve:4 say:2 amari:2 statistic:2 final:3 associative:1 advantage:1 analytical:1 interaction:1 jij:3 j2:1 relevant:1 description:2 los:2 generating:1 leave:1 diluted:3 depending:1 recurrent:9 derrida:3 ij:1 minor:1 progress:1 involves:2 indicate:1 safe:1 sgn:1 transient:7 rar:2 wc2r:1 mathematically:1 yij:2 exp:1 equilibrium:4 coolen:18 tanh:11 unexplained:1 ozeki:4 tool:2 exactness:3 gaussian:3 always:1 rather:2 derived:2 inherits:1 contrast:1 sense:1 glass:1 elsevier:1 transformed:1 reproduce:1 overall:1 denoted:1 constrained:1 ell:1 f3:4 identical:1 np:1 realisation:1 employ:1 modern:2 national:1 familiar:1 ourselves:1 attractor:3 highly:1 tj:1 integral:2 iv:1 taylor:1 theoretical:3 instance:1 formalism:2 cumulants:1 introducing:1 deviation:1 alamo:2 too:1 stored:1 jdy:2 sensitivity:1 physic:5 ansatz:1 jo:2 opposed:1 choose:2 stochastically:1 leading:1 rrn:1 li:2 account:2 potential:8 wk:2 includes:1 explicitly:2 depends:2 try:1 view:1 closed:2 wave:1 complicated:3 spin:1 ensemble:1 accurately:1 trajectory:2 synapsis:2 phys:10 cumbersome:1 synaptic:9 ed:1 energy:3 involved:1 dm:4 naturally:1 hereby:1 popular:1 recall:1 lim:2 ok:1 dt:5 modal:1 response:1 just:1 correlation:3 working:1 hand:1 ei:3 ox1:1 resemblance:1 requiring:1 evolution:4 adequately:1 symmetric:10 laboratory:1 sin:1 width:1 self:2 tt:1 sherrington:14 dtd:1 chaos:1 novel:1 functional:2 physical:2 jp:1 nonsymmetric:4 moyal:1 numerically:2 refer:1 ai:7 fk:1 mathematics:1 stochasticity:1 dj:1 dot:2 stable:2 recent:2 retrieved:1 driven:1 store:1 certain:1 binary:3 seen:1 eo:4 signal:1 ii:1 dashed:4 full:1 rj:1 hebbian:1 technical:1 calculation:4 lai:1 post:6 equally:1 a1:2 impact:1 prediction:2 j3:1 basic:1 essentially:1 kernel:1 whereas:1 background:3 macroscopic:6 crucial:1 subject:1 rieger:2 flow:2 near:6 concerned:1 perfectly:1 idea:1 whether:1 jaj:1 generally:1 clear:1 involve:1 concentrated:1 simplest:2 reduced:1 xij:2 dotted:2 per:1 patt:3 ddt:1 drawn:1 diffusion:1 replica:5 asymptotically:1 jdh:1 relaxation:2 graph:1 fraction:1 sum:2 master:1 almost:2 dy:3 entirely:1 hi:6 jji:1 nontrivial:1 speed:2 argument:1 extremely:3 department:1 according:1 fio:4 psp:1 describes:3 wi:6 rev:3 restricted:1 interference:3 equation:11 remains:1 turn:1 describing:1 studying:1 away:3 appropriate:1 schmidt:1 top:1 remaining:1 include:1 build:2 mdt:1 already:2 strategy:1 usual:1 microscopic:4 link:1 unstable:2 trivial:1 reason:1 assuming:1 mexico:1 equivalently:1 bhi:1 unfortunately:1 boltzmann:1 allowing:1 upper:3 neuron:7 modem:3 observation:1 finite:3 anti:1 community:2 required:1 recalled:1 horner:3 beyond:1 
dynamical:3 pattern:10 lch:2 usually:1 convincingly:1 saturation:7 reliable:2 memory:1 lend:1 including:1 analogue:2 overlap:3 natural:2 solvable:1 advanced:6 scheme:1 gardner:1 coupled:2 review:1 nice:1 laughton:9 nishimori:4 evolve:1 fully:1 versus:3 remarkable:1 h2:1 degree:2 dd:1 kramers:1 elsewhere:1 l_:1 allow:3 generalise:1 distributed:1 boundary:2 lett:1 transition:2 forward:1 made:2 frick:1 franz:2 far:3 observable:2 dealing:1 sequentially:1 conclude:1 yji:1 continuous:1 reality:1 nature:1 expanding:1 symmetry:1 improving:1 expansion:1 apr:5 main:1 noise:9 arise:1 peq:1 tl:2 elaborate:1 nominated:5 fails:1 theme:1 deterministically:1 down:2 specific:1 adding:1 amsterdam:1 strand:1 applies:1 formulated:1 king:1 consequently:1 oscillator:1 price:1 typical:1 generalisation:1 averaging:3 called:2 total:1 partly:1 college:1 latter:1 ongoing:1 dept:2 correlated:1 |
57 | 105 | 340
BACKPROPAGATION AND ITS
APPLICATION TO HANDWRITTEN
SIGNATURE VERIFICATION
Dorothy A. Mighell
Electrical Eng. Dept.
Info. Systems Lab
Stanford University
Stanford, CA 94305
Timothy S. Wilkinson
Electrical Eng. Dept.
Info. Systems Lab
Stanford University
Stanford, CA 94305
Joseph W. Goodman
Electrical Eng. Dept.
Info. Systems Lab
Stanford University
Stanford, CA 94305
ABSTRACT
A pool of handwritten signatures is used to train a neural network for the task of deciding whether or not a given signature is a
forgery. The network is a feedforward net, with a binary image as
input. There is a hidden layer, with a single unit output layer. The
weights are adjusted according to the backpropagation algorithm.
The signatures are entered into a C software program through the
use of a Datacopy Electronic Digitizing Camera. The binary signatures are normalized and centered. The performance is examined
as a function of the training set and network structure. The best
scores are on the order of 2% true signature rejection with 2-4%
false signature acceptance.
INTRODUCTION
Signatures are used everyday to authorize the transfer of funds for millions of people.
We use our signature as a form of identity, consent, and authorization. Bank checks,
credit cards, legal documents and waivers all require the everchanging personalized
signature. Forgeries on such transactions amount to millions of dollars lost each
year. A trained eye can spot most forgeries, but it is not cost effective to handcheck
all signatures due to the massive number of daily transactions. Consequently, only
disputed claims and checks written for large amounts are verified. The consumer
would certainly benefit from the added protection of automated verification. Neural
networks lend themselves very well to signature verification. Previously, they have
proven applicable to other signal processing tasks, such as character recognition
{Fukishima, 1986} {Jackel, 1988}, sonar target classification {Gorman, 1986}, and
control- as in the broom balancer {Tolat, 1988}.
HANDWRITING ANALYSIS
Signature verification is only one aspect of the study of handwriting analysis.
Recognition is the objective, whether it be of the writer or the characters. Writer
recognition can be further broken down into identification and verification. Identi-
fication selects the author of a sample from among a group of writers. Verification
confirms or rejects a written sample for a single author. In both cases, it is the
style of writing that is important.
Deciphering written text is the basis of character recognition. In this task, linguistic
information such as the individual characters or words are extracted from the text.
Style must be eliminated to get at the content. A very important application of
character recognition is automated reading of zip-codes in the post office {Jackel,
1988}.
Data for handwriting analysis may be either dynamic or static. Dynamic data
requires special devices for capturing the temporal characteristics of the sample.
Features such as pressure, velocity, and position are examined in the dynamic
framework. Such analysis is usually performed on-line in real time.
Static analysis uses the final trace of the writing, as it appears on paper. Static
analysis does not require any special processing devices while the signature is being
produced. Centralized verification becomes possible, and the processing may be
done off-line.
Work has been done in both static and dynamic analysis {Sato, 1982} {Nemcek,
1974}. Generally, signature verification efforts have been more successful using
the dynamic information. It would be extremely useful though, to perform the
verification using only the written signature. This would eliminate the need for
costly machinery at every place of business. Personal checks may also be verified
through a static signature analysis.
TASK
The handwriting analysis task with which this paper is concerned is that of signature verification using an off-line method to detect casual forgeries. Casual forgeries
are non-professional forgeries, in which the writer does not practice reproducing
the signature. The writer may not even have a copy of the true signature. Casual
forgeries are very important to detect. They are far more abundant, and involve
greater monetary losses than professional forgeries. This signature verification task
falls into the writer recognition category, in which the style of writing is the important variable. The off-line analysis allows centralized verification at a lower cost
and broader use.
HANDWRITTEN SIGNATURES
The signatures for this project were gathered from individuals to produce a pool
of 80 true signatures and 66 forgeries. These are signatures, true and false, for one
person. There is a further collection of signatures, both true and false, for other
persons, but the majority of the results presented will be for the one individual. It
will be clear when other individuals are included in the demonstration.
The signatures are collected on 3x5 index cards which have a small blue box as
a guideline. The cards are scanned with a CCD array camera from Datacopy,
and thresholded to produce binary images. These binary images are centered and
normalized to fit into a 128x64 matrix. Either the entire 128x64 image is presented
as input, or a 90x64 image of the three initials alone is presented. It is also possible
to present preprocessed inputs to the network.
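A rough sketch of this preprocessing step is given below, assuming grey-level input and nearest-neighbour resampling. The threshold value, the orientation of the 128x64 grid and the exact centering/normalization rule are assumptions; the paper does not specify them.

import numpy as np

def normalize_signature(gray, threshold=128, out_shape=(128, 64)):
    """Threshold a grey-level scan to a binary image, crop to the ink's bounding box,
    and rescale into a fixed grid by nearest-neighbour resampling."""
    binary = (gray < threshold).astype(np.uint8)          # ink = 1, background = 0
    rows = np.flatnonzero(binary.any(axis=1))
    cols = np.flatnonzero(binary.any(axis=0))
    if rows.size == 0:                                    # blank card: return empty grid
        return np.zeros(out_shape, dtype=np.uint8)
    crop = binary[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    r_idx = (np.arange(out_shape[0]) * crop.shape[0] / out_shape[0]).astype(int)
    c_idx = (np.arange(out_shape[1]) * crop.shape[1] / out_shape[1]).astype(int)
    return crop[np.ix_(r_idx, c_idx)]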
SOFTWARE SIMULATION
The type of learning algorithm employed is that of backpropagation. Both dwell
and momentum are included. Dwell is the type of scheduling employed, in which
an image is presented to the network, and the network is allowed to "dwell" on that
input for a few iterations while updating its weights. C. Rosenberg and T. Sejnowski
have done a few studies on the effects of scheduling on learning {Rosenberg, 1986}.
Momentum is a term included in the change of weights equation to speed up learning
{Rumelhart, 1986}.
The software is written in Microsoft C, and run on an IBM PC/AT with an 80287
math co-processor chip.
Included in the simulation is a piece-wise linear approximation to the sigmoid transfer function as shown in Figure 1. This greatly improves the speed of calculation,
because an exponential is not calculated. The non-linearity is kept to allow for
layering of the network. Most of the details of initialization and update are the
same as that reported in NetTalk {Sejnowski, 1986}.
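The original simulation is written in C; the fragment below is a Python re-expression of the two ingredients just described, a piecewise-linear stand-in for the sigmoid and a weight update with a momentum term. The slope and clipping points of the linear segment, and the learning-rate and momentum values, are illustrative assumptions rather than the parameters actually used in the paper.

import numpy as np

def pw_linear_sigmoid(x, slope=0.25):
    """Piecewise-linear stand-in for the logistic sigmoid:
    a central linear segment clamped to the range [0, 1]."""
    return np.clip(0.5 + slope * x, 0.0, 1.0)

def backprop_step(w, x, target, lr=0.1, momentum=0.9, prev_dw=None, slope=0.25):
    """One weight update for a single sigmoidal unit, with a momentum term.
    The derivative is `slope` on the linear segment and 0 where clipped."""
    if prev_dw is None:
        prev_dw = np.zeros_like(w)
    a = w @ x
    y = pw_linear_sigmoid(a, slope)
    dyda = slope if 0.0 < y < 1.0 else 0.0
    grad = (y - target) * dyda * x          # gradient of the squared error
    dw = -lr * grad + momentum * prev_dw
    return w + dw, dw

Dwell scheduling, as described above, would simply repeat this update a few times on the same input before moving to the next image.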
Figure 1. Piece-wise linear transfer function.
Many different nets were trained in this signature verification project, all of which
were feed-forward. The output layer most often consisted of a single output neuron,
but 5 output neurons have been used as well. If a hidden layer was used, then
the number of hidden units ranged from 2 to 53. The networks were both fully-connected and partially-connected.
SAMPLE RUN
The simplest network is that of a single neuron taking all 128x64 pixels as input,
plus one bias. Each pixel has a weight associated with it, so that the total number
of weights is 128x64 + 1 = 8193. Each white pixel is assigned an input value of + 1,
each black pixel has a value of -1. The training set consists of 10 true signatures
with 10 forgeries. Figure 2a depicts the network structure of this sample run.
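A sketch of this single-unit experiment in Python (the paper's own code is in C): images are flattened arrays of +/-1 pixels, a bias input is appended to give the 8193 weights, and training continues until every one of the 20 training signatures is on the correct side of the threshold. The logistic non-linearity used here, the learning rate, the weight initialisation and the epoch limit are assumptions, and dwell scheduling is omitted.

import numpy as np

def train_single_unit(images, labels, lr=0.01, momentum=0.9, max_epochs=1000, seed=0):
    """Train one output unit on +/-1 pixel images (labels: 1 = true signature, 0 = forgery)
    until all training examples are classified correctly or the epoch limit is reached."""
    rng = np.random.default_rng(seed)
    X = np.hstack([images.reshape(len(images), -1), np.ones((len(images), 1))])  # bias input
    w = rng.normal(0.0, 0.01, X.shape[1])
    dw = np.zeros_like(w)
    for epoch in range(max_epochs):
        errors = 0
        for x, t in zip(X, labels):
            y = 1.0 / (1.0 + np.exp(-(w @ x)))
            if (y > 0.5) != (t > 0.5):
                errors += 1
            grad = (y - t) * y * (1.0 - y) * x      # squared-error gradient through the sigmoid
            dw = -lr * grad + momentum * dw
            w = w + dw
        if errors == 0:
            break
    return w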
Figure 2. Sample run.
a) Network = one output neuron, one weight per pixel, fully connected. Training set = 10 true signatures + 10 forgeries.
b) ROC plot for the sample run. (Probability of false acceptance
vs probability of true detection). Test set = 70 true signatures
+ 56 forgeries.
c) Clipped picture of the weights for the sample run. White =
positive weight, black = negative weight.
d) Cumulative distribution function for the true signatures (+) and
for the forgeries (0) of the sample run.
The network is trained on these 20 signatures until all signatures are classified
correctly. The trained network is then tested on the remaining 70 true signatures
and 56 forgeries.
The results are depicted in Figures 2b and 2d. Figure 2b is a radar operating
characteristic curve, or roc plot for short. In this presentation of data, the probability of detecting a true signature is plotted against the probability of accepting a
forgery. Roc plots have been used for some time in the radar sciences as a means
for visualizing performance {Marcum, 1960}. A perfect roc plot has a right angle
in the upper left-hand corner which would show perfect separation of true signatures from forgeries. The curve is plotted by varying the threshold for classification.
Everything above the threshold is labeled a true signature, everything below the
threshold is labeled a forgery. The roc plot in Figure 2b is close to perfect, but
there is some overlap in the output values of the true signatures and forgeries. The
overlap can be seen in the cumulative distribution functions (cdfs) for the true and
false signatures as shown in Figure 2d. As seen in the cdfs, there is fairly good
separation of the output values. For a given threshold of 0.5, the network produces
1% rejection of true signatures as false, with 4% acceptance of forgeries as being
true. If one lowers the threshold for classification down to 0.43, the true rejection
becomes nil, with a false acceptance of 7%. A simplified picture of the weights is
shown in Figure 2c, with white pixels designating positive weights, and black pixels
negative weights.
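The roc curves of Figures 2b and 3b are obtained by sweeping the decision threshold over the network outputs. A small sketch of that computation follows; the threshold grid and the handling of ties are assumptions, and the inputs are simply arrays of output values for the test signatures and forgeries.

import numpy as np

def roc_points(outputs_true, outputs_forged, thresholds=None):
    """Sweep the decision threshold and return (P(false acceptance), P(true detection))
    pairs, as plotted in the ROC figures."""
    scores = np.concatenate([outputs_true, outputs_forged])
    if thresholds is None:
        thresholds = np.sort(np.unique(scores))
    points = []
    for th in thresholds:
        p_detect = np.mean(outputs_true >= th)      # true signatures accepted
        p_false = np.mean(outputs_forged >= th)     # forgeries accepted
        points.append((p_false, p_detect))
    return points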
OTHER NETWORKS
The sample run above was expanded to include 2 and 3 hidden neurons with the
single output neuron. The results were similar to the single unit network, implying
that the separation is linear.
The 128x64 input image was also divided into regions, with each region feeding into
a single neuron. In one network structure, the input was sectioned into 32 equally
sized regions of 16x16 pixels. The hidden layer thus has 32 neurons, each neuron
receiving 16x16 + 1 inputs. The output neuron had 33 inputs. Likewise, the input
image was divided into 53 regions of 16x16 pixels, this time overlapping.
Finally, only the initials were presented to the network. (Handwriting experts
have noted that leading strokes and separate capital letters are very significant in
classification {Osborn, 1929}.) In this case, two types of networks were devised.
The first had a single output neuron, the second had three hidden neurons plus one
output neuron. Each of the hidden neurons received inputs from only one initial,
rather than from all three. The network with the single output neuron produced
the best results of all, with 2% true rejection and 2% false acceptance.
IMPORTANCE OF FORGERIES IN THE TRAINING SET
In all cases, the networks performed much better when forgeries were included in the
training set. When an all-white image is presented as the only forgery, performance
deteriorates significantly. When no forgeries are present, the network decides that
all signatures are true signatures. It is therefore desirable to include actual forgeries
in the training set, yet they may be impractical to obtain. One possibility for
avoiding the collection of forgeries is to use computer-generated forgeries. Another
is to distort the true signatures. A third is to use true signatures of other people as
forgeries for the person in question. The attraction of this last option is that the
masquerading forgeries are already available for use.
NETWORK WITHOUT FORGERIES
To test the use of true signatures of other people for forgeries, the following network
is devised. Once again, the input is the 128x64 pixel image. The output layer is
comprised of five output neurons fully connected to the input image. The function
of each output neuron is to be active when presented with a particular persons'
signature. When a forgery is present, the output is to be low. Figure 3a depicts this
network. The training set has 50 true signatures, ten for each of five people. Each
signature has a desired output of true for one neuron, and false for the remaining
four neurons. Once the network is trained, it is tested on 210 true signatures and
150 forgeries. Figures 3b and 3c record the results. At a threshold of 0.5, the true
rejection is 3% and the false acceptance is 14%. Decreasing the threshold down to
0.41 gives 0% true rejection and 28% false acceptance. These results are similar
to the sample run, though not as good. This is a simple demonstration of the use
of other true signatures as forgeries. More sophisticated techniques could improve
the discrimination. For instance, selecting names with similar lengths or spelling
should improve the classification.
CONCLUSION
Automated signature verification systems would be extremely important in the
business world for verifying monetary transactions. Countless dollars are lost each
day to instances of casual forgeries. An artificial neural network employing the
backpropagation learning algorithm has been trained on both true and false signatures for classification. The results have been very good: 2% rejection of genuine
signatures with 2% acceptance of forgeries. The analysis requires only the static
picture of the signature, thereby offering widespread use through centralized verification. True signatures of other people may substitute for the forgeries in the
training set - eliminating the need for collecting non-genuine signatures.
[Figure 3 plot area: (a) network with output units labelled JWG, JTH, TSW, LDK, ABH; (b) ROC plot, P(false acceptance) vs P(true detection); (c) cumulative distributions of output values.]
Figure 3. Network without forgeries for 5 individuals.
a) Network = 5 output neurons, one for each individual, as indicated by the initials.
Training set = 10 true signatures for each individual.
b) ROC plot for the network without forgeries. Test set = 210 true signatures + 150 forgeries.
c) Cumulative distribution function for the true signatures (+) and
for the forgeries (0) of the network without forgeries.
References
K. Fukishima and S. Miyake, "Neocognitron: A biocybernetic approach to visual
pattern recognition", in NHK Laboratories Note, Vol. 336, Sep 1986 (NHK
Science and Technical Research Laboratories, Tokyo).
P. Gorman and T. J. Sejnowski, "Learned classification of sonar targets using a
massively parallel network", in the proceedings of the IEEE ASSP Oct 21,
1986 DSP Workshop, Chatham, MA.
L. D. Jackel, H. P. Graf, W. Hubbard, J. S. Denker, and D. Henderson, "An
application of neural net chips: handwritten digit recognition", in IEEE International Conference on Neural Networks 1988, II 107-115.
J. T. Marcum, "A statistical theory of target detection by pulsed radar", in IRE
Transactions in Information Theory, Vol. IT-6 (Apr.), pp 145-267, 1960.
W. F. Nemcek and W. C. Lin, "Experimental investigation of automatic signature
verification", in IEEE Transactions on Systems, Man, and Cybernetics, Jan.
1974, pp 121-126.
A. S. Osborn, Questioned Documents, 2nd edition (Boyd Printing Co, Albany NY)
1929.
C. R. Rosenberg and T. J. Sejnowski, "The spacing effect on NETtalk, a massively parallel network", in Proceedings of the Eighth Annual Conference of
the Cognitive Science Society, (Hillsdale, New Jersey: Lawrence Erlbaum
Associates, 1986) 72-89.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation", in Parallel Distributed Processing: Explorations
in the Microstructures of Cognition. Vol. 1: Foundations, edited by D. E.
Rumelhart & J. L. McClelland, (MIT Press, 1986).
Y. Sato and K. Kogure, "Online signature verification based on shape, motion,
and writing pressure", in Proceedings of the 6th International Conference on
Pattern Recognition, Vol. 2, pp 823-826 (IEEE NY) 1982.
T. J. Sejnowski and C. R. Rosenberg, "NETtalk: A Parallel Network that Learns
to Read Aloud", Johns Hopkins University Department of Electrical Engineering and Computer Science Technical Report JHU /EECS-86/01, (1986).
V. V. Tolat and B. Widrow, "An adaptive 'broom balancer' with visual inputs" , in
IEEE International Conference on Neural Networks 1988, II 641-647.
| 105 |@word eliminating:1 nd:1 confirms:1 simulation:2 eng:3 pressure:2 initial:4 score:1 selecting:1 offering:1 document:2 protection:1 yet:1 written:5 must:1 john:1 shape:1 plot:6 fund:1 update:1 v:1 alone:1 implying:1 discrimination:1 device:2 short:1 record:1 accepting:1 lr:1 detecting:1 math:1 ire:1 five:2 consists:1 fullyconnected:1 themselves:1 decreasing:1 actual:1 becomes:2 project:2 linearity:1 impractical:1 temporal:1 every:1 collecting:1 control:1 unit:3 positive:2 engineering:1 black:3 plus:2 initialization:1 examined:2 co:2 cdfs:2 camera:2 lost:2 practice:1 backpropagation:8 digit:1 spot:1 jan:1 jhu:1 reject:1 significantly:1 boyd:1 word:1 disputed:1 get:1 close:1 scheduling:2 writing:4 williams:1 miyake:1 attraction:1 array:1 fication:1 x64:7 target:3 massive:1 us:1 designating:1 associate:1 velocity:1 rumelhart:3 recognition:9 updating:1 labeled:2 forgery:41 electrical:4 verifying:1 region:4 connected:3 edited:1 broken:1 wilkinson:4 dynamic:5 personal:1 signature:66 trained:6 radar:3 writer:6 basis:1 sep:1 chip:2 jersey:1 train:1 effective:1 sejnowski:5 artificial:1 aloud:1 stanford:6 final:1 online:1 net:3 monetary:2 entered:1 consent:1 everyday:1 produce:3 perfect:3 widrow:1 received:1 tokyo:1 centered:2 exploration:1 everything:2 hillsdale:1 require:2 feeding:1 investigation:1 adjusted:1 credit:1 deciding:1 lawrence:1 claim:1 layering:1 albany:1 applicable:1 jackel:3 hubbard:1 mit:1 rather:1 varying:1 broader:1 office:1 rosenberg:4 linguistic:1 dsp:1 check:3 greatly:1 dollar:2 detect:2 eliminate:1 entire:1 hidden:7 selects:1 pixel:10 classification:7 among:1 special:2 fairly:1 genuine:2 once:2 eliminated:1 report:1 few:2 individual:6 microsoft:1 detection:2 acceptance:10 centralized:3 possibility:1 certainly:1 henderson:1 pc:1 daily:1 machinery:1 abundant:1 plotted:2 desired:1 instance:2 cost:2 deciphering:1 comprised:1 successful:1 erlbaum:1 balancer:2 reported:1 eec:1 person:4 international:3 off:3 receiving:1 pool:2 hopkins:1 referenees:1 again:1 corner:1 expert:1 style:3 leading:1 piece:2 performed:2 lab:3 option:1 parallel:4 characteristic:2 likewise:1 gathered:1 handwritten:8 identification:1 produced:2 casual:4 processor:1 classified:1 stroke:1 distort:1 against:1 pp:3 associated:1 handwriting:5 static:6 improves:1 sophisticated:1 appears:1 feed:1 day:1 done:3 though:2 box:1 until:1 abh:1 hand:1 overlapping:1 propagation:1 microstructures:1 widespread:1 indicated:1 name:1 effect:2 normalized:2 true:35 consisted:1 ranged:1 assigned:1 read:1 laboratory:1 nettalk:3 white:4 visualizing:1 x5:1 ll:1 noted:1 oc:1 neocognitron:1 motion:1 image:11 wise:2 sigmoid:1 million:2 digitizing:1 significant:1 tolat:2 automatic:1 nhk:2 had:3 operating:1 pulsed:1 massively:2 binary:4 seen:2 greater:1 zip:1 employed:2 signal:1 ii:2 desirable:1 technical:2 calculation:1 lin:1 divided:2 devised:2 post:1 equally:1 iteration:1 spacing:1 dorothy:1 goodman:4 feedforward:1 sectioned:1 concerned:1 automated:3 fit:1 whether:2 effort:1 questioned:1 generally:1 useful:1 clear:1 involve:1 amount:2 ten:1 category:1 simplest:1 mcclelland:1 deteriorates:1 per:1 correctly:1 blue:1 vol:4 group:1 four:1 threshold:7 capital:1 preprocessed:1 verified:2 thresholded:1 kept:1 year:1 osborn:2 run:9 angle:1 letter:1 place:1 clipped:1 electronic:1 separation:3 capturing:1 layer:6 dwell:3 annual:1 sato:2 scanned:1 software:3 personalized:1 aspect:1 speed:2 extremely:2 expanded:1 department:1 according:1 character:5 joseph:1 legal:1 equation:1 previously:1 available:1 denker:1 professional:2 substitute:1 
remaining:2 include:2 ccd:1 society:1 objective:1 added:1 question:1 already:1 costly:1 spelling:1 separate:1 card:3 majority:1 collected:1 broom:2 consumer:1 code:1 length:1 index:1 demonstration:2 info:3 trace:1 negative:2 guideline:1 perform:1 upper:1 neuron:20 hinton:1 assp:1 reproducing:1 identi:1 learned:1 usually:1 below:1 pattern:2 eighth:1 reading:1 program:1 lend:1 overlap:2 business:2 improve:2 eye:1 picture:3 text:2 countless:1 graf:1 loss:1 fully:2 proven:1 foundation:1 verification:21 bank:1 cd:4 ibm:1 last:1 copy:1 jth:1 bias:1 allow:1 fall:1 taking:1 benefit:1 distributed:1 curve:2 calculated:1 world:1 cumulative:3 author:2 collection:2 forward:1 adaptive:1 simplified:1 far:1 employing:1 transaction:5 decides:1 active:1 sonar:2 transfer:3 ca:3 apr:1 edition:1 allowed:1 en:1 roc:6 depicts:2 x16:3 ny:2 position:1 momentum:2 exponential:1 third:1 printing:1 learns:1 down:3 jt:1 workshop:1 false:13 importance:1 gorman:2 rejection:7 marcum:2 depicted:1 timothy:1 visual:2 partially:1 extracted:1 ma:1 oct:1 identity:1 presentation:1 sized:1 consequently:1 man:1 content:1 change:1 included:5 total:1 nil:1 experimental:1 internal:1 people:5 dept:3 tested:2 avoiding:1 |
58 | 1,050 | Family Discovery
Stephen M. Omohundro
NEC Research Institute
4 Independence Way, Princeton, NJ 08540
om@research.nj.nec.com
Abstract
"Family discovery" is the task of learning the dimension and structure of a parameterized family of stochastic models. It is especially appropriate when the training examples are partitioned into
"episodes" of samples drawn from a single parameter value. We
present three family discovery algorithms based on surface learning and show that they significantly improve performance over two
alternatives on a parameterized classification task.
1 INTRODUCTION
Human listeners improve their ability to recognize speech by identifying the accent
of the speaker. "Might" in an American accent is similar to "mate" in an Australian
accent. By first identifying the accent, discrimination between these two words is
improved. We can imagine locating a speaker in a "space of accents" parameterized
by features like pitch, vowel formants, "r" -strength, etc. This paper considers the
task of learning such parameterized models from data.
Most speech recognition systems train hidden Markov models on labelled speech
data. Speaker-dependent systems train on speech from a single speaker. Speakerindependent systems are usually similar, but are trained on speech from many
different speakers in the hope that they will then recognize them all. This kind of
training ignores speaker identity and is likely to result in confusion between pairs of
words which are given the same pronunciation by speakers with different accents.
Speaker-independent recognition systems could more closely mimic the human approach by using a learning paradigm we call "family discovery". The system would
be trained on speech data partitioned into "episodes" for each speaker. From this
data, the system would construct a parameterized family of models representing dif-
[Figure 1 panels: Affine Family; Affine Patch Family; Coupled Map Family.]
Figure 1: The structure of the three family discovery algorithms.
ferent accents. The learning algorithms presented in this paper could determine the
dimension and structure of the parameterization. Given a sample of new speech,
the best-fitting accent model would be used for recognition.
The same paradigm applies to many other recognition tasks. For example, an OCR
system could learn a parameterized family of font models (Revow, et. al., 1994).
Given new text, the system would identify the document's font parameters and use
the corresponding character recognizer.
In general, we use "family discovery" to refer to the task of learning the dimension
and structure of a parameterized family of stochastic models. The methods we
present are equally applicable to parameterized density estimation, classification,
regression, manifold learning, reinforcement learning, clustering, stochastic grammar learning, and other stochastic settings. Here we only discuss classification and
primarily consider training examples which are explicitly partitioned into episodes.
This approach fits naturally into the neural network literature on "meta-learning"
(Schmidhuber, 1995) and "network transfer" (Pratt, 1994). It may also be considered as a particular case of the "bias learning" framework proposed by Baxter at
this conference (Baxter, 1996).
There are two primary alternatives to family discovery: 1) try to fit a single model
to the data from all episodes or 2) use separate models for each episode. The first
approach ignores the information that the different training sets came from distinct
models. The second approach eliminates the possibility of inductive generalization
from one set to another.
In Section 2, we present three algorithms for family discovery based on techniques
for "surface learning" (Bregler and Omohundro, 1994 and 1995). As shown in Figure
1, the three alternative representations of the family are: 1) a single affine subspace
of the parameter space, 2) a set of local affine patches smoothly blended together,
and 3) a pair of coupled maps from the parameter space into the model space and
back. In Section 3, we compare these three approaches to the two alternatives on a
parameterized classification task.
2 THE FIVE ALGORITHMS
Let the space of all classifiers under consideration be parameterized by θ and assume
that different values of θ correspond to different classifiers (i.e. it is identifiable). For
example, θ might represent the means, covariances, and class priors of a classifier
with normal class-conditional densities. θ-space will typically have a much higher
dimension than the parameterized family we are seeking. We write P_θ(x) for the
total probability that the classifier θ assigns to a labelled or unlabelled example x.
The true models are drawn from a d-dimensional family parameterized by γ. Let the
training set be partitioned into N episodes where episode i consists of N_i training
examples t_ij, 1 ≤ j ≤ N_i, drawn from a single underlying model with parameter θ_i*.
A family discovery learning algorithm uses this training data to estimate the
underlying parameterized family.
From a parameterized family, we may define the projection operator P from θ-space
to itself which takes each θ to the closest member of the family. Using this projection
operator, we may define a "family prior" on θ-space which dies off exponentially
with the square distance of a model from the family: m_P(θ) ∝ e^(−(θ−P(θ))²). Each
of the family discovery algorithms chooses a family so as to maximize the posterior
probability of the training data with respect to this prior. If the data is very
sparse, this MAP approximation to a full Bayesian solution can be supplemented
by "Occam" terms (MacKay, 1995) or by using a Monte Carlo approximation.
The outer loop of each of the algorithms performs the optimization of the fit of the
data by re-estimation in a manner similar to the Expectation Maximization (EM)
approach (Jordan and Jacobs, 1994). First, the training data in each episode i is
independently fit by a model θ_i. Then the dimension of the family is determined
as described later and the family projection operator P is chosen to maximize the
probability that the episode models θ_i came from that family, ∏_i m_P(θ_i). The
episode models θ_i are then re-estimated including the new prior probability m_P.
These newly re-estimated models are influenced by the other episodes through m_P
and so exhibit training set "transfer". The re-estimation loop is repeated until
nothing changes.
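The structure of this outer loop can be summarised in a few lines of Python. Here fit_episode and fit_projection are placeholder callables standing for whichever episode-model fitting routine and family representation (affine, affine patch, or coupled map) are in use; the prior scale tau and the fixed iteration count are assumptions, since the paper instead iterates until nothing changes.

import numpy as np

def reestimate_family(episodes, fit_episode, fit_projection, n_iters=10, tau=1.0):
    """Alternately fit per-episode models theta_i and a family projection operator P,
    using the family prior m_P(theta) proportional to exp(-||theta - P(theta)||^2 / tau)."""
    thetas = [fit_episode(ep, prior=None) for ep in episodes]   # initial independent fits
    for _ in range(n_iters):
        P = fit_projection(np.array(thetas))                    # e.g. affine / patch / coupled-map family
        def log_prior(theta, P=P):
            d = theta - P(theta)
            return -float(d @ d) / tau
        thetas = [fit_episode(ep, prior=log_prior) for ep in episodes]
    return P, thetas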
The learned family can then be used to classify a set of N_test unlabelled test examples x_k, 1 ≤ k ≤ N_test, drawn from a model θ*_test in the family. First, the parameter
θ_test is estimated by selecting the member of the family with the highest likelihood
on the test samples. This model is then used to perform the classification. A good
approximation to the best-fit family member is often to take the image of the best-fit
model in the entire θ-space under the projection operator P.
In the next five sections, we describe the two alternative approaches and the three
family discovery algorithms. They differ only in their choice of family representation
as encoded in the projection operator P.
2.1 The Single Model Approach
The first alternative approach is to train a single model on all of the training data.
It selects θ to maximize the total likelihood L(θ) = ∏_{i=1}^{N} ∏_{j=1}^{N_i} P_θ(t_ij). New test
data is classified by this single selected model.
2.2 The Separate Models Approach
The second alternative approach fits separate models for each training episode. It
chooses θ_i for 1 ≤ i ≤ N to maximize the episode likelihood L_i(θ_i) = ∏_{j=1}^{N_i} P_{θ_i}(t_ij).
Given new test data, it determines which of the individual models θ_i fits best and
classifies the data with it.
2.3 The Affine Algorithm
The affine model represents the underlying model family as an affine subspace of
the model parameter space. The projection operator P_affine projects a parameter
vector θ orthogonally onto the affine subspace. The subspace is determined by
selecting the top principal vectors in a principal components analysis of the best-fit episode model parameters. As described in (Bregler & Omohundro, 1994) the
dimension is chosen by looking for a gap in the principal values.
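A sketch of this affine family in NumPy: principal components analysis of the episode model parameters, an orthogonal projection onto the retained subspace, and a simple gap heuristic for the dimension. The gap_ratio rule is an assumption; the paper only says that a gap in the principal values is looked for.

import numpy as np

def fit_affine_family(thetas, gap_ratio=3.0):
    """Fit the affine-subspace family from the best-fit episode parameters and
    return (projection operator, chosen dimension)."""
    thetas = np.asarray(thetas, dtype=float)
    mean = thetas.mean(axis=0)
    _, s, vt = np.linalg.svd(thetas - mean, full_matrices=False)
    d = len(s)
    for k in range(1, len(s)):           # pick d at the first large drop in principal values
        if s[k - 1] > gap_ratio * s[k]:
            d = k
            break
    basis = vt[:d]                       # d x dim(theta), orthonormal rows

    def project(theta):
        """Orthogonal projection of a parameter vector onto the affine subspace."""
        centred = np.asarray(theta, dtype=float) - mean
        return mean + basis.T @ (basis @ centred)
    return project, d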
2.4 The Affine Patch Algorithm
The second family discovery algorithm is based on the "surface learning" procedure described in (Bregler and Omohundro, 1994). The family is represented by
a collection of local affine patches which are blended together using Gaussian influence functions. The projection mapping Ppatch is a smooth convex combination
of projections onto the affine patches, P_patch(θ) = Σ_{α=1}^{k} I_α(θ) A_α(θ), where A_α is
the projection operator for an affine patch and I_α(θ) = G_α(θ) / Σ_β G_β(θ) is a normalized
Gaussian blending function.
The patches are initialized using k-means clustering on the episode models to choose
k patch centers. A local principal components analysis is performed on the episode
models which are closest to each center. The family dimension is determined by
examining how the principal values scale as successive nearest neighbors are considered. Each patch may be thought of as a "pancake" lying in the surface. Dimensions
which belong to the surface grow quickly as more neighbors are considered while
dimensions across the surface grow only because of the curvature of the surface.
The Gaussian influence functions and the affine patches are then updated by the
EM algorithm (Jordan and Jacobs, 1994). With the affine patches held fixed, the
Gaussians G_α are refit to the errors each patch makes in approximating the episode
models. Then with the Gaussians held fixed, the affine patches A_α are refit to the
episode models weighted by the corresponding Gaussian G_α. Similar patches
may be merged together to form a more parsimonious model.
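The blended projection that results can be sketched as follows, assuming each fitted patch is summarised by a local mean, an orthonormal basis for its affine "pancake", and the centre and width of its Gaussian influence function. The k-means initialisation and EM refits themselves are omitted, and the dictionary layout is purely illustrative.

import numpy as np

def make_patch_projection(patches):
    """Blended projection P_patch for the affine-patch family; `patches` is a list of
    dicts with keys 'mean', 'basis' (orthonormal rows), 'centre' and 'width'."""
    def project(theta):
        theta = np.asarray(theta, dtype=float)
        weights, images = [], []
        for p in patches:
            g = np.exp(-np.sum((theta - p["centre"]) ** 2) / (2.0 * p["width"] ** 2))
            affine = p["mean"] + p["basis"].T @ (p["basis"] @ (theta - p["mean"]))
            weights.append(g)
            images.append(affine)
        weights = np.array(weights)
        weights = weights / weights.sum()           # normalized Gaussian blending
        return np.sum(weights[:, None] * np.array(images), axis=0)
    return project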
2.5 The Coupled Map Algorithm
The affine patch approach has the virtue that it can represent topologically complex
families (eg. families representing physical objects might naturally be parameterized
by the rotation group which is topologically a projective plane). It cannot, however,
provide an explicit parameterization of the family which is useful in some applications (eg. optimization searches). The third family discovery algorithm therefore
attempts to directly learn a parameterization of the model family.
Recall that the model parameters define θ-space, while the family parameters define γ-space. We represent a family by a mapping G from θ-space to γ-space together with a mapping F from γ-space back to θ-space. The projection operation
is P_map(θ) = F(G(θ)). The map G(θ) defines the family parameter γ on the full
θ-space.
This representation is similar to an "auto-associator" network in which we attempt
to "encode" the best-fit episode parameters θ_i in the lower dimensional γ-space
by the mapping G in such a way that they can be correctly reconstructed by the
function F. Unfortunately, if we try to train F and G using back-propagation on
the identity error function, we get no training data away from the family. There is
no reason for G to project points away from the family to the closest family member.
We can rectify this by training F and G iteratively. First an arbitrary G is chosen
and F is trained to send the images γ_i = G(θ_i) back to θ_i. G is trained, however,
on images under F corrupted by additive spherical Gaussian noise! This provides
samples away from the family and on average the training signal sends each point
in θ-space to the closest family member.
To avoid iterative training, our experiments used a simpler approach. G was taken to
be the affine projection operator defined by a global principal components analysis
of the best-fit episode model parameters. Once G is defined, F is chosen to minimize
the difference between F(G(θ_i)) and θ_i for each best-fit episode parameter θ_i.
Any form of trainable nonlinear mapping could be used for F (eg. backprop neural
networks or radial basis function networks). We represent F as a mixture of experts
(Jordan and Jacobs, 1994) where each expert is an affine mapping and the mixture
coefficients are Gaussians. The mapping is trained by the EM algorithm.
3 ALGORITHM COMPARISON
To compare these five algorithms, we consider a two-class classification task with
unit-variance normal class-conditional distributions on a 5-dimensional feature
space. The means of the class distributions are parameterized by a nonlinear two-parameter family:

    m1 = (γ1 + ½ cos φ) e1 + (γ2 + ½ sin φ) e2
    m2 = (γ1 − ½ cos φ) e1 + (γ2 − ½ sin φ) e2,

where 0 ≤ γ1, γ2 ≤ 10 and φ = (γ1 + γ2)/3. The class means are kept at a unit
distance apart, ensuring significant class overlap over the whole family. The angle
φ varies with the parameters so that the correct classification boundary changes
orientation over the family. This choice of parameters introduces sufficient nonlinearity in the task to distinguish the non-linear algorithms from the linear one.
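A sketch of how one episode of this benchmark can be generated, taking e1 and e2 to be the first two coordinate vectors of the 5-dimensional feature space and assuming equal class priors (the paper does not state the priors explicitly):

import numpy as np

def sample_episode(gamma1, gamma2, n, rng):
    """Draw n labelled samples from the two-class task of Section 3: unit-variance
    normal classes in R^5 with means m1, m2 from the parameterized family."""
    phi = (gamma1 + gamma2) / 3.0
    e1, e2 = np.eye(5)[0], np.eye(5)[1]
    m1 = (gamma1 + 0.5 * np.cos(phi)) * e1 + (gamma2 + 0.5 * np.sin(phi)) * e2
    m2 = (gamma1 - 0.5 * np.cos(phi)) * e1 + (gamma2 - 0.5 * np.sin(phi)) * e2
    labels = rng.integers(0, 2, size=n)
    means = np.where(labels[:, None] == 0, m1, m2)
    x = means + rng.normal(size=(n, 5))
    return x, labels

rng = np.random.default_rng(0)
gammas = rng.uniform(0, 10, size=2)          # episode parameters drawn uniformly
x, y = sample_episode(gammas[0], gammas[1], n=50, rng=rng)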
Figure 2 shows the comparative performance of the 5 algorithms. The x-axis is the
total number of training examples. Each set of examples consisted of approximately
N = √x episodes of approximately N_i = √x examples each, where x is the total number
of examples. The classifier parameters for an episode were drawn uniformly from the classifier family. The episode
training examples were then sampled from the chosen classifier according to the
classifier's distribution. Each of the 5 algorithms was then trained on these examples. The number of patches in the surface patch algorithm and the number of affine
components in the surface map algorithm were both taken to be the square-root of
the number of training episodes.
[Figure 2 plot area. Legend: Single model; Separate models; Affine family; Affine Patch family; Map Mixture family. Horizontal axis: Number of Examples.]
Figure 2: A comparison of the 5 family discovery algorithms on the classification
task.
The y-axis shows the percentage correct for each algorithm on an independent test
set. Each test set consisted of 50 episodes of 50 examples each. The algorithms
were presented with unlabelled data and their classification predictions were then
compared with the correct classification label.
The results show significant improvement through the use of family discovery for
this classification task. The single model approach performed significantly worse
than any of the other approaches, especially for larger numbers of episodes (where
the family discovery becomes possible). The separate model approach improves with
the number of episodes, but is nearly always bested by the approaches which take
explicit account of the underlying parameterized family. Because of the nonlinearity
in this task, the simple affine model performs more poorly than the two nonlinear
methods. It is simple to implement, however, and may well be the method of choice
when the parameters aren't so nonlinear. From this data, there is not a clear winner
between the surface patch and surface map approaches.
4 TRAINING SET DISCOVERY
Throughout this paper, we have assumed that the training set was partitioned into
episodes by the teacher. Agents interacting with the world may not be given this
explicit information. For example, a speech recognition system may not be told
when it is conversing with a new speaker. Similarly, a character recognition system
would probably not be given explicit information about font changes. Learners can
sometimes use the data itself to detect these changes, however. In many situations
there is a strong prior that successive events are likely to have come from a single
model with only occasional model changes. The EM algorithm is often used for
segmenting unlabelled speech. It may be used in a similar manner to find the
training set episode boundaries. First, a clustering algorithm is used to partition
the training examples into episodes. A parameterized family is then fit to these
episodes. The data is then repartitioned according to the similarity of the induced
family parameters and the process is repeated until it converges. A similar approach
may be applied when the model parameters vary slowly with time rather than
occasionally jumping discontinuously.
Acknowledgements
I'd like to thank Chris Bregler for work on the affine patch approach to surface
learning, Alexander Linden for suggesting coupled maps for surface learning, and
Peter Blicher for discussions.
References
Baxter, J. (1995) Learning model bias. This volume.
Bregler, C. & Omohundro, S. (1994) Surface learning with applications to lipreading. In J. Cowan, G. Tesauro and J. Alspector (eds.), Advances in Neural Information Processing Systems 6, pp. 43-50. San Francisco, CA: Morgan Kaufmann
Publishers.
Bregler, C. & Omohundro, S. (1995) Nonlinear image interpolation using manifold
learning. In G. Tesauro, D. Touretzky and T. Leen (eds .), Advances in Neural
Information Processing Systems 7. Cambridge, MA: MIT Press.
Bregler, C. & Omohundro, S. (1995) Nonlinear manifold learning for visual speech
recognition. In W . Grimson (ed.), Proceedings of the Fifth International Conference
on Computer Vision.
Jordan, M. & Jacobs, R. (1994) Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6:181-214.
MacKay, D. (1995) Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks. Network, to appear.
Pratt, L. (1994) Experiments on the transfer of knowledge between neural networks.
In S. Hanson, G. Drastal, and R. Rivest (eds.) , Computational Learning Theory and
Natural Learning Systems, Constraints and Prospects, pp. 523-560. Cambridge,
MA: MIT Press.
Revow, M., Williams, C. and Hinton, G. (1994) Using generative models for handwritten digit recognition. Technical report, University of Toronto.
Schmidhuber, J. (1995) On learning how to learn learning strategies. Technical
Report FKI-198-94, Fakultat fur Informatik, Technische Universitat Munchen.
| 1050 |@word covariance:1 jacob:4 selecting:2 document:1 com:1 additive:1 partition:1 speakerindependent:1 discrimination:1 generative:1 selected:1 parameterization:3 plane:1 xk:1 provides:1 toronto:1 successive:2 simpler:1 five:3 consists:1 pmap:1 fitting:1 manner:2 alspector:1 formants:1 spherical:1 p9:2 becomes:1 project:2 classifies:1 underlying:4 rivest:1 kind:1 nj:2 classifier:8 unit:2 appear:1 segmenting:1 local:3 io:1 interpolation:1 approximately:2 might:3 co:2 dif:1 projective:1 bi:3 practical:1 implement:1 digit:1 procedure:1 significantly:2 thought:1 projection:11 word:2 radial:1 get:1 onto:2 cannot:1 operator:8 influence:2 map:9 center:2 send:1 go:2 williams:1 independently:1 convex:1 identifying:2 assigns:1 m2:1 updated:1 imagine:1 us:1 recognition:8 episode:30 highest:1 prospect:1 grimson:1 ui:1 trained:6 learner:1 basis:1 represented:1 listener:1 train:4 distinct:1 describe:1 monte:1 pronunciation:1 encoded:1 larger:1 plausible:1 grammar:1 ability:1 itself:2 loop:2 poorly:1 comparative:1 converges:1 object:1 nearest:1 ij:1 strong:1 come:1 australian:1 differ:1 closely:1 merged:1 correct:3 stochastic:4 human:2 backprop:1 ao:3 generalization:1 probable:1 bregler:7 blending:1 lying:1 considered:3 normal:2 mapping:7 vary:1 recognizer:1 estimation:3 applicable:1 label:1 weighted:1 hope:1 mit:2 gaussian:5 always:1 rather:1 avoid:1 ej:1 encode:1 improvement:1 fur:1 likelihood:3 detect:1 dependent:1 typically:1 entire:1 hidden:1 selects:1 classification:11 orientation:1 mackay:2 construct:1 once:1 represents:1 nearly:1 mimic:1 report:2 primarily:1 recognize:2 individual:1 vowel:1 attempt:2 possibility:1 introduces:1 mixture:4 held:2 jumping:1 pancake:1 initialized:1 re:4 classify:1 blended:2 maximization:1 technische:1 examining:1 pal:1 universitat:1 varies:1 corrupted:1 teacher:1 chooses:2 density:2 international:1 ie:1 told:1 off:1 yl:2 together:4 quickly:1 choose:1 slowly:1 worse:1 american:1 expert:3 li:1 account:1 suggesting:1 de:1 coefficient:1 explicitly:1 mp:4 later:1 try:2 performed:2 root:1 om:1 square:2 oi:9 ni:3 minimize:1 variance:1 kaufmann:1 correspond:1 identify:1 bayesian:2 handwritten:1 fki:1 informatik:1 carlo:1 classified:1 influenced:1 touretzky:1 ed:4 pp:2 naturally:2 sampled:1 newly:1 recall:1 knowledge:1 improves:1 back:4 higher:1 supervised:1 improved:1 leen:1 until:2 nonlinear:6 propagation:1 accent:8 defines:1 normalized:1 true:1 y2:2 consisted:2 inductive:1 iteratively:1 eg:3 sin:2 speaker:10 omohundro:10 confusion:1 performs:2 image:4 consideration:1 rotation:1 physical:1 winner:1 exponentially:1 volume:1 belong:1 refer:1 significant:2 cambridge:2 similarly:1 nonlinearity:2 rectify:1 similarity:1 surface:14 etc:1 curvature:1 closest:4 posterior:1 apart:1 tesauro:2 schmidhuber:2 occasionally:1 meta:1 came:2 yi:1 lipreading:1 morgan:1 determine:1 paradigm:2 maximize:4 signal:1 stephen:1 full:2 smooth:1 technical:2 unlabelled:4 equally:1 ensuring:1 pitch:1 prediction:2 regression:1 vision:1 expectation:1 represent:4 sometimes:1 fine:1 grow:2 sends:1 publisher:1 conversing:1 eliminates:1 probably:1 induced:1 cowan:1 member:5 epsiode:1 jordan:4 call:1 pratt:2 baxter:3 independence:1 fit:12 locating:1 peter:1 speech:10 tij:4 useful:1 clear:1 percentage:1 estimated:3 correctly:1 write:1 group:1 drawn:5 kept:1 angle:1 parameterized:18 topologically:2 family:68 throughout:1 patch:20 parsimonious:1 dy:1 distinguish:1 identifiable:1 strength:1 constraint:1 according:2 combination:1 across:1 em:5 character:2 partitioned:5 taken:2 discus:1 gaussians:3 operation:1 
munchen:1 ocr:1 away:3 appropriate:1 occasional:1 hierarchical:1 alternative:7 top:1 clustering:3 especially:2 approximating:1 seeking:1 font:3 strategy:1 primary:1 exhibit:1 subspace:4 distance:2 separate:5 thank:1 outer:1 chris:1 manifold:3 considers:1 reason:1 unfortunately:1 refit:2 perform:1 markov:1 mate:1 situation:1 hinton:1 looking:1 interacting:1 arbitrary:1 pair:2 hanson:1 fakultat:1 learned:1 usually:1 including:1 overlap:1 event:1 natural:1 representing:2 improve:2 orthogonally:1 axis:2 coupled:4 auto:1 text:1 prior:5 literature:1 discovery:20 l2:1 acknowledgement:1 review:1 agent:1 affine:23 sufficient:1 occam:1 bias:2 institute:1 neighbor:2 fifth:1 sparse:1 boundary:2 dimension:9 world:1 ferent:1 ignores:2 collection:1 reinforcement:1 san:1 reconstructed:1 ml:1 global:1 assumed:1 francisco:1 repartitioned:1 search:1 iterative:1 learn:3 transfer:3 associator:1 ca:1 complex:1 whole:1 noise:1 nothing:1 repeated:2 explicit:4 third:1 supplemented:1 virtue:1 linden:1 nec:2 gap:1 aren:1 smoothly:1 likely:2 visual:1 applies:1 determines:1 ma:2 conditional:2 identity:2 labelled:2 revow:2 change:5 determined:3 uniformly:1 principal:6 total:3 est:1 drastal:1 otest:1 alexander:1 princeton:1 trainable:1 ex:1 |
59 | 1,051 | Neural Networks with Quadratic VC
Dimension
Pascal Koiran*
Lab. de l'Informatique du Parallélisme
Ecole Normale Superieure de Lyon - CNRS
69364 Lyon Cedex 07, France
Eduardo D. Sontag†
Department of Mathematics
Rutgers University
New Brunswick, NJ 08903, USA
Abstract
This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square
of the number of weights w. This result settles a long-standing
open question, namely whether the well-known O( w log w) bound,
known for hard-threshold nets, also held for more general sigmoidal
nets. Implications for the number of samples needed for valid generalization are discussed.
1 Introduction
One of the main applications of artificial neural networks is to pattern classification
tasks. A set of labeled training samples is provided, and a network must be obtained
which is then expected to correctly classify previously unseen inputs. In this context,
a central problem is to estimate the amount of training data needed to guarantee
satisfactory learning performance. To study this question, it is necessary to first
formalize the notion of learning from examples.
One such formalization is based on the paradigm of probably approximately correct
(PAC) learning, due to Valiant (1984). In this framework, one starts by fitting some
function f, chosen from a predetermined class F, to the given training data. The
class F is often called the "hypothesis class" , and for purposes of this discussion it
will be assumed that the functions in F take binary values {0, 1} and are defined on a
common domain X. (In neural networks applications, typically F corresponds to the
set of all neural networks with a given architecture and choice of activation functions.
The elements of X are the inputs, possibly multidimensional.) The training data
consists of labeled samples (x_i, c_i), with each x_i ∈ X and each c_i ∈ {0, 1}, and
*koiran@lip.ens-lyon.fr
†sontag@hilbert.rutgers.edu
"fitting" by an f means that f(x_i) = c_i for each i. Given a new example x, one
uses f(x) as a guess of the "correct" classification of x. Assuming that both training
inputs and future inputs are picked according to the same probability distribution
on X, one needs that the space of possible inputs be well-sampled by the training
data, so that f is an accurate fit. We omit the details of the formalization of
PAC learning, since there are excellent references available, both in textbook (e.g.
Anthony and Biggs (1992), Natarajan (1991)) and survey paper (e.g. Maass (1994))
form, and the concept is by now very well-known.
After the work of Vapnik (1982) in statistics and of Blumer et al. (1989) in computational learning theory, one knows that a certain combinatorial quantity, called
the Vapnik-Chervonenkis (VC) dimension VC(F) of the class F of interest completely characterizes the sample sizes needed for learnability in the PAC sense. (The
appropriate definitions are reviewed below. In Valiant's formulation one is also interested in quantifying the computational effort required to actually fit a function
to the given training data, but we are ignoring that aspect in the current paper.)
Very roughly speaking, the number of samples needed in order to learn reliably is
proportional to VC(F). Estimating VC(F) then becomes a central concern. Thus
from now on, we speak exclusively of VC dimension, instead of the original PAC
learning problem.
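To make the dependence on VC(F) concrete, the following sketch evaluates a sufficient sample size of the familiar form (c/ε)(d·log(1/ε) + log(1/δ)) for the two VC-dimension regimes discussed in this paper. The constants are illustrative only (they differ between statements of the theorem) and are not taken from this paper.

import math

def pac_sample_bound(vc_dim, eps, delta):
    """Illustrative sufficient sample size; constants c and the log arguments are assumptions."""
    c = 4.0
    return math.ceil((c / eps) * (vc_dim * math.log(2.0 / eps) + math.log(2.0 / delta)))

w = 100
print(pac_sample_bound(vc_dim=w * math.ceil(math.log(w)), eps=0.1, delta=0.05))  # O(w log w) regime
print(pac_sample_bound(vc_dim=w * w, eps=0.1, delta=0.05))                       # O(w^2) regime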
The work of Cover (1988) and Baum and Haussler (1989) dealt with the computation of VC(F) when the class F consists of networks built up from hard-threshold
activations and having w weights; they showed that VC(F)= O(wlogw). (Conversely, Maass (1993) showed that there is also a lower bound of this form.) It
would appear that this definitely settled the VC dimension (and hence also the
sample size) question.
However, the above estimate assumes an architecture based on hard-threshold
("Heaviside") neurons. In contrast, the usually employed gradient descent learning
algorithms ("backpropagation" method) rely upon continuous activations, that is,
neurons with graded responses. As pointed out in Sontag (1989), the use of analog activations, which allow the passing of rich (not just binary) information among
levels, may result in higher memory capacity as compared with threshold nets. This
has serious potential implications in learning, essentially because more memory capacity means that a given function f may be able to "memorize" in a "rote" fashion
too much data, and less generalization is therefore possible. Indeed, Sontag (1992)
showed that there are conceivable (though not very practical) neural architectures
with extremely high VC dimensions. Thus the problem of studying VC(F) for analog networks is an interesting and relevant issue. Two important contributions in
this direction were the papers by Maass (1993) and by Goldberg and Jerrum (1995),
which showed upper bounds on the VC dimension of networks that use piecewise
polynomial activations. The last reference, in particular, established for that case
an upper bound of O(w^2), where, as before, w is the number of weights. However
it was an open problem (specifically, "open problem number 7" in the recent survey
by Maass (1993)) whether there is a matching w^2 lower bound for such networks, and more
generally for arbitrary continuous-activation nets. It could have been the case that
the upper bound O(w^2) is merely an artifact of the method of proof in Goldberg
and Jerrum (1995), and that reliable learning with continuous-activation networks
is still possible with far smaller sample sizes, proportional to O(w log w). But this is
not the case, and in this paper we answer Maass' open question in the affirmative.
Assume given an activation σ which has different limits at ±∞, and is such that
there is at least one point where it has a derivative and the derivative is nonzero
(this last condition rules out the Heaviside activation). Then there are architectures
with arbitrarily large numbers of weights w and VC dimension proportional
to w^2. The proof relies on first showing that networks consisting of two types of
activations, Heavisides and linear, already have this power. This is a somewhat
surprising result, since purely linear networks result in VC dimension proportional
to w, and purely threshold nets have, as per the results quoted above, VC dimension
bounded by w log w. Our construction was originally motivated by a related one,
given in Goldberg and Jerrum (1995), which showed that real-number programs (in
the Blum-Shub-Smale (1989) model of computation) with running time T have VC
dimension O(T^2). The desired result on continuous activations is then obtained,
approximating Heaviside gates by σ-nets with large weights and approximating linear gates by σ-nets with small weights. This result applies in particular to the
standard sigmoid 1/(1 + e^{-x}). (However, in contrast with the piecewise-polynomial
case, there is still in that case a large gap between our O(w^2) lower bound and
the O(w^4) upper bound which was recently established in Karpinski and Macintyre (1995).) A number of variations, dealing with Boolean inputs, or weakening
the assumptions on σ, are discussed. The full version of this paper also includes
some remarks on threshold networks with a constant number of linear gates, and
threshold-only nets with "shared" weights.
Basic Terminology and Definitions
Formally, a (first-order, feedforward) architecture or network A is a connected directed acyclic graph together with an assignment of a function to a subset of its
nodes. The nodes are of two types: those of fan-in zero are called input nodes and
the remaining ones are called computation nodes or gates. An output node is a node
of fan-out zero. To each gate g there is associated a function σ_g : ℝ → ℝ, called the
activation or gate function associated to g.
The number of weights or parameters associated to a gate g is the integer n_g equal
to the fan-in of g plus one. (This definition is motivated by the fact that each input
to the gate will be multiplied by a weight, and the results are added together with
a "bias" constant term, seen as one more weight; see below.) The (total) number
of weights (or parameters) of A is by definition the sum of the numbers n_g, over all
the gates g of A. The number of inputs m of A is the total number of input nodes
(one also says that "A has inputs in ℝ^m"); it is assumed that m > 0. The number
of outputs p of A is the number of output nodes (unless otherwise mentioned, we
assume by default that all nets considered have one-dimensional outputs, that is,
p = 1).
Two examples of gate functions that are of particular interest are the identity or
linear gate, Id(x) = x for all x, and the threshold or Heaviside function, H(x) = 1
if x ≥ 0, H(x) = 0 if x < 0.
Let A be an architecture. Assume that nodes of A have been linearly ordered as
π_1, ..., π_m, g_1, ..., g_l, where the π_j's are the input nodes and the g_j's the gates. For
simplicity, write n_i := n_{g_i} for each i = 1, ..., l. Note that the total number of
parameters is n = \sum_{i=1}^{l} n_i and the fan-in of each g_i is n_i - 1. To each architecture
A (strictly speaking, an architecture together with such an ordering of nodes) we
associate a function
F : ℝ^m × ℝ^n → ℝ^p,
where p is the number of outputs of A, defined by first assigning an "output" to
each node, recursively on the distance from the input nodes. Assume given
an input x in ℝ^m and a vector of weights w in ℝ^n. We partition w into blocks
(w_1, ..., w_l) of sizes n_1, ..., n_l respectively. First the coordinates of x are assigned
as the outputs of the input nodes π_1, ..., π_m respectively. For each of the other
gates g_i, we proceed as follows. Assume that outputs y_1, ..., y_{n_i - 1} have already
been assigned to the predecessor nodes of g_i (these are input and/or computation
nodes, listed consistently with the order fixed in advance). Then the output of g_i
is by definition
σ_{g_i}(w_{i,0} + w_{i,1} y_1 + w_{i,2} y_2 + ... + w_{i,n_i-1} y_{n_i-1}),
where we are writing w_i = (w_{i,0}, w_{i,1}, w_{i,2}, ..., w_{i,n_i-1}). The value of F(x, w) is
then by definition the vector (scalar if p = 1) obtained by listing the outputs of the
output nodes (in the agreed-upon fixed ordering of nodes). We call F the function
computed by the architecture A. For each choice of weights w in ℝ^n, there is a
function F_w : ℝ^m → ℝ^p defined by F_w(x) := F(x, w); by abuse of terminology we
sometimes call this also the function computed by A (if the weight vector has been
fixed).
Assume that A is an architecture with inputs in ℝ^m and scalar outputs, and that
the (unique) output gate has range {0, 1}. A subset A ⊆ ℝ^m is said to be shattered
by the architecture A if for each Boolean function β : A → {0, 1} there is some weight w in ℝ^n so
that F_w(x) = β(x) for all x in A. The Vapnik-Chervonenkis (VC) dimension of A
is the maximal size of a subset A ⊆ ℝ^m that is shattered by A. If the output gate
can take non-binary values, we implicitly assume that the result of the computation
is the sign of the output. That is, when we say that a subset A ⊆ ℝ^m is shattered
by A, we really mean that A is shattered by the architecture H(A) in which the
output of A is fed to a sign gate.
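As an illustration of this definition (not part of the paper), shattering of a small finite set by a thresholded parametric family can be probed numerically. The Python sketch below searches randomly over weight vectors, so it can confirm but never strictly refute shatterability; the affine family at the end is only a toy example.

    import itertools, random

    def shatters(F, A, weight_dim, n_trials=20000, seed=0):
        # A is shattered if every Boolean labelling of A is realised by some
        # weight vector; realisation is searched for by random sampling only.
        rng = random.Random(seed)
        A = list(A)
        for labels in itertools.product([0, 1], repeat=len(A)):
            found = False
            for _ in range(n_trials):
                w = [rng.uniform(-10.0, 10.0) for _ in range(weight_dim)]
                if all((1 if F(x, w) >= 0 else 0) == c for x, c in zip(A, labels)):
                    found = True
                    break
            if not found:
                return False
        return True

    # toy check: thresholded affine functions on the line shatter 2 points, not 3
    F_affine = lambda x, w: w[0] * x + w[1]
    print(shatters(F_affine, [0.0, 1.0], weight_dim=2))        # expected: True
    print(shatters(F_affine, [0.0, 1.0, 2.0], weight_dim=2))   # expected: False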
2 Networks Made up of Linear and Threshold Gates
Proposition 1 For every n ≥ 1, there is a network architecture A with inputs in
ℝ^2 and O(√N) weights that can shatter a set of size N = n^2. This architecture is
made only of linear and threshold gates.
Proof. Our architecture has n parameters w_1, ..., w_n; each of them is an element
of T = {0.w_1...w_n : w_i in {0, 1}}. The shattered set will be S = [n]^2 = {1, ..., n}^2.
For a given choice of w = (w_1, ..., w_n), A will compute the Boolean function
f_w : S → {0, 1} defined as follows: f_w(x, y) is equal to the x-th bit of w_y. Clearly,
for any Boolean function f on S, there exists a (unique) w such that f = f_w.
We first consider the obvious architecture which computes the function
f^1_w(y) = w_1 + \sum_{z=2}^{n} (w_z - w_{z-1}) H(y - z + 1/2),    (1)
sending each point y in [n] to w_y. This architecture has n - 1 threshold gates,
3(n - 1) + 1 weights, and just one linear gate.
Next we define a second multi-output net which maps w in T to its binary representation f^2(w) = (w_1, ..., w_n). Assume by induction that we have a net N^2_i
that maps w to (w_1, ..., w_i, 0.w_{i+1}...w_n). Since w_{i+1} = H(0.w_{i+1}...w_n - 1/2)
and 0.w_{i+2}...w_n = 2 × 0.w_{i+1}...w_n - w_{i+1}, N^2_{i+1} can be obtained by adding one
threshold gate and one linear gate to N^2_i (as well as 4 weights). It follows that N^2_n
has n threshold gates, n linear gates and 4n weights.
Finally, we define a net N^3 which takes as input x in [n] and w = (w_1, ..., w_n) in
{0, 1}^n, and outputs w_x. We would like this network to be as follows:
f^3(x, w) = w_1 + \sum_{z=2}^{n} w_z H(x - z + 1/2) - \sum_{z=2}^{n} w_{z-1} H(x - z + 1/2).
This is not quite possible, because the products between the w_i's (which are inputs
in this context) and the Heavisides are not allowed. However, since we are dealing
with binary variables one can write uv = H(u + v - 1.5). Thus N^3 has one linear
gate, 4(n - 1) threshold gates and 12(n - 1) + n weights. Note that f_w(x, y) =
f^3(x, f^2(f^1_w(y))). This can be realized by means of a net that has n + 2 linear gates,
(n - 1) + n + 4(n - 1) = 6n - 5 threshold gates, and (3n - 2) + 4n + (12n - 11) = 19n - 13
weights. □
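A small numerical sketch of this construction may help; it is illustrative only and emulates the gates directly in Python (in particular, the products of inputs with Heavisides, which the proof replaces by H(u + v - 1.5), are written as ordinary multiplications here).

    import random

    def H(z):
        return 1.0 if z >= 0 else 0.0

    def f1(y, W):
        # selects W_y:  W_1 + sum_{z=2}^{n} (W_z - W_{z-1}) H(y - z + 1/2)
        n = len(W)
        return W[0] + sum((W[z - 1] - W[z - 2]) * H(y - z + 0.5) for z in range(2, n + 1))

    def f2(w, n):
        # peels off the n leading binary digits of a number w in [0, 1)
        bits = []
        for _ in range(n):
            b = H(w - 0.5)
            bits.append(b)
            w = 2 * w - b                      # shift left, drop the extracted bit
        return bits

    def f3(x, bits):
        # returns the x-th bit (products with H(u + v - 1.5) elided; plain * used)
        n = len(bits)
        return bits[0] + sum((bits[z - 1] - bits[z - 2]) * H(x - z + 0.5)
                             for z in range(2, n + 1))

    # check the composition f_w(x, y) = x-th bit of W_y on a small random example
    n = 4
    bit_table = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]   # [y][x]
    W = [sum(b * 2.0 ** -(i + 1) for i, b in enumerate(row)) for row in bit_table]
    for x in range(1, n + 1):
        for y in range(1, n + 1):
            assert f3(x, f2(f1(y, W), n)) == bit_table[y - 1][x - 1]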
The following is the main result of this section:
Theorem 1 For every n ≥ 1, there is a network architecture A with inputs in ℝ
and O(√N) weights that can shatter a set of size N = n^2. This architecture is
made only of linear and threshold gates.
Proof. The shattered set will be S = {0, 1, ..., n^2 - 1}. For every u in S, there
are unique integers x, y in {0, 1, ..., n - 1} such that u = nx + y. The idea of the
construction is to compute x and y, and then feed (x + 1, y + 1) to the network
constructed in Proposition 1. Note that x is the unique integer such that u - nx is in
{0, 1, ..., n - 1}. It can therefore be computed by brute force search as follows:
x = \sum_{k=0}^{n-1} k H[H(u - nk) + H(n - 1 - (u - nk)) - 1.5].
This network has 3n threshold gates, one linear gate and 8n weights. Then of course
y = u - nx. □
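The brute-force extraction of x can be checked numerically; the following sketch (not from the paper) verifies that the threshold expression recovers x = u // n and y = u - nx.

    def H(z):
        return 1 if z >= 0 else 0

    def split(u, n):
        # x via the sum of gated Heavisides above, then y = u - n x
        x = sum(k * H(H(u - n * k) + H(n - 1 - (u - n * k)) - 1.5) for k in range(n))
        return x, u - n * x

    n = 7
    assert all(split(u, n) == (u // n, u % n) for u in range(n * n))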
A Boolean version is as follows.
Theorem 2 For every d ≥ 1, there is a network architecture A with O(√N)
weights that can shatter the N = 2^{2d} points of {0, 1}^{2d}. This architecture is made
only of linear and threshold gates.
Proof. Given u in {0, 1}^{2d}, one can compute x = 1 + \sum_{i=1}^{d} 2^{i-1} u_i and
y = 1 + \sum_{i=1}^{d} 2^{i-1} u_{i+d} with two linear gates. Then (x, y) can be fed to the
network of Proposition 1 (with n = 2^d). □
In other words, there is a network architecture with O(2^d) weights that can compute
all Boolean functions on 2d variables.
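For concreteness, the two linear gates of Theorem 2 amount to the following map; the sketch and the particular bit vector are invented for illustration.

    def to_pair(u, d):
        # first d bits encode x - 1, last d bits encode y - 1
        x = 1 + sum(2 ** i * u[i] for i in range(d))
        y = 1 + sum(2 ** i * u[i + d] for i in range(d))
        return x, y

    print(to_pair([1, 0, 1, 0, 1, 1], d=3))   # -> (6, 7)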
3 Arbitrary Sigmoids
We now extend the preceding VC dimension bounds to networks that use just
one activation function σ (instead of both linear and threshold gates). All that is
required is that the gate function have a sigmoidal shape and satisfy a very weak
smoothness property:
1. σ is differentiable at some point x_0 (i.e., σ(x_0 + h) = σ(x_0) + σ'(x_0) h + o(h))
where σ'(x_0) ≠ 0.
2. lim_{x→-∞} σ(x) = 0 and lim_{x→+∞} σ(x) = 1 (the limits 0 and 1 can be
replaced by any distinct numbers).
A function satisfying these two conditions will be called sigmoidal. Given any such
σ, we will show that networks using only σ gates provide quadratic VC dimension.
Theorem 3 Let σ be an arbitrary sigmoidal function. There exist architectures A_1
and A_2 with O(√N) weights made only of σ gates such that:
• A_1 can shatter a subset of ℝ of cardinality N = n^2;
• A_2 can shatter the N = 2^{2d} points of {0, 1}^{2d}.
This follows directly from Theorems 1 and 2, together with the following simulation
result:
Theorem 4 Let σ be an arbitrary sigmoidal function. Let N be a network of
T threshold and L linear gates, with a threshold gate at the output. Then N can
be simulated on any given finite set of inputs by a network N' of T + L gates that
all use the activation function σ (except the output gate, which is still a threshold).
Moreover, if N has n weights then N' has O(n) weights.
Proof. Let S be a finite set of inputs. We can assume, by changing the thresholds of
threshold gates if necessary, that the net input I_g(x) to any threshold gate g of N
is different from 0 for all inputs x in S.
Given ε > 0, let N_ε be the net obtained by replacing the output functions of all gates
by the new output function x ↦ σ(x/ε) if this output function is the sign function,
and by x ↦ \tilde{σ}_ε(x) = [σ(x_0 + εx) - σ(x_0)] / [ε σ'(x_0)] if it is the identity function. Note
that for any a > 0, lim_{ε→0+} σ(x/ε) = H(x) uniformly for x in (-∞, -a] ∪ [a, +∞)
and lim_{ε→0} \tilde{σ}_ε(x) = x uniformly for x in [-1/a, 1/a].
This implies by induction on the depth of g that for any gate g of N and any input
x in S, the net input I_{g,ε}(x) to g in the transformed net N_ε satisfies lim_{ε→0} I_{g,ε}(x) =
I_g(x) (here, we use the fact that the output function of every g is continuous at
I_g(x)). In particular, by taking g to be the output gate of N, we see that N and
N_ε compute the same function on S if ε is small enough. Such a net N_ε can be
transformed into an equivalent net N' that uses only σ as gate function by a simple
transformation of its weights and thresholds. The number of weights remains the
same, except at most for a constant term that must be added to each net input to
a gate; thus if N has n weights, N' has at most 2n weights. □
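The two limits used in this proof are easy to see numerically; the sketch below (not part of the paper) uses the standard sigmoid and x_0 = 0, both of which are merely convenient choices.

    import math

    def sigma(x):
        return 1.0 / (1.0 + math.exp(-x))

    def approx_heaviside(x, eps):
        return sigma(x / eps)                       # -> H(x) away from 0

    def approx_identity(x, eps, x0=0.0):
        dsigma0 = sigma(x0) * (1.0 - sigma(x0))     # sigma'(x0) for the logistic
        return (sigma(x0 + eps * x) - sigma(x0)) / (eps * dsigma0)   # -> x

    for eps in (1.0, 0.1, 0.01):
        print(eps, approx_heaviside(0.3, eps), approx_identity(0.7, eps))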
4 More General Gate Functions
The objective of this section is to establish results similar to Theorem 3, but for
even more arbitrary gate functions, in particular weakening the assumption that
limits exist at infinity. The main result is, roughly, that any σ which is piecewise
twice (continuously) differentiable gives at least quadratic VC dimension, save for
certain exceptional cases involving functions that are almost everywhere linear.
A function σ : ℝ → ℝ is said to be piecewise C^2 if there is a finite sequence
a_1 < a_2 < ... < a_p such that on each interval I of the form (-∞, a_1), (a_i, a_{i+1}) or
(a_p, +∞), σ restricted to I is C^2.
(Note: our results hold even if it is only assumed that the second derivative exists in
each of the above intervals; we do not use the continuity of these second derivatives.)
Theorem 5 Let σ be a piecewise C^2 function. For every n ≥ 1, there exists an
architecture made of σ-gates, and with O(n) weights, that can shatter a subset of
ℝ^2 of cardinality n^2, except perhaps in the following cases:
1. σ is piecewise-constant, and in this case the VC dimension of any architecture of n weights is O(n log n);
2. σ is affine, and in this case the VC dimension of any architecture of n
weights is at most n;
3. there are constants a ≠ 0 and b such that σ(x) = ax + b except at a finite
nonempty set of points. In this case, the VC dimension of any architecture of n weights is O(n^2), and there are architectures of VC dimension
O(n log n).
Due to the lack of space, the proof cannot be included in this paper. Note that
the upper bound of the first special case is tight for threshold nets, and that of the
second special case is tight for linear functions in ℝ^n.
Acknowledgements
Pascal Koiran was supported by an INRIA fellowship , DIMACS, and the International Computer Science Institute. Eduardo Sontag was supported in part by US
Air Force Grant AFOSR-94-0293 .
References
M. ANTHONY AND N.L. BIGGS (1992) Computational Learning Theory: An Introduction,
Cambridge U. Press.
E.B. BAUM AND D. HAUSSLER (1989) What size net gives valid generalization?, Neural
Computation 1, pp. 151-160.
L. BLUM, M. SHUB AND S. SMALE (1989) On the theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines,
Bulletin of the AMS 21, pp. 1-46.
A. BLUMER, A. EHRENFEUCHT, D. HAUSSLER, AND M. WARMUTH (1989) Learnability
and the Vapnik-Chervonenkis dimension, J. of the ACM 36, pp. 929-965.
T.M. COVER (1988) Capacity problems for linear machines, in: Pattern Recognition, L.
Kanal ed., Thompson Book Co., pp. 283-289.
P. GOLDBERG AND M. JERRUM (1995) Bounding the Vapnik-Chervonenkis dimension of
concept classes parametrized by real numbers, Machine Learning 18, pp. 131-148.
M. KARPINSKI AND A. MACINTYRE (1995) Polynomial bounds for VC dimension of sigmoidal neural networks, in Proc. 27th ACM Symposium on Theory of Computing, pp. 200-208.
W. MAASS (1993) Bounds for the computational power and learning complexity of analog
neural nets, in Proc. of the 25th ACM Symp. Theory of Computing, pp. 335-344.
W. MAASS (1994) Perspectives of current research about the complexity of learning in neural nets, in Theoretical Advances in Neural Computation and Learning, V.P. Roychowdhury, K.Y. Siu, and A. Orlitsky, editors, Kluwer, Boston, pp. 295-336.
B.K. NATARAJAN (1991) Machine Learning: A Theoretical Approach, M. Kaufmann Publishers, San Mateo, CA.
E.D. SONTAG (1989) Sigmoids distinguish better than Heavisides, Neural Computation 1,
pp. 470-472.
E.D. SONTAG (1992) Feedforward nets for interpolation and classification, J. Comp. Syst.
Sci. 45, pp. 20-48.
L.G. VALIANT (1984) A theory of the learnable, Comm. of the ACM 27, pp. 1134-1142.
V.N. VAPNIK (1982) Estimation of Dependencies Based on Empirical Data, Springer,
Berlin.
60 | 1,052 | Learning the structure of similarity
Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
jbt@psyche.mit.edu
Abstract
The additive clustering (ADCLUS) model (Shepard & Arabie, 1979)
treats the similarity of two stimuli as a weighted additive measure
of their common features. Inspired by recent work in unsupervised
learning with multiple cause models, we propose a new, statistically
well-motivated algorithm for discovering the structure of natural
stimulus classes using the ADCLUS model, which promises substantial gains in conceptual simplicity, practical efficiency, and solution
quality over earlier efforts. We also present preliminary results with
artificial data and two classic similarity data sets.
1 INTRODUCTION
The capacity to judge one stimulus, object, or concept as similar to another is thought
to play a pivotal role in many cognitive processes, including generalization, recognition, categorization, and inference. Consequently, modeling subjective similarity
judgments in order to discover the underlying structure of stimulus representations
in the brain/mind holds a central place in contemporary cognitive science. Mathematical models of similarity can be divided roughly into two families: spatial models,
in which stimuli correspond to points in a metric (typically Euclidean) space and
similarity is treated as a decreasing function of distance; and set-theoretic models, in
which stimuli are represented as members of salient subsets (presumably corresponding to natural classes or features in the world) and similarity is treated as a weighted
sum of common and distinctive subsets.
Spatial models, fit to similarity judgment data with familiar multidimensional scaling (MDS) techniques, have yielded concise descriptions of homogeneous, perceptual
domains (e.g. three-dimensional color space), often revealing the salient dimensions
of stimulus variation (Shepard, 1980). Set-theoretic models are more general , in
principle able to accomodate discrete conceptual structures typical of higher-level
cognitive domains, as well as dimensional stimulus structures more common in per-
ception (Tversky, 1977). In practice, however, the utility of set-theoretic models is
limited by the hierarchical clustering techniques that underlie conventional methods
for discovering the discrete features or classes of stimuli. Specifically, hierarchical
clustering requires that any two classes of stimuli correspond to disjoint or properly
inclusive subsets, while psychologically natural classes may correspond in general to
arbitrarily overlapping subsets of stimuli. For example, the subjective similarity of
two countries results from the interaction of multiple geographic and cultural factors, and there is no reason a priori to expect the subsets of communist, African, or
French-speaking nations to be either disjoint or properly inclusive.
In this paper we consider the additive clustering (ADCLUS) model (Shepard & Arabie, 1979), the simplest instantiation of Tversky's (1977) general contrast model that
accommodates the arbitrarily overlapping class structures associated with multiple
causes of similarity. Here, the similarity of two stimuli is modeled as a weighted
additive measure of their common clusters:
\hat{s}_{ij} = \sum_{k=1}^{K} w_k f_{ik} f_{jk} + c,    (1)
where \hat{s}_{ij} is the reconstructed similarity of stimuli i and j, the weight w_k captures
the salience of cluster k, and the binary indicator variable f_{ik} equals 1 if stimulus i
belongs to cluster k and 0 otherwise. The additive constant c is necessary because the
similarity data are assumed to be on an interval scale.¹ As with conventional clustering models, ADCLUS recovers a system of discrete subsets of stimuli, weighted by
salience, and the similarity of two stimuli increases with the number (and weight)
of their common subsets. ADCLUS, however, makes none of the structural assumptions (e.g. that any two clusters are disjoint or properly inclusive) which limit the
applicability of conventional set-theoretic models. Unfortunately this flexibility also
makes the problem of fitting the ADCLUS model to an observed similarity matrix
exceedingly difficult.
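A minimal sketch of the reconstruction in (1), with made-up toy memberships and weights rather than data from the paper:

    import numpy as np

    def adclus_similarity(F, w, c):
        """F: N x K binary membership matrix, w: K cluster weights, c: additive constant."""
        return F @ np.diag(w) @ F.T + c

    F = np.array([[1, 0],      # stimulus 0 belongs to cluster 0 only
                  [1, 1],      # stimulus 1 belongs to both clusters
                  [0, 1]])     # stimulus 2 belongs to cluster 1 only
    w = np.array([0.6, 0.3])
    print(adclus_similarity(F, w, c=0.1))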
Previous attempts to fit the model have followed a heuristic strategy to minimize a
squared-error energy function,
E = \sum_{i≠j} (s_{ij} - \hat{s}_{ij})^2 = \sum_{i≠j} (s_{ij} - \sum_k w_k f_{ik} f_{jk})^2,    (2)
by alternately solving for the best cluster configurations f_{ik} given the current weights
w_k and solving for the best weights given the current clusters (Shepard & Arabie,
1979; Arabie & Carroll, 1980). This strategy is appealing because given the cluster configuration, finding the optimal weights becomes a simple linear least-squares
problem.2 However, finding good cluster configurations is a difficult problem in combinatorial optimization, and this step has always been the weak point in previous
work . The original ADCLUS (Shepard & Arabie, 1979) and later MAPCLUS (Arabie & Carroll, 1980) algorithms employ ad hoc techniques of combinatorial optimization that sometimes yield unexpected or uninterpretable final results. Certainly, no
rigorous theory exists that would explain why these approaches fail to discover the
underlying structure of a stimulus set when they do.
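The "weights given clusters" half of this alternation is indeed a linear least-squares problem; the sketch below shows an unconstrained version for brevity (the paper's footnote notes that the weights are typically constrained to be nonnegative, which would call for a non-negative solver instead).

    import numpy as np

    def fit_weights(S, F):
        # With the cluster matrix F fixed, each ordered pair (i, j), i != j,
        # gives one linear equation in the weights and the additive constant.
        N, K = F.shape
        rows, targets = [], []
        for i in range(N):
            for j in range(N):
                if i != j:
                    rows.append(F[i] * F[j])          # predictors f_ik * f_jk
                    targets.append(S[i, j])
        A = np.column_stack([np.array(rows), np.ones(len(rows))])   # last column: c
        coef, *_ = np.linalg.lstsq(A, np.array(targets), rcond=None)
        return coef[:-1], coef[-1]                    # weights, additive constant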
Essentially, the ADCLUS model is so challenging to fit because it generates similarities from the interaction of many independent underlying causes. Viewed this way,
modeling the structure of similarity looks very similar to the multiple-cause learning
¹In the remainder of this paper, we absorb c into the sum over k, taking the sum over
k = 0, ..., K, defining w_0 ≡ c, and fixing f_{i0} = 1 (for all i).
²Strictly speaking, because the weights are typically constrained to be nonnegative, more
elaborate techniques than standard linear least-squares procedures may be required.
problems that are currently a major focus of study in the neural computation literature (Ghahramani, 1995; Hinton, Dayan, et al., 1995; Saund, 1995; Neal, 1992). Here
we propose a novel approach to additive clustering, inspired by the progress and
promise of work on multiple-cause learning within the Expectation-Maximization
(EM) framework (Ghahramani, 1995; Neal, 1992). Our EM approach still makes
use of the basic insight behind earlier approaches, that finding {w_k} given {f_{ik}} is
easy, but obtains better performance from treating the unknown cluster memberships
probabilistically as hidden variables (rather than parameters of the model), and perhaps more importantly, provides a rigorous and well-understood theory. Indeed, it
is natural to consider {f_{ik}} as "unobserved" features of the stimuli, complementing the observed data {s_{ij}} in the similarity matrix. Moreover, in some experimental
paradigms, one or more of these features may be considered observed data, if subjects
report using (or are requested to use) certain criteria in their similarity judgments.
2 ALGORITHM
2.1 Maximum likelihood formulation
We begin by formulating the additive clustering problem in terms of maximum likelihood estimation with unobserved data. Treating the cluster weights w = {w_k}
as model parameters and the unobserved cluster memberships f = {f_{ik}} as hidden
causes for the observed similarities s = {s_{ij}}, it is natural to consider a hierarchical
generative model for the "complete data" (including observed and unobserved components) of the form p(s, f|w) = p(s|f, w) p(f|w). In the spirit of earlier approaches
to ADCLUS that seek to minimize a squared-error energy function, we take p(s|f, w)
to be gaussian with common variance σ^2:
p(s|f, w) ∝ exp{-(1/2σ^2) \sum_{i≠j} (s_{ij} - \hat{s}_{ij})^2} = exp{-(1/2σ^2) \sum_{i≠j} (s_{ij} - \sum_k w_k f_{ik} f_{jk})^2}.    (3)
k
Note that logp(sl/, w) is equivalent to -E/(2u 2 ) (ignoring an additive constant),
where E is the energy defined above. In general, priors p(flw) over the cluster
configurations may be useful to favor larger or smaller clusters, induce a dependence
between cluster size and cluster weight, or bias particular kinds of class structures,
but only uniform priors are considered here. In this case -E /(2u 2 ) also gives the
"complete data" loglikelihood logp(s, Ilw).
2.2
The EM algorithm for additive clustering
Given this probabilistic model, we can now appeal to the EM algorithm as the basis
for a new additive clustering technique. EM calls for iterating the following two-step procedure, in order to obtain successive estimates of the parameters w that are
guaranteed never to decrease in likelihood (Dempster et al., 1977). In the E-step, we
calculate
Q(w|w^{(n)}) = \sum_{f'} p(f'|s, w^{(n)}) log p(s, f'|w) = (1/2σ^2) ⟨-E⟩_{s,w^{(n)}}.    (4)
Q(w|w^{(n)}) is equivalent to the expected value of E as a function of w, averaged over
all possible configurations f' of the NK binary cluster memberships, given the observed data s and the current parameter estimates w^{(n)}. In the M-step, we maximize
Q(w|w^{(n)}) with respect to w to obtain w^{(n+1)}.
Each cluster configuration f' contributes to the mean energy in proportion to its
probability under the gaussian generative model in (3). Thus the number of configurations making significant contributions depends on the model variance σ^2. For large
σ^2, the probability is spread over many configurations. In the limiting case σ^2 → 0,
only the most likely configuration contributes, making EM effectively equivalent to
the original approaches presented in Section 1 that use only the single best cluster
configuration to solve for the best cluster weights at each iteration.
In line with the basic insight embodied less rigorously in these earlier algorithms, the
M-step still reduces to a simple (constrained) linear least-squares problem, because
the mean energy ⟨E⟩ = \sum_{i≠j} (s_{ij}^2 - 2 s_{ij} \sum_k w_k ⟨f_{ik} f_{jk}⟩ + \sum_{kl} w_k w_l ⟨f_{ik} f_{jk} f_{il} f_{jl}⟩),
like the energy E, is quadratic in the weights w_k. The E-step, which amounts to
computing the expectations m_{ijk} = ⟨f_{ik} f_{jk}⟩ and m_{ijkl} = ⟨f_{ik} f_{jk} f_{il} f_{jl}⟩, is much
more involved, because the required sums over all possible cluster configurations f'
are intractable for any realistic case. We approximate these calculations using Gibbs
sampling, a Monte Carlo method that has been successfully applied to learning similar
generative models with hidden variables (Ghahramani, 1995; Neal 1992).³
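A naive sketch of one such Gibbs sweep, assuming the Gaussian model of (3) with a uniform prior; it recomputes the full energy for every candidate flip and is meant only to make the sampling step concrete, not to reproduce the authors' implementation.

    import numpy as np

    def energy(S, F, w, c):
        R = F @ np.diag(w) @ F.T + c
        mask = ~np.eye(len(S), dtype=bool)
        return np.sum((S - R)[mask] ** 2)

    def gibbs_sweep(S, F, w, c, sigma2, rng):
        # resample each membership f_ik from p(f_ik | rest) ~ exp(-E / (2 sigma^2))
        N, K = F.shape
        for i in range(N):
            for k in range(K):
                logits = []
                for value in (0, 1):
                    F[i, k] = value
                    logits.append(-energy(S, F, w, c) / (2.0 * sigma2))
                p1 = 1.0 / (1.0 + np.exp(logits[0] - logits[1]))
                F[i, k] = int(rng.random() < p1)
        return F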
Finally, the algorithm should produce not only estimates of the cluster weights, but
also a final cluster configuration that may be interpreted as the psychologically natural
features or classes of the relevant domain. Consider the expected cluster memberships
p_{ik} = ⟨f_{ik}⟩_{s,w^{(n)}}, which give the probability that stimulus i belongs to cluster k, given
the observed similarity matrix and the current estimates of the weights. Only when
all p_{ik} are close to 0 or 1, i.e. when σ^2 is small enough that all the probability becomes
concentrated on the most likely cluster configuration and its neighbors, can we fairly
assert which stimuli belong to which classes.
2.3 Simulated annealing
Two major computational bottlenecks hamper the efficiency of the algorithm as described so far. First, Gibbs sampling may take a very long time to converge to the
equilibrium distribution, particularly when σ^2 is small relative to the typical energy
difference between neighboring cluster configurations. Second, the likelihood surfaces
for realistic data sets are typically riddled with local maxima. We solve both problems
by annealing on the variance. That is, we run Gibbs sampling using an effective variance σ^2_eff initially much greater than the assumed model variance σ^2, and decrease
σ^2_eff towards σ^2 according to the following two-level scheme. We anneal within the
nth iteration of EM to speed the convergence of the Gibbs sampling E-step (Neal,
1993), by lowering σ^2_eff from some high starting value down to a target σ^2_targ(n) for
the nth EM iteration. We also anneal between iterations of EM to avoid local maxima
(Rose et al., 1990), by initializing σ^2_targ(0) at a high value and taking σ^2_targ(n) → σ^2
as n increases.
3 RESULTS
In all of the examples below, one run of the algorithm consisted of 100-200 iterations
of EM, annealed both within and between iterations. Within each E-step, 10-100
cycles of Gibbs sampling were carried out at the target temperature σ^2_targ while the
statistics for m_{ik} and m_{ijk} were recorded. These recorded cycles were preceded
by 20-200 unrecorded cycles, during which the system was annealed from a higher
temperature (e.g. 8σ^2_targ) down to σ^2_targ, to ensure that statistics were collected as
close to equilibrium as possible. The precise numbers of recorded and unrecorded
iterations were chosen as a compromise between the need for longer samples as the
³We generally also approximate m_{ijkl} ≈ m_{ijk} m_{ijl}, which usually yields satisfactory results with much greater efficiency.
Table 1: Classes and weights recovered for the integers 0-9.
Rank  Weight  Stimuli in class   Interpretation
1     .444    2 4 8              powers of two
2     .345    0 1 2              small numbers
3     .331    3 6 9              multiples of three
4     .291    6 7 8 9            large numbers
5     .255    2 3 4 5 6          middle numbers
6     .216    1 3 5 7 9          odd numbers
7     .214    1 2 3 4            smallish numbers
8     .172    4 5 6 7 8          largish numbers
Variance accounted for = 90.9% with 8 clusters (additive constant = .148).
number of hidden variables is increased and the need to keep computation times
practical.
3.1 Artificial data
We first report results with artificial data, for which the true cluster memberships and
weights are known, to verify that the algorithm does in fact find the desired structure.
We generated 10 data sets by randomly assigning each of 12 stimuli independently
and with probability 1/2 to each of 8 classes, and choosing random weights for the
classes uniformly from [0.1,0.6]. These numbers are grossly typical of the real data
sets we examine later in this section. We then calculated the observed similarities
from (1), added a small amount of random noise (with standard deviation equal to
5% of the mean noise-free similarity), and symmeterized the similarity matrix.
The crucial free parameter is K, the assumed number of stimulus classes. When the
algorithm was configured with the correct number of clusters (K = 8), the original
classes and weights were recovered during the first run of the algorithm on all 10 data
sets, after an average of 58 EM iterations (low 30, high 92). When the algorithm
was configured with K = 7 clusters, one less than the correct number, the seven
classes with highest weight were recovered on 9/10 first runs. On these runs, the
recovered weights and true weights had a mean correlation of 0.948 (p < .05 on each
run). When configured with K = 5, the first run recovered either four of the top
five classes (6/10 trials) or three of the top five (4/10 trials). When configured with
too many clusters (K = 12), the algorithm typically recovered only 8 clusters with
significantly non-zero weights, corresponding to the 8 correct classes. Comparable
results are not available for ADCLUS or MAPCLUS, but at least we can be satisfied
that our algorithm achieves a basic level of competence and robustness.
3.2 Judged similarities of the integers 0-9
Shepard et al. (1975) had subjects judge the similarities of the integers 0 through
9, in terms of the "abstract concepts" of the numbers. We analyzed the similarity
matrix (Shepard, personal communication) obtained by pooling data across subjects
and across three conditions of stimulus presentation (verbal, written-numeral, and
written-dots). We chose this data set because it illustrates the power of additive
clustering to capture a complex, overlapping system of classes, and also because
it serves to compare the performance of our algorithm with the original ADCL US
algorithm. Observe first that two kinds of classes emerge in the solution. Classes
1, 3, and 6 represent familiar arithmetic concepts (e.g. "multiples of three", "odd
numbers"), while the remaining classes correspond to subsets of consecutive integers
Table 2: Classes and weights recovered for the 16 consonant phonemes.
Rank  Weight  Stimuli in class   Interpretation
1     .800    f θ                front unvoiced fricatives
2     .572    d g                back voiced stops
3     .463    p k                unvoiced stops (omitting t)
4     .424    b v ð              front voiced
5     .357    p t k              unvoiced stops
6     .292    m n                nasals
7     .169    d g v ð z ʒ        voiced (omitting b)
8     .132    p t k f θ s        unvoiced (omitting ʃ)
Variance accounted for = 90.2% with 8 clusters (additive constant = .047).
and thus together represent the dimension of numerical magnitude. In general, both
arithmetic properties and numerical magnitude contribute to judged similarity, as
every number has features of both types (e.g. 9 is a "large" "odd" "multiple of three"),
except for 0, whose only property is "small." Clearly an overlapping clustering model
is necessary here to accomodate the multiple causes of similarity.
The best solution reported for these data using the original ADCLUS algorithm
consisted of 10 classes, accounting for 83.1% of the variance of the data (Shepard &
Arabie, 1979).4 Several of the clusters in this solution differed by only one or two
members (e.g. three of the clusters were {0,1}, {0,1,2}, and {0,1,2,3,4}), which led
us to suspect that a better fit might be obtained with fewer than 10 classes. Table 2
shows the best solution found in five runs of our algorithm, accounting for 90.9% of
the variance with eight classes. Compared with our solution, the original ADCLUS
solution leaves almost twice as much residual variance unaccounted for, and with 10
classes, is also less parsimonious.
3.3 Confusions between 16 consonant phonemes
Finally, we examine Miller & Nicely's (1955) classic data on the confusability of 16
consonant phonemes, collected under varying signal/noise conditions with the original intent of identifying the features of English phonology (compiled and reprinted
in Carroll & Wish, 1974). Note that the recovered classes have reasonably natural
interpretations in terms of the basic features of phonological theory, and a very different overall structure from the classes recovered in the previous example. Quite
significantly, the classes respect a hierarchical structure almost perfectly, with class
3 included in class 5, classes 1 and 5 included in class 8, and so on. Only the absence
of /b / in class 7 violates the strict hierarchy.
These data also provide the only convenient oppportunity to compare our algorithm
with the MAPCLUS approach to additive clustering (Arabie & Carroll, 1980). The
published MAPCLUS solution accounts for 88.3% of the variance in this data, using
eight clusters. Arabie & Carroll (1980) report being "substantively perturbed" (p.
232) that their algorithm does not recover a distinct cluster for the nasals /m n/,
which have been considered a very salient subset in both traditional phonology (Miller
& Nicely, 1955) and other clustering models (Shepard, 1980). Table 3 presents our
eight-cluster solution, accounting for 90.2% of the variance. While this represents
only a marginal improvement, our solution does contain a cluster for the nasals, as
expected on theoretical grounds.
⁴Variance accounted for = 1 - E / \sum_{i≠j}(s_{ij} - \bar{s})^2, where \bar{s} is the mean of the set {s_{ij}}.
3.4 Conclusion
These examples show that ADCLUS can discover meaningful representations of stimuli with arbitrarily overlapping class structures (arithmetic properties), as well as dimensional structure (numerical magnitude) or hierarchical structure (phoneme families) when appropriate. We have argued that modeling similarity should be a natural
application of learning generative models with multiple hidden causes, and in that
spirit, presented a new probabilistic formulation of the ADCLUS model and an algorithm based on EM that promises better results than previous approaches. We
are currently pursuing several extensions: enriching the generative model, e.g. by
incorporating significant prior structure, and improving the fitting process, e.g. by
developing efficient and accurate mean field approximations . More generally, we hope
this work illustrates how sophisticated techniques of computational learning can be
brought to bear on foundational problems of structure discovery in cognitive science.
Acknowledgements
I thank P. Dayan, W. Richards, S. Gilbert, Y. Weiss, A. Hershowitz, and M. Bernstein
for many helpful discussions, and Roger Shepard for generously supplying inspiration and
unpublished data. The author is a Howard Hughes Medical Institute Predoctoral Fellow.
References
Arabie, P. & Carroll, J. D. (1980). MAPCLUS: A mathematical programming approach to
fitting the ADCLUS model. Psychometrika 45, 211-235.
Carroll, J. D. & Wish, M. (1974) Multidimensional perceptual models and measurement
methods. In Handbook of Perception, Vol. 2. New York: Academic Press, 391-447.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood estimation from
incomplete data via the EM Algorithm (with discussion). J. Roy. Stat. Soc. B39, 1-38.
Ghahramani, Z. (1995). Factorial learning and the EM algorithm. In G. Tesauro, D. S.
Touretzky, & T . K. Leen (eds.), Advances in Neural Information Processing Systems 7.
Cambridge, MA: MIT Press, 617-624.
Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995) The "wake-sleep" algorithm for
unsupervised neural networks. Science 268, 1158-1161.
Miller, G. A. & Nicely, P. E. (1955). An analysis of perceptual confusions among some
English consonants. J. Ac. Soc. Am. 27, 338-352.
Neal, R. M. (1992). Connectionist learning of belief networks. Artif. Intell. 56, 71-113.
Neal, R. M. (1993). Probabilistic inference using Markov chain Monte Carlo methods.
Technical Report CRG-TR-93-1, Dept. of Computer Science, U. of Toronto.
Rose, K., Gurewitz, F., & Fox, G. (1990). Statistical mechanics and phase transitions in
clustering. Physical Review Letters 65, 945-948.
Saund, E. (1995). A multiple cause mixture model for unsupervised learning. Neural Computation 7, 51-71.
Shepard, R. N. & Arabie, P. (1979). Additive clustering: Representation of similarities as
combinations of discrete overlapping properties. Psychological Review 86, 87-123.
Shepard, R. N., Kilpatric, D. W., & Cunningham, J. P., (1975). The internal representation
of numbers. Cognitive Psychology 7, 82-138.
Shepard, R. N. (1980) . Multidimensional scaling, tree-fitting, and clustering. Science 210,
390-398.
Tversky, A. (1977). Features of similarity. Psychological Review 84, 327-352.
61 | 1,053 | Reorganisation of Somatosensory Cortex after
Tactile Training
Rasmus S. Petersen
John G. Taylor
Centre for Neural Networks, King's College London
Strand, London WC2R 2LS, UK
Abstract
Topographic maps in primary areas of mammalian cerebral cortex reorganise as a result of behavioural training. The nature of this reorganisation seems consistent with the behaviour of competitive neural networks, as has been demonstrated in the past by computer simulation.
We model tactile training on the hand representation in primate somatosensory cortex, using the Neural Field Theory of Amari and his colleagues. Expressions for changes in both receptive field size and magnification factor are derived, which are consistent with owl monkey experiments and make a prediction which goes beyond them.
1. INTRODUCTION
The primary cortical areas of mammals are now known to be plastic throughout life; reviewed recently by Kaas(1995). The problem of how and why the underlying learning
processes work is an exciting one, for which neural network modelling appears well
suited. In this contribution, we model the long-term effects of tactile training (Jenkins et
al., 1990) on the functional organisation of monkey primary somatosensory cortex, by
perturbing a topographic net (Takeuchi and Amari, 1979).
1.1 ADAPTATION IN ADULT SOMATOSENSORY CORTEX
Light touch activates skin receptors which in primates are mapped, largely topographically, in area 3b. In a series of papers, Merzenich and colleagues describe how area 3b
becomes reorganised following peripheral nerve damage (Merzenich et al., 1983a; 1983b)
or digit amputation (Merzenich et al., 1984). The underlying learning processes may also
explain the phenomenon of phantom limb "telescoping" (Haber, 1955). Recent advances
in brain scanning are beginning to make them observable even in the human brain
(Mogilner et al., 1993).
1.2 ADAPTATION ASSOCIATED WITH TACTILE TRAINING
Jenkins et al trained owl monkeys to maintain contact with a rotating disk. The apparatus
was arranged so that success eventually involved touching the disk with only the digit
tips. Hence these regions received selective stimulation. Some time after training had
been completed electro-physiological recordings were made from area 3b. These revealed an increase in Magnification Factor (MF) for the stimulated skin and a decrease in
the size of Receptive Fields (RFs) for that region. The net territory gained for light touch
of the digit tips came from area 3a and/or the face region of area 3b, but details of any
changes in these representations were not reported.
2. THEORETICAL FRAMEWORK
2.1 PREVIOUS WORK
Takeuchi and Amari (1979), Ritter and Schulten (1986), Pearson et al. (1987) and Grajski
and Merzenich (1990) have all modelled amputation/denervation by computer simulation
of competitive neural networks with various Hebbian weight dynamics. Grajski and
Merzenich(1990) also modelled the data of Jenkins et al. We build on this research
within the Neural Field Theory framework (Amari, 1977; Takeuchi and Amari, 1979;
Amari, 1980) of the Neural Activity Model of Willshaw and von der Malsburg(1976).
2.2 NEURAL ACTIVITY MODEL
Consider a "cortical" network of simple, laterally connected neurons. Neurons sum inputs linearly and output a sigmoidal function of this sum. The lateral connections are
excitatory at short distances and inhibitory at longer ones. Such a network is competitive: the steady state consists of blobs of activity centred around those neurons locally receiving the greatest afferent input (Amari, 1977). The range of the competition is limited
by the range of the lateral inhibition.
Suppose now that the afferent synapses adapt in a Hebbian manner to stimuli that are localised in the sensory array; the lateral ones are fixed. Willshaw and von der Malsburg (1976) showed by computer simulation that this network is able to form a topographic map of the sensory array. Takeuchi and Amari (1979) amended the Willshaw-Malsburg model slightly: neurons possess an adaptive firing threshold in order to prevent
synaptic weight explosion, rather than the more usual mechanism of weight normalisation. They proved that a topographic mapping is stable under certain conditions.
2.3 TAKEUCHI-AMARI THEORY
Consider a one-dimensional model. The membrane dynamics are:
∂u(x,y,t)/∂t = -u(x,y,t) + \int s(x,y',t) a(y-y') dy' - s_0(x,t) a_0 + \int w(x-x') f[u(x',y,t)] dx' - h    (1)
Here u(x,y,t) is the membrane potential at time t for point x when a stimulus centred at y is
being presented; h is a positive resting potential; w(z) is the lateral inhibitory weight between two points in the neural field separated by a distance z - positive for small |z| and
negative for larger |z|; s(x,y,t) is the excitatory synaptic weight from y to x at time t and
s_0(x,t) is an inhibitory weight from a tonically active inhibitory input a_0 to x at time t - it is
the adaptive firing threshold. f[u] is a binary threshold function that maps positive membrane potentials to 1 and non-positive ones to 0.
Idealised, point-like stimuli are assumed, which "spread out" somewhat on the sensory
surface or subcortically. The spreading process is assumed to be independent of y and is
described in the same coordinates. It is represented by the function a(y-y'), which describes the effect of a point input at y spreading to the point y'. This is a decreasing, positive, symmetric function of |y-y'|. With this type of input, the steady-state activity of the
network is a single blob, localised around the neuron with maximum afferent input.
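As a concrete illustration of these dynamics, the sketch below (not from the paper) discretises equation (1) on a one-dimensional grid and relaxes it to its steady state for a single point stimulus. The grid size, kernel widths, resting potential and tonic input are arbitrary assumptions chosen only so that the steady state is a single activity blob.

```python
import numpy as np

# Illustrative discretisation of the membrane dynamics (1); all parameter values
# below are assumptions, not taken from the paper.
N = 100
idx = np.arange(N)
gauss = lambda d, s: np.exp(-0.5 * (d / s) ** 2)

dist = np.abs(idx[:, None] - idx[None, :])
w = gauss(dist, 3.0) - 0.5 * gauss(dist, 10.0)   # lateral weights w(x-x'): short-range excitation, long-range inhibition
s = gauss(dist, 2.0)                             # topographic afferent weights s(x,y)
a = gauss(np.abs(idx - N // 2), 1.5)             # point stimulus at y = N/2, spread by a(y-y')
h, a0, s0 = 0.2, 0.1, 1.0                        # resting potential, tonic input, inhibitory weight
f = lambda u: (u > 0).astype(float)              # binary threshold output f[u]

u = np.zeros(N)
for _ in range(500):                             # relax equation (1) to its steady state
    u += 0.1 * (-u + s @ a - s0 * a0 + w @ f(u) - h)

print("active neurons (single blob):", np.nonzero(f(u))[0])
```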
The afferent synaptic weights adapt in a leaky Hebbian manner but with a time constant
much larger than that of the membrane dynamics (1). Effectively this means that learning
occurs on the steady state of the membrane dynamics. The following averaged weight
dynamics can be justified (Takeuchi and Amari, 1979; Geman 1979):
$$\frac{\partial s(x,y,t)}{\partial t} = -s(x,y,t) + b\int p(y')\,a(y-y')\,f[\bar u(x,y')]\,dy' \tag{2}$$
$$\frac{\partial s_0(x,t)}{\partial t} = -s_0(x,t) + b'a_0\int p(y')\,f[\bar u(x,y')]\,dy'$$
where $\bar u(x,y')$ is the steady-state of the membrane dynamics at x given a stimulus at y' and p(y') is the probability of a stimulus at y'; b, b' are constants.
Empirically, the "classical" Receptive Field (RF) of a neuron is defined as the region of
the input field within which localised stimulation causes change in its activity. This concept can be modelled in neural field theory as: the RF of a neuron at x is the portion of the
input field within which a stimulus evokes a positive membrane potential (inhibitory RFs
are not considered). If the neural field is a continuous map of the sensory surface then the
RF of a neuron is fully described by its two borders $r_1(x)$, $r_2(x)$, defined formally by
$$\bar u(x, r_i(x)) = 0, \qquad i = 1,2 \tag{3}$$
which are illustrated in figure 1.
Let RF size and RF position be denoted respectively by the functions $r(x)$ and $m(x)$, which represent experimentally measurable quantities. In terms of the border functions they can be expressed:
$$r(x) = r_2(x) - r_1(x), \qquad m(x) = \tfrac{1}{2}\left(r_1(x) + r_2(x)\right) \tag{4}$$
Figure 1: RF boundaries as a function of position in the neural field, for a topographically ordered network. Only the region in-between $r_1(x)$ and $r_2(x)$ has positive steady-state membrane potential $\bar u(x,y)$. $r_1(x)$ and $r_2(x)$ are defined by the condition $\bar u(x, r_i(x)) = 0$ for $i = 1,2$.
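The RF quantities of (3)-(4) can be read off such a simulation directly, as in the hedged sketch below: for every stimulus position y we relax (1) to its steady state and record which neurons respond; the extreme responding stimulus positions give the borders. The parameters are the same illustrative assumptions as in the previous sketch, not values from the paper.

```python
import numpy as np

# Measure RF borders, RF size r(x) and RF position m(x) from simulated steady states.
N = 60
idx = np.arange(N)
gauss = lambda d, s: np.exp(-0.5 * (d / s) ** 2)
dist = np.abs(idx[:, None] - idx[None, :])
w = gauss(dist, 3.0) - 0.5 * gauss(dist, 10.0)
s = gauss(dist, 2.0)
h, a0, s0 = 0.2, 0.1, 1.0
f = lambda u: (u > 0).astype(float)

def steady_state(y):
    a = gauss(np.abs(idx - y), 1.5)              # point stimulus at y
    u = np.zeros(N)
    for _ in range(300):
        u += 0.1 * (-u + s @ a - s0 * a0 + w @ f(u) - h)
    return u

# active[y, x] = 1 if a stimulus at y drives neuron x above threshold
active = np.array([f(steady_state(y)) for y in idx])

r1 = np.array([idx[active[:, x] > 0].min() if active[:, x].any() else np.nan for x in idx])
r2 = np.array([idx[active[:, x] > 0].max() if active[:, x].any() else np.nan for x in idx])
rf_size = r2 - r1                                # r(x) of equation (4)
rf_pos = 0.5 * (r1 + r2)                         # m(x) of equation (4)
print(rf_size[N // 2], rf_pos[N // 2])
```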
Using (1), (2) and the definition (3), Takeuchi and Amari(1979) derived dynamical equations for the change in RF borders due to learning. In the case of uniform stimulus probability, they found solutions for the steady-state RF border functions. With periodic
boundary conditions, the basic solution is a linear map with constant RF size:
$$r(x) = r_0 = \text{const}, \qquad m(x) = px + \tfrac{1}{2}r_0, \qquad r_1^{uni}(x) = px, \qquad r_2^{uni}(x) = px + r_0 \tag{5}$$
This means that both RF size and activity blob size are uniform across the network and that RF position m(x) is a linear function of network location. (The value of p is determined by boundary conditions; $r_0$ is then determined from the joint equilibrium of (1), (2).) The inverse of the RF position function, denoted by $m^{-1}(y)$, is the centre of the cortical active region caused by a stimulus centred at y. The change in $m^{-1}(y)$ over a unit interval in the input field is, by empirical definition, the cortical magnification factor (MF). Here we model MF as the rate of change of $m^{-1}(y)$. The MF for the system described by (5) is:
$$\frac{d}{dy}\,m^{-1}(y) = p^{-1} \tag{6}$$
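As a quick numerical check of (6) (our own arithmetic, with arbitrary values of p and r0), the inverse of the linear map m(x) = px + r0/2 has the constant slope 1/p:

```python
import numpy as np

p, r0 = 0.5, 4.0                       # arbitrary illustrative values
x = np.linspace(0.0, 100.0, 1001)
m = p * x + 0.5 * r0                   # RF position as a function of cortical location
mf = np.gradient(x, m)                 # numerical d m^{-1}(y) / dy
print(mf[:3], 1.0 / p)                 # both are ~2.0, i.e. MF = 1/p
```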
3. ANALYSIS OF TACTILE TRAINING
3.1 TRAINING MODEL AND ASSUMPTIONS
Jenkins et al.'s training sessions caused an increase in the relative frequency of stimulation
to the finger tips, and hence a decrease in relative frequency of stimulation elsewhere.
Over a long time, we can express this fact as a localised change in stimulus probability
(figure 2). (This is not sufficient to cause cortical reorganisation - Recanzone et al. (1992)
showed that attention to the stimulation is vital. We consider only attended stimulation in
this model). To account for such data it is clearly necessary to analyse non-uniform
stimulus probabilities, which demands extending the results of Takeuchi and Amari. Unfortunately, it seems to be hard to obtain general results. However, a perturbation analysis around the uniform probability solution (5) is possible.
To proceed in this way, we must be able to assume that the change in the stimulus probability density function away from uniformity is small. This reasoning is expressed by the
following equation:
$$p(y) = p_0 + \varepsilon\,\tilde p(y) \tag{7}$$
where $p(y)$ is the new stimulus probability, expressed in terms of the uniform one $p_0$ and a perturbation $\tilde p(y)$ due to training; $\varepsilon$ is a small constant. The effect of the perturbation is to ease the weight dynamics (2) away from the solution (5) to a new steady-state. Our goal is to discover the effect of this on the RF border functions, and hence on RF size and MF.
Figure 2: The type of change in stimulus probability density that we assume to model the effects of behavioural training.
3.2 PERTURBATION ANALYSIS
3.2.1 General Case
For a small enough perturbation, the effect on the RF borders and on the activity blob size
ought also to be small. We consider effects to first order in $\varepsilon$, seeking new solutions of the form:
$$r_i^{per}(x) = r_i^{uni}(x) + \varepsilon\,\rho_i(x), \quad i = 1,2, \qquad \rho(x) = \rho_2(x) - \rho_1(x), \qquad \mu(x) = \tfrac{1}{2}\left(\rho_1(x)+\rho_2(x)\right) \tag{8}$$
where the superscript per denotes the new, perturbed equilibrium and uni denotes the unperturbed, uniform probability equilibrium. Using (1) and (2) in (3) for the post-training RF borders, expanding to first order in $\varepsilon$, a pair of difference equations may be obtained
for the changes in RF borders. It is convenient to define the following terms:
$$A_1(x) = \int_0^{r_0}\tilde p(y+px)\,k(y)\,dy \;-\; b'a_0^2\int_{r_1^{uni}(x)}^{r_2^{uni}(x)}\tilde p(y)\,dy$$
$$A_2(x) = \int_0^{r_0}\tilde p(y+px+r_0)\,k(y)\,dy \;-\; b'a_0^2\int_{r_1^{uni}(x)}^{r_2^{uni}(x)}\tilde p(y)\,dy$$
$$k(y) = b\int a(y-y')\,a(y')\,dy' \tag{9}$$
$$B = b'a_0^2 p_0 - k(r_0)\,p_0 > 0, \qquad C = w(p^{-1}r_0)\,p^{-1} < 0,$$
where the signs of B and C arise due to stability conditions (Amari, 1977; Takeuchi and Amari, 1979). In terms of RF size and RF position (4), the general result is:
$$B\,\Delta^2\rho(x) = \Delta(\Delta+1)A_1(x) - \Delta A_2(x)$$
$$BC\,\Delta^2\mu(x) = \left(B - C - \tfrac{1}{2}C\Delta\right)(\Delta+1)A_1(x) + \left(C - B + \tfrac{1}{2}(C-2B)\Delta\right)A_2(x) \tag{10}$$
where $\Delta$ is the difference operator:
$$\Delta f(x) = f(x + p^{-1}r_0) - f(x) \tag{11}$$
3.2.2 Particular Case
The second order difference equations (10) are rather opaque. This is partly due to coupling in y caused by the auto-correlation function k(y): (10) simplifies considerably if very narrow stimuli are assumed - $a(y) = \delta(y)$ (see also Amari, 1980). For periodic boundary conditions:
(12)
where:
$$m^{-1\,per}(y) = m^{-1\,uni}(y) + \varepsilon\,\tilde m^{-1}(y) = p^{-1}\!\left(y - \tfrac{1}{2}r_0\right) + \varepsilon\,\tilde m^{-1}(y) \tag{13}$$
and we have used the crude approximation:
$$\frac{d}{dx}m(x) \;\approx\; \frac{1}{p^{-1}r_0}\,\Delta m\!\left(x - \tfrac{1}{2}p^{-1}r_0\right) \tag{14}$$
which demands smoothness on the scale of $p^{-1}r_0$. However, for perturbations like that
sketched in figure 2, this is sufficient to tell us about the constant regions of MF. (We
would not expect to be able to model the data in the transition region in any case, as its
form is too dependent upon fine detail of the model).
Our results (12) show that the change in RF size of a neuron is simply minus the total
change in stimulus probability over its RF. Hence RF size decreases where p(y) increases
and vice versa. Conversely, the change in MF at a given stimulus location is roughly the
local average change in stimulus probability there. Note that changes in RF size correlate
inversely with changes in MF. Figure 3 is a sketch of these results for the perturbation of
figure 2.
Figure 3: Results of perturbation analysis for how behavioural training (figure 2) changes RF size and MF respectively, in the case where stimulus width can be neglected. For MF - due to the approximation (14) - predictions do not apply near the transitions.
4. DISCUSSION
Equations (12) are the results of our model for RF size and MF after area 3b has fully
adapted to the behavioural task, in the case where stimulus width can be neglected. They
appear to be fully consistent with the data of Jenkins et al described above: RF size decreases in the region of cortex selective for the stimulated body part and the MF for this
body part increases. Our analysis also makes a specific prediction that goes beyond
Jenkins et aI's data, directly due to the inverse relationship between changes in RF size
and those in MF. Within the regions that surrender territory to the entrained finger tips
(sometimes the face region), for which MF decreases, RF sizes should increase.
Surprisingly perhaps, these changes in RF size are not due to adaptation of the afferent
weights s(x,y). The changes are rather due to the adaptive threshold term $s_0(x)$. This
point will be discussed more fully elsewhere.
A limitation of our analysis is the assumption that the change in stimulus probability is in
some sense small. Such an approximation may be reasonable for behavioural training but
seems less so as regards important experimental protocols like amputation or denervation.
Evidently a more general analysis would be highly desirable.
5. CONCLUSION
We have analysed a system with three interacting features: lateral inhibitory interactions;
Hebbian adaptivity of afferent synapses and an adaptive firing threshold. Our results indicate that such a system can account for the data of Jenkins et al., concerning the response of adult somatosensory cortex to the changing environmental demands imposed by
tactile training. The analysis also brings out a prediction of the model, that may be testable.
Acknowledgements
RSP is very grateful for a travel stipend from the NIPS Foundation and for a Nick
Hughes bursary from the School of Physical Sciences and Engineering, King's College
London, that enabled him to participate in the conference.
References
Amari S. (1977) Biol. Cybern. 27 77-87
Amari S. (1980) Bull. Math. Biology 42 339-364
Geman S. (1979) SIAM J. Appl. Math. 36 86-105
Grajski K.A., Merzenich M.M. (1990) in Neural Information Processing Systems 2, Touretzky D.S. (Ed.) 52-59
Haber W.B. (1955) J. Psychol. 40 115-123
Jenkins W.M., Merzenich M.M., Ochs M.T., Allard T., Guic-Robles E. (1990) J. Neurophysiol. 63 82-104
Kaas J.H. (1995) in The Cognitive Neurosciences, Gazzaniga M.S. (Ed.) 51-71
Merzenich M.M., Kaas J.H., Wall J.T., Nelson R.J., Sur M., Felleman D.J. (1983a) Neuroscience 8 35-55
Merzenich M.M., Kaas J.H., Wall J.T., Sur M., Nelson R.J., Felleman D.J. (1983b) Neuroscience 10 639-665
Merzenich M.M., Nelson R.J., Stryker M.P., Cynader M.S., Schoppmann A., Zook J.M. (1984) J. Comp. Neurol. 224 591-605
Mogilner A., Grossman A.T., Ribary U., Joliot M., Volkmann J., Rapaport D., Beasley R., Llinás R. (1993) Proc. Natl. Acad. Sci. USA 90 3593-3597
Pearson J.C., Finkel L.H., Edelman G.M. (1987) J. Neurosci. 12 4209-4223
Recanzone G.H., Merzenich M.M., Jenkins W.M., Grajski K.A., Dinse H.R. (1992) J. Neurophysiol. 67 1031-1056
Ritter H., Schulten K. (1986) Biol. Cybern. 54 99-106
Takeuchi A., Amari S. (1979) Biol. Cybern. 35 63-72
Willshaw D.J., von der Malsburg C. (1976) Proc. R. Soc. Lond. B 194 203-243
Implementation Issues in the Fourier
Transform Algorithm
Yishay Mansour" Sigal Sahar t
Computer Science Dept.
Tel-Aviv University
Tel-Aviv, ISRAEL
Abstract
The Fourier transform of boolean functions has come to play an
important role in proving many important learnability results. We
aim to demonstrate that the Fourier transform techniques are also
a useful and practical algorithm in addition to being a powerful
theoretical tool. We describe the more prominent changes we have
introduced to the algorithm, ones that were crucial and without
which the performance of the algorithm would severely deteriorate. One of the benefits we present is the confidence level for each
prediction which measures the likelihood the prediction is correct.
1
INTRODUCTION
Over the last few years the Fourier Transform (FT) representation of boolean functions has been an instrumental tool in the computational learning theory community. It has been used mainly to demonstrate the learnability of various classes of
functions with respect to the uniform distribution . The first connection between the
Fourier representation and learnability of boolean functions was established in [6]
where the class $AC^0$ was learned (using its FT representation) in $O(n^{\text{poly-log}(n)})$
time. The work of [5] developed a very powerful algorithmic procedure: given a
function and a threshold parameter it finds in polynomial time all the Fourier coefficients of the function larger than the threshold. Originally the procedure was
used to learn decision trees [5], and in [8, 2, 4] it was used to learn polynomial size
DNF. The FT technique applies naturally to the uniform distribution, though some
of the learnability results were extended to product distribution [1, 3] .
* e-mail: mansour@cs.tau.ac.il
† e-mail: gales@cs.tau.ac.il
A great advantage of the FT algorithm is that it does not make any assumptions
on the function it is learning. We can apply it to any function and hope to obtain
"large" Fourier coefficients. The prediction function simply computes the sum of
the coefficients with the corresponding basis functions and compares the sum to
some threshold. The procedure is also immune to some noise and will be able to
operate even if a fraction of the examples are maliciously misclassified. Its drawback
is that it requires querying the target function on randomly selected inputs.
We aim to demonstrate that the FT technique is not only a powerful theoretical
tool, but also a practical one. In the process of implementing the Fourier algorithm
we enhanced it in order to improve the accuracy of the hypothesis we generate while
maintaining a desirable run time. We have added such features as the detection
of inaccurate approximations "on the fly" and immediate correction of the errors
incurred at a minimal cost. The methods we devised to choose the "right" parameters proved to be essential in order to achieve our goals. Furthermore, when making
predictions, it is extremely beneficial to have the prediction algorithm supply an
indicator that provides the confidence level we have in the prediction we made. Our
algorithm provides us naturally with such an indicator as detailed in Section 4.1.
The paper is organized as follows: section 2 briefly defines the FT and describes
the algorithm. In Section 3 we describe the experiments and their outcome and in
Section 4 the enhancements made. We end with our conclusions in Section 5.
2
FOURIER TRANSFORM (FT) THEORY
In this section we briefly introduce the FT theory and algorithm, its connection to
learning and the algorithm that finds the large coefficients. A comprehensive survey
of the theoretical results and proofs can be found in [7].
We consider boolean functions of $n$ variables: $f : \{0,1\}^n \to \{-1,+1\}$. We define the inner product $\langle g, f\rangle = 2^{-n}\sum_{x\in\{0,1\}^n} f(x)g(x) = E[g\cdot f]$, where $E$ is the expected value with respect to the uniform distribution. The basis is defined as follows: for each $z\in\{0,1\}^n$, we define the basis function $\chi_z(x_1,\ldots,x_n) = (-1)^{\sum_{i=1}^n x_i z_i}$. Any function of $n$ boolean inputs can be uniquely expressed as a linear combination of the basis functions. For a function $f$, the $z$th Fourier coefficient of $f$ is denoted by $\hat f(z)$, i.e., $f(x) = \sum_{z\in\{0,1\}^n} \hat f(z)\,\chi_z(x)$. The Fourier coefficients are computed by $\hat f(z) = \langle f, \chi_z\rangle$ and we call $z$ the coefficient-name of $\hat f(z)$. We define a $t$-sparse function to be a function that has at most $t$ non-zero Fourier coefficients.
2.1
PREDICTION
Our aim is to approximate the target function $f$ by a $t$-sparse function $h$. In many cases $h$ will simply include the "large" coefficients of $f$. That is, if $A = \{z_1,\ldots,z_m\}$ is the set of $z$'s for which $\hat f(z_i)$ is "large", we set $h(x) = \sum_{z_i\in A} a_i\,\chi_{z_i}(x)$, where $a_i$ is our approximation of $\hat f(z_i)$. The hypothesis we generate using this process, $h(x)$, does not have a boolean output. In order to obtain a boolean prediction we use $\mathrm{sign}(h(x))$, i.e., output $+1$ if $h(x) \ge 0$ and $-1$ if $h(x) < 0$. We want to bound the error we get from approximating $f$ by $h$ using the expected error squared, $E[(f-h)^2]$. It can be shown that bounding it bounds the boolean prediction error probability, i.e., $\Pr[f(x) \ne \mathrm{sign}(h(x))] \le E[(f-h)^2]$. For a given $t$, the $t$-sparse
hypothesis $h$ that minimizes $E[(f-h)^2]$ simply includes the $t$ largest coefficients of $f$. Note that the more coefficients we include in our approximation and the better we approximate their values, the smaller $E[(f-h)^2]$ is going to be. This provides us with the motivation to find the "large" coefficients.
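The following sketch (our own illustration, not code from the paper) shows how such a sparse hypothesis is evaluated and turned into a boolean prediction; the coefficient names, values and input below are made up for the example.

```python
import numpy as np

def chi(z, x):
    """Basis function chi_z(x) = (-1)^(sum_i x_i z_i) for 0/1 vectors z and x."""
    return (-1) ** int(np.dot(z, x) % 2)

def h(x, coeffs):
    """Sparse hypothesis: coeffs is a list of (coefficient-name z, value a) pairs."""
    return sum(a * chi(z, x) for z, a in coeffs)

def predict(x, coeffs, threshold=0.0):
    return 1 if h(x, coeffs) >= threshold else -1

# toy example on n = 4 variables with two retained coefficients
coeffs = [(np.array([1, 0, 0, 0]), 0.6), (np.array([1, 1, 0, 0]), -0.3)]
x = np.array([1, 0, 1, 0])
print(h(x, coeffs), predict(x, coeffs))
```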
2.2
FINDING THE LARGE COEFFICIENTS
The algorithm that finds the "large" coefficients receives as inputs a function $f$ (a black-box it can query) and an interest threshold parameter $\theta > 0$. It outputs a list of coefficient-names that (1) includes all the coefficient-names whose corresponding coefficients are "large", i.e., at least $\theta$, and (2) does not include "too many" coefficient-names. The algorithm runs in polynomial time in both $1/\theta$ and $n$.
SUBROUTINE search(α)
    IF TEST[f, α, θ] THEN
        IF |α| = n THEN OUTPUT α
        ELSE search(α0); search(α1);
Figure 1: Subroutine search
The basic idea of the algorithm is to perform a search in the space of the coefficient-names of $f$. Throughout the search algorithm (see Figure (1)) we maintain a prefix of a coefficient-name and try to estimate whether any of its extensions can be a coefficient-name whose value is "large". The algorithm commences by calling search($\Lambda$) where $\Lambda$ is the empty string. On each invocation it computes the predicate TEST[f, α, θ]. If the predicate is true, it recursively calls search(α0) and search(α1). Note that if TEST is very permissive we may reach all the coefficients, in which case our running time will not be polynomial; its implementation is therefore of utmost interest. Formally, TEST[f, α, θ] computes whether
$$E_{x\in\{0,1\}^{n-k}}\!\left[\left(E_{y\in\{0,1\}^{k}}\left[f(yx)\,\chi_\alpha(y)\right]\right)^2\right] \ge \theta^2, \qquad \text{where } k = |\alpha|. \tag{1}$$
Define $f_\alpha(x) = \sum_{\beta\in\{0,1\}^{n-k}} \hat f(\alpha\beta)\,\chi_\beta(x)$. It can be shown that the expected value in (1) is exactly the sum of the squares of the coefficients whose prefix is $\alpha$, i.e.,
$$E_{x\in\{0,1\}^{n-k}}\!\left[\left(E_{y\in\{0,1\}^{k}}\left[f(yx)\,\chi_\alpha(y)\right]\right)^2\right] = E_x[f_\alpha^2(x)] = \sum_{\beta\in\{0,1\}^{n-k}} \hat f^2(\alpha\beta),$$
implying that if there exists a coefficient $|\hat f(\alpha\beta)| \ge \theta$, then $E[f_\alpha^2] \ge \theta^2$. This condition guarantees the correctness of our algorithm, namely that we reach all the "large" coefficients. We would also like to bound the number of recursive calls that search performs. We can show that for at most $1/\theta^2$ of the prefixes of size $k$, TEST[f, α, θ] is true. This bounds the number of recursive calls in our procedure by $O(n/\theta^2)$.
In TEST we would like to compute the expected value, but in order to do so efficiently we settle for an approximation of its value. This can be done as follows: (1) choose $m_1$ random $x_i \in \{0,1\}^{n-k}$, (2) choose $m_2$ random $y_{i,j} \in \{0,1\}^{k}$, (3) query $f$ on $y_{i,j}x_i$ (which is why we need the query model - to query $f$ on many points with the same prefix $x_i$) and receive $f(y_{i,j}x_i)$, and (4) compute the estimate as
$$B_\alpha = \frac{1}{m_1}\sum_{i=1}^{m_1}\left(\frac{1}{m_2}\sum_{j=1}^{m_2} f(y_{i,j}x_i)\,\chi_\alpha(y_{i,j})\right)^{2}.$$
Again, for more details see [7].
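A minimal sketch of this search, assuming a black-box target f over {0,1}^n and using the sampled estimate B_alpha above in place of the exact expectation. The toy target and the values of theta, m1 and m2 are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi(z, y):
    return (-1) ** int(np.dot(z, y) % 2)

def estimate_B(f, alpha, n, m1, m2):
    """Monte-Carlo estimate of E_x[(E_y[f(yx) chi_alpha(y)])^2] for prefix alpha."""
    k = len(alpha)
    total = 0.0
    for _ in range(m1):
        x = rng.integers(0, 2, n - k)            # random suffix
        inner = 0.0
        for _ in range(m2):
            y = rng.integers(0, 2, k)            # random prefix
            inner += f(np.concatenate([y, x])) * chi(alpha, y)
        total += (inner / m2) ** 2
    return total / m1

def search(f, alpha, n, theta, m1, m2, out):
    if len(alpha) > 0 and estimate_B(f, alpha, n, m1, m2) < theta ** 2:
        return                                   # prune: no large coefficient under this prefix
    if len(alpha) == n:
        out.append(alpha.copy())
        return
    search(f, np.append(alpha, 0), n, theta, m1, m2, out)
    search(f, np.append(alpha, 1), n, theta, m1, m2, out)

# toy target: f(x) = chi_z(x) for z = 10000000, so only that coefficient is large
n = 8
z = np.zeros(n, dtype=int); z[0] = 1
f = lambda x: chi(z, x)
found = []
search(f, np.array([], dtype=int), n, theta=0.5, m1=20, m2=20, out=found)
print(found)
```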
3 EXPERIMENTS
We implemented the FT algorithm (Section 2.2) and went forth to run a series of
experiments. The parameters of each experiment include the target function, $\theta$, $m_1$ and $m_2$. We briefly introduce the parameters here and defer the detailed discussion. The parameter $\theta$ determines the threshold between "small" and "large" coefficients, thus controlling the number of coefficients we will output. The parameters $m_1$ and $m_2$ determine how accurately we approximate the TEST predicate. Failure to approximate it accurately may yield faulty, even random, results (e.g., for a ludicrous choice of $m_1 = 1$ and $m_2 = 1$) that may cause the algorithm to fail (as detailed in Section 4.3). An intelligent choice of $m_1$ and $m_2$ is therefore indispensable. This issue is discussed in greater detail in Sections 4.3 and 4.4.
Figure 2:
Typical frequency plots and typical errors . Errors occur in two cases: (1) the algorithm
predicts a +1 response when the actual response is -1 (the lightly shaded area), and (2) the algorithm
predicts a -1 response , while the true response is +1 (the darker shaded area) .
Figures (3)-(5) present representative results of our experiments in the form of graphs that evaluate the output hypothesis of the algorithm on randomly chosen test points. The target function, $f$, returns a boolean response, $\pm 1$, while the FT hypothesis returns a real response. We therefore present, for each experiment, a graph consisting of two curves: the frequency of the values of the hypothesis, $h(x)$, when $f(x) = +1$, and the second curve for $f(x) = -1$. If the two curves intersect, their intersection represents the inherent error the algorithm makes.
Figure 3: Decision trees of depth 5 and 3 with 41 variables. The 5-deep (3-deep) decision tree returns -1 about 50% (62.5%) of the time. The results shown above are for values $\theta = 0.03$, $m_1 = 100$ and $m_2 = 5600$ ($\theta = 0.06$, $m_1 = 100$ and $m_2 = 1300$). Both graphs are disjoint, signifying 0% error.
4 RESULTS AND ALGORITHM ENHANCEMENTS
4.1 CONFIDENCE LEVELS
One of our most consistent and interesting empirical findings was the distribution
of the error versus the value of the algorithm's hypothesis: its shape is always that
of a bell shaped curve. Knowing the error distribution permits us to determine with
a high (often 100%) confidence level the result for most of the instances, yielding
the much sought after confidence level indicator. Though this simple logic thus far
has not been supported by any theoretical result, our experimental results provide
overwhelming evidence that this is indeed the case.
Let us demonstrate the strength of this technique: consider the results of the 16-term
DNF portrayed in Figure (4) . If the algorithm's hypothesis outputs 0.3 (translated
Figure 4: 16-term DNF. This (randomly generated) DNF of 40 variables returns -1 about 61% of the time. The results shown above are for the values $\theta = 0.02$, $m_2 = 12500$ and $m_1 = 100$. The hypothesis uses 186 non-zero coefficients. A total of 9.628% error was detected.
into 1 in boolean terms by the Sign function), we know with an 83% confidence
level that the prediction is correct. If the algorithm outputs -0.9 as its prediction,
we can virtually guarantee that the response is correct. Thus, although the total
error level is over 9% we can supply a confidence level for each prediction. This is
an indispensable tool for practical usage of the hypothesis .
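One way to turn this observation into a usable confidence indicator is sketched below (our own construction, with synthetic data standing in for real hypothesis outputs): histogram h(x) on held-out labelled examples and record how often sign(h(x)) is correct in each bin; at prediction time, the bin containing h(x) supplies the confidence level.

```python
import numpy as np

def confidence_table(h_values, labels, n_bins=20):
    """Empirical accuracy of sign(h) within bins of the hypothesis value."""
    h_values, labels = np.asarray(h_values), np.asarray(labels)
    edges = np.linspace(h_values.min(), h_values.max(), n_bins + 1)
    table = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (h_values >= lo) & (h_values < hi)
        if mask.any():
            pred = np.where(h_values[mask] >= 0, 1, -1)
            table.append((lo, hi, float(np.mean(pred == labels[mask]))))
    return table   # list of (bin_lo, bin_hi, empirical accuracy)

rng = np.random.default_rng(1)
labels = rng.choice([-1, 1], size=1000)
h_vals = labels * 0.5 + rng.normal(0, 0.4, size=1000)   # synthetic hypothesis outputs
print(confidence_table(h_vals, labels)[:3])
```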
4.2
DETERMINING THE THRESHOLD
Once the list of large coefficients is built and we compute the hypothesis $h(x)$, we still need to determine the threshold, $a$, to which we compare $h(x)$ (i.e., predict $+1$ iff $h(x) > a$). In the theoretical work it is assumed that $a = 0$, since a priori one cannot make a better guess. We observed that fixing $a$'s value according to our hypothesis improves the hypothesis: $a$ is chosen to minimize the error with respect to a number of random examples.
Figure 5: 8-term DNF. This (randomly generated) DNF of 40 variables returns -1 about 43% of the time. The results shown above are for the values $\theta = 0.03$, $m_2 = 5600$ and $m_1 = 100$. The hypothesis consists of 112 non-zero coefficients.
For example, when trying to learn an 8-term DNF with the zero threshold we will
receive a total of 1.22% overall error as depicted in Figure (5). However, if we
choose the threshold to be 0.32, we will get a diminished error of 0.068%.
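A sketch of this threshold selection on labelled examples follows; the function names and the synthetic data are our own assumptions.

```python
import numpy as np

def best_threshold(h_values, labels):
    """Scan candidate thresholds and keep the one with the lowest error."""
    h_values, labels = np.asarray(h_values), np.asarray(labels)
    best_a, best_err = 0.0, np.mean(np.where(h_values >= 0.0, 1, -1) != labels)
    for a in np.unique(h_values):
        err = np.mean(np.where(h_values >= a, 1, -1) != labels)
        if err < best_err:
            best_a, best_err = a, err
    return best_a, best_err

rng = np.random.default_rng(2)
labels = rng.choice([-1, 1], size=500)
h_vals = labels * 0.4 + 0.3 + rng.normal(0, 0.3, size=500)   # biased hypothesis outputs
print(best_threshold(h_vals, labels))
```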
4.3
ERROR DETECTION ON THE FLY - RETRY
During our experimentations we have noticed that at times the estimate $B_\alpha$ for $E[f_\alpha^2]$ may be inaccurate. A faulty approximation may result in the abortion of the traversal of "interesting" subtrees, thus decreasing the hypothesis' accuracy, or in traversal of "uninteresting" subtrees, thereby needlessly increasing the algorithm's runtime. Since the properties of the FT guarantee that $E[f_\alpha^2] = E[f_{\alpha 0}^2] + E[f_{\alpha 1}^2]$, we expect $B_\alpha \approx B_{\alpha 0} + B_{\alpha 1}$. Whenever this is not true, we conclude that at least one of our approximations is somewhat lacking. We can remedy the situation by
running the search procedure again on the children, i.e., retry node a. This solution increases the probability of finding all the "large" coefficients. A brute force
implementation may cost us an inordinate amount of time since we may retraverse
subtrees that we have previously visited. However, since any discrepancies between
the parent and its children are discovered-and corrected-as soon as they appear,
we can circumvent any retraversal. Thus, we correct the errors without any superfluous additions to the run time.
Figure 6: Majority function of 41 variables. The results portrayed are for values $m_1 = 100$, $m_2 = 800$ and $\theta = 0.08$. Note the majority-function characteristic distribution of the results¹.
We demonstrate the usefulness of this approach with an example of learning the
majority function of 41 boolean variables . Without the retry mechanism, 8 (of a
total of 42) large coefficients were missed, giving rise to 13.724% error represented by
the shaded area in Figure (6). With the retries all the correct coefficients were found,
yielding perfect (flawless) results represented in the dotted curve in Figure (6).
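A minimal sketch of the consistency check behind the retry mechanism, assuming the estimates for a node and its two children have already been computed; the tolerance value is an arbitrary assumption.

```python
# The FT identity E[f_alpha^2] = E[f_{alpha0}^2] + E[f_{alpha1}^2] means the
# estimates should satisfy B_alpha ~= B_alpha0 + B_alpha1; a large mismatch
# flags a node whose children should be searched again.
def needs_retry(b_parent: float, b_child0: float, b_child1: float,
                tolerance: float = 0.25) -> bool:
    """Return True if the parent/children estimates are inconsistent."""
    return abs(b_parent - (b_child0 + b_child1)) > tolerance * max(b_parent, 1e-12)

print(needs_retry(0.40, 0.18, 0.21))   # consistent -> False
print(needs_retry(0.40, 0.05, 0.06))   # children too small -> True
```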
4.4
DETERMINING THE PARAMETERS
One of our aims was to determine the values of the different parameters, $m_1$, $m_2$ and $\theta$. Recall that in our algorithm we calculate $B_\alpha$, the approximation of $E_x[f_\alpha^2(x)]$, where $m_1$ is the number of times we sample $x$ in order to make this approximation. We sample $y$ randomly $m_2$ times to approximate $f_\alpha(x_i) = E_y[f(yx_i)\,\chi_\alpha(y)]$, for each $x_i$. This approximation of $f_\alpha(x_i)$ has a standard deviation of approximately $1/\sqrt{m_2}$. Assume that the true value is $\beta_i$, i.e. $\beta_i = f_\alpha(x_i)$; then we expect the contribution of the $i$th element to $B_\alpha$ to be $(\beta_i \pm \tfrac{1}{\sqrt{m_2}})^2 = \beta_i^2 \pm \tfrac{2\beta_i}{\sqrt{m_2}} + \tfrac{1}{m_2}$. The algorithm tests $B_\alpha = \tfrac{1}{m_1}\sum_i \beta_i^2 \ge \theta^2$; therefore, to ensure a low error, based on the above argument, we choose $m_2 = 5/\theta^2$.
Choosing the right value for m2 is of great importance. We have noticed on more
than one occasion that increasing the value of m2 actually decreases the overall run
time. This is not obvious at first : seemingly, any increase in the number of times we
loop in the algorithm only increases the run time. However, a more accurate value
for m2 means a more accurate approximation of the TEST predicate, and therefore
less chance of redundant recursive calls (the run time is linear in the number of
recursive calls) . We can see this exemplified in Figure (7) where the number of
recursive calls increase drastically as m2 decreases. In order to present Figure (7) ,
1The "peaked" distribution of the results is not coincidental. The FT of the majority function has 42 large
equal coefficients, labeled cmaj' one for each singleton (a vector of the form 0 .. 010 .. 0) and one for parity (the
all-ones vector). The zeros of an input vector with z zeros we will contribute ?1(2z - 41). cmajl to the result
and the parity will contribute ?cma ) (depending on whether z is odd or even), so that the total contribution is
an even factor of c ma )' Since c ma ) =
around the peaks is due to the
f~ct
(~g);tcr
- 0 .12, we have peaks around factors of 0.24 . The distribution
we only approximate each coefficient and get a value close to c ma )'
we learned the same 3-term DNF, always using $\theta = 0.05$ and $m_1 \cdot m_2 = 100{,}000$. The trials differ in the specific values chosen in each trial for $m_2$.
Figure 7: Determining $m_2$. Note that the number of recursive calls grows dramatically as $m_2$'s value decreases. For example, for $m_2 = 400$, the number of recursive calls is 14,433 compared with only 1,329 recursive calls for $m_2 = 500$.
SPECIAL CASES: When $k = |\alpha|$ is either very small or very large, the values we choose for $m_1$ and $m_2$ can be self-defeating: when $k \approx n$ we still loop $m_1$ ($\gg 2^{n-k}$) times, though often without gaining additional information. The same holds for very small values of $k$, and the corresponding $m_2$ ($\gg 2^{k}$) values. We therefore add the following feature: for small and large values of $k$ we calculate the expected value exactly, thereby decreasing the run time and increasing accuracy.
5
CONCLUSIONS
In this work we implemented the FT algorithm and showed it to be a useful practical
tool as well as a powerful theoretical technique. We reviewed major enhancements
the algorithm underwent during the process. The algorithm successfully recovers
functions in a reasonable amount of time. Furthermore, we have shown that the
algorithm naturally derives a confidence parameter. This parameter enables the user
in many cases to conclude that the prediction received is accurate with extremely
high probability, even if the overall error probability is not negligible.
Acknowledgements
This research was supported in part by The Israel Science Foundation administered by The Israel
Academy of Science and Humanities and by a grant of the Israeli Ministry of Science and Technology.
References
[1] Mihir Bellare. A technique for upper bounding the spectral norm with applications to learning. In 5th Annual Workshop on Computational Learning Theory, pages 62-70, July 1992.
[2] Avrim Blum, Merrick Furst, Jeffrey Jackson, Michael Kearns, Yishay Mansour, and Steven Rudich. Weakly learning DNF and characterizing statistical query learning using Fourier analysis. In The 26th Annual ACM Symposium on Theory of Computing, pages 253-262, 1994.
[3] Merrick L. Furst, Jeffrey C. Jackson, and Sean W. Smith. Improved learning of AC0 functions. In 4th Annual Workshop on Computational Learning Theory, pages 317-325, August 1991.
[4] J. Jackson. An efficient membership-query algorithm for learning DNF with respect to the uniform distribution. In Annual Symposium on Switching and Automata Theory, pages 42-53, 1994.
[5] E. Kushilevitz and Y. Mansour. Learning decision trees using the Fourier spectrum. SIAM Journal on Computing 22(6):1331-1348, 1993.
[6] N. Linial, Y. Mansour, and N. Nisan. Constant depth circuits, Fourier transform and learnability. JACM 40(3):607-620, 1993.
[7] Y. Mansour. Learning Boolean Functions via the Fourier Transform. In Advances in Neural Computation, edited by V.P. Roychowdhury, K-Y. Siu and A. Orlitsky, Kluwer Academic Pub. 1994. Can be accessed via ftp://ftp.math.tau.ac.il/pub/mansour/PAPERS/LEARNING/fourier-survey.ps.Z.
[8] Yishay Mansour. An O(n^{log log n}) learning algorithm for DNF under the uniform distribution. J. of Computer and System Science, 50(3):543-550, 1995.
Adaptive Retina with Center-Surround
Receptive Field
Shih-Chii Liu and Kwabena Boahen
Computation and Neural Systems
139-74 California Institute of Technology
Pasadena, CA 91125
shih@pcmp.caltech.edu, buster@pcmp.caltech.edu
Abstract
Both vertebrate and invertebrate retinas are highly efficient in extracting contrast independent of the background intensity over five or more decades. This efficiency has been rendered possible by the adaptation of the DC operating point to the background intensity while maintaining high gain transient responses. The center-surround properties of the retina allow the system to extract information at the edges in the image. This silicon retina models the adaptation properties of the receptors and the antagonistic center-surround properties of the laminar cells of the invertebrate retina and the outer-plexiform layer of the vertebrate retina. We also illustrate the spatio-temporal responses of the silicon retina on moving bars. The chip has 59x64 pixels on a 6.9x6.8 mm² die and it is fabricated in 2 μm n-well technology.
1
Introduction
It has been observed previously that the initial layers of the vertebrate and invertebrate retina systems perform very similar processing functions on the incoming
input signal[1]. The response versus log intensity curves of the receptors in invertebrate and vertebrate retinas look similar. The curves show that the receptors
have a larger gain for changes in illumination than to steady illumination, i.e, the
receptors adapt. This adaptation property allows the receptor to respond over a
large input range without saturating.
Anatomically, the eyes of invertebrates differ greatly from that of vertebrates. Ver-
tebrates normally have two simple eyes while insects have compound eyes. Each
compound eye in the fly consists of 3000-4000 ommatidia and each ommatidium
consists of 8 photoreceptors. Six of these receptors (which are also called RI-R6)
are in a single spectral class. The other two receptors, R7 and R8 provide channels
for wavelength discrimination and polarization.
The vertebrate eye is divided into the outer-plexiform layer and the inner-plexiform
layer. The outer-plexiform layer consists of the rods and cones, horizontal cells
and bipolar cells. Invertebrate receptors depolarise in response to an increase in
light, in contrast to vertebrate receptors, which hyperpolarise to an increase in light
intensity. Both vertebrate and invertebrate receptors show light adaptation over at
least five decades of background illumination. This adaptation property allows the
retina to maintain a high transient gain to contrast over a wide range of background
intensities.
The invertebrate receptors project to the next layer which is called the lamina layer.
This layer consists primarily of monopolar cells which show a similar response versus log intensity curve to that of vertebrate bipolar cells in the outer-plexiform
layer. Both cells respond with graded potentials to changes in illumination. These
cells also show a high transient gain to changes in illumination while ignoring the
background intensity and they possess center-surround receptive fields. In vertebrates, the cones which are excited by the incoming light, activate the horizontal
cells which in turn inhibit the cones. The horizontal cells thus mediate the lateral
inhibition which produces the center-surround properties. In insects, a possible
process of this lateral inhibition is done by current flow from the photoreceptors
through the epithelial glial cells surrounding an ommatidium or the modulation
of the local field potential in the lamina to influence the transmembrane potential
of the photoreceptor[2]. The center-surround receptive fields allow contrasts to be
accentuated since the surround computes a local mean and subtracts that from the
center signal.
Mahowald[3] previously described a silicon retina with adaptive photoreceptors and
Boahen et al.[4] recently described a compact current-mode analog model of the
outer-plexiform layer of the vertebrate retina and analysed the spatio-temporal
processing properties of this retina[5]. A recent array of photoreceptors from
Delbriick[6] uses an adaptive photoreceptor circuit that adapts its operating point
to the background intensity so that the pixel shows a high transient gain over 5
decades of background illumination. However this retina does not have spatial
coupling between pixels.
The pixels in the silicon retina described here have a compact circuit that incorporates both spatial and temporal filtering with light adaptation over 5 decades
of background intensity. The network exhibits center-surround behavior. Boahen
et al.[4] in their current-mode diffusor retina, draw an analogy between parts of
the diffusor circuit and the different cells in the outer-plexiform layer. While the
same analogy cannot be drawn from this silicon retina to the invertebrate retina
since the function of the cells are not completely understood, the output responses
of the retina circuit are similar to the output responses of the photoreceptor and
monopolar cells in invertebrates.
The circuit details are described in Section 2 and the spatio-temporal processing
performed by the retina on stimulus moving at different speeds is shown in Section
3.
2
Circuit
Figure 1: (a) One-dimensional version of the retina. (b) Small-signal equivalent of the circuit in (a).
A one-dimensional version of the retina is shown in Figure 1(a). The retina consists of an adaptive photoreceptor circuit at each pixel coupled together with diffusors, controlled by the voltages $V_g$ and $V_h$. The output of this network can either be obtained at the voltage output, V, or at the current output, $I_o$, but the outputs have different
properties. Phototransduction is obtained by using a reverse-biased photodiode
which produces current that is proportional to the incident light. The logarithmic
properties are obtained by operating the feedback transistor shown in Figure 1(a) in the subthreshold region. The voltage change at the output photoreceptor, $V_r$, is proportional to a small contrast, since
$$dV_r = \frac{U_T}{\kappa}\,d(\log I) = \frac{U_T}{\kappa}\,\frac{dI}{I}, \qquad \kappa = \frac{C_{ox}}{C_{ox}+C_d},$$
where $U_T$ is the thermal voltage, $C_{ox}$ is the oxide capacitance and $C_d$ is the depletion capacitance of a transistor. The circuit works as follows: If
the photocurrent through the photodiode increases, $V_r$ will be pulled low and the output voltage at V increases by $\Delta V = A\,\Delta V_r$, where A is the amplifier gain of the output stage. This output change in V is coupled into $V_{c1}$ through a capacitor divider ratio, $C_1/(C_1+C_2)$. The feedback transistor, M4, operates in the subthreshold region and supplies the current necessary to offset the photocurrent. The increase in $V_{c1}$ (i.e. the gate voltage of M4) causes the current supplied by M3 to increase, which pulls the node voltage, $V_r$, back to the voltage level needed by M1 to sink the bias current from transistor M2.
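As a rough numeric illustration of the small-contrast relation above (our own arithmetic, with assumed values U_T of about 25 mV and kappa of about 0.7, neither taken from the paper):

```python
import math

U_T = 0.025   # thermal voltage in volts (assumed)
kappa = 0.7   # C_ox / (C_ox + C_d) (assumed)

contrast = 0.4                                   # 40% peak-to-peak, as in Figure 2
delta_Vr = (U_T / kappa) * math.log((1 + contrast / 2) / (1 - contrast / 2))
print(f"V_r swing for a 40% p-p contrast: {delta_Vr * 1e3:.1f} mV")
# the amplifier gain A of the output stage then scales this swing up at node V
```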
Figure 2: This figure shows the output response of the receptor to a variation of
about 40% p-p in the intensity of a flickering LED light incident on the chip. The
response shows that the high sensitivity of the receptor to the LED is maintained
over 5 decades of differing background intensities. The numbers on the section of
the curve indicate the log intensity of the mean value. 0 log is the absolute intensity
from the LED.
The adaptive element, M3, has an I-V curve which looks like a hyperbolic sine.
The small slope of the I-V curve in the middle means that for small changes of
voltages across M3, the element looks like an open-circuit. With large changes of
voltage across M3, the current through M3 becomes exponential and $V_{c1}$ is charged
or discharged almost instantaneously.
Figure 2 shows the output response of the photoreceptor to a square-wave variation
of about 40% p-p in the intensity of a red LED (635 nm). The results show that
the circuit is able to discern the small contrast over five decades of background intensity while the steady-state voltage of the photoreceptor output varies only about
15mV. Further details of the photoreceptor circuit and its adaptation properties
are described in Delbrück [6].
3
Spatio-Temporal Response
The spatio-temporal response of the network to different moving stimuli is explored
in this section. The circuit shown in Figure 1(a) can be transferred to an equivalent network of resistors and capacitors as shown in Figure 1(b) to obtain the transfer function of the circuit. The capacitors at each node are necessary to model the temporal responses of the circuit.
Figure 3: (a) Response of a pixel to a grey strip 2 pixels wide of gray-level "0.4" on a dark background of level "0" moving past the pixel at different speeds. (b) Response of a pixel to a dark strip of gray-level "0.6" on a white background of level "1" moving past the pixel at different speeds. The voltage shown on these curves is not the direct measurement of the voltage at V; rather, V drives a current-sensing transistor and this current is then sensed by an off-chip current sense-amplifier.
The chip results from the experiments below illustrate the center-surround properties of the network and the difference in time-constants between the surround and
center.
3.1
Chip Results
Data from the 2D chip is shown in the next few figures. In these experiments, we
are only looking at one pixel of the 2D array. A rotating circular fly-wheel stimulus
with strips of alternating contrasts is mounted above the chip. The stimulus was
created using Mathematica. Figure 3a shows the spatio-temporal impulse response
of one pixel measured at V, with a small strip at level "0.4" on a dark background of
level "0" moving past the pixels on the row. At slow speeds, the impulse response
shows a center-surround behavior where the pixel first receives inhibition from the
preceding pixels which are excited by the stimulus. When the stimulus moves by
the pixel of interest, it is excited and then it is inhibited by the subsequent pixels
seeing the stimulus.
Figure 4: Response of a pixel to a strip of varying contrasts on a dark background
moving past the pixel at a constant speed.
At faster speeds, the initial inhibition in the response grows smaller until at some
even faster speed, the initial inhibition is no longer observed. This response comes
about because the inhibition from the surround has a longer-time constant than the
center. When the stimulus moves past the pixel of interest, the inhibition from the
preceding pixels excited by the stimulus does not have time to inhibit the pixel of
interest. Hence the excitation is seen first and then the inhibition comes into place
when the stimulus passes by. Note that in these figures (Figures 3-4), the curves
have been displaced to show the pixel response at different speeds of the moving
stimulus. The voltage shown on these curves is not the direct measurement of the
voltage at V; rather, V drives a current-sensing transistor and this current is
then sensed by an off-chip current sense-amplifier.
Figure 3b shows the spatio-temporal impulse response of one pixel with a similar
size strip of level "0.6" on a light background of level "1" moving past the row of
pixels. The same inhibition behavior is seen for increasing stimulus speeds. Figure 4 shows the output response at V, for the same stimulus of gray-levels varying
from "0.2" to "0.8" on a dark background of level "0" moving at one speed. The
peak excitation response is plotted against the contrast in Figure 5. A level of "0.2"
corresponds to an irradiance of 15 mW/m² while a level of "0.8" corresponds to an irradiance of 37.4 mW/m². These measurements are done with a photometer mounted
about 1.5in above a piece of paper with the contrast which is being measured. The
irradiance varies exponentially with increasing level.
4
Conclusion
In this paper, we described an adaptive retina with a center-surround receptive
field. The system properties of this retina allows it to model functionally either the
responses of the laminar cells in the invertebrate retina or the outer-plexiform layer
of vertebrate retina. We show that the circuit shows adaptation to changes over
5 decades of background intensities. The center-surround property of the network
can be seen from its spatio-temporal response to different stimulus speeds. This
property serves to remove redundancy in space and time of the input signal.
Acknowledgements
We thank Carver Mead for his support and encouragement. SC Liu is supported by
an NIMH fellowship and K Boahen is supported by a Sloan fellowship. We thank
Tobias Delbriick for the inspiration and help in testing the design. We also thank
Rahul Sarpeshkar and Bradley Minch for comments. Fabrication was provided by
MOSIS.
References
[1] S. B. Laughlin, "Coding efficiency and design in retinal processing", In: Facets
of Vision (D. G. Stavenga and R. C. Hardie, eds) pp. 213-234. Springer, Berlin,
1989.
[2] S. R. Shaw, "Retinal resistance barriers and electrical lateral inhibition", Nature, Lond. 255: 480-483, 1975.
[3] M. A. Mahowald, "Silicon Retina with Adaptive Photoreceptors" in
SPIE/SPSE Symposium on Electronic Science and Technology: From Neurons
to Chips. Orlando, FL, April 1991.
[4] K. A. Boahen and A. G. Andreou, "A Contrast Sensitive Silicon Retina with
Reciprocal Synapses", In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 4, 764-772. San Mateo, CA: Morgan Kaufmann, 1992.
[5] K. A. Boahen, "Spatiotemporal sensitivity of the retina: A physical model",
CNS Memo CNS-TR-91-06, California Institute of Technology, Pasadena, CA
91125, June 1991.
[6] T. Delbrück, "Analog VLSI Phototransduction by continuous-time, adaptive,
logarithmic photoreceptor circuits", CNS Memo No.30, California Institute of
Technology, Pasadena, CA 91125, 1994.
64 | 1,056 | Forward-backward retraining of recurrent
neural networks
Andrew Senior ?
Tony Robinson
Cambridge University Engineering Department
Trumpington Street, Cambridge, England
Abstract
This paper describes the training of a recurrent neural network
as the letter posterior probability estimator for a hidden Markov
model, off-line handwriting recognition system. The network estimates posterior distributions for each of a series of frames representing sections of a handwritten word. The supervised training
algorithm, backpropagation through time, requires target outputs
to be provided for each frame. Three methods for deriving these
targets are presented. A novel method based upon the forwardbackward algorithm is found to result in the recognizer with the
lowest error rate.
1 Introduction
In the field of off-line handwriting recognition, the goal is to read a handwritten
document and produce a machine transcription. Such a system could be used
for a variety of purposes, from cheque processing and postal sorting to personal
correspondence reading for the blind or historical document reading. In a previous
publication (Senior 1994) we have described a system based on a recurrent neural
network (Robinson 1994) which can transcribe a handwritten document.
The recurrent neural network is used to estimate posterior probabilities for character classes, given frames of data which represent the handwritten word. These
probabilities are combined in a hidden Markov model framework, using the Viterbi
algorithm to find the most probable state sequence.
To train the network, a series of targets must be given. This paper describes three
methods that have been used to derive these probabilities. The first is a naive bootstrap method, allocating equal lengths to all characters, used to start the training
procedure. The second is a simple Viterbi-style segmentation method that assigns a
single class label to each of the frames of data. Such a scheme has been used before
in speech recognition using recurrent networks (Robinson 1994). This representation, is found to inadequately represent some frames which can represent two letters,
or the ligatures between letters. Thus, by analogy with the forward-backward algorithm (Rabiner and Juang 1986) for HMM speech recognizers, we have developed a
*Now at IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA.
forward-backward method for retraining the recurrent neural network. This assigns
a probability distribution across the output classes for each frame of training data,
and training on these 'soft labels' results in improved performance of the recognition
system.
This paper is organized in four sections. The following section outlines the system
in which the neural network is used, then section 3 describes the recurrent network
in more detail. Section 4 explains the different methods of target estimation and
presents the results of experiments before conclusions are presented in the final
section.
2 System background
The recurrent network is the central part of the handwriting recognition system.
The other parts are summarized here and described in more detail in another publication (Senior 1994). The first stage of processing converts the raw data into
an invariant representation used as an input to the neural network. The network
outputs are used to calculate word probabilities in a hidden Markov model.
First, the scanned page image is automatically segmented into words and then normalized. Normalization removes variations in the word appearance that do not
affect its identity, such as rotation, scale, slant, slope and stroke thickness. The
height of the letters forming the words is estimated, and magnifications, shear and
thinning transforms are applied, resulting in a more robust representation of the
word. The normalized word is represented in a compact canonical form encoding
both the shape and salient features. All those features falling within a narrow vertical strip across the word are termed a frame. The representation derived consists
of around 80 values for each of the frames, denoted x_t. The T frames (x_1, ..., x_T) for a whole word are written x_1^T. Five frames would typically be enough to represent a single character. The recurrent network takes these frames sequentially and estimates the posterior character probability distribution given the data: P(λ_i | x_1^t), for each of the letters, a, ..., z, denoted λ_0, ..., λ_25. These posterior probabilities are
scaled by the prior class probabilities, and are treated as the emission probabilities
in a hidden Markov model.
A separate model is created for each word in the vocabulary, with one state per
letter. Transitions are allowed only from a state to itself or to the next letter in the
word. The set of states in the models is denoted Q = {q_1, ..., q_N} and the letter represented by q_i is given by L(q_i), where L : Q → {λ_0, ..., λ_25}.
Word error rates are presented for experiments on a single-writer task tested with
a 1330 word vocabulary^1. Statistical significance of the results is evaluated using
Student's t-test, comparing word recognition rates taken from a number of networks
trained under the same conditions but with different random initializations. The
results of the t-test are written: T( degrees of freedom) and the tabulated values:
tsignificance (degrees of freedom).
3 Recurrent networks
This section describes the recurrent error propagation network which has been used
as the probability distribution estimator for the handwriting recognition system.
Recurrent networks have been successfully applied to speech recognition (Robinson 1994) but have not previously been used for handwriting recognition, on-line
or off-line. Here a left-to-right scanning process is adopted to map the frames of
a word into a sequence, so adjacent frames are considered in consecutive instants.
^1 The experimental data are available at ftp://svr-ftp.eng.cam.ac.uk/pub/data
A recurrent network is well suited to the recognition of patterns occurring in a
time-series because series of arbitrary length can be processed, with the same processing being performed on each section of the input stream. Thus a letter 'a'
can be recognized by the same process, wherever it occurs in a word. In addition, internal 'state' units are available to encode multi-frame context information
so letters spread over several frames can be recognized. The recurrent network
[Figure 1 here: input frames feed the recurrent network; character probability outputs and the feedback input/output units are connected through a unit time delay.]
Figure 1: A schematic of the recurrent error propagation network. For clarity only a few of the units and links are shown.
architecture used here is a single layer of standard perceptrons with nonlinear activation functions. The output o_i of a unit i is a function of the inputs a_k and the network parameters, which are the weights of the links w_ik with a bias b_i:

o_i = f_i(σ_i),   (1)
σ_i = b_i + Σ_k a_k w_ik.   (2)

The network is fully connected - that is, each input is connected to every output. However, some of the input units receive no external input and are connected one-to-one to corresponding output units through a unit time-delay (figure 1). The remaining input units accept a single frame of parametrized input and the remaining 26 output units estimate letter probabilities for the 26 character classes. The feedback units have a standard sigmoid activation function (3), but the character outputs have a 'softmax' activation function (4):

f_i(σ_i) = 1 / (1 + e^{-σ_i}),   (3)
f_i(σ_i) = e^{σ_i} / Σ_j e^{σ_j}.   (4)
During recognition ('forward propagation'), the first frame is presented at the input
and the feedback units are initialized to activations of 0.5. The outputs are calculated (1 and 2) and read off for use in the Markov model. In the next iteration, the
outputs of the feedback units are copied to the feedback inputs, and the next frame
presented to the inputs. Outputs are again calculated, and the cycle is repeated for
each frame of input, with a probability distribution being generated for each frame.
To allow the network to assimilate context information, several frames of data are
passed through the network before the probabilities for the first frame are read
off, previous output probabilities being discarded. This input/output latency is
maintained throughout the input sequence, with extra, empty frames of inputs
being presented at the end to give probability distributions for the last frames of
true inputs. A latency of two frames has been found to be most satisfactory in
experiments to date.
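To make the recurrent forward pass concrete, the following Python sketch (not the authors' implementation; the 80-dimensional frames, 32 feedback units and the two-frame latency are illustrative assumptions consistent with the text) runs the single fully connected layer over a word, with sigmoid feedback units and softmax character outputs.

import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def forward_pass(frames, W, b, n_feedback=32, n_classes=26, latency=2):
    """Run the recurrent net over a word; returns one class distribution per frame.
    W, b: weights and biases of the single layer; its outputs are
    [n_classes character units, n_feedback state units]."""
    n_in = len(frames[0])
    state = np.full(n_feedback, 0.5)                     # feedback units start at 0.5
    outputs = []
    padded = list(frames) + [np.zeros(n_in)] * latency   # empty frames flush the latency
    for t, x in enumerate(padded):
        a = W @ np.concatenate([x, state]) + b
        char = softmax(a[:n_classes])                    # character posteriors
        state = 1.0 / (1.0 + np.exp(-a[n_classes:]))     # sigmoid feedback units
        if t >= latency:                                 # discard the first `latency` outputs
            outputs.append(char)
    return np.array(outputs)

rng = np.random.default_rng(0)
frames = [rng.normal(size=80) for _ in range(7)]
W = rng.normal(scale=0.1, size=(26 + 32, 80 + 32))
b = np.zeros(26 + 32)
print(forward_pass(frames, W, b).shape)   # (7, 26): one distribution per input frame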
3.1 Training
To be able to train the network, the target values ζ_j(t) desired for the outputs o_j(x_t), j = 0, ..., 25, for frame x_t must be specified. The target specification is dealt
with in the next section. It is the discrepancy between the actual outputs and these
targets which make up the objective function to be maximized by adjusting the
internal weights of the network. The usual objective function is the mean squared
error, but here the relative entropy, G, of the target and output distributions is
used:
G = - Σ_t Σ_j ζ_j(t) log( ζ_j(t) / o_j(x_t) ).   (5)
At the end of a word, the errors between the network's outputs and the targets
are propagated back using the generalized delta rule (Rumelhart et al. 1986) and
changes to the network weights are calculated. The network at successive time
steps is treated as adjacent layers of a multi-layer network. This process is generally known as 'back-propagation through time' (Werbos 1990). After processing T
frames of data with an input/output latency, the network is equivalent to a (T +
latency) layer perceptron sharing weights between layers. For a detailed description
of the training procedure, the reader is referred elsewhere (Rumelhart et al. 1986;
Robinson 1994).
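As a minimal illustration of the objective in equation (5), the following sketch (my own, with assumed array shapes) evaluates G for one word and forms the error signal that back-propagation through time pushes back from the softmax outputs.

import numpy as np

def relative_entropy_loss(targets, outputs, eps=1e-12):
    """G = -sum_t sum_j zeta_j(t) * log(zeta_j(t) / o_j(x_t)), eq. (5)."""
    t = np.clip(targets, eps, 1.0)
    o = np.clip(outputs, eps, 1.0)
    return -np.sum(t * np.log(t / o))

def output_deltas(targets, outputs):
    """Gradient of -G with respect to the softmax pre-activations: o_j - zeta_j per frame,
    which is the error signal propagated back through time."""
    return outputs - targets

T, C = 10, 26
rng = np.random.default_rng(1)
outputs = rng.dirichlet(np.ones(C), size=T)         # stand-in network outputs
targets = np.eye(C)[rng.integers(0, C, size=T)]     # hard, Viterbi-style targets
print(relative_entropy_loss(targets, outputs))
print(output_deltas(targets, outputs).shape)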
4 Target re-estimation
The data used for training are only labelled by word. That is, each image represents
a single word, whose identity is known, but the frames representing that word are
not labelled to indicate which part of the word they represent. To train the network,
a label for each frame's identity must be provided. Labels are indicated by the state
St E Q and the corresponding letter L(St) of which a frame Xt is part.
4.1 A simple solution
To bootstrap the network, a naive method was used, which simply divided the word
up into sections of equal length, one for each letter in the word. Thus, for an N-letter word of T frames, x_1^T, the first letter was assumed to be represented by frames x_1^{T/N}, the next by x_{T/N+1}^{2T/N}, and so on. The segmentation is mapped into a set of targets as follows:

ζ_j(t) = 1 if L(s_t) = λ_j, and 0 otherwise.   (6)
Figure 2a shows such a segmentation for a single word. Each line, representing
ζ_j(t) for some j, has a broad peak for the frames representing letter λ_j. Such a
segmentation is inaccurate, but can be improved by adding prior knowledge. It
is clear that some letters are generally longer than others, and some shorter. By
weighting letters according to their a priori lengths it is possible to give a better,
but still very simple, segmentation. The letters 'i, l' are given a length of 1/2 and 'm, w' a length of 3/2 relative to other letters. Thus in the word 'wig', the first half of the frames would be assigned the label 'w', the next sixth 'i' and the last third the label 'g'. While this segmentation is constructed with no regard for the data being segmented, it is found to provide a good initial approximation from which it is possible to train the network to recognize words, albeit with high error rates.
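A small sketch of this prior-length bootstrap segmentation (my own illustration; the per-letter weights follow the values given above, everything else is assumed):

import numpy as np

LETTER_WEIGHTS = {'i': 0.5, 'l': 0.5, 'm': 1.5, 'w': 1.5}   # all other letters weigh 1

def bootstrap_targets(word, n_frames, n_classes=26):
    """Assign each frame a one-hot letter target in proportion to prior letter lengths."""
    weights = np.array([LETTER_WEIGHTS.get(c, 1.0) for c in word])
    bounds = np.cumsum(weights) / weights.sum() * n_frames   # frame index where each letter ends
    targets = np.zeros((n_frames, n_classes))
    letter = 0
    for t in range(n_frames):
        while t >= bounds[letter] and letter < len(word) - 1:
            letter += 1
        targets[t, ord(word[letter]) - ord('a')] = 1.0
    return targets

targets = bootstrap_targets('wig', 12)
print(targets.argmax(axis=1))   # roughly: first half 'w', next sixth 'i', last third 'g'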
4.2 Viterbi re-estimation
Having trained the network to some accuracy, it can be used to calculate a good
estimate of the probability of each frame belonging to any letter. The probability
of any state sequence can then be calculated in the hidden Markov model, and
the most likely state sequence through the correct word S* found using dynamic
programming. This best state sequence S* represents a new segmentation giving a
label for each frame. For a network which models the probability distributions well,
this segmentation will be better than the automatic segmentation of section 4.1
" Each line represents
Figure 2: Segmentations of the word 'butler'.
P(St = AilS) for one letter ~ and is high for framet when S; = Ai.
(a) is the equal-length segmentation discussed in section 4.1 (b) is
a segmentation of an untrained network. (c) is the segmentation
re-estimated with a trained network.
since it takes the data into account. Finding the most probable state sequence S? is
termed a forced alignment. Since only the correct word model need be considered,
such an alignment is faster than the search through the whole lexicon that is required
for recognition. Training on this automatic segmentation gives a better recognition
rate, but still avoids the necessity of manually segmenting any of the database.
Figure 2 shows two Viterbi segmentations of the word 'butler'. First, figure 2b
shows the segmentation arrived at by taking the most likely state sequence before
training the network. Since the emission probability distributions are random, there
is nothing to distinguish between the state sequences, except slight variations due
to initial asymmetry in the network, so a poor segmentation results. After training the network (2c), the durations deviate from the prior assumed durations to
match the observed data. This re-estimated segmentation represents the data more
accurately, so gives better targets towards which to train. A further improvement
in recognition accuracy can be obtained by using the targets determined by the reestimated segmentation. This cycle can be repeated until the segmentations do not
change and performance ceases to improve. For speed, the network is not trained
to convergence at each iteration.
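The forced alignment itself is a short dynamic program over the word's own left-to-right model. The sketch below is illustrative only: transition probabilities are ignored for simplicity and the emission scores are random stand-ins.

import numpy as np

def forced_alignment(log_emit):
    """Most likely state sequence through a strictly left-to-right word model.
    log_emit[t, s]: log emission score of state s at frame t. Only self-loops and
    moves to the next state are allowed; the path starts in state 0 and ends in S-1."""
    T, S = log_emit.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0, 0] = log_emit[0, 0]
    for t in range(1, T):
        for s in range(S):
            stay = score[t - 1, s]
            move = score[t - 1, s - 1] if s > 0 else -np.inf
            back[t, s] = s if stay >= move else s - 1
            score[t, s] = max(stay, move) + log_emit[t, s]
    states = [S - 1]                       # trace back from the final state
    for t in range(T - 1, 0, -1):
        states.append(back[t, states[-1]])
    return states[::-1]

rng = np.random.default_rng(2)
log_emit = np.log(rng.dirichlet(np.ones(4), size=15))   # 15 frames, 4-letter word
print(forced_alignment(log_emit))   # non-decreasing state label per frame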
It can be shown (Santini and Del Bimbo 1995) that, assuming that the network has enough parameters, the network outputs after convergence will approximate the posterior probabilities P(λ_i | x_1^t). Further, the approximation P(λ_i | x_1^t) ≈ P(λ_i | x_t) is made. The posteriors are scaled by the class priors P(λ_i) (Bourlard and Morgan 1993), and these scaled posteriors are used in the hidden Markov model in place of data likelihoods since, by Bayes' rule,

P(x_t | λ_i) ∝ P(λ_i | x_t) / P(λ_i).   (7)
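A one-line sketch of eq. (7); the array layout is an assumption:

import numpy as np

def scaled_likelihoods(posteriors, priors, eps=1e-12):
    """Convert network posteriors P(lambda_i | x_t) into quantities proportional to
    P(x_t | lambda_i) by dividing by the class priors, as in eq. (7)."""
    return posteriors / np.maximum(priors, eps)

print(scaled_likelihoods(np.array([0.7, 0.2, 0.1]),
                         np.array([0.5, 0.3, 0.2])))   # used as HMM emission scores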
Table 1 shows word recognition error rates for three 80-unit networks trained towards fixed targets estimated by another network, and then retrained, re-estimating
the targets at each iteration. The retraining improves the recognition performance
(T(2) = 3.91, t_{.95}(2) = 2.92).
4.3 Forward-backward re-estimation
The system described above performs well and is the method used in previous recurrent network systems, but examining the speech recognition literature, a potential
method of improvement can be seen. Viterbi frame alignment has so far been used
to determine targets for training. This assigns one class to each frame, based on
the most likely state sequence. A better approach might be to allow a distribution across all the classes indicating which are likely and which are not, avoiding a
Table 1: Error rates for 3 networks with 80 units trained with fixed alignments, and retrained with re-estimated alignments.

Training method    Error (%) μ    σ
Fixed targets      21.2           1.73
Retraining         17.0           0.68
'hard' classification at points where a frame may indeed represent more than one
class (such as where slanting characters overlap), or none (as in a ligature). A 'soft'
classification would give a more accurate portrayal of the frame identities.
Such a distribution, γ_p(t) = P(s_t = q_p | x_1^T, W), can be calculated with the forward-backward algorithm (Rabiner and Juang 1986). To obtain γ_p(t), the forward probabilities α_p(t) = P(s_t = q_p, x_1^t) must be combined with the backward probabilities β_p(t) = P(s_t = q_p, x_{t+1}^T). The forward and backward probabilities are calculated recursively in the same manner:

α_r(t+1) = Σ_p α_p(t) P(x_t | L(q_p)) a_{p,r},   (8)
β_p(t-1) = Σ_r β_r(t) P(x_t | L(q_r)) a_{p,r}.   (9)

Suitable initial distributions α_r(0) = π_r and β_r(T+1) = ρ_r are chosen, e.g. π and ρ are one for respectively the first and last character in the word, and zero for the others. The likelihood of observing the data x_1^T and being in state q_p at time t is then given by:

ξ_p(t) = α_p(t) β_p(t).   (10)

Then the probabilities γ_p(t) of being in state q_p at time t are obtained by normalization and used as the targets ζ_j(t) for the recurrent network character probability outputs:

γ_p(t) = ξ_p(t) / Σ_r ξ_r(t),   (11)
ζ_j(t) = Σ_{p: L(q_p) = λ_j} γ_p(t).   (12)
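The following sketch (not the authors' code) implements equations (8)-(12) for a single left-to-right word model; the equal self/next transition weights and the letter indices are illustrative assumptions.

import numpy as np

def soft_targets(emit, letters, n_classes=26):
    """Per-frame class distributions via the forward-backward algorithm (eqs. 8-12).
    emit[t, s]: emission score of state s at frame t; `letters` maps states to classes."""
    T, S = emit.shape
    A = np.zeros((S, S))
    for s in range(S):
        A[s, s] = 0.5
        if s + 1 < S:
            A[s, s + 1] = 0.5
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    alpha[0, 0] = emit[0, 0]                       # pi: start in the first letter
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * emit[t]    # forward recursion, cf. eq. (8)
    beta[T - 1, S - 1] = 1.0                       # rho: end in the last letter
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (beta[t + 1] * emit[t + 1])  # backward recursion, cf. eq. (9)
    gamma = alpha * beta                           # eq. (10)
    gamma /= gamma.sum(axis=1, keepdims=True)      # eq. (11)
    targets = np.zeros((T, n_classes))
    for s, c in enumerate(letters):
        targets[:, c] += gamma[:, s]               # eq. (12)
    return targets

rng = np.random.default_rng(3)
emit = rng.random((10, 3)) + 0.1                   # 10 frames, 3 states
letters = [1, 20, 19]                              # e.g. the word 'but', a=0..z=25
print(soft_targets(emit, letters).sum(axis=1))     # each row sums to 1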
Figure 3a shows the initial estimate of the class probabilities for a sample of the
word 'butler'. The probabilities shown are those estimated by the forward-backward algorithm when using an untrained network, for which the P(x_t | s_t = q_p) will be
independent of class. Despite the lack of information, the probability distributions
can be seen to take reasonable shapes. The first frame must belong to the first
letter, and the last frame must belong to the last letter, of course, but it can also
be seen that half way through the word, the most likely letters are those in the
middle of the word. Several class probabilities are non-zero at a time, reflecting
the uncertainty caused since the network is untrained. Nevertheless, this limited
information is enough to train a recurrent network, because as the network begins
to approximate these probabilities, the segmentations become more definite. In
contrast, using Viterbi segmentations from an untrained network, the most likely
alignment can be very different from the true alignment (figure 2b). The segmentation is very definite though, and the network is trained towards the incorrect
targets, reinforcing its error. Finally, a trained network gives a much more rigid
segmentation (figure 3b), with most of the probabilities being zero or one, but with
a boundary of uncertainty at the transitions between letters. This uncertainty,
where a frame might truly represent parts of two letters, or a ligature between
two, represents the data better. Just as with Viterbi training, the segmentations
can be re-estimated after training and retraining results in improved performance.
The final probabilistic segmentation can be stored with the data and used when
subsequent networks are trained on the same data. Training is then significantly
quicker than when training towards the approximate bootstrap segmentations and
re-estimating the targets.
Figure 3: Forward-backward segmentations of the word 'butler'.
(a) is the segmentation of an untrained network with a uniform
class prior. (b) shows the segmentation after training.
The better models obtained with the forward-backward algorithm give improved
recognition results over a network trained with Viterbi alignments. The improvement is shown in table 2. It can be seen that the error rates for the networks
trained with forward-backward targets are lower than those trained on Viterbi targets (T(2) = 5.24, t_{.975}(2) = 4.30).
Table 2: Error rates for networks with 80 units trained with Viterbi or Forward-Backward alignments.

Training method      Error (%) μ    σ
Viterbi              17.0           0.68
Forward-Backward     15.4           0.74

5 Conclusions
This paper has reviewed the training methods used for a recurrent network, applied
to the problem of off-line handwriting recognition. Three methods of deriving target probabilities for the network have been described, and experiments conduded
using all three. The third method is that of the forward-backward procedure, which
has not previously been applied to recurrent neural network training. This method
is found to improve the performance of the network, leading to reduced word error
rates. Other improvements not detailed here (including duration models and stochastic language modelling) allow the error rate for this task to be brought below
10%.
Acknowledgments
The authors would like to thank Mike Hochberg for assistance in preparing this
paper.
References
BOURLARD, H. and MORGAN, N. (1993) Connectionist Speech Recognition: A Hybrid Approach. Kluwer.
RABINER, L. R. and JUANG, B. H. (1986) An introduction to hidden Markov models. IEEE ASSP Magazine 3 (1): 4-16.
ROBINSON, A. (1994) The application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks.
RUMELHART, D. E., HINTON, G. E. and WILLIAMS, R. J. (1986) Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, ed. by D. E. Rumelhart and J. L. McClelland, volume 1, chapter 8, pp. 318-362. Bradford Books.
SANTINI, S. and DEL BIMBO, A. (1995) Recurrent neural networks can be trained to be maximum a posteriori probability classifiers. Neural Networks 8 (1): 25-29.
SENIOR, A. W. (1994) Off-line Cursive Handwriting Recognition using Recurrent Neural Networks. Cambridge University Engineering Department Ph.D. thesis. URL: ftp://svr-ftp.eng.cam.ac.uk/pub/reports/senior_thesis.ps.gz.
WERBOS, P. J. (1990) Backpropagation through time: What it does and how to do it. Proceedings of the IEEE 78: 1550-60.
| 1056 |@word middle:1 retraining:8 eng:1 recursively:1 necessity:1 initial:4 series:4 pub:2 document:3 comparing:1 activation:4 must:6 written:2 subsequent:1 shape:2 remove:1 half:2 postal:1 lexicon:1 successive:1 five:1 height:2 constructed:1 become:1 incorrect:1 consists:1 manner:1 indeed:1 multi:2 automatically:1 actual:1 provided:2 estimating:2 begin:1 lowest:1 what:1 ail:1 developed:1 finding:1 every:1 xd:1 um:1 scaled:3 classifier:1 uk:2 unit:14 segmenting:1 before:4 engineering:2 despite:1 encoding:1 ap:1 might:2 initialization:1 limited:1 bi:2 acknowledgment:1 definite:2 backpropagation:2 bootstrap:3 xr:1 procedure:3 significantly:1 word:39 svr:2 context:2 equivalent:1 map:1 center:1 williams:1 duration:3 ixd:1 assigns:3 estimator:2 rule:2 deriving:2 variation:2 target:23 magazine:1 programming:1 rumelhart:4 recognition:21 magnification:1 werbos:2 database:1 observed:1 ep:1 mike:1 quicker:1 ft:1 calculate:2 connected:3 cycle:2 forwardbackward:2 cam:2 dynamic:1 personal:1 trained:14 upon:1 writer:1 represented:3 chapter:1 train:6 forced:1 whose:1 otherwise:1 itself:1 final:2 inadequately:1 sequence:10 net:1 date:1 description:1 lxt:1 juang:3 empty:1 asymmetry:1 convergence:2 p:1 produce:1 ftp:2 derive:1 recurrent:24 andrew:1 ac:2 indicate:1 correct:2 stochastic:1 exploration:1 explains:1 ao:2 microstructure:1 probable:2 slanting:1 around:1 considered:2 viterbi:11 cognition:1 consecutive:1 purpose:1 recognizer:1 estimation:5 label:7 cheque:1 successfully:1 brought:1 publication:2 encode:1 derived:1 emission:2 improvement:4 modelling:1 likelihood:2 contrast:1 posteriori:1 rigid:1 inaccurate:1 typically:1 accept:1 hidden:7 wij:1 classification:2 denoted:3 priori:1 softmax:1 field:1 equal:3 having:1 manually:1 preparing:1 represents:5 broad:1 discrepancy:1 others:2 connectionist:1 report:1 few:1 recognize:1 freedom:2 assimilate:1 alignment:9 truly:1 allocating:1 accurate:1 shorter:1 initialized:1 desired:1 re:9 soft:2 uniform:1 delay:1 examining:1 stored:1 thickness:1 scanning:1 combined:2 st:7 peak:1 ie:1 probabilistic:1 off:7 reestimated:1 ctr:2 again:1 central:1 squared:1 thesis:1 external:1 book:1 style:1 leading:1 yp:4 account:1 potential:1 summarized:1 student:1 caused:1 blind:1 stream:1 performed:1 observing:1 start:1 bayes:1 parallel:1 slope:1 accuracy:2 maximized:1 rabiner:3 dealt:1 handwritten:4 raw:1 accurately:1 none:1 stroke:1 sharing:1 strip:1 ed:1 sixth:1 handwriting:7 propagated:1 adjusting:1 knowledge:1 improves:1 segmentation:30 organized:1 thinning:1 back:2 reflecting:1 supervised:1 improved:4 evaluated:1 though:1 just:1 stage:1 until:1 nonlinear:1 propagation:5 del:2 lack:1 aj:4 indicated:1 usa:1 normalized:2 true:2 assigned:1 read:3 satisfactory:1 adjacent:2 assistance:1 during:1 maintained:1 yorktown:1 generalized:1 arrived:1 outline:1 tt:1 performs:1 image:2 novel:1 sigmoid:1 rotation:1 shear:1 qp:7 volume:1 discussed:1 slight:1 belong:2 kluwer:1 cambridge:3 slant:1 ai:4 automatic:2 language:1 specification:1 recognizers:1 longer:1 posterior:8 phone:1 termed:2 watson:1 santini:2 morgan:2 seen:4 eo:2 recognized:2 determine:1 f3p:1 ii:3 segmented:2 faster:1 england:1 match:1 divided:1 inpu:1 qi:2 schematic:1 iteration:3 represent:7 normalization:2 receive:1 background:1 addition:1 extra:1 enough:3 variety:1 affect:1 architecture:1 url:1 passed:1 reinforcing:1 tabulated:1 speech:5 generally:2 latency:4 detailed:2 clear:1 cursive:1 transforms:1 ph:1 processed:1 mcclelland:1 reduced:1 canonical:1 estimated:7 delta:1 per:1 four:1 salient:1 nevertheless:1 falling:1 
clarity:1 backward:16 convert:1 letter:26 uncertainty:3 place:1 throughout:1 reader:1 reasonable:1 hochberg:1 layer:5 distinguish:1 correspondence:1 copied:1 portrayal:1 scanned:1 speed:1 department:2 trumpington:1 according:1 poor:1 belonging:1 describes:4 across:3 character:9 wherever:1 invariant:1 pr:1 taken:1 previously:2 end:2 adopted:1 available:2 jd:1 remaining:2 tony:1 instant:1 giving:1 objective:2 occurs:1 usual:1 separate:1 link:2 mapped:1 thank:1 street:1 hmm:1 parametrized:1 lthe:1 idm:1 assuming:1 length:7 ql:1 vertical:1 markov:8 discarded:1 t:1 hinton:1 assp:1 frame:44 ww:1 arbitrary:1 retrained:2 lxd:1 required:1 specified:1 narrow:1 robinson:9 able:1 below:1 pattern:1 reading:2 oj:2 including:1 overlap:1 suitable:1 treated:2 hybrid:1 bourlard:2 representing:4 scheme:1 improve:2 created:1 gz:1 naive:2 deviate:1 prior:5 literature:1 relative:2 fully:1 analogy:1 degree:2 elsewhere:1 course:1 last:5 bias:1 senior:7 vv:1 allow:3 perceptron:1 taking:1 distributed:1 regard:1 feedback:4 calculated:6 vocabulary:2 transition:2 avoids:1 boundary:1 qn:1 forward:17 made:1 author:1 historical:1 far:1 approximate:3 compact:1 uni:1 transcription:1 sequentially:1 assumed:2 xi:1 butler:4 search:1 table:4 ctp:3 reviewed:1 robust:1 untrained:5 significance:1 spread:1 whole:2 ligature:3 nothing:1 allowed:1 repeated:2 referred:1 xl:3 pe:1 weighting:1 third:2 xt:5 er:1 cease:1 albeit:1 adding:1 occurring:1 sorting:1 suited:1 entropy:1 simply:1 appearance:1 likely:6 forming:1 wig:1 transcribe:1 goal:1 identity:4 towards:4 labelled:2 change:2 hard:1 determined:1 except:1 bradford:1 experimental:1 perceptrons:1 indicating:1 enk:1 internal:3 tested:1 avoiding:1 |
65 | 1,057 | When is an Integrate-and-fire Neuron
like a Poisson Neuron?
Charles F. Stevens
Salk Institute MNL/S
La Jolla, CA 92037
cfs@salk.edu
Anthony Zador
Salk Institute MNL/S
La Jolla, CA 92037
zador@salk.edu
Abstract
In the Poisson neuron model, the output is a rate-modulated Poisson process (Snyder and Miller, 1991); the time varying rate parameter r(t) is an instantaneous function G[.] of the stimulus, r(t) = G[s(t)]. In a Poisson neuron, then, r(t) gives the instantaneous firing rate - the instantaneous probability of firing at any instant t - and the output is a stochastic function of the input. In part because of its great simplicity, this model is widely used (usually with the addition of a refractory period), especially in in vivo single unit electrophysiological studies, where s(t) is usually taken to be the value of some sensory stimulus. In the integrate-and-fire neuron model, by contrast, the output is a filtered and thresholded function of the input: the input is passed through a low-pass filter (determined by the membrane time constant τ) and integrated until the membrane potential v(t) reaches threshold θ, at which point v(t) is reset to its initial value. By contrast with the Poisson model, in the integrate-and-fire model the output is a deterministic function of the input. Although the integrate-and-fire model is a caricature of real neural dynamics, it captures many of the qualitative features, and is often used as a starting point for conceptualizing the biophysical behavior of single neurons. Here we show how a slightly modified Poisson model can be derived from the integrate-and-fire model with noisy inputs y(t) = s(t) + n(t). In the modified model, the transfer function G[.] is a sigmoid (erf) whose shape is determined by the noise variance σ_n². Understanding the equivalence between the dominant in vivo and in vitro simple neuron models may help forge links between the two levels.
1 Introduction
In the Poisson neuron model, the output is a rate-modulated Poisson process; the time varying rate parameter r(t) is an instantaneous function G[.] of the stimulus, r(t) = G[s(t)]. In a Poisson neuron, then, r(t) gives the instantaneous firing rate - the instantaneous probability of firing at any instant t - and the output is a stochastic function of the input. In part because of its great simplicity, this model is widely used (usually with the addition of a refractory period), especially in in vivo single unit electrophysiological studies, where s(t) is usually taken to be the value of some sensory stimulus.

In the integrate-and-fire neuron model, by contrast, the output is a filtered and thresholded function of the input: the input is passed through a low-pass filter (determined by the membrane time constant τ) and integrated until the membrane potential v(t) reaches threshold θ, at which point v(t) is reset to its initial value. By contrast with the Poisson model, in the integrate-and-fire model the output is a deterministic function of the input. Although the integrate-and-fire model is a caricature of real neural dynamics, it captures many of the qualitative features, and is often used as a starting point for conceptualizing the biophysical behavior of single neurons (Softky and Koch, 1993; Amit and Tsodyks, 1991; Shadlen and Newsome, 1995; Shadlen and Newsome, 1994; Softky, 1995; DeWeese, 1995; DeWeese, 1996; Zador and Pearlmutter, 1996).

Here we show how a slightly modified Poisson model can be derived from the integrate-and-fire model with noisy inputs y(t) = s(t) + n(t). In the modified model, the transfer function G[.] is a sigmoid (erf) whose shape is determined by the noise variance σ_n². Understanding the equivalence between the dominant in vivo and in vitro simple neuron models may help forge links between the two levels.
2 The integrate-and-fire model
Here we describe the forgetful leaky integrate-and-fire model. Suppose we add a signal s(t) to some noise n(t),

y(t) = n(t) + s(t),

and threshold the sum to produce a spike train

z(t) = F[s(t) + n(t)],

where F is the thresholding functional and z(t) is a list of firing times generated by the input. Specifically, suppose the voltage v(t) of the neuron obeys

dv(t)/dt = -v(t)/τ + y(t),   (1)

where τ is the membrane time constant. We assume that the noise n(t) has 0 mean and is white with variance σ_n². Thus y(t) can be thought of as a Gaussian white process with variance σ_n² and a time-varying mean s(t). If the voltage reaches the threshold θ_0 at some time t, the neuron emits a spike at that time and resets to the initial condition v_0. This is therefore a 5 parameter model: the membrane time constant τ, the mean input signal μ, the variance of the input signal σ², the threshold θ, and the reset value v_0. Of course, if n(t) = 0, we recover a purely deterministic integrate-and-fire model.
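A minimal Euler-scheme simulation of this 5-parameter model is sketched below (not the simulation used in the paper; all parameter values are illustrative assumptions).

import numpy as np

def simulate_if(s, tau=0.02, sigma_n=1.0, theta=1.0, v0=0.0, dt=1e-4):
    """Euler integration of dv/dt = -v/tau + s(t) + n(t) with threshold-and-reset."""
    rng = np.random.default_rng(0)
    v, spikes = v0, []
    for i, s_t in enumerate(s):
        noise = sigma_n * rng.standard_normal() / np.sqrt(dt)   # discretized white noise
        v += dt * (-v / tau + s_t + noise)
        if v >= theta:
            spikes.append(i * dt)
            v = v0
    return spikes

s = np.full(50_000, 60.0)          # 5 s of constant suprathreshold drive (arbitrary units)
print(len(simulate_if(s)))          # number of spikes emitted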
In order to forge the link between the integrate-and-fire neuron dynamics and the
Poisson model, we will treat the firing times T probabilistically. That is, we will
express the output of the neuron to some particular input s(t) as a conditional distribution p(T | s(t)), i.e. the probability of obtaining any firing time T given some particular input s(t).
Under these assumptions, P(T) is given by the first passage time distribution (FPTD) of the Ornstein-Uhlenbeck process (Uhlenbeck and Ornstein, 1930; Tuckwell, 1988). This means that the time evolution of the voltage prior to reaching threshold is given by the Fokker-Planck equation (FPE),

∂g(t, v)/∂t = (σ_y²/2) ∂²g(t, v)/∂v² - ∂/∂v [ (s(t) - v(t)/τ) g(t, v) ],   (2)

where σ_y = σ_n and g(t, v) is the distribution at time t of voltage -∞ < v ≤ θ_0. Then the first passage time distribution is related to g(t, v) by

P(T) = - ∂/∂t ∫_{-∞}^{θ_0} g(t, v) dv.   (3)
The integrand is the fraction of all paths that have not yet crossed threshold. P(T) is therefore just the interspike interval (ISI) distribution for a given signal s(t). A general eigenfunction expansion solution for the ISI distribution is known, but it converges slowly and its terms offer little insight into the behavior (at least to us).
We now derive an expression for the probability of crossing threshold in some very short interval Δt, starting at some v. We begin with the "free" distribution of g (Tuckwell, 1988): the probability of the voltage jumping to v' at time t' = t + Δt, given that it was at v at time t, assuming von Neumann boundary conditions at plus and minus infinity,

g(t', v' | t, v) = (1/√(2π q(Δt; σ_y))) exp[ -(v' - m(Δt; σ_y))² / (2 q(Δt; σ_y)) ],   (4)

with

q(Δt; σ_y) = σ_y² τ (1 - e^{-2Δt/τ})

and

m(Δt) = v e^{-Δt/τ} + s(t) * τ (1 - e^{-Δt/τ}),

where * denotes convolution. The free distribution is a Gaussian with a time-dependent mean m(Δt) and variance q(Δt; σ_y). This expression is valid for all Δt. The probability of making a jump Δv = v' - v in a short interval Δt << τ depends only on Δv and Δt,

g_Δ(Δt, Δv; σ_y) = (1/√(2π q_Δ(σ_y))) exp[ -Δv² / (2 q_Δ(σ_y)) ].   (5)

For small Δt, we expand to get

q_Δ(σ_y) ≈ 2 σ_y² Δt,

which is independent of τ, showing that the leak can be neglected for short times.
Now the probability P_Δ that the voltage exceeds threshold in some short Δt, given that it started at v, depends on how far v is from threshold; it is

Pr[v + Δv ≥ θ] = Pr[Δv ≥ θ - v].

Thus

P_Δ = ∫_{θ-v}^{∞} g_Δ(Δt, Δv; σ_y) dΔv
    = (1/2) erfc( (θ - v) / √(2 q_Δ(σ_y)) )
    = (1/2) erfc( (θ - v) / (2 σ_y √Δt) ),   (6)

where erfc(x) = 1 - (2/√π) ∫_0^x e^{-t²} dt goes from [2 : 0]. This then is the key result: it gives the instantaneous probability of firing as a function of the instantaneous voltage v. erfc is sigmoidal with a slope determined by σ_y, so a smaller noise yields a steeper (more deterministic) transfer function; in the limit of 0 noise, the transfer function is a step and we recover a completely deterministic neuron.
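A short numerical sketch of eq. (6); the prefactor inside erfc follows the small-Δt expansion above, and the parameter values are illustrative.

import numpy as np
from scipy.special import erfc

def p_fire(v, theta, sigma_y, dt):
    """Instantaneous threshold-crossing probability, eq. (6)."""
    return 0.5 * erfc((theta - v) / (2.0 * sigma_y * np.sqrt(dt)))

v = np.linspace(-1.0, 2.0, 7)
for sigma in (0.5, 0.1, 0.01):     # smaller noise -> steeper, more step-like transfer function
    print(sigma, np.round(p_fire(v, theta=1.0, sigma_y=sigma, dt=1e-3), 3))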
Note that P_Δ is actually an instantaneous function of v(t), not the stimulus itself s(t). If the noise is large compared with s(t) we must consider the distribution g_s(v, t; σ_y) of voltages reached in response to the input s(t):

P_Δ(t) = ∫ dv g_s(v, t; σ_y) P_Δ(v).   (7)
3 Ensemble of Signals
What if the inputs s(t) are themselves drawn from an ensemble? If their distribution is also Gaussian and white with mean μ and variance σ_s², and if the firing rate is low (E[T] >> τ), then the output spike train is Poisson. Why is firing Poisson only in the slow firing limit? The reason is that, by assumption, immediately following a spike the membrane potential resets to 0; it must then rise (assuming μ > 0) to some asymptotic level that is independent of the initial conditions. During this rise the firing rate is lower than the asymptotic rate, because on average the membrane is farther from threshold, and its variance is lower. The rate at which the asymptote is achieved depends on τ. In the limit as t >> τ, some asymptotic distribution of voltage, g_∞(v), is attained. Note that if we make the reset v_0 stochastic, with a distribution given by g_∞(v), then the firing probability would be the same even immediately after spiking, and firing would be Poisson for all firing rates.
A Poisson process is characterized by its mean alone. We therefore solve the FPE (eq. 2) for the steady-state by setting ∂g(t, v)/∂t = 0 (we consider only threshold crossings from initial values t >> τ; neglecting the early events results in only a small error, since we have assumed E{T} >> τ). Thus with the absorbing boundary at θ the distribution at time t >> τ (given here for μ = 0) is

g_∞(v; σ_y) = k_1 ( 1 - k_2 erfi[ v / (σ_y √τ) ] ) exp[ -v² / (σ_y² τ) ],   (8)

where σ_y² = σ_n² + σ_s², erfi(z) = -i erf(iz), k_1 determines the normalization (the sign of k_1 determines whether the solution extends to positive or negative infinity) and k_2 = 1/erfi( θ / (σ_y √τ) ) is determined by the boundary. The instantaneous Poisson rate parameter is then obtained through eq. (7).   (9)
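A numerical sketch of the steady-state density in eq. (8), assuming the reconstruction above (the constants are not verified against the original typesetting) and fixing k_1 by numerical normalization over a finite voltage range.

import numpy as np
from scipy.special import erfi

def g_inf(v, theta, sigma_y, tau):
    # Unnormalized steady-state voltage density of eq. (8) on (-inf, theta]
    k2 = 1.0 / erfi(theta / (sigma_y * np.sqrt(tau)))
    return (1.0 - k2 * erfi(v / (sigma_y * np.sqrt(tau)))) * np.exp(-v**2 / (sigma_y**2 * tau))

theta, sigma_y, tau = 1.0, 1.0, 1.0       # illustrative values
v = np.linspace(-4.0, theta, 2001)
g = g_inf(v, theta, sigma_y, tau)
g /= g.sum() * (v[1] - v[0])              # numerical normalization plays the role of k_1
print(round(float(g.sum() * (v[1] - v[0])), 3), float(g[-1]))   # ~1.0, and ~0 at threshold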
Fig. 1 tests the validity of the exponential approximation. The top graph shows the ISI distribution near the "balance point", when the excitation is in balance with the inhibition and the membrane potential hovers just subthreshold. The bottom curves show the ISI distribution far below the balance point. In both cases, the exponential distribution provides a good approximation for t >> τ.
4 Discussion
The main point of this paper is to make explicit the relation between the Poisson
and integrate-and-fire models of neuronal acitivity. The key difference between
them is that the former is stochastic while the latter is deterministic. That is, given
exactly the same stimulus, the Poisson neuron produces different spike trains on
different trials, while the integrate-and-fire neuron produces exactly the same spike
train each time. It is therefore clear that if some degree of stochasticity is to be
obtained in the integrate-and-fire model, it must arise from noise in the stimulus
itself.
The relation we have derived here is purely formal; we have intentionally remained agnostic about the deep issues of what is signal and what is noise in the inputs to a neuron. We observe nevertheless that although we derive a limit (eq. 9) where the spike train of an integrate-and-fire neuron is a Poisson process - i.e. the probability of obtaining a spike in any interval is independent of obtaining a spike in any other interval (except for very short intervals) - from the point of view of information processing it is a very different process from the purely stochastic rate-modulated Poisson neuron. In fact, in this limit the spike train is deterministically Poisson if σ_y = σ_s, i.e. when n(t) = 0; in this case the output is a purely deterministic function of the input, but the ISI distribution is exponential.
References
Amit, D. and Tsodyks, M. (1991). Quantitative study of attractor neural network retrieving at low spike rates. i. substrate-spikes, rates and neuronal gain.
Network: Computation in Neural Systems , 2:259-273 .
DeWeese, M. (1995). Optimization principles for the neural code. PhD thesis, Dept
of Physics, Princeton University.
DeWeese, M. (1996). Optimization principles for the neural code. In Hasselmo,
M., editor, Advances in Neural Information Processing Systems, vol. 8. MIT
Press, Cambridge, MA.
Shadlen, M. and Newsome, W. (1994) . Noise, neural codes and cortical organization.
Current Opinion in Neurobiology, 4:569-579.
Shadlen, M. and Newsome, W. (1995) . Is there a signal in the noise? [comment].
Current Opinion in Neurobiology, 5:248-250.
Snyder, D. and Miller, M. (1991). Random Point Processes in Time and Space, 2 nd
edition. Springer-Verlag.
Softky, W. (1995) . Simple codes versus efficient codes. Current Opinion in Neurobiology, 5:239-247 .
Softky, W. and Koch, C. (1993). The highly irregular firing of cortical cells is
inconsistent with temporal integration of random epsps. J. Neuroscience . ,
13:334-350.
Tuckwell, H. (1988). Introduction to theoretical neurobiology (2 vols.). Cambridge.
Uhlenbeck, G. and Ornstein, L. (1930). On the theory of Brownian motion. Phys. Rev., 36:823-841.
Zador, A. M. and Pearlmutter, B. A. (1996) . VC dimension of an integrate and fire
neuron model. Neural Computation, 8(3) . In press.
[Figure 1 here: ISI distributions at the balance point and in the exponential limit; both panels plot probability against time/ISI in msec.]

Figure 1: ISI distributions. (A; top) ISI distribution for the leaky integrate-and-fire model at the balance point, where the asymptotic membrane potential is just subthreshold, for two values of the signal variance σ². Increasing σ² shifts the distribution to the left. For the left curve, the parameters were chosen so that E{T} >> τ, giving a nearly exponential distribution; for the right curve, the distribution would be hard to distinguish experimentally from an exponential distribution with a refractory period. (τ = 50 msec; left: E{T} = 166 msec; right: E{T} = 57 msec). (B; bottom) In the subthreshold regime, the ISI distribution (solid) is nearly exponential (dashed) for intervals greater than the membrane time constant. (τ = 50 msec; E{T} = 500 msec)
| 1057 |@word trial:1 nd:1 minus:1 solid:1 initial:5 current:3 yet:6 must:3 interspike:1 shape:2 asymptote:1 alone:1 short:5 farther:1 filtered:2 provides:1 sigmoidal:1 ouput:2 qualitative:2 retrieving:1 behavior:3 themselves:1 little:1 increasing:1 begin:1 agnostic:1 what:3 ret:6 temporal:1 quantitative:1 exactly:2 k2:2 unit:2 planck:1 positive:1 treat:1 limit:6 fpe:2 firing:16 path:1 plus:1 equivalence:2 obeys:1 uy:8 timedependent:1 thought:1 get:5 ga:1 py:1 deterministic:7 go:1 zador:7 starting:3 simplicity:2 immediately:2 insight:1 j27r:2 pt:2 suppose:2 substrate:1 crossing:2 bottom:2 capture:2 tsodyks:2 leak:1 neglected:1 dynamic:3 purely:4 completely:1 train:6 describe:1 whose:2 widely:2 solve:1 erf:2 noisy:2 itself:2 biophysical:2 net:7 reset:6 neumann:1 produce:3 converges:1 help:2 derive:2 eq:3 epsps:1 conceptualizing:2 stevens:4 filter:2 stochastic:5 vc:1 opinion:3 koch:2 exp:3 great:2 early:1 ilv:1 hasselmo:1 mit:1 gaussian:3 modified:4 reaching:1 qoo:2 varying:3 voltage:9 probabilistically:1 derived:3 contrast:4 ave:1 integrated:2 hovers:1 relation:2 expand:1 caricature:2 issue:1 integration:1 nearly:2 stimulus:7 ve:1 fire:22 attractor:1 organization:1 highly:1 jumping:1 theoretical:1 newsome:4 tg:1 physic:1 von:1 thesis:1 slowly:1 potential:5 ornstein:3 crossed:1 depends:3 view:1 steeper:1 reached:1 recover:2 slope:1 vivo:4 il:1 variance:9 ensemble:2 miller:2 yield:1 subthreshold:3 llt:1 reach:3 phys:1 intentionally:1 emits:1 gain:1 electrophysiological:2 actually:1 ea:1 attained:1 dt:1 response:1 just:3 until:2 vols:1 validity:1 evolution:1 former:1 tuckwell:3 white:3 during:1 steady:1 excitation:1 vo:3 pearlmutter:2 motion:1 passage:2 instantaneous:10 charles:1 sigmoid:2 absorbing:1 functional:1 spiking:1 vitro:2 refractory:3 jl:2 cambridge:2 stochasticity:1 inhibition:1 add:1 dominant:2 brownian:1 jolla:2 verlag:1 greater:1 period:3 signal:8 dashed:1 exceeds:1 characterized:1 offer:1 poisson:25 normalization:1 uhlenbeck:3 achieved:1 cell:1 irregular:1 addition:2 interval:7 comment:1 inconsistent:1 near:1 shift:1 whether:1 expression:2 passed:2 jj:1 v8:1 deep:1 clear:1 j9:1 lsi:10 sign:1 neuroscience:1 iz:1 snyder:2 express:1 vol:1 key:2 threshold:12 nevertheless:1 drawn:1 deweese:4 thresholded:2 graph:1 fraction:1 sum:1 ilt:1 extends:1 distinguish:1 infinity:2 integrand:1 forgetful:1 membrane:11 smaller:1 slightly:2 rev:1 making:1 dv:1 pr:2 taken:2 equation:1 forge:3 observe:1 denotes:1 top:2 cf:1 instant:2 giving:1 especially:2 amit:2 erfc:4 spike:12 softky:4 link:3 reason:1 pet:3 assuming:2 code:5 balance:5 negative:1 rise:2 av:1 neuron:29 convolution:1 neurobiology:4 kl:3 erfi:3 eigenfunction:1 qa:3 usually:4 below:1 regime:1 oj:1 event:1 started:1 prior:1 understanding:2 asymptotic:4 versus:1 integrate:22 degree:1 shadlen:4 thresholding:1 principle:2 editor:1 llv:1 course:1 free:2 institute:2 leaky:2 boundary:3 curve:3 cortical:2 valid:1 dimension:1 sensory:2 jump:1 far:2 assumed:1 vet:8 un:1 why:1 transfer:4 ca:2 obtaining:3 expansion:1 anthony:1 vj:1 mnl:2 main:1 noise:11 arise:1 edition:1 neuronal:2 fig:1 tl:1 salk:4 slow:1 vr:1 explicit:1 deterministically:1 exponential:7 xl:1 msec:7 remained:1 showing:1 list:1 phd:1 acitivity:1 lt:1 springer:1 fokker:1 determines:2 ma:1 conditional:1 hard:1 experimentally:1 determined:6 specifically:1 except:1 pas:2 la:2 latter:1 modulated:3 dept:1 princeton:1 |
66 | 1,058 | From Isolation to Cooperation:
An Alternative View of a System of Experts
Stefan Schaal‡*
sschaal@cc.gatech.edu
http://www.cc.gatech.edu/fac/Stefan.Schaal

Christopher C. Atkeson‡
cga@cc.gatech.edu
http://www.cc.gatech.edu/fac/Chris.Atkeson

‡College of Computing, Georgia Tech, 801 Atlantic Drive, Atlanta, GA 30332-0280
*ATR Human Information Processing, 2-2 Hikaridai, Seika-cho, Soraku-gun, 619-02 Kyoto
Abstract
We introduce a constructive, incremental learning system for regression
problems that models data by means of locally linear experts. In contrast
to other approaches, the experts are trained independently and do not
compete for data during learning. Only when a prediction for a query is
required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to
find a local distance metric by adjusting the size and shape of the receptive field in which its predictions are valid, and also to detect relevant input features by adjusting its bias on the importance of individual input
dimensions. We derive asymptotic results for our method. In a variety of
simulations the properties of the algorithm are demonstrated with respect
to interference, learning speed, prediction accuracy, feature detection,
and task oriented incremental learning.
1. INTRODUCTION
Distributing a learning task among a set of experts has become a popular method in computational learning. One approach is to employ several experts, each with a global domain of
expertise (e.g., Wolpert, 1990). When an output for a given input is to be predicted, every
expert gives a prediction together with a confidence measure. The individual predictions
are combined into a single result, for instance, based on a confidence weighted average.
Another approach-the approach pursued in this paper-of employing experts is to create
experts with local domains of expertise. In contrast to the global experts, the local experts
have little overlap or no overlap at all. To assign a local domain of expertise to each expert,
it is necessary to learn an expert selection system in addition to the experts themselves.
This classifier determines which expert models are used in which part of the input space.
For incremental learning, competitive learning methods are usually applied. Here the experts compete for data such that they change their domains of expertise until a stable configuration is achieved (e.g., Jacobs, Jordan, Nowlan, & Hinton, 1991). The advantage of
local experts is that they can have simple parameterizations, such as locally constant or locally linear models. This offers benefits in terms of analyzability, learning speed, and robustness (e.g., Jordan & Jacobs, 1994). For simple experts, however, a large number of experts is necessary to model a function. As a result, the expert selection system has to be
more complicated and, thus, has a higher risk of getting stuck in local minima and/or of
learning rather slowly. In incremental learning, another potential danger arises when the
input distribution of the data changes. The expert selection system usually makes either
implicit or explicit prior assumptions about the input data distribution. For example, in the
classical mixture model (McLachlan & Basford, 1988) which was employed in several local expert approaches, the prior probabilities of each mixture model can be interpreted as
the fraction of data points each expert expects to experience. Therefore, a change in input
distribution will cause all experts to change their domains of expertise in order to fulfill
these prior assumptions. This can lead to catastrophic interference.
In order to avoid these problems and to cope with the interference problems during incremental learning due to changes in input distribution, we suggest eliminating the competition among experts and instead isolating them during learning. Whenever some new data is
experienced which is not accounted for by one of the current experts, a new expert is created. Since the experts do not compete for data with their peers, there is no reason for them
to change the location of their domains of expertise. However, when it comes to making a
prediction at a query point, all the experts cooperate by giving a prediction of the output
together with a confidence measure. A blending of all the predictions of all experts results
in the final prediction. It should be noted that these local experts combine properties of
both the global and local experts mentioned previously. They act like global experts by
learning independently of each other and by blending their predictions, but they act like local experts by confining themselves to a local domain of expertise, i.e., their confidence
measures are large only in a local region.
The topic of data fitting with structurally simple local models (or experts) has received a
great deal of attention in nonparametric statistics (e.g., Nadaraya, 1964; Cleveland, 1979;
Scott, 1992, Hastie & Tibshirani, 1990). In this paper, we will demonstrate how a nonparametric approach can be applied to obtain the isolated expert network (Section 2.1),
how its asymptotic properties can be analyzed (Section 2.2), and what characteristics such
a learning system possesses in terms of the avoidance of interference, feature detection,
dimensionality reduction, and incremental learning of motor control tasks (Section 3).
2. RECEPTIVE FIELD WEIGHTED REGRESSION
This paper focuses on regression problems, i.e., the learning of a map from R^n → R^m.
Each expert in our learning method, Receptive Field Weighted Regression (RFWR), consists of two elements, a locally linear model to represent the local functional relationship,
and a receptive field which determines the region in input space in which the expert's
knowledge is valid. As a result, a given data set will be modeled by piecewise linear elements, blended together. For 1000 noisy data points drawn from the unit interval of the function z = max[exp(-10x²), exp(-50y²), 1.25 exp(-5(x² + y²))], Figure 1 illustrates an
example of function fitting with RFWR. This function consists of a narrow and a wide
ridge which are perpendicular to each other, and a Gaussian bump at the origin. Figure 1b
shows the receptive fields which the system created during the learning process. Each expert's location is at the center of its receptive field, marked by a small circle in Figure 1b.

Figure 1: (a) result of function approximation with RFWR. (b) contour lines of 0.1 iso-activation of each expert in input space (the experts' centers are marked by small circles).

The receptive fields are modeled by Gaussian functions, and their 0.1 iso-activation lines are shown
in Figure 1b as well. As can be seen, each expert focuses on a certain region of the input
space, and the shape and orientation of this region reflects the function's complexity, or
more precisely, the function's curvature, in this region. It should be noticed that there is a
certain amount of overlap among the experts, and that the placement of experts occurred on
a greedy basis during learning and is not globally optimal. The approximation result
(Figure 1a) is a faithful reconstruction of the real function (MSE = 0.0025 on a test set, 30
epochs training, about 1 minute of computation on a SPARC 10). As a baseline comparison, a similar result with a sigmoidal 3-layer neural network required about 100 hidden units and 10000 epochs of annealed standard backpropagation (about 4 hours on a SPARC 10).
2.1 THE ALGORITHM
[Figure 2 here: each expert combines a linear subnet and a Gaussian gating unit centered at c; the gated outputs are combined into a weighted average output.]
Figure 2: The RFWR network
RFWR can be sketched in network form as
shown in Figure 2. All inputs connect to all expert networks, and new experts can be added as
needed. Each expert is an independent entity. It
consists of a two layer linear subnet and a receptive field subnet. The receptive field subnet has a
single unit with a bell-shaped activation profile,
centered at the fixed location c in input space.
The maximal output of this unit is "1" at the center, and it decays to zero as a function of the distance from the center. For analytical convenience, we choose this unit to be Gaussian:

w = exp( -(1/2) (x - c)^T D (x - c) ),   (1)

where x is the input vector, and D the distance metric, a positive definite matrix that is generated
from the upper triangular matrix M. The output of the linear subnet is:

ŷ = x^T b + b_0 = x̃^T β.   (2)

The connection strengths b of the linear subnet and its bias b_0 will be denoted by the d-dimensional vector β from now on, and the tilde sign will indicate that a vector has been augmented by a constant "1", e.g., x̃^T = (x^T, 1). In generating the total output, the receptive field units act as a gating component on the output, such that the total prediction is:

ŷ_total = Σ_k w_k ŷ_k / Σ_k w_k.   (3)
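A compact prediction-only sketch of equations (1)-(3) follows (not the authors' code; the number of experts, the choice D = M^T M and all parameter values are illustrative assumptions).

import numpy as np

class Expert:
    """One RFWR expert: a Gaussian receptive field (eq. 1) gating a linear model (eq. 2)."""
    def __init__(self, c, M, beta):
        self.c, self.D, self.beta = c, M.T @ M, beta     # D = M^T M is positive definite

    def activation(self, x):
        d = x - self.c
        return np.exp(-0.5 * d @ self.D @ d)             # eq. (1)

    def predict(self, x):
        return np.append(x, 1.0) @ self.beta             # eq. (2), x_tilde = (x, 1)

def blend(experts, x):
    """Normalized, activation-weighted average of the experts' predictions (eq. 3)."""
    w = np.array([e.activation(x) for e in experts])
    y = np.array([e.predict(x) for e in experts])
    return (w @ y) / w.sum()

rng = np.random.default_rng(4)
experts = [Expert(c=rng.normal(size=2), M=np.eye(2) * 2.0, beta=rng.normal(size=3))
           for _ in range(5)]
print(blend(experts, np.array([0.3, -0.2])))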
The parameters β and M are the primary quantities which have to be adjusted in the learning process: β forms the locally linear model, while M determines the shape and orientation of the receptive fields. Learning is achieved by incrementally minimizing the cost function:

J = (1 / Σ_i w_i) Σ_i w_i (y_i - ŷ_{i,-i})² + γ Σ_{n,m} D_{nm}².   (4)
The first term of this function is the weighted mean squared cross validation error over all
experienced data points, a local cross validation measure (Schaal & Atkeson, 1994). The
second term is a regularization or penalty term. Local cross validation by itself is consistent, i.e., with an increasing amount of data, the size of the receptive field of an expert
would shrink to zero. This would require the creation of an ever increasing number of experts during the course of learning. The penalty term introduces some non-vanishing bias
in each expert such that its receptive field size does not shrink to zero. By penalizing the
squared coefficients of D, we are essentially penalizing the second derivatives of the function at the site of the expert. This is similar to the approaches taken in spline fitting
(deBoor, 1978) and acts as a low-pass filter: the higher the second derivatives, the more
smoothing (and thus bias) will be introduced. This will be analyzed further in Section 2.2.
The update equations for the linear subnet are the standard weighted recursive least squares equations with forgetting factor λ (Ljung & Söderström, 1986):

β^{n+1} = β^n + w P^{n+1} x̃ e_cv,

where

P^{n+1} = (1/λ) ( P^n - (P^n x̃ x̃^T P^n) / (λ/w + x̃^T P^n x̃) )   and   e_cv = (y - x̃^T β^n).   (5)
This is a Newton method, and it requires maintaining the matrix P, which is of size 0.5 d x (d + 1). The update of the receptive field subnet is a gradient descent in J:

M^{n+1} = M^n - α ∂J/∂M.   (6)
Due to space limitations, the derivation of the derivative in (6) will not be explained here.
The major ingredient is to take this derivative as in a batch update, and then to reformulate
the result as an iterative scheme. The derivatives in batch mode can be calculated exactly
due to the Sherman-Morrison-Woodbury theorem (Belsley, Kuh, & Welsch, 1980; Atkeson, 1992). The derivative for the incremental update is a very good approximation to
the batch update and realizes incremental local cross validation.
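For illustration, one step of the weighted recursive least squares update of eq. (5) can be written as follows (a sketch, with an assumed ridge initialization of P and the activation w set to 1).

import numpy as np

def rls_update(beta, P, x, y, w, lam=0.999):
    """One weighted RLS step with forgetting factor (eq. 5); x is already augmented with 1."""
    Px = P @ x
    P_new = (P - np.outer(Px, Px) / (lam / w + x @ Px)) / lam
    e_cv = y - x @ beta
    beta_new = beta + w * (P_new @ x) * e_cv
    return beta_new, P_new

d = 3
beta = np.zeros(d + 1)
P = np.eye(d + 1) / 0.01**2                  # ridge initialization with r_i = 0.01
rng = np.random.default_rng(5)
true_beta = np.array([1.0, -2.0, 0.5, 0.3])
for _ in range(200):
    x = np.append(rng.normal(size=d), 1.0)
    y = x @ true_beta + 0.01 * rng.normal()
    beta, P = rls_update(beta, P, x, y, w=1.0)
print(np.round(beta, 2))                      # approaches true_beta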
A new expert is initialized with a default M_def and all other variables set to zero, except the matrix P. P is initialized as a diagonal matrix with elements 1/r_i², where the r_i are usually small quantities, e.g., 0.01. The r_i are ridge regression parameters. From a probabilistic view, they are Bayesian priors that the β vector is the zero vector. From an algorithmic view, they are fake data points of the form [x = (0, ..., r_i, 0, ...)^T, y = 0] (Atkeson, Moore, & Schaal, submitted). Using the update rule (5), the influence of the ridge regression parameters would fade away due to the forgetting factor λ. However, it is useful to make the ridge regression parameters adjustable. As in (6), the r_i can be updated by gradient descent:

r_i^{n+1} = r_i^n - α ∂J/∂r_i.   (7)
There are d ridge regression parameters, one for each diagonal element of the P matrix. In
order to add in the update of the ridge parameters as well as to compensate for the forgetting factor, an iterative procedure based on (5) can be devised which we omit here. The
computational complexity of this update is much reduced in comparison to (5) since many
computations involve multiplications by zero.
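The two views of the ridge regression parameters can be checked numerically. The sketch below (an illustration, not part of the algorithm) shows that adding r_i² to the diagonal of the normal equations and appending a fake data point (0, ..., r_i, ..., 0) with target 0 yield the same solution, and that a large r_i drives the associated coefficient toward zero.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=20)
r = np.array([0.1, 0.1, 5.0])                 # a large third ridge parameter suppresses the third input

# (a) Bayesian-prior view: ridge terms r_i^2 on the diagonal of the normal equations
beta_prior = np.linalg.solve(X.T @ X + np.diag(r ** 2), X.T @ y)

# (b) fake-data view: one extra row (0, ..., r_i, ..., 0) with target 0 per ridge parameter
X_fake = np.vstack([X, np.diag(r)])
y_fake = np.concatenate([y, np.zeros(3)])
beta_fake = np.linalg.solve(X_fake.T @ X_fake, X_fake.T @ y_fake)

print(np.allclose(beta_prior, beta_fake))     # True: the two views coincide
print(beta_prior)                             # the third coefficient is shrunk toward zero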
In sum, a RFWR expert consists of three sets of parameters: one for the locally linear model, one for the size and shape of the receptive fields, and one for the bias. The linear model parameters are updated by a Newton method, while the other parameters are updated by gradient descent. In our implementations, we actually use second order gradient descent based on Sutton (1992), since, with minor extra effort, we can obtain estimates of the second derivatives of the cost function with respect to all parameters. Finally, the logic of RFWR becomes as shown in the pseudo-code below.

Initialize the RFWR network with no expert;
For every new training sample (x, y):
  a) For k = 1 to #experts:
       - calculate the activation from (1)
       - update the expert's parameters according to (5), (6), and (7)
     end;
  b) If no expert was activated by more than w_gen:
       - create a new expert with c = x
     end;
  c) If two experts are activated more than w_prune:
       - erase the expert with the smaller receptive field
     end;
  d) calculate the mean, err_mean, and standard deviation err_std of the incrementally accumulated error err of all experts;
  e) For k = 1 to #experts:
       If (|err_k − err_mean| > θ err_std) reinitialize expert k with M = 2 · M_def
     end;
end;

Points c) and e) of the algorithm introduce a pruning facility. Pruning takes place either when two experts overlap too much, or when an expert has an exceptionally large mean squared error. The latter method corresponds to a simple form of outlier detection. Local optimization of a distance metric always has a minimum for a very large receptive field size. In our case, this would mean that an expert favors global instead of locally linear regression. Such an expert will accumulate a very large error which can easily be detected
in the given way. The mean squared error term, err, on which this outlier detection is
based, is a bias-corrected mean squared error, as will be explained below.
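To connect the pseudo-code with something executable, the following self-contained sketch implements only the create/update/prune skeleton of the loop. It deliberately simplifies the algorithm: the distance metric update (6), the ridge update (7), and the outlier reinitialization of step e) are omitted, and the class and function names are illustrative.

import numpy as np

class Expert:
    # One locally linear expert with a fixed Gaussian receptive field
    # (updates (6) and (7) are intentionally not implemented in this sketch).
    def __init__(self, c, dim, m_def=7.0, r=0.01):
        self.c = np.asarray(c, dtype=float)
        self.D = (m_def ** 2) * np.eye(dim)      # D = M^T M with M = m_def * I
        self.beta = np.zeros(dim + 1)
        self.P = np.eye(dim + 1) / r ** 2        # ridge priors 1/r^2 on the diagonal

    def activation(self, x):                     # eq. (1)
        d = x - self.c
        return float(np.exp(-0.5 * d @ self.D @ d))

    def update(self, x, y, w, lam=0.999):        # eq. (5), weighted RLS step
        xt = np.append(x, 1.0)
        e = y - xt @ self.beta
        Px = self.P @ xt
        self.P = (self.P - np.outer(Px, Px) / (lam / w + xt @ Px)) / lam
        self.beta = self.beta + w * (self.P @ xt) * e

def predict(experts, x):                         # eq. (3), blended prediction
    w = np.array([e.activation(x) for e in experts])
    yk = np.array([np.append(x, 1.0) @ e.beta for e in experts])
    return float(w @ yk / (w.sum() + 1e-12))

def train(samples, w_gen=0.1, w_prune=0.9):
    experts = []
    for x, y in samples:
        acts = [e.activation(x) for e in experts]
        for e, a in zip(experts, acts):          # step a): update the activated experts
            if a > 1e-6:
                e.update(x, y, a)
        if not acts or max(acts) < w_gen:        # step b): no expert responded, create one at x
            experts.append(Expert(x, dim=len(x)))
        hot = [e for e, a in zip(experts, acts) if a > w_prune]
        if len(hot) >= 2:                        # step c): two experts overlap too much
            experts.remove(hot[-1])              # (the paper erases the one with the smaller field)
    return experts

# usage: learn y = sin(2x) on [-1, 1] from a stream of samples
rng = np.random.default_rng(0)
stream = [(np.array([u]), float(np.sin(2 * u))) for u in rng.uniform(-1.0, 1.0, 2000)]
experts = train(stream)
print(len(experts), predict(experts, np.array([0.5])), float(np.sin(1.0)))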
2.2 ASYMPTOTIC BIAS AND PENALTY SELECTION
The penalty term in the cost function (4) introduces bias. In order to assess the asymptotic
value of this bias, the real function f(x) , which is to be learned, is assumed to be represented as a Taylor series expansion at the center of an expert's receptive field. Without loss
of generality, the center is assumed to be at the origin in input space. We furthermore assume that the size and shape of the receptive field are such that terms of higher than second order are negligible. Thus, the cost (4) can be written as:

J ≈ ( ∫ w (f_0 + f^T x + ½ x^T F x − b_0 − b^T x)² dx ) / ( ∫ w dx ) + γ Σ_{n,m} D²_{nm}    (8)

where f_0, f, and F denote the constant, linear, and quadratic terms of the Taylor series expansion, respectively. Inserting Equation (1), the integrals can be solved analytically after the input space is rotated by an orthonormal matrix transforming F to the diagonal matrix F'. Subsequently, b_0, b, and D can be determined such that J is minimized:
b'_0 = f_0 + bias = f_0 + 0.5^{0.75} γ^{0.25} Σ_n sgn(F'_nn) √|F'_nn| ,    b' = f ,    D'_nn = ( F'^2_nn / (2γ) )^{0.25}    (9)
This states that the linear model will asymptotically acquire the correct locally linear model, while the constant term will have bias proportional to the square root of the sum of the eigenvalues of F, i.e., the F'_nn. The distance metric D, whose diagonalized counterpart is D', will be a scaled image of the Hessian F with an additional square root distortion. Thus, the penalty term accomplishes the intended task: it introduces more smoothing the higher the curvature at an expert's location is, and it prevents the receptive field of an expert from shrinking to zero size (which would obviously happen for γ → 0). Additionally, Equation (9) shows how to determine γ for a given learning problem from an estimate of the eigenvalues and a permissible bias. Finally, it is possible to derive estimates of the bias
and the mean squared error of each expert from the current distance metric D:
bias_est = √(0.5 γ) Σ_n |eigenvalue_n(D)| ,    err_est = γ Σ_{n,m} D²_{nm}    (10)
The latter term was incorporated in the mean squared error, err, in Section 2.1. Empirical
evaluations (not shown here) verified the validity of these asymptotic results.
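Assuming the reading of (10) above, these estimates follow directly from an expert's distance metric. In the snippet below, D = diag(473, 1) is a hypothetical converged metric at the sharp ridge of the test function, chosen so that the result can be compared with the asymptotic bias of about 0.1 quoted for γ = 1e-7 in Section 3.1.

import numpy as np

def expert_bias_and_err(D, gamma):
    # Per-expert bias and penalty-induced MSE estimated from the metric D, eq. (10).
    eig = np.abs(np.linalg.eigvalsh(D))           # D is symmetric and positive definite
    bias_est = np.sqrt(0.5 * gamma) * eig.sum()
    err_est = gamma * np.sum(D ** 2)
    return bias_est, err_est

D = np.diag([473.0, 1.0])                         # hypothetical metric at the sharp ridge
print(expert_bias_and_err(D, gamma=1e-7))         # bias estimate close to 0.1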
3. SIMULATION RESULTS
This section will demonstrate some of the properties of RFWR. In all simulations, the
threshold parameters of the algorithm were set to θ = 3.5, w_prune = 0.9, and w_min = 0.1. These quantities determine the overlap of the experts as well as the outlier removal threshold; the results below are not affected by moderate changes in these parameters.
3.1 AVOIDING INTERFERENCE
In order to test RFWR's sensitivity with respect to changes in input data distribution, the
data of the example of Figure 1 was partitioned into three separate training sets
T_1 = {(x, y, z) | −1.0 < x < −0.2},  T_2 = {(x, y, z) | −0.4 < x < 0.4},  T_3 = {(x, y, z) | 0.2 < x < 1.0}.
These data sets correspond to three overlapping stripes of data, each having about 400 uniformly distributed samples. From scratch, a RFWR network was trained first on T_1 for 20 epochs, then on T_2 for 20 epochs, and finally on T_3 for 20 epochs. The penalty was chosen as in the example of Figure 1 to be γ = 1e−7, which corresponds to an asymptotic bias of
0.1 at the sharp ridge of the function. The default distance metric D was 50·I, where I is
the identity matrix. Figure 3 shows the results of this experiment. Very little interference
can be found. The MSE on the test set increased from 0.0025 (of the original experiment of
Figure 1) to 0.003, which is still an excellent reconstruction of the real function.
Figure 3: Reconstructed function after training on (a) T_1, (b) then T_2, and (c) finally T_3.
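The partition used in this experiment can be reproduced in a few lines. The sketch below is illustrative only; it uses the test function quoted for Figure 1 and omits the output noise.

import numpy as np

def target(x, y):
    # The test function of Figure 1: two perpendicular ridges and a Gaussian bump.
    return np.maximum.reduce([np.exp(-10 * x ** 2),
                              np.exp(-50 * y ** 2),
                              1.25 * np.exp(-5 * (x ** 2 + y ** 2))])

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 1000), rng.uniform(-1, 1, 1000)
z = target(x, y)
T1 = [(a, b, c) for a, b, c in zip(x, y, z) if -1.0 < a < -0.2]
T2 = [(a, b, c) for a, b, c in zip(x, y, z) if -0.4 < a < 0.4]
T3 = [(a, b, c) for a, b, c in zip(x, y, z) if 0.2 < a < 1.0]
print(len(T1), len(T2), len(T3))   # roughly 400 points per overlapping stripe
# Training then proceeds sequentially: 20 epochs on T1, then on T2, and finally on T3.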
3.2 LOCAL FEATURE DETECTION
The examples of RFWR given so far did not require ridge regression parameters. Their importance, however, becomes obvious when dealing with locally rank deficient data or with
irrelevant input dimensions. A learning system should be able to recognize irrelevant input
dimensions. It is important to note that this cannot be accomplished by a distance metric.
The distance metric is only able to decide to what spatial extent averaging over data in a
certain dimension should be performed. However, the distance metric has no means to exclude an input dimension. In contrast, bias learning with ridge regression parameters is able
to exclude input dimensions. To demonstrate this, we added 8 purely noisy inputs
(N(0,0.3)) to the data drawn from the function of Figure 1. After 30 epochs of training on a
10000 data point training set, we analyzed histograms of the order of magnitude of the
ridge regression parameters in all 10 + bias input dimensions over all the 79 experts that had been generated by the learning algorithm. All experts recognized that the noisy input dimensions 3 to 10 did not contain relevant information, and correctly increased the corresponding ridge
parameters to large values. The effect of a large ridge regression parameter is that the associated regression coefficient becomes zero. In contrast, the ridge parameters of the inputs 1,
2, and the bias input remained very small. The MSE on the test set was 0.0026, basically
identical to the experiment with the original training set.
3.3 LEARNING AN INVERSE DYNAMICS MODEL OF A ROBOT ARM
Robot learning is one of the domains where incremental learning plays an important role. A
real movement system experiences data at a high rate, and it should incorporate this data
immediately to improve its performance. As learning is task oriented, input distributions
will also be task oriented and interference problems can easily arise. Additionally, a real
movement system does not sample data from a training set but rather has to move in order
to receive new data. Thus, training data is always temporally correlated, and learning must
be able to cope with this. An example of such a learning task is given in Figure 4 where a
simulated 2 DOF robot arm has to learn to draw the figure "8" in two different regions of
the work space at a moderate speed (1.5 sec duration). In this example, we assume that the
correct movement plan exists, but that the inverse dynamics model which is to be used to
control this movement has not been acquired. The robot is first trained for 10 minutes (real
movement time) in the region of the lower target trajectory where it performs a variety of
rhythmic movements under simple PID control. The initial performance of this controller is
shown in the bottom part of Figure 4a. This training enables the robot to learn the locally
appropriate inverse dynamics model, a continuous mapping from R^6 to R^2. Subsequent performance using this inverse model for control is depicted in the bottom part of Figure 4b. Afterwards, the same training takes place in the region of the upper target trajectory in order to acquire the inverse model in this part of the world. The figure "8" can then equally well be drawn there (upper part of Figure 4a,b). Switching back to the bottom part of the work space (Figure 4c), the first task can still be performed as before. No interference is recognizable. Thus, the robot could learn fast and reliably to fulfill the two tasks. It is important to note that the data generated by the training movements did not always have locally full rank. All the parameters of RFWR were necessary to acquire the local inverse model appropriately. A total of 39 locally linear experts were generated.

Figure 4: Learning to draw the figure "8" with a 2-joint arm: (a) Performance of a PID controller before learning (the dimmed lines denote the desired trajectories, the solid lines the actual performance); (b) Performance after learning using a PD controller with feedforward commands from the learned inverse model; (c) Performance of the learned controller after training on the upper "8" of (b) (see text for more explanations).
4. DISCUSSION
We have introduced an incremental learning algorithm, RFWR, which constructs a network
of isolated experts for supervised learning of regression tasks. Each expert determines a locally linear model, a local distance metric, and local bias parameters by incrementally
minimizing a penalized local cross validation error. Our algorithm differs from other local
learning techniques by entirely avoiding competition among the experts, and by being
based on nonparametric instead of parametric statistics. The resulting properties of RFWR
are a) avoidance of interference in the case of changing input distributions, b) fast incremental learning by means of Newton and second order gradient descent methods, c) analyzable asymptotic properties which facilitate the selection of the fit parameters, and d) local feature detection and dimensionality reduction. The isolated experts are also ideally
suited for parallel implementations. Future work will investigate computationally less
costly delta-rule implementations of RFWR, and how well RFWR scales in higher dimensions.
5. REFERENCES
Atkeson, C. G., Moore, A. W. , & Schaal, S.
(submitted). "Locally weighted learning." Artificial Intelligence Review.
Atkeson, C. G. (1992). "Memory-based approaches to
approximating continuous functions." In: Casdagli, M.,
& Eubank, S. (Eds.), Nonlinear Modeling and Forecasting, pp.503-521. Addison Wesley.
Belsley, D. A., Kuh, E., & Welsch, R. E. (1980). Regression diagnostics: Identifying influential data and
sources ofcollinearity. New York: Wiley.
Cleveland, W. S. (1979). "Robust locally weighted regression and smoothing scatterplots." J. American Stat.
Association, 74, pp.829-836.
de Boor, C. (1978). A practical guide to splines. New
York: Springer.
Hastie, T. J., & Tibshirani, R. J. (1990). Generalized
additive models. London: Chapman and Hall.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton,
G. E. (1991). "Adaptive mixtures of local experts."
Neural Computation, 3, pp.79-87.
Jordan, M. I., & Jacobs, R. (1994). "Hierarchical mixtures of experts and the EM algorithm." Neural Computation, 6, pp.79-87.
Ljung, L., & Söderström, T. (1986). Theory and practice of recursive identification. Cambridge: MIT Press.
McLachlan, G. J., & Basford, K. E. (1988). Mixture
models . New York: Marcel Dekker.
Nadaraya, E. A. (1964). "On estimating regression ."
Theor. Prob. Appl., 9, pp.141-142.
Schaal, S., & Atkeson, C. G. (1994). "Assessing the
quality of learned local models." In: Cowan, J. ,Tesauro, G., & Alspector, J. (Eds.), Advances in Neural
Information Processing Systems 6. Morgan Kaufmann.
Scott, D. W. (1992). Multivariate Density Estimation.
New York: Wiley.
Sutton, R. S. (1992). "Gain adaptation beats least
squares." In: Proc. of 7th Yale Workshop on Adaptive
and Learning Systems, New Haven, CT.
Wolpert, D. H. (1990). "Stacked generalization." Los
Alamos Technical Report LA-UR-90-3460.
An Alternative View of a System of Experts
Stefan Schaal:!:*
sschaal@cc.gatech.edu
http://www.cc.gatech.eduifac/Stefan.Schaal
Christopher C. Atkeson:!:
cga@cc.gatech.edu
http://www.cc.gatech.eduifac/Chris.Atkeson
+College of Computing, Georgia Tech, 801 Atlantic Drive, Atlanta, GA 30332-0280
*ATR Human Infonnation Processing, 2-2 Hikaridai, Seiko-cho, Soraku-gun, 619-02 Kyoto
Abstract
We introduce a constructive, incremental learning system for regression
problems that models data by means of locally linear experts. In contrast
to other approaches, the experts are trained independently and do not
compete for data during learning. Only when a prediction for a query is
required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to
find a local distance metric by adjusting the size and shape of the receptive field in which its predictions are valid, and also to detect relevant input features by adjusting its bias on the importance of individual input
dimensions. We derive asymptotic results for our method. In a variety of
simulations the properties of the algorithm are demonstrated with respect
to interference, learning speed, prediction accuracy, feature detection,
and task oriented incremental learning.
1. INTRODUCTION
Distributing a learning task among a set of experts has become a popular method in computationallearning. One approach is to employ several experts, each with a global domain of
expertise (e.g., Wolpert, 1990). When an output for a given input is to be predicted, every
expert gives a prediction together with a confidence measure. The individual predictions
are combined into a single result, for instance, based on a confidence weighted average.
Another approach-the approach pursued in this paper-of employing experts is to create
experts with local domains of expertise. In contrast to the global experts, the local experts
have little overlap or no overlap at all. To assign a local domain of expertise to each expert,
it is necessary to learn an expert selection system in addition to the experts themselves.
This classifier determines which expert models are used in which part of the input space.
For incremental learning, competitive learning methods are usually applied. Here the experts compete for data such that they change their domains of expertise until a stable configuration is achieved (e.g., Jacobs, Jordan, Nowlan, & Hinton, 1991). The advantage of
local experts is that they can have simple parameterizations, such as locally constant or locally linear models. This offers benefits in terms of analyzability, learning speed, and robustness (e.g., Jordan & Jacobs, 1994). For simple experts, however, a large number of experts is necessary to model a function. As a result, the expert selection system has to be
more complicated and, thus, has a higher risk of getting stuck in local minima and/or of
learning rather slowly. In incremental learning, another potential danger arises when the
input distribution of the data changes. The expert selection system usually makes either
implicit or explicit prior assumptions about the input data distribution. For example, in the
classical mixture model (McLachlan & Basford, 1988) which was employed in several local expert approaches, the prior probabilities of each mixture model can be interpreted as
606
S. SCHAAL. C. C. ATKESON
the fraction of data points each expert expects to experience. Therefore, a change in input
distribution will cause all experts to change their domains of expertise in order to fulfill
these prior assumptions. This can lead to catastrophic interference.
In order to avoid these problems and to cope with the interference problems during incremental learning due to changes in input distribution, we suggest eliminating the competition among experts and instead isolating them during learning. Whenever some new data is
experienced which is not accounted for by one of the current experts, a new expert is created. Since the experts do not compete for data with their peers, there is no reason for them
to change the location of their domains of expertise. However, when it comes to making a
prediction at a query point, all the experts cooperate by giving a prediction of the output
together with a confidence measure. A blending of all the predictions of all experts results
in the final prediction. It should be noted that these local experts combine properties of
both the global and local experts mentioned previously. They act like global experts by
learning independently of each other and by blending their predictions, but they act like local experts by confining themselves to a local domain of expertise, i.e., their confidence
measures are large only in a local region.
The topic of data fitting with structurally simple local models (or experts) has received a
great deal of attention in nonparametric statistics (e.g., Nadaraya, 1964; Cleveland, 1979;
Scott, 1992, Hastie & Tibshirani, 1990). In this paper, we will demonstrate how a nonparametric approach can be applied to obtain the isolated expert network (Section 2.1),
how its asymptotic properties can be analyzed (Section 2.2), and what characteristics such
a learning system possesses in terms of the avoidance of interference, feature detection,
dimensionality reduction, and incremental learning of motor control tasks (Section 3).
2. RECEPTIVE FIELD WEIGHTED REGRESSION
This paper focuses on regression problems, i.e., the learning of a map from 9t n ~ 9t m ?
Each expert in our learning method, Receptive Field Weighted Regression (RFWR), consists of two elements, a locally linear model to represent the local functional relationship,
and a receptive field which determines the region in input space in which the expert's
knowledge is valid. As a result, a given data set will be modeled by piecewise linear elements, blended together. For 1000 noisy data points drawn from the unit interval of the
function z == max[exp(-10x 2 ),exp(-50l),1.25exp(-5(x 2 + l)], Figure 1 illustrates an
example of function fitting with RFWR. This function consists of a narrow and a wide
ridge which are perpendicular to each other, and a Gaussian bump at the origin. Figure 1b
shows the receptive fields which the system created during the learning process. Each experts' location is at the center of its receptive field, marked by a $ in Figure 1b. The recep1.5
0 .5
0
-0.5
-1
1.5
0.5
,1
,.,
10. 5%
0
-0.5
0
I
1- 0 .5
1
-1
0
-1.5
-1.5
- 0 .5
(a)
-1
x
(b)
-1
-0.5
o
0.5
1.5
x
Figure 1: (a) result of function approximation with RFWR. (b) contour lines of 0.1 iso-activation of
each expert in input space (the experts' centers are marked by small circles).
From Isolation to Cooperation: An Alternative View of a System of Experts
607
tive fields are modeled by Gaussian functions, and their 0.1 iso-activation lines are shown
in Figure 1b as well. As can be seen, each expert focuses on a certain region of the input
space, and the shape and orientation of this region reflects the function's complexity, or
more precisely, the function's curvature, in this region. It should be noticed that there is a
certain amount of overlap among the experts, and that the placement of experts occurred on
a greedy basis during learning and is not globally optimal. The approximation result
(Figure 1a) is a faithful reconstruction of the real function (MSE = 0.0025 on a test set, 30
epochs training, about 1 minute of computation on a SPARC1O). As a baseline comparison,
a similar result with a sigmoidal 3-layer neural network required about 100 hidden units
and 10000 epochs of annealed standard backpropagation (about 4 hours on a SPARC1O).
2.1 THE ALGORITHM
. .?... '.
~"" "
WeighBd' /
Average
Output
li'Iear
~:~~
Galng
Unrt
ConnectIOn
centered at e
y,
Figure 2: The RFWR network
RFWR can be sketched in network form as
shown in Figure 2. All inputs connect to all expert networks, and new experts can be added as
needed. Each expert is an independent entity. It
consists of a two layer linear subnet and a receptive field subnet. The receptive field subnet has a
single unit with a bell-shaped activation profile,
centered at the fixed location c in input space.
The maximal output of this unit is "I" at the center, and it decays to zero as a function of the distance from the center. For analytical convenience,
we choose this unit to be Gaussian:
(1)
x is the input vector, and D the distance metric, a positive definite matrix that is generated
from the upper triangular matrix M. The output of the linear subnet is:
(2)
y=x Tb + bo=x-Tf3
A
The connection strengths b of the linear subnet and its bias bO will be denoted by the d-dimensional vector f3 from now on, and the tilde sign will indicate that a vector has been
augmented by a constant "I", e.g., i = (x T , Il. In generating the total output, the receptive
field units act as a gating component on the output, such that the total prediction is:
(3)
The parameters f3 and M are the primary quantities which have to be adjusted in the learning process: f3 forms the locally linear model, while M determines the shape and orientation of the receptive fields . Learning is achieved by incrementally minimizing the cost
function:
(4)
The first term of this function is the weighted mean squared cross validation error over all
experienced data points, a local cross validation measure (Schaal & Atkeson, 1994). The
second term is a regularization or penalty term. Local cross validation by itself is consistent, i.e., with an increasing amount of data, the size of the receptive field of an expert
would shrink to zero. This would require the creation of an ever increasing number of experts during the course of learning. The penalty term introduces some non-vanishing bias
in each expert such that its receptive field size does not shrink to zero. By penalizing the
squared coefficients of D, we are essentially penalizing the second derivatives of the function at the site of the expert. This is similar to the approaches taken in spline fitting
608
S. SCHAAL, C. C. A TI(ESON
(deBoor, 1978) and acts as a low-pass filter: the higher the second derivatives, the more
smoothing (and thus bias) will be introduced. This will be analyzed further in Section 2.2.
The update equations for the linear subnet are the standard weighted recursive least squares
equation with forgetting factor A (Ljung & SOderstrom, 1986):
f3 n+1 =f3n+wpn+lxe
1(
wherepn+1 =_ pn_
A
cv'
pn- -Tpn )
xx
ande =(y-x T f3n)
Ajw + xTpnx
cv
(5)
This is a Newton method, and it requires maintaining the matrix P, which is size
0.5d x (d + 1) . The update of the receptive field subnet is a gradient descent in J:
Mn+l=Mn- a dJ!aM
(6)
Due to space limitations, the derivation of the derivative in (6) will not be explained here.
The major ingredient is to take this derivative as in a batch update, and then to reformulate
the result as an iterative scheme. The derivatives in batch mode can be calculated exactly
due to the Sherman-Morrison-Woodbury theorem (Belsley, Kuh, & Welsch, 1980; Atkeson, 1992). The derivative for the incremental update is a very good approximation to
the batch update and realizes incremental local cross validation.
A new expert is initialized with a default M de! and all other variables set to zero, except the
matrix P. P is initialized as a diagonal matrix with elements 11 r/, where the ri are usually
small quantities, e.g., 0.01. The ri are ridge regression parameters. From a probabilistic
view, they are Bayesian priors that the f3 vector is the zero vector. From an algorithmic
view, they are fake data points of the form [x = (0, ... , '12 ,o, ... l,y = 0] (Atkeson, Moore, &
Schaal, submitted). Using the update rule (5), the influence of the ridge regression parameters would fade away due to the forgetting factor A. However, it is useful to make the
ridge regression parameters adjustable. As in (6), rj can be updated by gradient descent:
1'I n+1
= 1'n I
a aJ/ar
I
(7)
There are d ridge regression parameters, one for each diagonal element of the P matrix. In
order to add in the update of the ridge parameters as well as to compensate for the forgetting factor, an iterative procedure based on (5) can be devised which we omit here. The
computational complexity of this update is much reduced in comparison to (5) since many
computations involve multiplications by zero.
In sum, a RFWR expert consists of
three sets of parameters, one for
the locally linear model, one for
end;
the size and shape of the receptive
b)
Ir no expert was activated by more than Wgen :
- create a new expert with c=x
fields,
and one for the bias. The
end;
c)
Ir two experts are acti vated more than W pn..~
linear model parameters are up- erase the expert with the smaller receptive field
dated by a Newton method, while
end;
d)
calculate the mean, err""an' and standard de viation errslIl of the
the other parameters are updated
incrementally accumulated error er,! of all experts;
by gradient descent. In our implee)
For k.= I to #experts:
Ir (Itrr! - err_I> 9 er'Sld) reinitialize expert k with M = 2 ? Mdef
mentations, we actually use second
end;
end;
order gradient descent based on
Sutton (1992), since, with minor
extra effort, we can obtain estimates of the second derivatives of the cost function with respect to all parameters. Finally, the logic of RFWR becomes as shown in the pseudo-code
above. Point c) and e) of the algorithm introduce a pruning facility. Pruning takes place either when two experts overlap too much, or when an expert has an exceptionally large
mean squared error. The latter method corresponds to a simple form of outlier detection.
Local optimization of a distance metric always has a minimum for a very large receptive
field size. In our case, this would mean that an expert favors global instead of locally linear
regression. Such an expert will accumulate a very large error which can easily be detected
Initialize the RFWR network. with no expert;
For every new training sample (x,y):
a)
For k= I to #experts:
- calculate the activation from (I)
- update the expert's parameters according to (5), (6), and (7)
From Isolation to Cooperation: An Alternative View of a System of Experts
609
in the given way. The mean squared error term, err, on which this outlier detection is
based, is a bias-corrected mean squared error, as will be explained below.
2.2 ASYMPTOTIC BIAS AND PENALTY SELECTION
The penalty term in the cost function (4) introduces bias. In order to assess the asymptotic
value of this bias, the real function f(x) , which is to be learned, is assumed to be represented as a Taylor series expansion at the center of an expert's receptive field. Without loss
of generality, the center is assumed to be at the origin in input space. We furthermore assume that the size and shape of the receptive field are such that terms higher than 0(2) are
negligible. Thus, the cost (4) can be written as:
J
~ (1w(f. +fTX+~XTFX-bo -bTxYdx )/(1wdx )+r~Dnm
(8)
where fo' f, and F denote the constant, linear, and quadratic terms of the Taylor series
expansion, respectively. Inserting Equation (1), the integrals can be solved analytically after the input space is rotated by an orthonormal matrix transforming F to the diagonal matrix F'. Subsequently, bo' b, and D can be determined such that J is minimized:
b~ =fa + bias = fa + ~075 ~ sgn(F:')~IF;,:I,
0.25
(
)
b'
= f,
D::
~
= (2r)2
(9)
This states that the linear model will asymptotically acquire the correct locally linear
model, while the constant term will have bias proportional to the square root of the sum of
the eigenvalues of F, i.e., the n ? The distance metric D, whose diagonalized counterpart
is D', will be a scaled image of the Hessian F with an additional square root distortion.
Thus, the penalty term accomplishes the intended task: it introduces more smoothing the
higher the curvature at an expert's location is, and it prevents the receptive field of an expert shrinking to zero size (which would obviously happen for r ~ 0). Additionally,
Equation (9) shows how to determine rfor a given learning problem from an estimate of
the eigenvalues and a permissible bias. Finally, it is possible to derive estimates of the bias
and the mean squared error of each expert from the current distance metric D:
F:
biasesl = ~0. 5r IJeigenvalues(D)l.; en,,~, = r
Ln.mD;m
(10)
The latter term was incorporated in the mean squared error, err, in Section 2.1. Empirical
evaluations (not shown here) verified the validity of these asymptotic results.
3. SIMULATION RESULTS
This section will demonstrate some of the properties of RFWR. In all simulations, the
threshold parameters of the algorithm were set to = 3.5, w prune = 0.9, and wmin = 0.1.
These quantities determine the overlap of the experts as well as the outlier removal threshold; the results below are not affected by moderate changes in these parameters.
e
3.1 AVOIDING INTERFERENCE
In order to test RFWR's sensitivity with respect to changes in input data distribution, the
data of the example of Figure 1 was partitioned into three separate training sets
1; = {(x, y, z) 1-1.0 < x < -O.2} , 1; = {(x, y, z) 1-0.4 < x < OA}, 1; = {(x, y, z) I 0.2 < x < 1.0} .
These data sets correspond to three overlapping stripes of data, each having about 400 uniformly distributed samples. From scratch, a RFWR network was trained first on I; for 20
epochs, then on T2 for 20 epochs, and finally on 1; for 20 epochs. The penalty was chosen
as in the example of Figure 1 to be r = I.e - 7 , which corresponds to an asymptotic bias of
S. SCHAAL, C. C. ATKESON
610
0.1 at the sharp ridge of the function. The default distance metric D was 50*1, where I is
the identity matrix. Figure 3 shows the results of this experiment. Very little interference
can be found. The MSE on the test set increased from 0.0025 (of the original experiment of
Figure 1) to 0.003, which is still an excellent reconstruction of the real function.
y
0 .5
-0 . 5
-0.5
(a)
(b)
Figure 3: Reconstructed function after training on (a)
(c)
7;, (b) then
-1
~,(c)
and finally
1;.
3.2 LOCAL FEATURE DETECTION
The examples of RFWR given so far did not require ridge regression parameters. Their importance, however, becomes obvious when dealing with locally rank deficient data or with
irrelevant input dimensions. A learning system should be able to recognize irrelevant input
dimensions. It is important to note that this cannot be accomplished by a distance metric.
The distance metric is only able to decide to what spatial extent averaging over data in a
certain dimension should be performed. However, the distance metric has no means to exclude an input dimension. In contrast, bias learning with ridge regression parameters is able
to exclude input dimensions. To demonstrate this, we added 8 purely noisy inputs
(N(0,0.3)) to the data drawn from the function of Figure 1. After 30 epochs of training on a
10000 data point training set, we analyzed histograms of the order of magnitude of the
ridge regression parameters in all 100bias input dimensions over all the 79 experts that had
been generated by the learning algorithm. All experts recognized that the input dimensions
3 to 8 did not contain relevant information, and correctly increased the corresponding ridge
parameters to large values. The effect of a large ridge regression parameter is that the associated regression coefficient becomes zero. In contrast, the ridge parameters of the inputs 1,
2, and the bias input remained very small. The MSE on the test set was 0.0026, basically
identical to the experiment with the original training set.
3.3 LEARNING AN INVERSE DYNAMICS MODEL OF A ROBOT ARM
Robot learning is one of the domains where incremental learning plays an important role. A
real movement system experiences data at a high rate, and it should incorporate this data
immediately to improve its performance. As learning is task oriented, input distributions
will also be task oriented and interference problems can easily arise. Additionally, a real
movement system does not sample data from a training set but rather has to move in order
to receive new data. Thus, training data is always temporally correlated, and learning must
be able to cope with this. An example of such a learning task is given in Figure 4 where a
simulated 2 DOF robot arm has to learn to draw the figure "8" in two different regions of
the work space at a moderate speed (1.5 sec duration). In this example, we assume that the
correct movement plan exists, but that the inverse dynamics model which is to be used to
control this movement has not been acquired. The robot is first trained for 10 minutes (real
movement time) in the region of the lower target trajectory where it performs a variety of
rhythmic movements under simple PID control. The initial performance of this controller is
shown in the bottom part of Figure 4a. This training enables the robot to learn the locally
appropriate inverse dynamics model, a ~6 ~ ~2 continuous mapping. Subsequent per-
From Isolation to Cooperation: An Alternative View of a System of Experts
0.5
t
0.'
GralMy
0.'
0.2
0.1
..,.~t
~.
~
8
Z
8
8
?0.4
(b)
(a)
(0)
~.5
0
0.1
0.2
0.3
0.4
0.!5
Figure 4: Learning to draw the figure "8" with a 2-joint
arm: (a) Performance of a PID controller before learning (the dimmed lines denote the desired trajectories,
the solid lines the actual performance); (b) Performance after learning using a PD controller with feedforward commands from the learned inverse model; (c)
Performance of the learned controller after training on
the upper "8" of (b) (see text for more explanations).
611
formance using this inverse model for
control is depicted in the bottom part
of Figure 4b. Afterwards, the same
training takes place in the region of the
upper target trajectory in order to acquire the inverse model in this part of
the world. The figure "8" can then
equally well be drawn there (upper
part of Figure 4a,b). Switching back to
the bottom part of the work space
(Figure 4c), the first task can still be
performed as before. No interference
is recognizable. Thus, the robot could
learn fast and reliably to fulfill the two
tasks. It is important to note that the
data generated by the training movements did not always have locally full
rank. All the parameters of RFWR
were necessary to acquire the local inverse model appropriately. A total of
39 locally linear experts were generated.
4. DISCUSSION
We have introduced an incremental learning algorithm, RFWR, which constructs a network
of isolated experts for supervised learning of regression tasks. Each expert determines a locally linear model, a local distance metric, and local bias parameters by incrementally
minimizing a penalized local cross validation error. Our algorithm differs from other local
learning techniques by entirely avoiding competition among the experts, and by being
based on nonparametric instead of parametric statistics. The resulting properties of RFWR
are a) avoidance of interference in the case of changing input distributions, b) fast incremental learning by means of Newton and second order gradient descent methods, c) analyzable asymptotic properties which facilitate the selection of the fit parameters, and d) local feature detection and dimensionality reduction. The isolated experts are also ideally
suited for parallel implementations. Future work will investigate computationally less
costly delta-rule implementations of RFWR, and how well RFWR scales in higher dimensions.
5. REFERENCES
Atkeson, C. G., Moore, A. W. , & Schaal, S.
(submitted). "Locally weighted learning." Artificial Intelligence Review.
Atkeson, C. G. (1992). "Memory-based approaches to
approximating continuous functions." In: Casdagli, M.,
& Eubank, S. (Eds.), Nonlinear Modeling and Forecasting, pp.503-521. Addison Wesley.
Belsley, D. A., Kuh, E., & Welsch, R. E. (1980). Regression diagnostics: Identifying influential data and
sources ofcollinearity. New York: Wiley.
Cleveland, W. S. (1979). "Robust locally weighted regression and smoothing scatterplots." J. American Stat.
Association, 74, pp.829-836.
de Boor, C. (1978). A practical guide to splines. New
York: Springer.
Hastie, T. J., & Tibshirani, R. J. (1990). Generalized
additive models. London: Chapman and Hall.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., & Hinton,
G. E. (1991). "Adaptive mixtures of local experts."
Neural Computation, 3, pp.79-87.
Jordan, M. I., & Jacobs, R. (1994). "Hierarchical mixtures of experts and the EM algorithm." Neural Computation, 6, pp.79-87.
Ljung, L., & S_derstr_m, T. (1986). Theory and practice of recursive identification. Cambridge, MIT Press.
McLachlan, G. J., & Basford, K. E. (1988). Mixture
models . New York: Marcel Dekker.
Nadaraya, E. A. (1964). "On estimating regression ."
Theor. Prob. Appl., 9, pp.141-142.
Schaal, S., & Atkeson, C. G. (l994b). "Assessing the
quality of learned local models." In: Cowan, J. ,Tesauro, G., & Alspector, J. (Eds.), Advances in Neural
Information Processing Systems 6. Morgan Kaufmann.
Scott, D. W. (1992). Multivariate Density Estimation.
New York: Wiley.
Sutton, R. S. (1992). "Gain adaptation beats least
squares." In: Proc. of 7th Yale Workshop on Adaptive
and Learning Systems, New Haven, CT.
Wolpert, D. H. (1990). "Stacked genealization." Los
Alamos Technical Report LA-UR-90-3460.
Boosting Decision Trees
Harris Drucker
AT&T Bell Laboratories
Holmdel, New Jersey 07733
Corinna Cortes
AT&T Bell Laboratories
Murray Hill, New Jersey 07974
Abstract
A new boosting algorithm of Freund and Schapire is used to improve
the performance of decision trees which are constructed using the
information ratio criterion of Quinlan's C4.5 algorithm. This boosting
algorithm iteratively constructs a series of decision trees, each decision
tree being trained and pruned on examples that have been filtered by
previously trained trees. Examples that have been incorrectly classified
by the previous trees in the ensemble are resampled with higher
probability to give a new probability distribution for the next tree in the
ensemble to train on. Results from optical character recognition
(OCR), and knowledge discovery and data mining problems show that
in comparison to single trees, or to trees trained independently, or to
trees trained on subsets of the feature space, the boosting ensemble is
much better.
1 INTRODUCTION
A new boosting algorithm termed AdaBoost by their inventors (Freund and Schapire,
1995) has advantages over the original boosting algorithm (Schapire, 1990) and a second
version (Freund, 1990). The implications of a boosting algorithm is that one can take a
series of learning machines (termed weak learners) each having a poor error rate (but no
worse than .5 − γ, where γ is some small positive number) and combine them to give an
ensemble that has very good performance (termed a strong learner). The first practical
implementation of boosting was in OCR (Drucker, 1993, 1994) using neural networks as
the weak learners. In a series of comparisons (Bottou, 1994) boosting was shown to be
superior to other techniques on a large OCR problem.
The general configuration of AdaBoost is shown in Figure 1. Each box is a decision tree
built using Quinlan's C4.5 algorithm (Quinlan, 1993). The key idea is that each weak
learner is trained sequentially. The first weak learner is trained on a set of patterns picked
randomly (with replacement) from a training set. After training and pruning, the training
patterns are passed through this first decision tree. In the two class case the hypothesis hi
is either class 0 or class 1. Some of the patterns will be in error. The training set for the
FIGURE 1. BOOSTING ENSEMBLE (the input features feed weak learners h_1, ..., h_T, whose hypotheses are combined with weights log(1/β_t))
FIGURE 2. INDIVIDUAL WEAK LEARNER ERROR RATE AND ENSEMBLE TRAINING AND TEST ERROR RATES (error rate plotted against the number of weak learners)
second weak learner will consist of patterns picked from the training set with higher
probability assigned to those patterns the first weak learner classifies incorrectly. Since
patterns are picked with replacement, difficult patterns are more likely to occur multiple
times in the training set. Thus as we proceed to build each member of the ensemble,
patterns which are more difficult to classify correctly appear more and more likely. The
training error rate of an individual weak learner tends to grow as we increase the number
of weak learners because each weak learner is asked to classify progressively more
difficult patterns. However the boosting algorithm shows us that the ensemble training
and test error rate decrease as we increase the number of weak learners. The ensemble
output is determined by weighting the hypotheses with the log of (1/β_t), where β_t is proportional to the weak learner error rate. If the weak learner has good error rate performance, it will contribute significantly to the output, because then 1/β_t will be large. Figure 2 shows the general shape of the curves we would expect. Say we have constructed N weak learners, where N is a large number (right hand side of the graph). The N-th weak learner (top curve) will have a training error rate that approaches .5 because it is trained on difficult patterns and can do only slightly better than guessing. The bottom two curves show the test and training error rates of the ensemble using all N weak learners, which decrease as weak learners are added to the ensemble.
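The weighted vote just described can be written in a few lines (an illustration, not the implementation used in the experiments): each weak hypothesis h_t in {0, 1} contributes with weight log(1/beta_t), and the ensemble outputs class 1 when the weighted vote reaches half of the total weight.

import numpy as np

def ensemble_predict(hypotheses, betas):
    # Combine binary weak hypotheses h_t with weights log(1/beta_t).
    weights = np.log(1.0 / np.asarray(betas, dtype=float))
    vote = float(np.dot(weights, hypotheses))
    return int(vote >= 0.5 * weights.sum())

# two accurate weak learners (small beta, large weight) outvote one poor learner
print(ensemble_predict(hypotheses=[1, 1, 0], betas=[0.20, 0.25, 0.90]))   # -> 1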
2 BOOSTING
Boosting arises from the PAC (probably approximately correct) learning model which
has as one of its primary interests the efficiency of learning. Schapire was the first one to
show that a series of weak learners could be converted to a strong learner. The detailed
algorithm is shown in Figure 3. Let us call the set of N_1 distinct examples the original
training set. We distinguish the original training set from what we will call the filtered
training set which consists of N 1 examples picked with replacement from the original
training set. Basically each of N 1 original examples is assigned a weight which is
proportional to the probability that the example will appear in the filtered training set
(these weights have nothing to do with the weights usually associated with neural
networks). Initially all examples are assigned a weight of unity so that all the examples
are equally likely to show up in the initial set of training examples. However, the weights
are altered at each state of boosting (Step 5 of Figure 3) and if the weights are high we
may have multiple copies of some of the original examples appearing in the filtered
training set. In step three of this algorithm, we calculate what is called the weighted
training error, and this is the error rate over all the original N_1 training examples weighted by their current respective probabilities. The algorithm terminates if this error
rate is .5 (no better than guessing) or zero (then the weights of step 5 do not change).
Although not called for in the original C4.5 algorithm, we also have an original set of
pruning examples which also are assigned weights to form a filtered pruning set and used
to prune the classification trees constructed using the filtered training set. It is known
(Mingers, 1989a) that reducing the size of the tree (pruning) improves generalization.
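The mechanics of this section amount to the loop sketched below. This is an illustration rather than the implementation used in the paper: a depth-1 decision stump stands in for a trained and pruned C4.5 tree, and the pruning-set weights are left out for brevity.

import numpy as np

def train_stump(X, y):
    # Stand-in weak learner: the best axis-aligned threshold, possibly with inverted output.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for flip in (0, 1):
                pred = (X[:, j] > thr).astype(int) ^ flip
                err = float(np.mean(pred != y))
                if best is None or err < best[0]:
                    best = (err, j, thr, flip)
    _, j, thr, flip = best
    return lambda Z: (Z[:, j] > thr).astype(int) ^ flip

def adaboost(X, y, rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.ones(n)                                      # one weight per original training pattern
    learners, betas = [], []
    for _ in range(rounds):
        p = w / w.sum()
        idx = rng.choice(n, size=n, replace=True, p=p)  # filtered training set (with replacement)
        h = train_stump(X[idx], y[idx])
        miss = (h(X) != y).astype(int)                  # pass the original patterns through the learner
        eps = float(p @ miss)                           # weighted training error
        if eps <= 0.0 or eps >= 0.5:
            break
        beta = eps / (1.0 - eps)
        w = w * beta ** (1 - miss)                      # shrink weights of correctly classified patterns
        learners.append(h)
        betas.append(beta)
    return learners, betas

# toy usage on a two-class problem
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
learners, betas = adaboost(X, y)
wts = np.log(1.0 / np.array(betas))
votes = sum(wt * h(X) for wt, h in zip(wts, learners))
pred = (votes >= 0.5 * wts.sum()).astype(int)
print(len(learners), float(np.mean(pred == y)))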
3 DECISION TREES
For our implementation of decision trees, we have a set of features (attributes) that
specifies an example along with their classification (we discuss the two-class problem
primarily). We pick a feature that based on some criterion, best splits the examples into
two subsets. Each of these two subsets will usually not contain examples of just one
class, so we recursively divide the subsets until the final subsets each contain examples of
just one class. Thus, each internal node specifies a feature and a value for that feature that
determines whether one should take the left or right branch emanating from that node. At
terminal nodes, we make the final decision, class 0 or 1. Thus, in decision trees one
starts at a root node and progressively traverses the tree from the root node to one of the terminal nodes where a final decision is made.

Inputs: N_1 training patterns, N_2 pruning patterns, N_3 test patterns
Initialize the weight vector of the N_1 training patterns: w_i^1 = 1 for i = 1, ..., N_1
Initialize the weight vector of the N_2 pruning patterns: s_i^1 = 1 for i = 1, ..., N_2
Initialize the number of trees in the ensemble to t = 1
Do until the weighted training error rate is 0 or .5, or the ensemble test error rate asymptotes:
1. For the training and pruning sets, set p_i = w_i^t / Σ_i w_i^t and r_i = s_i^t / Σ_i s_i^t.
   Pick N_1 samples from the original training set with probability p(i) to form the filtered training set.
   Pick N_2 samples from the original pruning set with probability r(i) to form the filtered pruning set.
2. Train tree t using the filtered training set and prune it using the filtered pruning set.
3. Pass the N_1 original training examples through the pruned tree, whose output h_t(i) is either 0 or 1 and whose classification c(i) is either 0 or 1. Calculate the weighted training error rate: ε_t = Σ_{i=1}^{N_1} p_i |h_t(i) − c(i)|
4. Set β_t = ε_t / (1 − ε_t)
5. Set the new training weight vector to be w_i^{t+1} = w_i^t β_t^{(1 − |h_t(i) − c(i)|)}, i = 1, ..., N_1.
   Pass the N_2 original pruning patterns through the pruned tree and calculate the new pruning weight vector in the same way.
6. For each tree t in the ensemble (total trees T), pass the j-th test pattern through and obtain h_t(j). The final hypothesis h_f(j) for this pattern is
   h_f(j) = 1 if Σ_t log(1/β_t) h_t(j) ≥ 0.5 Σ_t log(1/β_t), and 0 otherwise.
   Do this for each test pattern and calculate the ensemble test error rate.
7. t = t + 1
End Until
Figure 3: Boosting Algorithm

CART (Breiman, 1984) and C4.5
(Quinlan 1993) are perhaps the two most popular tree building algorithms. Here, C4.5 is
used. The attraction of trees is that the simplest decision tree can be respecified as a
series of rules and for certain potential users this is more appealing than a nonlinear
"black box" such as a neural network. That is not to say that one can not design trees
where the decision at each node depends on some nonlinear combination of features, but
this will not be our implementation.
Other attractions of decision trees are speed of learning and evaluation. Whether trees are
more accurate than other techniques depends on the application domain and the
effectiveness of the particular implementation. In OCR, our neural networks are more
accurate than trees but the penalty is in training and evaluation times. In other
applications which we will discuss later a boosting network of trees is more accurate. As
an initial example of the power of boosting, we will use trees for OCR of hand written
digits. The main rationale for using OCR applications to evaluate AdaBoost is that we
have experience in the use of a competing technology (neural networks) and we have
from the National Institute of Standards and Technology (NISn a large database of
120,000 digits, large enough so we can run multiple experiments. However, we will not
claim that trees for OCR have the best error performance.
Once the tree is constructed, it is pruned to give hopefully better generalization
performance than if the original tree was used. C4.5 uses the original training set for
what is called "pessimistic pruning" justified by the fact that there may not be enough
extra examples to form a set of pruning examples. However, we prefer to use an
independent set of examples to prune this tree. In our case, we have (for each tree in the
ensemble) an independent filtered pruning set of examples whose statistical distribution is
similar to that of the filtered training set. Since the filtering imposed by the previous
members of the ensemble can severely distort the original training distribution, we trust
this technique more than pessimistic pruning. In pruning (Mingers, 1989), we pass the
pruning set through the tree, recording at each node (including non-terminal nodes) how
many errors there would be if the tree was terminated there. Then, for each node (except
for terminal nodes), we examine the subtree of that node. We then calculate the number
of errors that would be obtained if that node would be made a terminal node and compare
it to the number of errors at the terminal nodes of that subtree. If the number of errors at
the root node of this subtree is less than or equal to that of the subtree, we replace the
subtree with that node and make it a terminal node. Pruning tends to substantially reduce
the size of the tree, even if the error rates are not substantially decreased.
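The pruning rule described above can be sketched as a short recursive routine; the Node structure and helper names below are assumptions for illustration, not C4.5's actual internals.

```python
# Sketch of reduced-error pruning on a separate pruning set, as described above.
from dataclasses import dataclass
from typing import Optional, List, Tuple

@dataclass
class Node:
    feature: Optional[int] = None      # split feature (None for a leaf)
    threshold: float = 0.0
    label: int = 0                     # majority label stored at this node
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def prune(node: Node, prune_set: List[Tuple[list, int]]) -> int:
    """Prune the subtree rooted at `node`; return its error count on `prune_set`."""
    # Errors if this node were made a terminal node right here.
    errors_as_leaf = sum(1 for x, y in prune_set if y != node.label)
    if node.feature is None:           # already a leaf
        return errors_as_leaf
    left_set = [(x, y) for x, y in prune_set if x[node.feature] <= node.threshold]
    right_set = [(x, y) for x, y in prune_set if x[node.feature] > node.threshold]
    subtree_errors = prune(node.left, left_set) + prune(node.right, right_set)
    if errors_as_leaf <= subtree_errors:
        node.feature, node.left, node.right = None, None, None  # replace subtree by a leaf
        return errors_as_leaf
    return subtree_errors
```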
4 EXPERIMENTS
In order to run enough experiments to claim statistical validity we needed a large supply
of data and few enough features that the information ratio could be determined in a
reasonable amount of time. Thus we used the 120,000 examples in a NIST database of
digits subsampled to give us a 10x10 pixel array (100 features) where the features are
continuous values. We do not claim that OCR is best done by using classification trees
and certainly not in 100-dimensional space. We used 10,000 training examples, 2000
pruning examples and 2000 test examples for a total of 14,000 examples.
We also wanted to test our techniques on a wide range of problems, from easy to hard.
Therefore, to make the problem reasonably difficult, we assigned class 0 to all digits from
0 to 4 (inclusive) and assigned class 1 to the remainder of the digits. To vary the
difficulty of the problem, we prefiltered the data to form data sets of difficulty f. Think of
f as the fraction of hard examples generated by passing the 120,000 examples through a
poorly trained neural network and accepting the misclassified examples with probability f
and the correctly classified examples with probability 1- f. Thus f = .9 means that the
training set consists of 10,000 examples that if passed through this neural network would
have an error rate of .9. Table 1 compares the boosting performance with single tree
performance. Also indicated is the average number of trees required to reach that
performance. Overtraining never seems to be a problem for these weak learners, that is,
as one increases the number of trees, the ensemble test error rate asymptotes and never
increases.
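The construction of a difficulty-f data set described above amounts to rejection sampling against a weak reference classifier; a hedged sketch (with our own function names) follows.

```python
# Sketch of the rejection-sampling filter used to build a data set of difficulty f:
# misclassified examples (according to a weak reference classifier) are kept with
# probability f, correctly classified ones with probability 1 - f.
import numpy as np

def make_difficulty_f_set(X, y, reference_clf, f, n_wanted, rng=None):
    rng = np.random.default_rng(rng)
    pred = reference_clf.predict(X)
    keep_prob = np.where(pred != y, f, 1.0 - f)   # hard examples kept with probability f
    keep = rng.random(len(X)) < keep_prob
    idx = np.flatnonzero(keep)[:n_wanted]
    return X[idx], y[idx]
```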
Table 1. For fraction f of difficult examples, the error rate for a single tree and a boosting
ensemble and the number of trees required to reach the error rate for that ensemble.
  f     single tree     boosting trees     number of trees
 .1         12%              3.5%                 25
 .3         13               4.5                  28
 .5         16               7.1                  31
 .7         21               7.7                  60
 .9         23               8.1                  72
We wanted to compare the boosting ensemble to other techniques for constructing
ensembles using 14,000 examples, holding out 2000 for testing. The problem with
decision trees is that invariably, even if the training data is different (but drawn from the
same distribution), the features chosen for the first few nodes are usually the same (at
least for the OCR data). Thus, different decision surfaces are not created. In order to
create different decision regions for each tree, we can force each decision tree to consider
another attribute as the root node, perhaps choosing that attribute from the first few
attributes with largest information ratio. This is similar to what Kwok and Carter (1990)
have suggested but we have many more trees and their interactive approach did not look
feasible here. Another technique suggested by T.K. Ho (1992) is to construct independent
trees on the same 10,000 examples but randomly striking out the use of fifty of the 100
possible features. Thus, for each tree, we randomly pick 50 features to construct the tree.
When we use up to ten trees, Ho's technique gives results similar to those of boosting,
but the asymptotic performance is far better for boosting. After we had
performed these experiments, we learned of a technique termed "bagging" (Breiman,
1994) and we have yet to resolve the issue of whether bagging or boosting is better.
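Ho's random-subspace alternative mentioned above is easy to sketch: each tree sees the full training set but only a random half of the features. The following is an illustrative implementation, not the one used in the paper.

```python
# Sketch of the random-subspace ensemble: each tree is grown on the full training
# set but is only allowed to use a random subset of the features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_random_subspace_trees(X, y, n_trees=10, n_features_per_tree=50, rng=None):
    rng = np.random.default_rng(rng)
    ensemble = []
    for _ in range(n_trees):
        feats = rng.choice(X.shape[1], size=n_features_per_tree, replace=False)
        tree = DecisionTreeClassifier().fit(X[:, feats], y)
        ensemble.append((feats, tree))
    return ensemble

def predict_majority(ensemble, X):
    votes = np.mean([tree.predict(X[:, feats]) for feats, tree in ensemble], axis=0)
    return (votes >= 0.5).astype(int)
```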
5 CONCLUSIONS
Based on preliminary evidence, it appears that for these applications a new boosting
algorithm using trees as weak learners gives far superior performance to single trees and
any other technique for constructing ensemble of trees. For boosting to work on any
problem, one must find a weak learner that gives an error rate of less than 0.5 on the
filtered training set. An important aspect of the building process is to prune based on a
separate pruning set rather than pruning based on a training set. We have also tried this
technique on knowledge discovery and data mining problems and the results are better
than single neural networks.
References
L. Bottou, C. Cortes, J.S. Denker, H. Drucker, I. Guyon, L.D. Jackel, Y. LeCun, U.A.
Muller, E. Sackinger, P. Simard, and V. Vapnik (1994), "Comparison of Classifier
Methods: A Case Study in Handwritten Digit Recognition", 1994 International
Conference on Pattern Recognition, Jerusalem.
L. Breiman, J. Friedman, R.A. Olshen, and C.J. Stone (1984), Classification and
Regression Trees, Chapman and Hall.
L. Breiman, "Bagging Predictors", Technical Report No. 421, Department of Statistics
University of California, Berkeley, California 94720, September 1994.
H. Drucker, C. Cortes, L.D. Jackel, and Y. LeCun (1994), "Boosting and Other Ensemble
Methods", Neural Computation, Vol. 6, No. 6, pp. 1287-1299.
H. Drucker, R.E. Schapire, and P. Simard (1993), "Boosting Performance in Neural
Networks", International Journal of Pattern Recognition and Artificial Intelligence, Vol.
7, No. 4, pp. 705-719.
Y. Freund (1990), "Boosting a Weak Learning Algorithm by Majority", Proceedings of
the Third Workshop on Computational Learning Theory, Morgan-Kaufmann, 202-216.
Y. Freund and R.E. Schapire (1995), "A decision-theoretic generalization of on-line
learning and an application to boosting", Proceedings of the Second European Conference
on Computational Learning Theory.
T.K. Ho (1992), A Theory of Multiple Classifier Systems and Its Applications to Visual
Word Recognition, Doctoral Dissertation, Department of Computer Science, SUNY at
Buffalo.
S.W. Kwok and C. Carter (1990), "Multiple Decision Trees", Uncertainty in Artificial
Intelligence 4, R.D. Shachter, T.S. Levitt, L.N. Kanal, J.F. Lemmer (eds.), Elsevier Science
Publishers.
J.R. Quinlan (1993), C4.5: Programs For Machine Learning, Morgan Kaufmann.
J. Mingers (1989), "An Empirical Comparison of Pruning Methods for Decision Tree
Induction", Machine Learning, 4:227-243.
R.E. Schapire (1990), "The strength of weak learnability", Machine Learning, 5(2):197-227.
| 1059 |@word version:1 eliminating:1 seems:1 casdagli:1 dekker:1 simulation:3 tried:1 jacob:4 pick:4 solid:1 recursively:1 ld:1 reduction:2 initial:3 configuration:2 series:7 atlantic:1 err:3 current:3 diagonalized:1 nowlan:2 activation:4 yet:1 written:2 must:2 eduifac:2 subsequent:1 happen:1 additive:1 shape:6 enables:1 motor:1 asymptote:2 wanted:2 update:9 progressively:2 pursued:1 greedy:1 intelligence:3 iso:2 vanishing:1 dissertation:1 accepting:1 filtered:12 parameterizations:1 boosting:33 location:4 contribute:1 node:20 analyzability:1 sigmoidal:1 traverse:1 along:1 constructed:4 become:1 supply:1 consists:6 acti:1 combine:2 fitting:3 recognizable:1 introduce:2 boor:1 acquired:1 forgetting:3 alspector:1 themselves:2 examine:1 sparc1o:2 terminal:7 globally:1 resolve:1 little:2 actual:1 increasing:2 erase:1 cleveland:2 xx:1 becomes:3 estimating:1 classifies:1 what:6 interpreted:1 substantially:2 pseudo:1 berkeley:1 every:2 act:4 ti:1 interactive:1 exactly:1 classifier:3 scaled:1 control:4 unit:6 omit:1 appear:2 positive:2 negligible:1 before:2 local:28 tends:2 severely:1 switching:1 sutton:2 dnm:1 approximately:1 black:1 doctoral:1 appl:1 nadaraya:2 perpendicular:1 range:1 faithful:1 woodbury:1 practical:2 testing:1 lecun:2 recursive:2 practice:1 definite:1 differs:1 backpropagation:1 digit:6 procedure:1 danger:1 empirical:2 bell:3 significantly:1 confidence:4 word:1 computationallearning:1 suggest:1 convenience:1 ga:1 selection:5 cannot:1 risk:1 influence:1 www:2 map:1 demonstrated:1 center:6 imposed:1 annealed:1 jerusalem:1 attention:1 independently:3 duration:1 lxe:1 immediately:1 sld:1 identifying:1 fade:1 rule:3 avoidance:2 attraction:2 array:1 orthonormal:1 updated:2 target:2 play:1 user:1 pauans:2 us:1 hypothesis:3 origin:2 element:4 recognition:5 dimmed:1 stripe:1 database:2 bottom:4 role:1 solved:1 calculate:7 region:9 movement:7 decrease:2 mentioned:1 transforming:1 pd:1 complexity:2 ideally:1 asked:1 dynamic:3 trained:12 creation:1 purely:1 ande:1 learner:25 basis:1 efficiency:1 easily:2 joint:1 represented:1 jersey:2 derivation:1 stacked:1 train:2 distinct:1 fast:2 london:1 query:2 detected:1 artificial:3 emanating:1 choosing:1 dof:1 peer:1 whose:3 distortion:1 say:2 triangular:1 favor:1 statistic:3 think:1 noisy:2 itself:1 final:5 obviously:1 advantage:2 eigenvalue:2 analytical:1 reconstruction:2 maximal:1 adaptation:1 remainder:1 inserting:1 relevant:2 poorly:1 pll:1 competition:2 getting:1 los:1 assessing:1 generating:1 incremental:11 rotated:1 derive:2 stat:1 minor:1 received:1 strong:2 predicted:1 marcel:1 come:1 indicate:1 rfor:1 correct:3 attribute:4 filter:1 subsequently:1 centered:2 human:1 sgn:1 subnet:7 require:2 assign:1 sightly:1 generalization:3 ftx:1 preliminary:1 pessimistic:2 theor:1 blending:3 adjusted:1 hall:2 exp:3 great:1 algorithmic:1 mapping:1 claim:3 bump:1 sschaal:1 major:1 vary:1 estimation:1 proc:1 realizes:1 infonnation:1 jackel:2 largest:1 wl:1 create:3 weighted:12 reflects:1 stefan:2 mclachlan:2 mit:1 gaussian:3 always:3 rather:3 fulfill:2 avoid:1 pn:2 breiman:3 command:1 gatech:4 focus:2 schaal:9 rank:2 tech:1 contrast:4 baseline:1 detect:1 am:1 elsevier:1 wf:1 accumulated:1 initially:1 hidden:1 pattems:1 misclassified:1 sketched:1 issue:1 pixel:1 among:4 orientation:2 classification:5 denoted:1 plan:1 smoothing:3 spatial:1 initialize:1 field:20 construct:4 f3:5 shaped:1 having:2 once:1 chapman:2 identical:1 never:2 equal:1 look:1 future:1 minimized:1 report:2 seiko:1 piecewise:1 spline:2 employ:1 viation:1 t2:1 oriented:3 haven:1 randomly:3 
primarily:1 recognize:1 national:1 individual:5 subsampled:1 intended:1 replacement:3 friedman:1 atlanta:1 detection:6 interest:1 invariably:1 investigate:1 mining:2 evaluation:3 certainly:1 introduces:3 mixture:5 analyzed:3 diagnostics:1 activated:1 implication:1 accurate:3 integral:1 necessary:3 experience:3 lh:1 respective:1 soderstrom:1 tree:67 taylor:2 divide:1 initialized:2 circle:1 tf3:1 desired:1 isolating:1 isolated:3 instance:1 increased:2 modeling:1 classify:2 blended:1 ar:1 tpn:1 cost:4 subset:5 expects:1 alamo:1 predictor:1 too:1 learnability:1 connect:1 cho:1 combined:1 density:1 international:2 sensitivity:1 probabilistic:1 together:3 squared:7 choose:1 slowly:1 worse:1 expert:86 derivative:7 american:1 simard:2 li:1 potential:2 exclude:2 de:3 converted:1 sec:1 coefficient:2 depends:2 performed:3 view:6 root:6 picked:4 h1:1 later:1 competitive:1 start:1 complicated:1 parallel:1 ass:1 il:1 square:4 accuracy:1 ir:3 formance:1 characteristic:1 kaufmann:2 ensemble:26 correspond:1 weak:25 bayesian:1 identification:1 eubank:1 handwritten:1 basically:2 trajectory:3 expertise:7 cc:4 drive:1 classified:2 submitted:2 overtraining:1 fo:1 reach:2 whenever:1 ed:3 distort:1 pp:7 obvious:1 associated:2 basford:2 gain:1 adjusting:2 popular:2 knowledge:3 dimensionality:2 improves:1 actually:1 back:1 appears:1 wesley:1 higher:7 supervised:1 adaboost:3 done:1 shrink:2 box:2 generality:1 furthermore:1 just:2 implicit:1 though:1 until:2 hand:2 christopher:1 trust:1 nonlinear:3 overlapping:1 hopefully:1 incrementally:3 sackinger:1 mode:1 aj:1 quality:1 perhaps:2 indicated:1 building:2 effect:1 validity:2 contain:3 facilitate:1 counterpart:1 facility:1 regularization:1 analytically:1 assigned:6 moore:2 laboratory:2 iteratively:1 deal:1 during:6 noted:1 criterion:2 generalized:1 stone:1 hill:1 ridge:13 demonstrate:3 theoretic:1 performs:1 cooperate:2 image:1 superior:2 functional:1 association:1 occurred:1 accumulate:1 cambridge:1 cv:2 dj:1 sherman:1 had:2 stable:1 robot:6 surface:1 add:1 curvature:2 multivariate:1 moderate:2 irrelevant:2 tesauro:1 termed:4 certain:4 accomplished:1 muller:1 seen:1 minimum:2 additional:1 morgan:3 employed:1 accomplishes:1 prune:5 determine:2 recognized:1 morrison:1 ii:2 branch:1 afterwards:1 full:1 rj:1 kyoto:1 multiple:5 technical:2 constructive:1 cross:6 offer:1 compensate:1 devised:1 equally:2 prediction:12 regression:19 controller:4 essentially:1 metric:10 histogram:1 represent:1 achieved:2 receive:1 addition:1 justified:1 interval:1 decreased:1 grow:1 source:1 publisher:1 permissible:1 extra:2 appropriately:1 fifty:1 posse:1 probably:1 cart:1 recording:1 deficient:1 cowan:1 ajw:1 member:2 effectiveness:1 jordan:4 call:2 feedforward:1 split:1 enough:4 easy:1 variety:2 isolation:4 fit:1 hastie:2 competing:1 reduce:1 idea:1 drucker:8 lemmer:1 whether:3 distributing:1 passed:2 mentation:1 effort:1 forecasting:1 penalty:7 inventor:1 soraku:1 hessian:1 cause:1 rfwr:17 york:4 proceed:1 passing:1 useful:1 fake:1 cga:1 involve:1 detailed:1 amount:3 nonparametric:3 locally:15 ten:1 carter:2 simplest:1 reduced:1 http:2 schapire:7 specifies:2 sl:2 sign:1 delta:1 tibshirani:2 correctly:3 per:1 vol:2 affected:1 key:1 threshold:2 drawn:4 suny:1 changing:1 penalizing:2 verified:1 ht:1 asymptotically:1 graph:1 ioxlo:1 fraction:3 sum:2 compete:3 inverse:7 prob:1 wgen:1 run:2 striking:1 uncertainty:1 place:2 reasonable:1 decide:1 guyon:1 draw:2 decision:24 prefer:1 holmdel:1 entirely:1 layer:2 ct:1 resampled:1 hi:1 distinguish:1 yale:1 quadratic:1 strength:2 occur:1 placement:1 
precisely:1 inclusive:1 ri:2 aspect:1 speed:4 pruned:3 optical:1 influential:1 department:2 according:1 combination:1 poor:1 smaller:1 terminates:1 em:1 character:1 ur:1 partitioned:1 deboor:1 unity:1 wi:2 making:1 appealing:1 explained:2 outlier:3 interference:9 taken:1 ln:1 equation:4 pid:2 previously:2 computationally:1 discus:2 needed:2 addison:1 end:6 denker:1 kwok:2 hierarchical:1 away:1 appropriate:1 ocr:9 appearing:1 alternative:4 robustness:1 batch:3 corinna:1 ho:3 original:17 bagging:3 top:1 maintaining:1 newton:3 quinlan:4 giving:1 murray:1 hikaridai:1 approximating:1 classical:1 build:1 move:1 noticed:1 added:3 quantity:3 reinitialize:1 receptive:19 primary:2 fa:2 md:1 diagonal:3 parametric:1 costly:1 guessing:2 gradient:5 september:1 distance:11 separate:2 atr:1 entity:1 oa:2 simulated:1 majority:1 chris:1 gun:1 topic:1 extent:1 reason:1 induction:1 code:1 modeled:2 relationship:1 reformulate:1 ratio:3 minimizing:3 acquire:3 difficult:6 olshen:1 holding:1 implementation:6 reliably:1 design:1 n04:1 adjustable:1 confining:1 upper:4 nist:1 descent:5 buffalo:1 beat:1 tilde:1 incorrectly:2 hinton:2 ever:1 incorporated:1 sharp:1 tive:1 introduced:2 required:4 connection:2 c4:7 california:2 learned:5 narrow:1 hour:1 able:5 suggested:2 usually:6 below:2 scott:2 pattern:14 kauffman:1 tb:1 program:1 built:1 max:1 memory:1 explanation:1 including:1 power:1 overlap:5 difficulty:2 force:1 hr:2 mn:2 arm:3 scheme:1 improve:2 altered:1 dated:1 technology:2 temporally:1 created:3 text:1 prior:4 epoch:6 review:1 removal:1 discovery:2 multiplication:1 asymptotic:8 freund:5 loss:1 expect:1 ljung:2 rationale:1 limitation:1 proportional:3 filtering:1 wdx:1 ingredient:1 validation:6 consistent:1 course:1 penalized:2 cooperation:4 accounted:1 copy:1 bias:18 guide:1 side:1 institute:1 wide:2 wmin:1 rhythmic:1 benefit:1 distributed:1 curve:3 dimension:9 calculated:1 valid:2 default:2 contour:1 world:1 stuck:1 made:2 adaptive:2 atkeson:10 employing:1 far:3 cope:2 reconstructed:1 pruning:25 l00:1 kuh:2 logic:1 dealing:1 global:5 sequentially:1 assumed:2 continuous:3 iterative:2 table:2 additionally:2 learn:4 reasonably:1 robust:1 kanal:1 expansion:2 mse:3 excellent:1 bottou:2 european:1 constructing:2 domain:9 did:4 main:1 terminated:1 arise:1 profile:1 nothing:1 augmented:1 site:1 levitt:1 en:1 georgia:1 wiley:2 analyzable:1 shrinking:1 experienced:2 structurally:1 scatterplots:1 explicit:1 enu:1 weighting:1 third:1 few:3 minute:2 theorem:1 remained:1 gating:1 vated:1 er:2 pac:1 decay:1 cortes:6 evidence:1 exists:1 workshop:2 consist:1 vapnik:1 importance:2 magnitude:1 subtree:5 illustrates:1 suited:1 wolpert:2 depicted:1 welsch:2 likely:3 shachter:1 visual:1 prevents:1 bo:4 springer:1 corresponds:2 determines:5 harris:1 marked:2 identity:1 leaming:1 replace:1 man:1 exceptionally:1 change:9 hard:2 feasible:1 determined:3 except:2 corrected:1 uniformly:1 averaging:1 reducing:1 total:5 f3n:2 pas:5 catastrophic:1 belsley:2 la:1 called:3 college:1 internal:1 mingers:3 latter:2 arises:2 avoiding:2 incorporate:1 evaluate:1 scratch:1 correlated:1 |
68 | 106 | 2
CONSTRAINTS ON ADAPTIVE NETWORKS
FOR MODELING HUMAN GENERALIZATION
M. Pavel
Mark A. Gluck
Van Henkle
Department of Psychology
Stanford University
Stanford, CA 94305
ABSTRACT
The potential of adaptive networks to learn categorization rules and to
model human performance is studied by comparing how natural and
artificial systems respond to new inputs, i.e., how they generalize. Like
humans, networks can learn a deterministic categorization task by a
variety of alternative individual solutions. An analysis of the constraints imposed by using networks with the minimal number of hidden
units shows that this "minimal configuration" constraint is not
sufficient to explain and predict human performance; only a few solutions were found to be shared by both humans and minimal adaptive
networks. A further analysis of human and network generalizations
indicates that initial conditions may provide important constraints on
generalization. A new technique, which we call "reversed learning",
is described for finding appropriate initial conditions.
INTRODUCTION
We are investigating the potential of adaptive networks to learn categorization tasks and
to model human performance. In particular we have studied how both natural and
artificial systems respond to new inputs, that is, how they generalize. In this paper we
first describe a computational technique to analyze generalizations by adaptive networks.
For a given network structure and a given classification problem, the technique
enumerates all possible network solutions to the problem. We then report the results of
an empirical study of human categorization learning. The generalizations of human subjects are compared to those of adaptive networks. A cluster analysis of both human and
network generalizations indicates significant differences between human performance
and possible network behaviors. Finally, we examine the role of the initial state of a network for biasing the solutions found by the network. Using data on the relations between
human subjects' initial and final performance during training, we develop a new technique, called "reversed learning", which shows some potential for modeling human
learning processes using adaptive networks. The scope of our analyses is limited to generalizations in deterministic pattern classification (categorization) tasks.
Modeling Human Generalization
The basic difficulty in generalization is that there exist many different classification rules
("solutions") that that correctly classify the training set but which categorize novel
objects differently. The number and diversity of possible solutions depend on the
language defining the pattern recognizer. However, additional constraints can be used in
conjunction with many types of pattern categorizers to eliminate some, hopefully
undesirable, solutions.
One typical way of introducing additional constraints is to minimize the representation.
For example minimizing the number of equations and parameters in a mathematical
expression, or the number of rules in a rule-based system would assure that some
identification maps would not be computable. In the case of adaptive networks, minimizing the size of adaptive networks, which reduces the number of possible encoded functions, may result in improved generalization perfonnance (Rumelhart, 1988).
The critical theoretical and applied questions in pattern recognition involve characterization and implementation of desirable constraints. In the first part of this paper we
describe an analysis of adaptive networks that characterizes the solution space for any
particular problem.
ANALYSES OF ADAPTIVE NETWORKS
Feed-forward adaptive networks considered in this paper will be defined as directed
graphs with linear threshold units (LTU) as nodes and with edges labeled by real-valued
weights. The output, or activation, of a unit is determined by a monotonic nonlinear function of a weighted sum of the activations of all units whose edges terminate on that unit.
There are three types of units within a feed-forward layered architecture: (1) Input units
whose activity is determined by external input; (2) output units whose activity is taken as
the response; and (3) the remaining units, called hidden units. For the sake of simplicity
our discussion will be limited to objects represented by binary valued vectors.
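As a concrete illustration of such a network, the following sketch implements a forward pass through one hidden layer of linear threshold units; the weight and bias names are ours.

```python
# Sketch of a feed-forward network of linear threshold units: binary input vector,
# one hidden layer of LTUs, and a single LTU output.
import numpy as np

def ltu(v):
    """Linear threshold unit: fires (1) exactly when its net input is positive."""
    return np.where(v > 0, 1, 0)

def forward(x, hidden_w, hidden_b, out_w, out_b):
    """x: binary input vector; returns the binary category assigned by the network."""
    h = ltu(hidden_w @ x + hidden_b)     # hidden-layer activations
    return int(ltu(out_w @ h + out_b))   # single output unit
```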
A fully connected feed-forward network with an unlimited number of hidden units can
compute any boolean function. Such a general network, therefore, provides no constraints on the solutions. Therefore, additional constraints must be imposed for the network to prefer one generalization over another. One such constraint is minimizing the
size of the network. In order to explore the effect of minimizing the number of hidden
units we first identify the minimal network architecture and then examine its generalizations.
Most of the results in this area have been limited to finding bounds on the expected
number of possible patterns that could be classified by a given network (e.g. Cover, 1965;
Volper and Hampson, 1987; Valiant, 1984; Baum & Haussler, 1989). The bounds found
by these researchers hold for all possible categorizations and are, therefore, too broad to
be useful for the analysis of particular categorization problems.
To determine the generalization behavior for a particular network architecture, a specific
categorization problem and a training set, it is necessary to find all possible solutions
and the corresponding generalizations. To do this we used a computational (not a simulation) procedure developed by Pavel and Moore (1988) for finding minimal networks
solving specific categorization problems. Pavel and Moore (1988) defined two network
solutions to be different if at least one hidden unit categorized at least one object in the
training set differently. Using this definition their algorithm finds all possible different
solutions. Because finding network solutions is NP-complete (Judd, 1987), for larger
problems Pavel and Moore used a probabilistic version of the algorithm to estimate the
distribution of generalization responses.
One way to characterize the constraints on generalization is in terms of the number of
possible solutions. A larger number of possible solutions indicates that generalizations
will be less predictable. The critical result of the analysis is that, even for minimal networks, the number of different network solutions is often quite large. Moreover, the
number of solutions increases rapidly with increases in the number of hidden units. The
apparent lack of constraints can also be demonstrated by finding the probability that a
network with a randomly selected hidden layer can solve a given categorization problem.
That is, suppose that we select n different hidden units, each unit representing a linear
discriminant function. The activations of these random hidden units can be viewed as a
transformation of the input patterns. We can ask what is the probability that an output
unit can be found to perform the desired dichotomization. A typical example of a result
of this analysis is shown in Figure 1 for the three-dimensional (3D) parity problem. In
the minimal configuration involving three hidden units there were 62 different solutions
to the 3D parity problem. The rapid increase in probability (high slope of the curve in
Figure 1) indicates that adding a few more hidden units rapidly increases the probability
that a random hidden layer will solve the 3D parity problem.
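The probability estimate described above can be reproduced by a small Monte Carlo experiment: draw random hidden units, map the eight 3-bit patterns through them, and test linear separability of the parity labels with a feasibility linear program. The weight distribution and function names below are our assumptions.

```python
# Monte Carlo sketch: probability that a random hidden layer makes 3-bit parity
# linearly separable for a single output unit.
import itertools
import numpy as np
from scipy.optimize import linprog

def linearly_separable(H, y):
    """Feasibility LP: does some (w, b) satisfy sign(w.h + b) = y with margin 1?"""
    n_pat, n_dim = H.shape
    s = np.where(y == 1, 1.0, -1.0)
    # constraint: -s_i * (w . h_i + b) <= -1 for every pattern i
    A = -s[:, None] * np.hstack([H, np.ones((n_pat, 1))])
    res = linprog(c=np.zeros(n_dim + 1), A_ub=A, b_ub=-np.ones(n_pat),
                  bounds=[(None, None)] * (n_dim + 1), method="highs")
    return res.success

def prob_random_layer_solves_parity(n_hidden, trials=2000, rng=None):
    rng = np.random.default_rng(rng)
    X = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
    y = X.sum(axis=1).astype(int) % 2            # 3-bit parity labels
    hits = 0
    for _ in range(trials):
        W = rng.normal(size=(n_hidden, 3))       # random hidden-unit weights
        b = rng.normal(size=n_hidden)
        H = (X @ W.T + b > 0).astype(float)      # hidden-layer transformation
        hits += linearly_separable(H, y)
    return hits / trials
```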
Figure 1. The proportion of solutions to the 3D parity problem (solid line) and the experimental task (dashed line) as a function of the number of hidden units.
The results of a more detailed analysis of the generalization performance of the minimal
networks will be discussed following a description of a categorization experiment with
human subjects.
HUMAN CATEGORIZATION EXPERIMENT
In this experiment human subjects learned to categorize objects which were defined by
four dimensional binary vectors. Of the $2^4 = 16$ possible objects, subjects were trained to classify a subset of 8 objects into two categories of 4 objects each. The specific assignments
of objects into categories was patterned after Medin et al. (1982) and is shown in Figure
2. Eight of the patterns are designated as a training set and the remaining eight comprise
the test set. The assignment of the patterns in the training set into two categories was
such that there were many combinations of rules that could be used to correctly perform
the categorization. For example, the first two dimensions could be used with one other
dimension. The training patterns could also be categorized on the basis of an exclusive
or (XOR) of the last two dimensions. The type of solution obtained by a human subject
could only be determined by examining responses to the test set as well as the training
set.
Figure 2. Patterns to be classified. (Adapted from Medin et al., 1982.)
In the actual experiments, subjects were asked to perform a medical diagnosis for each
pattern of four symptoms (dimensions). The experimental procedure will be described
here only briefly because the details of this experiment have been described elsewhere in
detail (Pavel, Gluck, Henkle, 1988). Each of the patterns was presented serially in a randomized order. Subjects responded with one of the categories and then received feedback. The training of each individual continued until he reached a criterion (responding
correctly to 32 consecutive stimuli) or until each pattern had been presented 32 times.
The data reported here is based on 78 subjects, half (39) who learned the task to criterion
and half who did not.
Following the training phase, subjects were tested using all 16 possible patterns. The
results of the test phase enabled us to determine the generalizations performed by the
subjects. Subjects' generalizations were used to estimate the "functions" that they may
have been using. For example, of the 39 criterion subjects, 15 used a solution that was
consistent with the exclusive-or (XOR) of the dimensions $x_3$ and $x_4$.
We use "response profiles" to graph responses for an ensemble of functions, in this case
for a group of subjects. A response profile represents the probability of assigning each
pattern to category "A". For example, the response profile for the XOR solution is
shown in Figure 3A. For convenience we define the responses to the test set as the "generalization profile". The response profile of all subjects who reached the criterion is
shown in Figure 3B. The responses of our criterion subjects to the training set were basically identical and correct. The distribution of subjects' generalization profiles reflected
in the overall generalization profile is indicative of considerable individual differences.
Figure 3. (A) Response profile of the XOR solution, and (B) proportion of the response "A" to all patterns for human subjects (dark bars) and minimal networks (light bars). The lower 8 patterns are from the training set and the upper 8 patterns from the test set.
MODELING THE RESPONSE PROFILE
One of our goals is to model subjects' distribution of categorizations as represented by
the response profile in Figure 3B. We considered three natural approaches to such
modeling: (1) Statistical/proximity models, (2) Minimal disjunctive normal forms
(DNF), and (3) Minimal two-layer networks.
The statistical approach is based on the assumption that the response profile over subjects
represents the probability of categorizations performed by each subject. Our data are not
consistent with that assumption because each subject appeared to behave deterministically. The second approach, using the minimal DNF is also not a good candidate because
there are only four such solutions and the response profile over those solutions differs
considerably from that of the subjects. Turning to the adaptive network solutions, we
found all the solutions using the linear programming technique described above (Pavel &
Moore, 1988). The minimal two-layer adaptive network that was capable of solving the
training set problem consisted of two hidden units. The proportion of solutions as a
function of the number of hidden units is shown in Figure 1 by the dashed line.
For the minimal network there were 18 different solutions. These 18 solutions had 8 different individual generalization profiles. Assuming that each of the 18 network solutions
is equally likely, we computed the generalization profile for the minimal network shown in
Figure 3B. The response profile for the minimal network represents the probability that a
randomly selected minimal network will assign a given pattern to category "A". Even
without statistical testing we can conclude that the generalization profiles for humans and
networks are quite different. It is possible, however, that humans and minimal networks
obtain similar solutions and that the differences in the average responses are due to the
particular statistical sampling assumption used for the minimal networks (i.e. each solution is equally likely). In order to determine the overlap of solutions we examined the
generalization profiles in more detail.
CLUSTERING ANALYSIS OF GENERALIZATION PROFILES
To analyze the similarity in solutions we defined a metric on generalization profiles. The
Hamming distance between two profiles is equal to the number of patterns that are
categorized differently. For example, the distance between generalization profile "A A
B A B B B B" and "A A B B B B A B" is equal to two, because the two profiles differ
on only the fourth and seventh pattern. Figure 4 shows the results of a cluster analysis
using a hierarchical clustering procedure that maximizes the average distance between
clusters.
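A sketch of this metric and clustering step, using SciPy's average-linkage routine on a condensed Hamming-distance matrix; the example profiles are made up for illustration.

```python
# Sketch: Hamming distance between generalization profiles and average-linkage
# hierarchical clustering of the profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def hamming(profile_a, profile_b):
    """Number of test patterns categorized differently by the two profiles."""
    return sum(a != b for a, b in zip(profile_a, profile_b))

profiles = ["AABABBBB", "AABBBBAB", "ABABABAB"]        # illustrative profiles only
n = len(profiles)
condensed = [hamming(profiles[i], profiles[j]) for i in range(n) for j in range(i + 1, n)]
tree = linkage(np.asarray(condensed, dtype=float), method="average")
print(fcluster(tree, t=2.5, criterion="distance"))     # cluster labels at distance 2.5
```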
Figure 4. Results of hierarchical clustering for human (left) and network (right) generalization profiles.
In this graph the average distance between any two clusters is shown by the value of the
lowest common node in the tree. The clustering analysis indicates that humans and
networks obtained widely different generalization profiles. Only three generalization
profiles were found to be common to human and networks. This number of common
generalizations is to be expected by chance if the human and network solutions are
independent. Thus, even if there exists a learning algorithm that approximates the human
probability distribution of responses, the minimal network would not be a good model of
human performance in this task.
It is clear from the previously described network analysis that somewhat larger networks
with different constraints could account for human solutions. In order to characterize the
additional constraints, we examined subjects' individual strategies to find out why individual subjects obtained different solutions.
ANALYSIS OF HUMAN LEARNING STRATEGIES
Human learning strategies that lead to preferences for particular solutions may best be
modeled in networks by imposing constraints and providing hints (Abu-Mostafa 1989).
These include choosing the network architecture and a learning rule, constraining connectivity, and specifying initial conditions. We will focus on the specification of initial
conditions.
Figure 5. The number of consistent or non-stable responses (black) and the number of stable incorrect responses (light) for XOR and non-XOR criterion subjects, and for those who never reached criterion.
Our effort to examine initial conditions was motivated by large differences in learning
curves (Pavel et al., 1988) between subjects who obtained the XOR solutions and those
who did not. The subjects who did not obtain the XOR solutions would perform much
better on some patterns (e.g. 0001) than the XOR subjects, but worse on other patterns
(e.g. 1000). We concluded that these subjects during the first few trials discovered rules
that categorized most of the training patterns correctly but failed on one or two training
patterns.
We examined the sequences of subjects' responses to see how well they adhered to
"incorrect" rules. We designated a response to a pattern as stable if the individual
responded the same way to that pattern at least four times in a row. We designated a
response as consistent if the response was stable and correct. The results of the analysis
are shown in Figure 5. These results indicate that the subjects who eventually achieved
the XOR solution were less likely to generate stable incorrect solutions. Another important result is that those subjects who never learned the correct responses to the training
set were not responding randomly. Rather, they were systematically using incorrect
rules. On the basis of these results, we conclude that subjects' initial strategies may be
important determinants of their final solutions.
REVERSED LEARNING
For simplicity we identify subjects' initial conditions by their responses on the first few
trials. An important theoretical question is whether or not it is possible to find a network
structure, initial conditions and a learning rule such that the network can represent both
the initial and final behavior of the subject. In order to study this problem we developed
a technique we call "reversed learning". It is based on a perturbation analysis of feedforward networks. We use the fact that the error surface in a small neighborhood of a
minimum is well approximated by a quadratic surface. Hence, a well behaved gradient
descent procedure with a starting point in the neighborhood of the minimum will find that
minimum.
The reversed learning procedure consists of three phases. (1) A network is trained to a
final desired state of a particular individual, using both the training and the test patterns.
(2) Using only the training patterns, the network is then trained to achieve the initial state
of that individual subject closest to the desired final state. (3) The network is trained with
only the training patterns and the solution is compared to the subject's response profiles.
Our preliminary results indicate that this procedure leads in many cases to initial conditions that favor the desired solutions. We are currently investigating conditions for
finding the optimal initial states.
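One way to sketch the three-phase procedure is with scikit-learn's MLPClassifier and warm_start, so each phase continues from the weights left by the previous one; the subject-response arrays are hypothetical stand-ins for the experimental data, not the authors' network or data.

```python
# Sketch of the three-phase "reversed learning" procedure using warm_start so that
# each fit call continues training from the previous weights.  subject_final and
# subject_initial are assumed to be ordered training-patterns-first.
import numpy as np
from sklearn.neural_network import MLPClassifier

def reversed_learning(X_train, X_test, subject_final, subject_initial, max_iter=500):
    X_all = np.vstack([X_train, X_test])
    net = MLPClassifier(hidden_layer_sizes=(2,), warm_start=True, max_iter=max_iter)
    # Phase 1: fit the subject's final responses on training + test patterns.
    net.fit(X_all, subject_final)
    # Phase 2: from that state, move toward the subject's initial responses
    # using only the training patterns (approximating the nearest initial state).
    net.fit(X_train, subject_initial[: len(X_train)])
    # Phase 3: retrain on the training patterns alone and read off the generalization.
    net.fit(X_train, subject_final[: len(X_train)])
    return net.predict(X_test)          # compare with the subject's test-set profile
```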
CONCLUSION
The main goal of this study was to examine constraints imposed by humans (experimentally) and networks (linear programming) on learning of simple binary categorization
tasks. We characterize the constraints by analyzing responses to novel stimuli. We
showed that, like the humans, networks learn the deterministic categorization task and
find many, very different, individual solutions. Thus adaptive networks are better models
than statistical models and DNF rules. The constraints imposed by minimal networks,
however, appear to differ from those imposed by human learners in that there are only a
few solutions shared between human and adaptive networks. After a detailed analysis of
the human learning process we concluded that initial conditions may provide important
constraints. In fact we consider the set of initial conditions as powerful "hints" (Abu-Mostafa, 1989) which reduce the number of potential solutions without reducing the
complexity of the problem. We demonstrated the potential effectiveness of these constraints using a perturbation technique, which we call reversed learning, for finding
appropriate initial conditions.
Acknowledgements
This work was supported by research grants from the National Science Foundation
(BNS-86-18049) to Gordon Bower and Mark Gluck, and (IST-8511589) to M. Pavel, and
by a grant from NASA Ames (NCC 2-269) to Stanford University. We thank Steve Sloman and Bob Rehder for useful discussions and their comments on this draft.
References
Abu-Mostafa, Y. S. (1989). Learning by example with hints. NIPS.
Baum, E. B., & Haussler, D. (1989). What size net gives valid generalization? NIPS.
Cover, T. (June 1965). Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, EC-14, 3, 326-334.
Judd, J. S. (1987). Complexity of connectionist learning with various node functions. Presented at the First IEEE International Conference on Neural Networks, San Diego, June 1987.
Medin, D. L., Altom, M. W., Edelson, S. M., & Freko, D. (1982). Correlated symptoms and simulated medical classification. Journal of Experimental Psychology: Learning, Memory, & Cognition, 8(1), 37-50.
Pavel, M., Gluck, M. A., & Henkle, V. (1988). Generalization by humans and multi-layer adaptive networks. Submitted to the Tenth Annual Conference of the Cognitive Science Society, August 17-19, 1988.
Pavel, M., & Moore, R. T. (1988). Computational analysis of solutions of two-layer adaptive networks. APL Technical Report, Dept. of Psychology, Stanford University.
Valiant, L. G. (1984). A theory of the learnable. Comm. ACM, 27(11), 1134-1142.
Volper, D. J., & Hampson, S. E. (1987). Learning and using specific instances. Biological Cybernetics, 56.
| 106 |@word trial:2 hampson:2 briefly:1 version:1 proportion:5 simulation:1 pavel:14 solid:1 initial:17 configuration:2 comparing:1 activation:3 assigning:1 must:1 half:2 selected:2 indicative:1 rehder:1 provides:1 characterization:1 node:3 ames:1 preference:1 draft:1 lor:2 mathematical:1 loll:2 incorrect:4 consists:1 expected:2 rapid:1 behavior:3 examine:4 aaaa:1 multi:1 actual:1 moreover:1 maximizes:1 lowest:1 what:2 developed:2 finding:7 perfonn:3 unit:22 medical:2 grant:2 appear:1 analyzing:1 black:1 studied:2 examined:3 specifying:1 limited:3 patterned:1 medin:3 directed:1 testing:1 differs:1 procedure:6 area:1 empirical:1 convenience:1 undesirable:1 layered:1 imposed:5 deterministic:1 map:1 baum:2 demonstrated:2 starting:1 wit:1 simplicity:2 rule:11 haussler:2 continued:1 enabled:1 diego:1 suppose:1 programming:2 categorizers:1 assure:1 rumelhart:1 recognition:2 approximated:1 labeled:1 role:1 disjunctive:1 connected:1 adhered:1 predictable:1 comm:1 complexity:2 asked:1 trained:4 depend:1 solving:2 learner:1 basis:2 differently:3 represented:2 various:1 describe:2 dnf:3 artificial:2 choosing:1 neighborhood:2 whose:3 encoded:1 stanford:4 valued:2 larger:3 quite:2 solve:2 apparent:1 widely:1 favor:1 final:5 sequence:1 net:1 rapidly:2 detennined:1 achieve:1 description:1 cluster:4 categorization:17 object:8 develop:1 received:1 indicate:2 differ:2 correct:3 human:39 assign:1 generalization:38 preliminary:1 biological:1 hold:1 proximity:1 considered:2 normal:1 scope:1 predict:1 cognition:1 mostafa:2 abumostafa:1 consecutive:1 recognizer:1 currently:1 weighted:1 rather:1 ltv:1 sel:2 conjunction:1 focus:1 june:2 indicates:5 eliminate:1 hidden:15 relation:1 overall:1 classification:4 equal:2 comprise:1 never:2 sampling:1 x4:1 represents:3 broad:1 identical:1 report:1 np:1 stimulus:2 hint:3 few:5 wlt:1 gordon:1 randomly:3 connectionist:1 national:1 individual:10 phase:3 light:2 edge:2 capable:1 detenninistic:2 necessary:1 perfonnance:3 tree:1 desired:4 theoretical:2 minimal:21 instance:1 classify:2 modeling:8 boolean:1 cover:2 assignment:2 introducing:1 subset:1 examining:1 seventh:1 too:1 characterize:3 reported:1 considerably:1 international:1 randomized:1 probabilistic:1 connectivity:1 o10:1 worse:1 external:1 cognitive:1 account:1 potential:5 diversity:1 performed:1 analyze:2 characterizes:1 dichotomization:1 reached:3 slope:1 minimize:1 il:1 xor:10 responded:2 who:9 ensemble:1 identify:2 generalize:2 identification:1 basically:1 researcher:1 bob:1 cybernetics:1 classified:1 ncc:1 submitted:1 explain:1 definition:1 hamming:1 henkle:7 ask:1 enumerates:1 nasa:1 feed:3 steve:1 reflected:1 response:29 improved:1 symptom:2 until:2 su:1 nonlinear:1 hopefully:1 lack:1 behaved:1 effect:1 consisted:1 hence:1 moore:5 ll:1 during:2 criterion:7 complete:1 geometrical:1 novel:2 common:3 discussed:1 he:1 approximates:1 significant:1 imposing:1 ai:2 language:1 had:2 specification:1 stable:5 similarity:1 surface:2 closest:1 showed:1 inequality:1 binary:3 minimum:3 additional:4 somewhat:1 determine:3 dashed:2 ii:1 figlll:1 desirable:1 reduces:2 technical:1 equally:2 involving:1 basic:1 metric:1 represent:1 achieved:1 concluded:2 volper:2 comment:1 subject:38 effectiveness:1 call:3 constraining:1 feedforward:1 variety:1 psychology:3 architecture:4 computable:1 whether:1 expression:1 motivated:1 effort:1 useful:2 detailed:2 se:1 involve:1 repon:1 clear:1 dark:1 category:7 generate:1 exist:1 correctly:4 diagnosis:1 abu:2 group:1 ist:1 four:4 threshold:1 tenth:1 graph:3 sum:1 fourth:1 respond:2 powerful:1 
electronic:1 prefer:1 bound:2 layer:6 quadratic:1 annual:1 activity:2 adapted:1 constraint:20 unlimited:1 sake:1 bns:1 designated:3 combination:1 taken:1 equation:1 previously:1 eventually:1 eight:2 hierarchical:2 appropriate:2 alternative:1 responding:2 remaining:2 clustering:4 include:1 society:1 question:2 strategy:4 exclusive:2 gradient:1 sloman:1 reversed:6 distance:4 thank:1 simulated:1 discriminant:1 assuming:1 o1:4 modeled:1 providing:1 minimizing:4 implementation:1 perform:1 upper:1 descent:1 behave:1 defining:1 discovered:1 perturbation:2 august:1 nwnber:1 learned:3 nip:2 bar:2 pattern:30 biasing:1 appeared:1 memory:1 critical:2 perfonned:1 natural:3 difficulty:1 serially:1 overlap:1 turning:1 representing:1 gz:1 acknowledgement:1 apl:1 fully:1 foundation:1 sufficient:1 consistent:6 systematically:1 row:1 elsewhere:1 supported:1 parity:5 last:1 van:1 curve:2 judd:2 dimension:6 feedback:1 forward:3 adaptive:18 san:1 ec:1 transaction:1 investigating:2 conclude:2 why:1 learn:4 ca:1 did:3 main:1 profile:26 categorized:4 x1:1 deterministically:1 candidate:1 bower:1 specific:4 dol:1 learnable:1 exists:1 adding:1 valiant:2 ci:1 gluck:8 explore:1 likely:3 failed:1 monotonic:1 chance:1 acm:1 viewed:1 goal:2 leaming:1 shared:2 considerable:1 experimentally:1 typical:2 determined:2 reducing:1 called:2 experimental:3 mark:2 categorize:2 dept:1 tested:1 correlated:1 |
69 | 1,060 | Statistical Theory of Overtraining - Is
Cross-Validation Asymptotically
Effective?
S. Amari, N. Murata, K.-R. Müller*
Dept. of Math. Engineering and Inf. Physics, University of Tokyo
Hongo 7-3-1, Bunkyo-ku, Tokyo 113, Japan
M. Finke
Inst. f. Logik , University of Karlsruhe
76128 Karlsruhe, Germany
H. Yang
Lab . f. Inf. Representation, RIKEN,
Wakoshi, Saitama, 351-01, Japan
Abstract
A statistical theory for overtraining is proposed. The analysis
treats realizable stochastic neural networks, trained with Kullback-Leibler loss in the asymptotic case. It is shown that the asymptotic
gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validation stopping we answer the question: In what ratio
the examples should be divided into training and testing sets in order to obtain the optimum performance. In the non-asymptotic
region cross-validated early stopping always decreases the generalization error. Our large scale simulations done on a CM5 are in
nice agreement with our analytical findings.
1
Introduction
Training multilayer neural feed-forward networks, there is a folklore that the generalization error decreases in an early period of training, reaches the minimum and
then increases as training goes on, while the training error monotonically decreases.
Therefore, it is considered advantageous to stop training at an adequate time or to
use regularizers (Hecht-Nielsen [1989], Hassoun [1995], Wang et al. [1994], Poggio
and Girosi [1990], Moody [1992], LeCun et al. [1990] and others). To avoid overtraining, the following stopping rule has been proposed based on cross-validation:
*Permanent address: GMD FIRST, Rudower Chaussee 5, 12489 Berlin, Germany.
E-mail: Klaus@first.gmd.de
Divide all the available examples into two disjoint sets. One set is used for training. The other set is used for testing such that the behavior of the trained network
is evaluated by using the test examples and training is stopped at the point that
minimizes the testing error.
The present paper gives a mathematical analysis of the so-called overtraining phenomena to elucidate the folklore. We analyze the asymptotic case where the number
t of examples is very large. Our analysis treats 1) a realizable stochastic machine,
2) Kullback-Leibler loss (negative of the log likelihood loss), 3) asymptotic behavior
where the number t of examples is sufficiently large (compared with the number m
of parameters). We firstly show that asymptotically the gain of the generalization
error is small even if we could find the optimal stopping time. We then answer the
question: In what ratio, the examples should be divided into training and testing
sets in order to obtain the optimum performance. We give a definite answer to this
problem. When the number m of network parameters is large, the best strategy is
to use almost all t examples in the training set and to use only a fraction $1/\sqrt{2m}$ of the examples
in the testing set, e.g. when m = 100, this means that only 7% of the training
patterns are to be used in the set determining the point for early stopping.
Our analytic results were confirmed by large-scale computer simulations of three-layer continuous feedforward networks where the number m of modifiable parameters is m = 100. When t > 30m, the theory fits well with simulations, showing
cross-validation is not necessary, because the generalization error becomes worse
by using test examples to obtain an adequate stopping time. For an intermediate
range, where t < 30m overtraining occurs surely and the cross-validation stopping
improves the generalization ability strongly.
2
Stochastic feedforward networks
Let us consider a stochastic network which receives input vector x and emits
output vector y. The network includes a modifiable vector parameter w =
(WI,"', w m ) and is denoted by N(w). The input-output relation of the network N(w) is specified by the conditional probability p(Ylx; w). We assume (a)
that there exists a teacher network N(wo) which generates training examples
for the student N(w). And (b) that the Fisher information matrix Gij(w) =
E
[a~. logp(x, y; w) a~j logp(x, y; w)] exists, is non-degenerate and is smooth in
w, where E denotes the expectation with respect to p(x, Y; w) = q(x)p(Ylx; w).
The training set $D_t = \{(x_1, y_1), \ldots, (x_t, y_t)\}$ consists of t independent examples
generated by the distribution $p(x, y; w_0)$ of $N(w_0)$. The maximum likelihood estimator (m.l.e.) $\hat{w}_t$ is the one that maximizes the likelihood of producing $D_t$, or
equivalently minimizes the training error or empirical risk function
$$R_{\mathrm{train}}(w) = -\frac{1}{t}\sum_{i=1}^{t}\log p(x_i, y_i; w). \qquad (2.1)$$
The generalization error or risk function R(w) of network N(w) is the expectation
with respect to the true distribution,
$$R(w) = -E_0[\log p(x, y; w)] = H_0 + D(w_0 \,\|\, w) = H_0 + E_0\left[\log \frac{p(x, y; w_0)}{p(x, y; w)}\right], \qquad (2.2)$$
where $E_0$ denotes the expectation with respect to $p(x, y; w_0)$, $H_0$ is the entropy
of the teacher network and $D(w_0 \| w)$ is the Kullback-Leibler divergence from
probability distribution $p(x, y; w_0)$ to $p(x, y; w)$, or the divergence of N(w) from
$N(w_0)$. Hence, minimizing R(w) is equivalent to minimizing $D(w_0 \| w)$, and the
minimum is attained at $w = w_0$. The asymptotic theory of statistics proves that the
m.l.e. $\hat{w}_t$ is asymptotically subject to the normal distribution with mean $w_0$ and
variance $G^{-1}/t$, where $G^{-1}$ is the inverse of the Fisher information matrix G. We
can expand, for example, the risk $R(w) = H_0 + \frac{1}{2}(w - w_0)^T G(w_0)(w - w_0) + O(\|w - w_0\|^3)$
to obtain
$$\langle R_{\mathrm{gen}}(\hat{w}) \rangle = H_0 + \frac{m}{2t} + O\!\left(\frac{1}{t^{3/2}}\right), \qquad \langle R_{\mathrm{train}}(\hat{w}) \rangle = H_0 - \frac{m}{2t} + O\!\left(\frac{1}{t^{3/2}}\right), \qquad (2.3)$$
as asymptotic result for training and test error (see Murata et al. [1993] and Amari
and Murata [1990]). An extension of (2.3) including higher order corrections was
recently obtained by Müller et al. [1995].
Let us consider the gradient descent learning rule (Amari [1967], Rumelhart et al.
[1986], and many others), where the parameter w(n) at the nth step is modified by
$$w(n+1) = w(n) - \epsilon\,\frac{\partial R_{\mathrm{train}}(w(n))}{\partial w}, \qquad (2.4)$$
and where $\epsilon$ is a small positive constant. This is batch learning where all the
training examples are used for each iteration of modifying w(n).¹ The batch process
is deterministic and w(n) converges to $\hat{w}$, provided the initial w(0) is included in
its basin of attraction. For large n we can argue that w(n) is approaching $\hat{w}$
isotropically and the learning trajectory follows a linear ray towards $\hat{w}$ (for details
see Amari et al. [1995]).
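A minimal sketch of the batch rule (2.4); grad_Rtrain is an assumed callable returning the gradient of the empirical risk over the whole training set.

```python
# Sketch of batch gradient descent: every update uses the gradient of the empirical
# risk computed over the entire training set.
import numpy as np

def batch_gradient_descent(w0, grad_Rtrain, eps=0.05, n_steps=1000):
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        w = w - eps * grad_Rtrain(w)     # w(n+1) = w(n) - eps * dR_train/dw
    return w
```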
3
Virtual optimal stopping rule
During learning, as the parameter w(n) approaches $\hat{w}$, the generalization behavior
of network $N\{w(n)\}$ is evaluated by the sequence $R(n) = R\{w(n)\}$, $n = 1, 2, \ldots$
The folklore says that R(n) decreases in an early period of learning but increases
later. Therefore, there exists an optimal stopping time n at which R(n) is minimized. The stopping time $n_{\mathrm{opt}}$ is a random variable depending on $\hat{w}$ and the initial
w(0). We now evaluate the ensemble average of $\langle R(n_{\mathrm{opt}}) \rangle$.
The true $w_0$ and the m.l.e. $\hat{w}$ are in general different, and they are apart of order
$1/\sqrt{t}$. Let us compose a sphere S of which the center is at $\frac{1}{2}(w_0 + \hat{w})$ and which
passes through both $w_0$ and $\hat{w}$, as shown in Fig. 1b. Its diameter is denoted by d,
where $d^2 = |\hat{w} - w_0|^2$ and
$$E_0[d^2] = E_0[(\hat{w} - w_0)^T G(\hat{w} - w_0)] = \frac{1}{t}\,\mathrm{tr}(G^{-1}G) = \frac{m}{t}. \qquad (3.1)$$
Let A be the ray, that is, the trajectory w(n) starting at w(0) which is not in the
neighborhood of $w_0$. The optimal stopping point $w^*$ that minimizes
$$R(n) = H_0 + \tfrac{1}{2}\,|w(n) - w_0|^2 \qquad (3.2)$$
is given by the first intersection of the ray A and the sphere S.
Since w" is the point on A such that Wo - w" is orthogonal to A, it lies on the
sphere S (Fig.1b). When ray A' is approaching w from the opposite side ofwo (the
right-hand side in the figure), the first intersection point is w itself. In this case,
the optimal stopping never occurs until it converges to W.
Let () be the angle between the ray A and the diameter Wo - w of the sphere S.
We now calculate the distribution of () when the rays are isotropically distributed.
lWe can alternatively use on-line learning, studied by Amari [1967], Heskes and Kappen
[1991] , and recently by Barkai et al. [1994] and SolI a and Saard [1995].
Lemma 1. When ray A is approaching $\hat{w}$ from the side in which $w_0$ is included, the
probability density of $\theta$, $0 \le \theta \le \pi/2$, is given by
$$r(\theta) = \frac{1}{I_{m-2}}\,\sin^{m-2}\theta, \qquad \text{where} \qquad I_m = \int_0^{\pi/2} \sin^m\theta \, d\theta. \qquad (3.3)$$
The detailed proof of this lemma can be found in Amari et al. [1995]. Using the
density of $\theta$ given by Eq. (3.3), we arrive at the following theorem.
Theorem 1. The average generalization error at the optimal stopping point is
given by
$$\langle R(w^*) \rangle = H_0 + \frac{1}{2t}\left(m - \frac{1}{2}\right). \qquad (3.4)$$
Proof. When ray A is at angle $\theta$, $0 \le \theta < \pi/2$, the optimal stopping point $w^*$ is on
the sphere S. It is easily shown that $|w^* - w_0| = d\sin\theta$. This is the case where A
is from the same side as $w_0$ (from the left-hand side in Fig. 1b), which occurs with
probability 0.5, and the average of $(d\sin\theta)^2$ is
$$E_0[(d\sin\theta)^2] = E_0[d^2]\,\frac{1}{I_{m-2}}\int_0^{\pi/2}\sin^2\theta\,\sin^{m-2}\theta\,d\theta = \frac{m}{t}\,\frac{I_m}{I_{m-2}} = \frac{m}{t}\left(1 - \frac{1}{m}\right).$$
When $\theta$ is $\pi/2 \le \theta \le \pi$, that is, A approaches $\hat{w}$ from the opposite side, it does
not stop until it reaches $\hat{w}$, so that $|w^* - w_0|^2 = |\hat{w} - w_0|^2 = d^2$. This occurs with
probability 0.5. Hence, we proved the theorem.
The theorem shows that, if we could know the optimal stopping time $n_{\mathrm{opt}}$ for
each trajectory, the generalization error decreases only by $1/(4t)$, which has an effect of
decreasing the effective dimension by 1/2. This effect is negligible when m is large.
The optimal stopping time is of the order $\log t$. However, it is impossible to know
the optimal stopping time. If we stop learning at an estimated optimal time $\hat{n}_{\mathrm{opt}}$,
we have a small gain when the ray A is from the same side as $w_0$ but we have
some loss when ray A is from the opposite direction. This shows that the gain is
even smaller if we use a common stopping time $\bar{n}_{\mathrm{opt}}$ independent of $\hat{w}$ and w(0) as
proposed by Wang et al. [1994]. However, the point is that there is no direct
means to estimate $n_{\mathrm{opt}}$ or $\bar{n}_{\mathrm{opt}}$ other than, for example, cross-validation. Hence,
we analyze cross-validation stopping in the following.
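Theorem 1 can be checked numerically by simulating the geometry in the simplest coordinates (G = I, w_0 = 0); the sketch below is ours and only illustrates the statement.

```python
# Monte Carlo sketch: draw the m.l.e. from N(0, I/t), approach it along a random
# isotropic ray, stop at the point of the ray closest to w_0, and compare the mean
# of |w* - w_0|^2 with (m - 1/2)/t, as predicted by Theorem 1.
import numpy as np

def optimal_stopping_distance(m=100, t=10000, trials=50000, rng=None):
    rng = np.random.default_rng(rng)
    w_hat = rng.normal(scale=1.0 / np.sqrt(t), size=(trials, m))   # m.l.e. samples
    u = rng.normal(size=(trials, m))
    u /= np.linalg.norm(u, axis=1, keepdims=True)                  # isotropic ray directions
    s = np.maximum(0.0, -np.einsum("ij,ij->i", w_hat, u))          # distance along the ray
    w_star = w_hat + s[:, None] * u                                # virtual optimal stopping point
    return np.mean(np.sum(w_star**2, axis=1))

m, t = 100, 10000
print(optimal_stopping_distance(m, t), (m - 0.5) / t)   # the two values should nearly agree
```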
4
Optimal stopping by cross-validation
The present section studies asymptotically two fundamental problems: 1) Given t
examples , how many examples should be used in the training set and how many
in the testing set? 2) How much gain can one expect by the above cross-validated
stopping?
Let us divide the t examples into rt examples of the training set and r't examples of the
testing set, where r + r' = 1. Let $\hat{w}$ be the m.l.e. from the rt training examples, and let
$\tilde{w}$ be the m.l.e. from the other r't testing examples. Since the training examples
and testing examples are independent, $\hat{w}$ and $\tilde{w}$ are subject to independent normal distributions with mean $w_0$ and covariance matrices $G^{-1}/(rt)$ and $G^{-1}/(r't)$,
respectively.
Let us compose the triangle with vertices $w_0$, $\hat{w}$ and $\tilde{w}$. The trajectory A starting
at w(0) enters $\hat{w}$ linearly in the neighborhood. The point $w^*$ on the trajectory A
which minimizes the testing error is the point on A that is closest to $\tilde{w}$, since the
testing error defined by
$$R_{\mathrm{test}}(w) = \frac{1}{r't}\sum \{-\log p(x_i, y_i; w)\}, \qquad (4.1)$$
where the summation is taken over the r't testing examples, can be expanded as
$$R_{\mathrm{test}}(w) \cong H_0 - \tfrac{1}{2}\,|\tilde{w} - w_0|^2 + \tfrac{1}{2}\,|w - \tilde{w}|^2. \qquad (4.2)$$
Let S be the sphere centered at $(\hat{w} + \tilde{w})/2$ and passing through both $\hat{w}$ and $\tilde{w}$.
Its diameter is given by $d = |\hat{w} - \tilde{w}|$. Then, the optimal stopping point $w^*$ is
given by the intersection of the trajectory A and sphere S. When the trajectory
comes from the opposite side of $\tilde{w}$, it does not intersect S until it converges to $\hat{w}$,
so that the optimal point is $w^* = \hat{w}$ in this case. Omitting the detailed proof, the
generalization error of $w^*$ follows from the expectation
$$E[\,|w^* - w_0|^2\,] = \frac{m}{rt} - \frac{1}{2t}\left(\frac{1}{r} - \frac{1}{r'}\right).$$
Lemma 2. The average generalization error by the optimal cross-validated stopping
is
$$\langle R(w^*, r) \rangle = H_0 + \frac{2m - 1}{4rt} + \frac{1}{4r't}. \qquad (4.3)$$
We can then calculate the optimal division rate
$$r_{\mathrm{opt}} = 1 - \frac{\sqrt{2m - 1} - 1}{2(m - 1)}, \qquad \text{and} \qquad r_{\mathrm{opt}} = 1 - \frac{1}{\sqrt{2m}} \quad \text{(large m limit)} \qquad (4.4)$$
of examples, which minimizes the generalization error. So for large m only
$(1/\sqrt{2m}) \times 100\%$ of the examples should be used for testing and all others for training.
For example, when m = 100, this shows that 93% of the examples are to be used for
training and only 7% are to be kept for testing. From Eq. (4.4) we obtain as optimal
generalization error for large m
$$\langle R(w^*, r_{\mathrm{opt}}) \rangle = H_0 + \frac{m}{2t}\left(1 + \sqrt{\frac{2}{m}}\right). \qquad (4.5)$$
This shows that the generalization error asymptotically increases slightly by cross-validation compared with non-stopped learning which is using all the examples for
training.
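A small numeric sketch of Lemma 2 and Eq. (4.4): it evaluates the penalty above H_0 as a function of the split and compares the exact optimal split with the large-m approximation.

```python
# Numeric sketch of Eq. (4.3) and the optimal split of Eq. (4.4).
import numpy as np

def cv_penalty(r, m, t):
    """Generalization error above H_0 for training fraction r (Eq. 4.3)."""
    return (2 * m - 1) / (4 * r * t) + 1 / (4 * (1 - r) * t)

def r_opt_exact(m):
    return 1 - (np.sqrt(2 * m - 1) - 1) / (2 * (m - 1))

m, t = 100, 10000
r_star = r_opt_exact(m)
print("exact r_opt:", r_star, " large-m approx:", 1 - 1 / np.sqrt(2 * m))
print("penalty at r_opt:", cv_penalty(r_star, m, t),
      " no-stopping penalty m/(2t):", m / (2 * t))
```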
5
Simulations
We use standard feed-forward classifier networks with N inputs, H sigmoid hidden
units and M softmax outputs (classes). The output activity $O_l$ of the $l$-th output
unit is calculated via the softmax squashing function
$$p(y = C_l \mid x, w) = O_l = \frac{\exp(h_l)}{1 + \sum_k \exp(h_k)}, \qquad l = 1, \ldots, M,$$
where $h_l = \sum_j w^O_{lj} s_j - \vartheta_l$ is the local field potential. Each output $O_l$ codes the a posteriori probability of being in class $C_l$; $O_0$ denotes a zero class for normalization
purposes. The m network parameters consist of biases $\vartheta$ and weights w. When x
is input, the activity of the j-th hidden unit is
$$s_j = \left[1 + \exp\left(-\sum_{k=1}^{N} w^H_{jk} x_k - \vartheta^H_j\right)\right]^{-1}, \qquad j = 1, \ldots, H.$$
The input layer is connected to the hidden layer via $w^H$, the hidden layer is connected to the output layer via $w^O$, but no short-cut connections are present. Although the network is completely deterministic, it is constructed to approximate
class conditional probabilities (Finke and Müller [1994]).
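To make the architecture and the parameter count m = (N+1)H + (H+1)M concrete, here is a minimal NumPy sketch of the forward pass (our own illustration; weight shapes and variable names are assumptions, not the authors' code):

```python
import numpy as np

N, H, M = 8, 8, 4                       # the 8-8-4 network used in the simulations
m = (N + 1) * H + (H + 1) * M           # number of modifiable parameters: 108

def forward(x, W_H, theta_H, W_O, theta_O):
    """Forward pass of the N-H-M softmax classifier described above."""
    s = 1.0 / (1.0 + np.exp(-(W_H @ x) - theta_H))   # sigmoid hidden activities s_j
    h = W_O @ s - theta_O                            # local field potentials h_l
    denom = 1.0 + np.sum(np.exp(h))                  # the "1 +" accounts for the zero class
    return np.exp(h) / denom                         # a posteriori probabilities O_l

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=N)
print(m, forward(x, rng.normal(size=(H, N)), rng.normal(size=H),
                 rng.normal(size=(M, H)), rng.normal(size=M)))
```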
The examples {(x_1, y_1), ..., (x_t, y_t)} are produced randomly, by drawing x_i, i = 1, ..., t, from a uniform distribution independently and producing the labels y_i stochastically from the teacher classifier. Conjugate gradient learning with line search on the empirical risk function Eq. (2.1) is applied, starting from some random initial vector. The generalization ability is measured using Eq. (2.2) on a large test set (50000 patterns). Note that we use Eq. (2.1) on the cross-validation set, because only the empirical risk is available on the cross-validation set in a practical situation. We compare the generalization error for the settings: exhaustive training (no stopping), early stopping (controlled by the cross-validation set) and optimal stopping (controlled by the large test set). The simulations were performed on a parallel computer (CM5). Every curve in the figures takes about 8 h of computing time on a 128 or 256 partition of the CM5, i.e., we perform 128-256 parallel trials. This setting enabled us to do extensive statistics (cf. Amari et al. [1995]).
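The comparison protocol can be summarized by the following schematic Python loop (our sketch; `train_step`, `empirical_risk` and the data sets are placeholder callables, not code from the paper):

```python
def early_stopping_run(train_step, empirical_risk, w0, train_set, cv_set, max_epochs):
    """Train on train_set, track the cross-validation risk, and keep the best weights."""
    w, best_w = w0, w0
    best_cv = empirical_risk(w0, cv_set)      # Eq. (2.1) evaluated on the CV set
    for epoch in range(max_epochs):
        w = train_step(w, train_set)          # e.g. one conjugate-gradient step with line search
        cv = empirical_risk(w, cv_set)
        if cv < best_cv:                      # cross-validated early stopping criterion
            best_cv, best_w = cv, w
    return w, best_w                          # exhaustive-training and early-stopping weights
```

The first returned weight vector corresponds to "no stopping", while `best_w` is the early-stopping solution whose quality is then judged on the large independent test set.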
Fig. 1a shows the results of simulations, where N = 8, H = 8, M = 4, so that the number m of modifiable parameters is m = (N + 1)H + (H + 1)M = 108. We observe clearly that saturated learning without early stopping is the best in the asymptotic range of t > 30m, a range which, due to the limited size of the data sets, is often inaccessible in practical applications. Cross-validated early stopping does not improve the generalization error here, so that no overtraining is observed on the average in this range. In the asymptotic area (Figure 1) we observe that the smaller the percentage of examples that is used to determine the point of early stopping, the better the generalization ability. When we use cross-validation, the optimal size of the test set is about 7% of all the examples, as the theory predicts.
Clearly, early stopping does improve the generalization ability to a large extent in an intermediate range for t < 30m (see Müller et al. [1995]). Note that our theory also gives a good estimate of the optimal size of the early stopping set in this intermediate range.
[Figure 1 appears here: panel (a) plots the generalization error against 1/t for early stopping sets of different sizes r' (opt., 20%, 42%, ..., and no stopping); panel (b) shows the geometrical construction.]
Figure 1: (a) R(w) plotted as a function of 1/t for different sizes r' of the early stopping set for an 8-8-4 classifier network. opt. denotes the use of a very large cross-validation set (50000) and no stopping addresses the case where 100% of the training set is used for exhaustive learning. (b) Geometrical picture to determine the optimal stopping point w*.
6 Conclusion
We proposed an asymptotic theory for overtraining. The analysis treats realizable stochastic neural networks, trained with Kullback-Leibler loss.
It is demonstrated both theoretically and in simulations that asymptotically the gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. For cross-validation stopping we showed for large m that optimally only a fraction r'_opt = 1/\sqrt{2m} of the examples should be used to determine the point of early stopping in order to obtain the best performance. For example, if m = 100 this corresponds to using 93% of the t training patterns for training and only 7% for testing where to stop. Yet, even if we use r_opt for cross-validated stopping, the generalization error is always increased compared to exhaustive training. Nevertheless, note that this asymptotic range is often inaccessible in practical applications due to the limited size of the data sets.
In the non-asymptotic region simulations show that cross-validated early stopping always helps to enhance the performance since it decreases the generalization error. In this intermediate range our theory also gives a good estimate of the optimal size of the early stopping set. In future work we will consider higher order correction terms to extend our theory to give also a quantitative description of the non-asymptotic region.
Acknowledgements: We would like to thank Y. LeCun, S. Bos and K. Schulten for valuable discussions. K.-R. M. thanks K. Schulten for warm hospitality during his stay at the Beckman Inst. in Urbana, Illinois. We acknowledge computing time on the CM5 in Urbana (NCSA) and in Bonn, supported by the National Institutes of Health (P41RRO 5969) and the EC S & T fellowship (FTJ3-004, K.-R. M.).
References
Amari, S. [1967], IEEE Trans., EC-16, 299-307.
Amari, S., Murata, N. [1993], Neural Computation 5, 140.
Amari, S., Murata, N., Müller, K.-R., Finke, M., Yang, H. [1995], Statistical Theory of Overtraining and Overfitting, Univ. of Tokyo Tech. Report 95-06, submitted.
Barkai, N., Seung, H. S. and Sompolinsky, H. [1994], On-line learning of dichotomies, NIPS'94.
Finke, M. and Müller, K.-R. [1994], in Proc. of the 1993 Connectionist Models Summer School, Mozer, M., Smolensky, P., Touretzky, D. S., Elman, J. L. and Weigend, A. S. (Eds.), Hillsdale, NJ: Erlbaum Associates, 324.
Hassoun, M. H. [1995], Fundamentals of Artificial Neural Networks, MIT Press.
Hecht-Nielsen, R. [1989], Neurocomputing, Addison-Wesley.
Heskes, T. and Kappen, B. [1991], Physical Review, A44, 2718-2762.
LeCun, Y., Denker, J. S., Solla, S. [1990], Optimal brain damage, NIPS'89.
Moody, J. E. [1992], The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems, NIPS 4.
Murata, N., Yoshizawa, S., Amari, S. [1994], IEEE Trans., NN5, 865-872.
Müller, K.-R., Finke, M., Murata, N., Schulten, K. and Amari, S. [1995], A numerical study on learning curves in stochastic multilayer feed-forward networks, Univ. of Tokyo Tech. Report METR 95-03 and Neural Computation, in press.
Poggio, T. and Girosi, F. [1990], Science, 247, 978-982.
Rissanen, J. [1986], Ann. Statist., 14, 1080-1100.
Rumelhart, D., Hinton, G. E., Williams, R. J. [1986], in PDP, Vol. 1, MIT Press.
Saad, D., Solla, S. A. [1995], PRL, 74, 4337 and Phys. Rev. E, 52, 4225.
Wang, Ch., Venkatesh, S. S., Judd, J. S. [1994], Optimal stopping and effective machine complexity in learning, to appear (revised and extended version of NIPS'93).
| 1060 |@word trial:1 version:1 advantageous:1 d2:1 simulation:8 covariance:1 tr:2 kappen:2 initial:3 rapt:1 comparing:1 yet:1 numerical:1 partition:1 girosi:2 analytic:1 xk:1 short:1 math:1 firstly:1 mathematical:1 constructed:1 direct:1 consists:1 compose:2 ray:10 theoretically:1 ofwo:1 behavior:3 elman:1 nor:1 brain:1 decreasing:1 considering:1 becomes:1 provided:1 maximizes:1 what:2 minimizes:5 finding:1 nj:1 quantitative:1 every:1 classifier:3 unit:3 appear:1 producing:2 positive:1 engineering:1 local:1 treat:3 limit:1 ware:2 yd:1 studied:1 limited:2 range:8 practical:3 lecun:3 testing:15 definite:1 area:1 intersect:1 empirical:3 risk:5 impossible:1 equivalent:1 deterministic:2 demonstrated:1 yt:2 center:1 go:1 williams:1 starting:3 independently:1 rule:3 estimator:1 attraction:1 his:1 enabled:1 elucidate:1 pt:1 agreement:1 associate:1 rumelhart:2 cut:1 predicts:1 observed:1 wang:3 enters:1 calculate:3 region:3 connected:2 solla:2 decrease:6 valuable:1 mozer:1 complexity:1 seung:1 trained:3 rudower:1 division:1 threelayer:1 completely:1 triangle:1 easily:1 riken:1 univ:2 effective:7 artificial:1 dichotomy:1 klaus:1 neighborhood:2 exhaustive:3 say:1 drawing:1 amari:15 wg:1 ability:4 statistic:2 gi:1 itself:1 sequence:1 analytical:1 degenerate:1 description:1 crossvalidation:1 optimum:2 converges:3 help:1 depending:1 measured:1 school:1 eq:6 come:1 direction:1 tokyo:4 modifying:1 stochastic:6 centered:1 wol:3 virtual:1 hillsdale:1 generalization:23 opt:2 summation:1 extension:1 correction:2 sufficiently:1 considered:1 normal:2 exp:3 rgen:1 early:16 purpose:1 proc:1 beckman:1 label:1 iw:7 odo:2 mit:2 clearly:2 hospitality:1 always:3 modified:1 rather:1 avoid:1 validated:6 likelihood:3 tech:2 realizable:3 wf:1 inst:2 bos:1 stopping:44 lj:1 hidden:4 relation:1 expand:1 germany:2 denoted:2 respecti:1 softmax:2 field:1 never:1 lit:2 future:1 minimized:1 others:3 report:2 connectionist:1 randomly:1 divergence:2 national:1 neurocomputing:1 saturated:1 regularizers:1 solo:1 necessary:1 poggio:2 orthogonal:1 vely:1 iv:1 divide:2 plotted:1 stopped:2 lwe:1 increased:1 linesearch:1 finke:8 logp:5 tg:1 oflearning:1 vertex:1 saitama:1 uniform:1 kullbackleibler:1 optimally:1 answer:3 teacher:2 thanks:1 density:2 fundamental:2 stay:1 physic:1 j2m:4 enhance:1 moody:2 jo:1 w1:1 worse:1 stochastically:1 japan:2 potential:1 de:1 student:1 includes:1 permanent:1 vi:1 later:1 performed:1 lab:1 analyze:2 parallel:2 om:1 variance:1 murata:10 ensemble:1 ofthe:1 produced:1 trajectory:7 confirmed:1 submitted:1 overtraining:11 reach:2 phys:1 touretzky:1 ed:1 adresses:1 yoshizawa:1 proof:3 gain:6 stop:4 emits:1 proved:1 improves:1 nielsen:2 feed:3 wesley:1 attained:1 dt:1 higher:2 done:1 evaluated:1 strongly:1 until:3 hand:2 receives:1 nonlinear:1 karlsruhe:2 barkai:2 effect:2 omitting:1 true:2 hence:3 regularization:1 leibler:3 sin:1 during:2 geometrical:1 recently:2 common:1 sigmoid:1 physical:1 extend:1 ai:2 heskes:2 illinois:1 access:2 closest:1 showed:1 inf:2 apart:1 vt:1 yi:4 muller:5 minimum:2 eo:7 surely:1 determine:3 period:2 monotonically:1 ii:3 smooth:1 cross:23 sphere:7 dept:1 divided:2 hecht:2 controlled:2 multilayer:2 expectation:4 iteration:1 normalization:1 fellowship:1 chaussee:1 a44:1 saad:1 pass:1 subject:2 yang:5 prl:1 feedforward:2 intermediate:4 wn:1 fit:1 approaching:3 opposite:4 det:1 wo:31 miiller:3 bunkyo:1 passing:1 adequate:1 detailed:1 ylx:2 statist:1 gmd:2 diameter:3 cit:1 percentage:1 estimated:1 disjoint:1 modifiable:3 logik:1 vol:1 nevertheless:1 rissanen:1 neither:1 kept:1 
sompolinski:1 asymptotically:9 wand:2 weigend:1 inverse:1 angle:2 arrive:1 hassoun:2 almost:1 i9:1 layer:4 summer:1 activity:2 generates:1 bonn:1 expanded:1 conjugate:1 logt:1 smaller:2 slightly:1 wi:2 rev:1 taken:1 know:2 addison:1 available:2 denker:1 observe:2 batch:2 rtrain:2 ho:10 n9:1 denotes:4 cf:1 folklore:3 prof:1 question:2 occurs:4 strategy:1 damage:1 rt:4 gradient:2 thank:1 berlin:1 mail:1 argue:1 extent:1 code:1 ratio:2 minimizing:2 equivalently:1 negative:1 perform:3 revised:1 urbana:2 acknowledge:1 descent:1 situation:1 hinton:1 extended:1 pdp:1 venkatesh:1 specified:1 extensive:1 connection:1 ropt:3 nip:4 trans:2 address:1 pattern:3 reo:1 smolensky:1 including:1 warm:1 nth:1 improve:2 picture:1 health:1 ailed:1 nice:1 review:1 acknowledgement:1 determining:1 asymptotic:12 loss:5 expect:1 aposteriori:1 validation:17 basin:1 squashing:1 supported:1 side:8 bias:1 unaccessible:2 institute:1 distributed:1 curve:2 dimension:1 calculated:1 judd:1 forward:3 testset:1 ec:2 sj:2 approximate:1 kullback:3 hongo:1 overfitting:1 nopt:4 xi:3 alternatively:1 continuous:1 ku:1 moller:1 linearly:1 fig:4 rtr:1 schulten:3 xl:1 lie:1 rtest:2 lw:1 theorem:4 xt:2 showing:1 woi:1 exists:3 consist:1 entropy:1 intersection:3 isotropically:2 ch:1 corresponds:1 conditional:2 lth:1 ann:1 towards:1 fisher:2 included:2 generalisation:1 ncsa:1 wt:1 lemma:3 called:1 gij:1 la:1 evaluate:1 phenomenon:1 |
70 | 1,061 | Stable Dynamic Parameter Adaptation
Stefan M. Rüger
Fachbereich Informatik, Technische Universitat Berlin
Sekr. FR 5-9, Franklinstr. 28/29
10587 Berlin, Germany
async@cs.tu-berlin.de
Abstract
A stability criterion for dynamic parameter adaptation is given. In
the case of the learning rate of backpropagation, a class of stable
algorithms is presented and studied, including a convergence proof.
1 INTRODUCTION
All but a few learning algorithms employ one or more parameters that control the
quality of learning. Backpropagation has its learning rate and momentum parameter; Boltzmann learning uses a simulated annealing schedule; Kohonen learning
a learning rate and a decay parameter; genetic algorithms probabilities, etc. The
investigator always has to set the parameters to specific values when trying to solve
a certain problem. Traditionally, the metaproblem of adjusting the parameters is
solved by relying on a set of well-tested values of other problems or an intensive
search for good parameter regions by restarting the experiment with different values. In this situation, a great deal of expertise and/or time for experiment design
is required (as well as a huge amount of computing time).
1.1 DYNAMIC PARAMETER ADAPTATION
In order to achieve dynamic parameter adaptation, it is necessary to modify the
learning algorithm under consideration: evaluate the performance of the parameters
in use from time to time, compare them with the performance of nearby values, and
(if necessary) change the parameter setting on the fly. This requires that there
exist a measure of the quality of a parameter setting, called performance, with the
following properties: the performance depends continuously on the parameter set
under consideration, and it is possible to evaluate the performance locally, i. e., at
a certain point within an inner loop of the algorithm (as opposed to once only at
the end of the algorithm). This is what dynamic parameter adaptation is all about.
Dynamic parameter adaptation has several virtues. It is automatic; and there is no
need for an extra schedule to find what parameters suit the problem best. When
the notion of what the good values of a parameter set are changes during learning,
dynamic parameter adaptation keeps track of these changes.
1.2 EXAMPLE: LEARNING RATE OF BACKPROPAGATION
Backpropagation is an algorithm that implements gradient descent in an error function E: \mathbb{R}^n \to \mathbb{R}. Given w^0 \in \mathbb{R}^n and a fixed \eta > 0, the iteration rule is w^{t+1} = w^t - \eta \nabla E(w^t). The learning rate \eta is a local parameter in the sense that at different stages of the algorithm different learning rates would be optimal. This property and the following theorem make \eta especially interesting.
Trade-off theorem for backpropagation. Let E: \mathbb{R}^n \to \mathbb{R} be the error function of a neural net with a regular minimum at w^* \in \mathbb{R}^n, i.e., E is expansible into a Taylor series about w^* with vanishing gradient \nabla E(w^*) and positive definite Hessian matrix H(w^*). Let \lambda denote the largest eigenvalue of H(w^*). Then, in general, backpropagation with a fixed learning rate \eta > 2/\lambda cannot converge to w^*.
Proof. Let U be an orthogonal matrix that diagonalizes H(w^*), i.e., D := U^T H(w^*) U is diagonal. Using the coordinate transformation x = U^T (w - w^*) and Taylor expansion, E(w) - E(w^*) can be approximated by F(x) := x^T D x / 2. Since gradient descent does not refer to the coordinate system, the asymptotic behavior of backpropagation for E near w^* is the same as for F near 0. In the latter case, backpropagation calculates the weight components x^t_i = x^0_i (1 - D_{ii}\eta)^t at time step t. The diagonal elements D_{ii} are the eigenvalues of H(w^*); convergence for all geometric sequences t \mapsto x^t_i thus requires \eta < 2/\lambda.
The trade-off theorem states that, given \eta, a large class of minima cannot be found, namely, those whose largest eigenvalue of the corresponding Hessian matrix is larger than 2/\eta. Fewer minima might be overlooked by using a smaller \eta, but then the algorithm becomes intolerably slow. Dynamic learning-rate adaptation is urgently needed for backpropagation!
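The threshold 2/\lambda is easy to observe numerically; the following small NumPy sketch (ours) runs gradient descent on a two-dimensional quadratic F(x) = x^T D x / 2 and shows the blow-up once \eta exceeds 2/\lambda_max = 0.2:

```python
import numpy as np

# Gradient descent on F(x) = x^T D x / 2 with eigenvalues D = diag(1, 10).
D = np.array([1.0, 10.0])
x0 = np.array([1.0, 1.0])

def run(eta, steps=100):
    x = x0.copy()
    for _ in range(steps):
        x = x - eta * D * x          # gradient of F is D x, componentwise
    return 0.5 * np.sum(D * x**2)    # final error F(x)

for eta in (0.05, 0.19, 0.21):
    print(f"eta={eta}: final error {run(eta):.3e}")
# eta = 0.21 exceeds 2/lambda_max = 0.2 and the error diverges.
```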
2 STABLE DYNAMIC PARAMETER ADAPTATION
Transforming the equation for gradient descent, w^{t+1} = w^t - \eta \nabla E(w^t), into a differential equation, one arrives at \partial w_t / \partial t = -\eta \nabla E(w_t). Gradient descent with constant step size \eta can then be viewed as Euler's method for solving the differential equation. One serious drawback of Euler's method is that it is unstable: each finite step leaves the trajectory of a solution without trying to get back to it. Virtually any other differential-equation solver surpasses Euler's method, and there are even some featuring dynamic parameter adaptation [5].
However, in the context of function minimization, this notion of stability ("do not
drift away too far from a trajectory") would appear to be too strong. Indeed,
differential-equation solvers put much effort into a good estimation of points that
are as close as possible to the trajectory under consideration. What is really needed
for minimization is asymptotic stability: ensuring that the performance of the parameter set does not decrease at the end of learning. This weaker stability criterion
allows for greedy steps in the initial phase of learning.
There are several successful examples of dynamic learning-rate adaptation for backpropagation: Newton and quasi-Newton methods [2] as an adaptive \eta-tensor; individual learning rates for the weights [3, 8]; conjugate gradient as a one-dimensional \eta-estimation [4]; or straightforward \eta-adaptation [1, 7].
A particularly good example of dynamic parameter adaptation was proposed by Salomon [6, 7]: let \zeta > 1; at every step t of the backpropagation algorithm test two values for \eta, a somewhat smaller one, \eta_t/\zeta, and a somewhat larger one, \eta_t\zeta; use as \eta_{t+1} the value with the better performance, i.e., the smaller error. The setting of the new parameter \zeta proves to be uncritical (all values work, especially sensible ones being those between 1.2 and 2.1). This method outperforms many other gradient-based algorithms, but it is nonetheless unstable.
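In code, Salomon's rule is a single comparison per step; the following sketch (ours; `grad_E` and `E` are placeholder callables) shows the idea before any stabilization is added:

```python
def salomon_step(w, eta, grad_E, E, zeta=1.8):
    """One step of the learning-rate doubling/halving rule described above (our sketch)."""
    g = grad_E(w)
    candidates = (eta / zeta, eta * zeta)
    errors = [E(w - e * g) for e in candidates]
    # keep whichever learning rate gives the lower error
    eta_new = candidates[0] if errors[0] < errors[1] else candidates[1]
    return w - eta_new * g, eta_new
```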
Figure 1: Unstable Parameter Adaptation.
The problem arises from a rapidly changing length and direction of the gradient, which can result in a huge leap away from a minimum, although the latter may have been almost reached. Figure 1a shows the niveau lines of a simple quadratic error function E: \mathbb{R}^2 \to \mathbb{R} along with the weight vectors w^0, w^1, \ldots (bold dots) resulting from the above algorithm. This effect was probably the reason why Salomon suggested using the normalized gradient instead of the gradient, thus getting rid of the changes in the length of the gradient. Although this works much better, Figure 1b shows the instability of this algorithm due to the change in the gradient's direction.
There is enough evidence that these algorithms converge for a purely quadratic
error function [6, 7]. Why bother with stability? One would like to prove that an
algorithm asymptotically finds the minimum, rather than occasionally leaping far
away from it and thus leaving the region where the quadratic Hessian term of a
globally nonquadratic error function dominates.
3 A CLASS OF STABLE ALGORITHMS
In this section, a class of algorithms is derived from the above ones by adding
stability. This class provides not only a proof of asymptotic convergence, but also
a significant improvement in speed.
Let E: \mathbb{R}^n \to \mathbb{R} be an error function of a neural net with random weight vector w^0 \in \mathbb{R}^n. Let \zeta > 1, \eta_0 > 0, 0 < c \le 1, and 0 < a \le 1 \le b. At step t of the algorithm, choose a vector g^t restricted only by the conditions g^t \nabla E(w^t) / (|g^t|\,|\nabla E(w^t)|) \ge c and that either 1/|g^t| \in [a, b] holds for all t or |\nabla E(w^t)|/|g^t| \in [a, b] holds for all t, i.e., the vectors g^t have a minimal positive projection onto the gradient and either have a uniformly bounded length or are uniformly bounded by the length of the gradient. Note that this is always possible by choosing g^t as the gradient or the normalized gradient.
Let e: \eta \mapsto E(w^t - \eta g^t) denote a one-dimensional error function given by E, w^t and g^t. Repeat (until the gradient vanishes or an upper limit of t or a lower limit E_min of E is reached) the iteration w^{t+1} = w^t - \eta_{t+1} g^t with

\eta_{t+1} = \eta^* := \frac{\eta_t\zeta/2}{1 + (e(\eta_t\zeta) - e(0)) / (\eta_t\zeta\, g^t \nabla E(w^t))}   if e(0) < e(\eta_t\zeta),
\eta_{t+1} = \eta_t/\zeta   if e(\eta_t/\zeta) \le e(\eta_t\zeta) \le e(0),
\eta_{t+1} = \eta_t\zeta   otherwise.   (1)
The first case for \eta_{t+1} is a stabilizing term \eta^*, which definitely decreases the error when the error surface is quadratic, i.e., near a minimum. \eta^* is put into effect when the error e(\eta_t\zeta), which would occur in the next step if \eta_{t+1} = \eta_t\zeta were chosen, exceeds the error e(0) produced by the present weight vector w^t. By construction, \eta^* results in a value less than \eta_t\zeta/2 if e(\eta_t\zeta) > e(0); hence, given \zeta < 2, the learning rate is decreased as expected, no matter what E looks like. Typically (if the values for \zeta are not extremely high), the other two cases apply, where \eta_t\zeta and \eta_t/\zeta compete for a lower error.
Note that, instead of gradient descent, this class of algorithms proposes a "g^t descent," and the vectors g^t may differ from the gradient. A particular algorithm is given by a specification of how to choose g^t.
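The case distinction in iteration (1) can be read directly as a learning-rate update; the following Python sketch (ours; `E` and `grad_dot` are supplied by the caller) mirrors the three branches:

```python
def stable_step(w, eta, g, E, grad_dot, zeta=1.8):
    """One update of iteration (1): eta-halving/doubling plus the stabilizing term.

    g is the chosen descent vector (e.g. the gradient or the normalized gradient),
    grad_dot = g . grad E(w) > 0 is its projection onto the gradient.
    This is our own sketch of the rule, not code from the paper.
    """
    e0 = E(w)
    e_up, e_down = E(w - eta * zeta * g), E(w - (eta / zeta) * g)
    if e0 < e_up:
        # stabilizing term eta*: minimizer of the 1-D quadratic through e(0), e'(0), e(eta*zeta)
        eta_new = (eta * zeta / 2) / (1 + (e_up - e0) / (eta * zeta * grad_dot))
    elif e_down <= e_up <= e0:
        eta_new = eta / zeta
    else:
        eta_new = eta * zeta
    return w - eta_new * g, eta_new
```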
4 PROOF OF ASYMPTOTIC CONVERGENCE
Asymptotic convergence. Let E: w \mapsto \sum_{i=1}^{n} \lambda_i w_i^2 / 2 with \lambda_i > 0. For all \zeta > 1, 0 < c \le 1, 0 < a \le 1 \le b, \eta_0 > 0, and w^0 \in \mathbb{R}^n, every algorithm from Section 3 produces a sequence t \mapsto w^t that converges to the minimum 0 of E with an at least exponential decay of t \mapsto E(w^t).
Proof. This statement follows if a constant q < 1 exists with E(w^{t+1}) \le q E(w^t) for all t. Then, \lim_{t\to\infty} w^t = 0, since w \mapsto \sqrt{E(w)} is a norm in \mathbb{R}^n.
Fix a w^t, \eta_t, and a g^t according to the premise. Since E is a positive definite quadratic form, e: \eta \mapsto E(w^t - \eta g^t) is a one-dimensional quadratic function with a minimum at, say, \eta^*. Note that e(0) = E(w^t) and e(\eta_{t+1}) = E(w^{t+1}). e is completely determined by e(0), e'(0) = -g^t \nabla E(w^t), \eta_t\zeta and e(\eta_t\zeta). Omitting the algebra, it follows that \eta^* can be identified with the stabilizing term of (1).
[Figure 2 appears here: the one-dimensional quadratic e(\eta) with the levels e(0), q e(0), (1 - q_\eta)e(0) + q_\eta e(\eta^*), q_e e(0), e(\eta^*) and e(\eta_{t+1}) marked.]
Figure 2: Steps in Estimating a Bound q for the Improvement of E.
If e(\eta_t\zeta) > e(0), by (1) \eta_{t+1} will be set to \eta^*; hence, w^{t+1} has the smallest possible error e(\eta^*) along the line given by g^t. Otherwise, the three values 0, \eta_t/\zeta, and \eta_t\zeta cannot have the same error e, as e is quadratic; e(\eta_t\zeta) or e(\eta_t/\zeta) must be less than e(0), and the argument with the better performance is used as \eta_{t+1}. The sequence t \mapsto E(w^t) is strictly decreasing; hence, a q \le 1 exists. The rest of the proof shows the existence of a q < 1.
Assume there are two constants 0 < q_e, q_\eta < 1 with
e(\eta^*) \le q_e\, e(0),   (2)
\eta_{t+1}/\eta^* \in [q_\eta,\, 2 - q_\eta].   (3)
Let \eta_{t+1} \ge \eta^*; using first the convexity of e, then (2), and (3), one obtains
e(\eta_{t+1}) = e\!\left(\frac{\eta_{t+1}-\eta^*}{\eta^*}\, 2\eta^* + \left(1 - \frac{\eta_{t+1}-\eta^*}{\eta^*}\right)\eta^*\right)
\le \frac{\eta_{t+1}-\eta^*}{\eta^*}\, e(0) + \left(1 - \frac{\eta_{t+1}-\eta^*}{\eta^*}\right) e(\eta^*)
\le (1 - q_\eta)\, e(0) + q_\eta\, e(\eta^*)
\le (1 - q_\eta(1 - q_e))\, e(0).
Figure 2 shows how the estimations work. The symmetric case 0 < \eta_{t+1} \le \eta^* has the same result E(w^{t+1}) \le q E(w^t) with q := 1 - q_\eta(1 - q_e) < 1.
Let \lambda_< := \min\{\lambda_i\} and \lambda_> := \max\{\lambda_i\}. A straightforward estimation for q_e yields
q_e := 1 - c^2 \frac{\lambda_<}{\lambda_>} < 1.
Note that \eta^* depends on w^t and g^t. A careful analysis of the recursive dependence of \eta_{t+1}/\eta^*(w^t, g^t) on \eta_t/\eta^*(w^{t-1}, g^{t-1}) uncovers an estimation q_\eta > 0 in terms of \zeta, c, a, b, \eta_0, \lambda_<, \lambda_> and E(w^0), which completes the proof.
.
?
5 NON-GRADIENT DIRECTIONS CAN IMPROVE CONVERGENCE
It is well known that the sign-changed gradient of a function is not necessarily the
best direction to look for a minimum. The momentum term of a modified backpropagation version uses old gradient directions; Newton or quasi-Newton methods
explicitly or implicitly exploit second-order derivatives for a change of direction;
another choice of direction is given by conjugate gradient methods [5].
The algorithms from Section 3 allow almost any direction, as long as it is not nearly
perpendicular to the gradient. Since they estimate a good step size, these algorithms
can be regarded as a sort of "trial-and-error" line search without bothering to find
an exact minimum in the given direction, but utilizing any progress made so far.
One could incorporate the Polak-Ribiere rule, d^{t+1} = \nabla E(w^{t+1}) + a\beta d^t, for conjugate directions with d^0 = \nabla E(w^0), a = 1, and
\beta = \frac{\left(\nabla E(w^{t+1}) - \nabla E(w^t)\right) \nabla E(w^{t+1})}{\left(\nabla E(w^t)\right)^2}
to propose vectors g^t := d^t/|d^t| for an explicit algorithm from Section 3. As in the conjugate gradient method, one should reset the direction d^t after each n (the number of weights) updates to the gradient direction. Another reason for resetting the direction arises when g^t does not have the minimal positive projection c onto the normalized gradient.
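For the a = 1 variant, the direction update amounts to the following sketch (ours; the reset criterion is simplified compared to the text, which also resets every n updates):

```python
import numpy as np

def polak_ribiere_direction(grad_new, grad_old, d_old, a=1.0):
    """Polak-Ribiere update of the descent direction; a = 0 recovers the plain gradient.

    Returns the new direction d and the unit vector g = d/|d| that is fed
    into the stable step-size rule. Our own sketch, not the paper's code.
    """
    beta = np.dot(grad_new - grad_old, grad_new) / np.dot(grad_old, grad_old)
    d = grad_new + a * beta * d_old
    if np.dot(d, grad_new) <= 0:      # simplified reset if the projection onto the gradient is lost
        d = grad_new
    return d, d / np.linalg.norm(d)
```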
a = 0 sets the descent direction g^t to the normalized gradient \nabla E(w^t)/|\nabla E(w^t)|; this algorithm proves to exhibit a behavior very similar to Salomon's algorithm with normalized gradients. The difference lies in the occurrence of some stabilization steps from time to time, which, in general, improve the convergence.
Since comparisons of Salomon's algorithm to many other methods have been published [7], this paper confines itself to show that significant improvements are
brought about by non-gradient directions, e. g., by Polak-Ribiere directions (a = 1).
Table 1: Average Learning Time for Some Problems

PROBLEM                      E_min     a = 0          a = 1
(a) 3-2-4 regression         10?       195 ± 95%      58 ± 70%
(b) 3-2-4 approximation      10^-4     1070 ± 140%    189 ± 115%
(c) Pure square (n = 76)     10^-16    464 ± 17%      118 ± 9%
(d) Power 1.8 (n = 76)       10^-4     486 ± 29%      84 ± 23%
(e) Power 3.8 (n = 76)       10^-16    28 ± 10%       37 ± 14%
(f) 8-3-8 encoder            10^-4     1380 ± 60%     300 ± 60%
Table 1 shows the average number of epochs of two algorithms for some problems. The average was taken over many initial random weight vectors and over values of \zeta \in [1.7, 2.1]; the root mean square error of the averaging process is shown as a percentage. Note that, owing to the two test steps for \eta_t/\zeta and \eta_t\zeta, one epoch has an overhead of around 50% compared to a corresponding epoch of backpropagation. a \ne 0 helps: it could be chosen by dynamic parameter adaptation.
Problems (a) and (b) represent the approximation of a function known only from some example data. A neural net with 3 input, 2 hidden, and 4 output nodes was used to generate the example data; artificial noise was added for problem (a). The same net with random initial weights was then used to learn an approximation. These problems for feedforward nets are expected to have regular minima.
Problem (c) uses a pure square error function E: w \mapsto \sum_{i=1}^{n} \lambda_i |w_i|^p / 2 with p = 2 and n = 76. Note that conjugate gradient needs exactly n epochs to arrive at the minimum [5]. However, the few additional epochs that are needed by the a = 1 algorithm to reach a fairly small error (here 118 as opposed to 76) must be compared to the overhead of conjugate gradient (one line search per epoch).
Powers p other than 2, as used in (d) or (e), work well as long as, say, p > 1.5. A power p < 1 will (if n \ge 2) produce a "trap" for the weight vector at a location near a coordinate axis, where, owing to an infinite gradient component, no gradient-based algorithm can escape.^1 Problems are expected even for p near 1: the algorithms of Section 3 exploit the fact that the gradient vanishes at a minimum, which in turn is numerically questionable for a power like 1.1. Typical minima, however, employ powers 2, 4, \ldots Even better convergence is expected and found for large powers.
^1 Dynamic parameter adaptation as in (1) can cope with the square-root singularity (p = 1/2) in one dimension, because the adaptation rule allows a fast enough decay of the learning rate; the ability to minimize this one-dimensional square-root singularity is somewhat overemphasized in [7].
The 8-3-8 encoder (f) was studied, because the error function has global minima
at the boundary of the domain (one or more weights with infinite length). These
minima, though not covered in Section 4, are quickly found. Indeed, the ability
to increase the learning rate geometrically helps these algorithms to approach the
boundary in a few steps.
6 CONCLUSIONS
It has been shown that implementing asymptotic stability does help in the case of the
backpropagation learning rate: the theoretical analysis has been simplified, and the
speed of convergence has been improved. Moreover, the presented framework allows
descent directions to be chosen flexibly, e. g., by the Polak-Ribiere rule. Future work
includes studies of how to apply the stability criterion to other parametric learning
problems.
References
[1] R. Battiti. Accelerated backpropagation learning: Two optimization methods.
Complex Systems, 3:331-342, 1989.
[2] S. Becker and Y. le Cun. Improving the convergence of back-propagation learning with second order methods. In D. Touretzky, G. Hinton, and T. Sejnowski,
editors, Proceedings of the 1988 Connectionist Models Summer School, pages
29-37. Morgan Kaufmann, San Mateo, 1989.
[3] R. Jacobs. Increased rates of convergence through learning rate adaptation.
Neural Networks, 1:295-307, 1988.
[4] A. Kramer and A. Sangiovanni-Vincentelli. Efficient parallel learning algorithms
for neural networks. In D. Touretzky, editor, Advances in Neural Information
Processing Systems 1, pages 40-48. Morgan Kaufmann, San Mateo, 1989.
[5] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical
Recipes in C. Cambridge University Press, 1988.
[6] R. Salomon. Verbesserung konnektionistischer Lernverfahren, die nach der Gradientenmethode arbeiten. PhD thesis, TU Berlin, October 1991.
[7] R. Salomon and J. L. van Hemmen. Accelerating backpropagation through
dynamic self-adaptation. Neural Networks, 1996 (in press).
[8] F. M. Silva and L. B. Almeida. Speeding up backpropagation. In Proceedings of
NSMS - International Symposium on Neural Networks for Sensory and Motor
Systems, Amsterdam, 1990. Elsevier.
| 1061 |@word trial:1 version:1 norm:1 uncovers:1 jacob:1 initial:3 series:1 genetic:1 outperforms:1 urgently:1 dx:1 must:2 numerical:1 motor:1 update:1 greedy:1 fewer:1 leaf:1 vanishing:1 provides:1 node:1 location:1 along:2 c2:1 differential:4 symposium:1 prove:1 overhead:2 expected:4 indeed:2 behavior:2 relying:1 globally:1 decreasing:1 solver:2 becomes:1 estimating:1 bounded:2 moreover:1 what:5 transformation:1 every:2 questionable:1 fat:1 exactly:1 control:1 appear:1 positive:4 local:1 modify:1 limit:2 might:1 studied:2 mateo:2 salomon:6 perpendicular:1 recursive:1 implement:1 definite:2 backpropagation:17 projection:2 regular:2 ett:2 get:1 cannot:3 close:1 onto:2 put:2 context:1 instability:1 straightforward:2 flexibly:1 stabilizing:2 pure:2 rule:4 utilizing:1 regarded:1 stability:8 notion:2 traditionally:1 coordinate:3 construction:1 exact:1 us:3 element:1 approximated:1 particularly:1 fly:1 solved:1 region:2 sangiovanni:1 trade:2 decrease:2 transforming:1 vanishes:2 convexity:1 dynamic:17 solving:1 algebra:1 purely:1 completely:1 fast:1 bmax:1 sejnowski:1 artificial:1 choosing:1 whose:1 larger:2 solve:1 ive:1 say:2 otherwise:2 encoder:2 ability:2 polak:3 itself:1 sequence:3 eigenvalue:3 net:5 propose:1 reset:1 adaptation:22 fr:1 tu:2 kohonen:1 loop:1 j2:1 rapidly:1 achieve:1 getting:1 recipe:1 convergence:11 produce:2 converges:1 help:3 oo:1 school:1 qt:7 progress:1 strong:1 c:1 differ:1 direction:17 drawback:1 owing:2 stabilization:1 dii:2 implementing:1 premise:1 fix:1 really:1 singularity:2 strictly:1 hold:2 around:1 great:1 smallest:1 estimation:5 leap:1 largest:2 stefan:1 minimization:2 brought:1 always:2 modified:1 rather:1 ribiere:3 derived:1 improvement:3 sense:1 elsevier:1 vetterling:1 typically:1 lj:1 hidden:1 irn:7 quasi:2 germany:1 proposes:1 fairly:1 once:1 look:2 nearly:1 future:1 connectionist:1 serious:1 few:3 employ:2 ve:1 individual:1 phase:1 suit:1 huge:2 arrives:1 necessary:2 orthogonal:1 taylor:2 old:1 theoretical:1 minimal:2 increased:1 technische:1 surpasses:1 euler:3 successful:1 too:2 universitat:1 definitely:1 international:1 ie:1 off:2 continuously:1 quickly:1 thesis:1 q11:1 opposed:2 choose:2 sekr:1 derivative:1 de:1 bold:1 includes:1 matter:1 explicitly:1 depends:2 h1:3 root:3 reached:2 sort:1 parallel:1 minimize:1 square:5 ir:3 kaufmann:2 resetting:1 yield:1 produced:1 informatik:1 trajectory:3 expertise:1 published:1 llt:1 reach:1 touretzky:2 nonetheless:1 proof:6 adjusting:1 ut:2 schedule:2 back:2 emin:2 improved:1 though:1 stage:1 until:1 propagation:1 quality:2 effect:2 omitting:1 normalized:5 hence:3 symmetric:1 deal:1 during:1 self:1 qe:8 die:1 criterion:3 trying:2 silva:1 consideration:3 numerically:1 refer:1 significant:2 cambridge:1 ai:1 automatic:1 dot:1 stable:7 specification:1 surface:1 etc:1 gt:15 ctt:2 occasionally:1 certain:2 battiti:1 der:1 morgan:2 minimum:16 additional:1 somewhat:3 converge:2 bother:1 exceeds:1 long:2 vincentelli:1 calculates:1 ensuring:1 regression:1 iteration:2 represent:1 limt:1 annealing:1 decreased:1 leaving:1 extra:1 rest:1 probably:1 virtually:1 near:5 feedforward:1 enough:2 conju:1 identified:1 inner:1 intensive:1 accelerating:1 becker:1 effort:1 nonquadratic:1 wo:5 hessian:3 aiw:1 covered:1 amount:1 locally:1 generate:1 exist:1 percentage:1 async:1 sign:1 per:1 track:1 changing:1 asymptotically:1 geometrically:1 compete:1 franklinstr:1 arrive:1 almost:2 bound:1 hi:1 summer:1 quadratic:7 occur:1 nearby:1 speed:2 argument:1 extremely:1 min:1 according:1 conjugate:5 smaller:3 wi:1 cun:1 hl:2 restricted:1 taken:1 
equation:5 diagonalizes:1 awt:1 turn:1 needed:3 end:2 apply:2 away:3 occurrence:1 gate:1 existence:1 newton:4 exploit:2 especially:2 prof:2 tensor:1 overemphasized:1 added:1 parametric:1 dependence:1 rt:1 diagonal:2 exhibit:1 gradient:35 berlin:4 simulated:1 sensible:1 unstable:3 reason:2 length:5 october:1 statement:1 design:1 boltzmann:1 upper:1 finite:1 descent:8 situation:1 hinton:1 drift:1 overlooked:1 namely:1 required:1 suggested:1 including:1 max:1 power:7 improve:2 axis:1 speeding:1 qee:2 epoch:6 geometric:1 asymptotic:6 interesting:1 editor:2 qf:1 featuring:1 changed:1 repeat:1 weaker:1 allow:1 van:1 boundary:2 dimension:1 sensory:1 made:1 adaptive:1 san:2 simplified:1 far:3 cope:1 restarting:1 obtains:1 implicitly:1 keep:1 global:1 rid:1 xi:1 search:3 why:2 table:2 learn:1 ca:1 improving:1 expansion:1 necessarily:1 complex:1 domain:1 noise:1 intolerably:1 je:1 hemmen:1 slow:1 momentum:2 explicit:1 exponential:1 lie:1 theorem:3 specific:1 r2:1 decay:3 virtue:1 evidence:1 dominates:1 exists:2 trap:1 adding:1 phd:1 te:1 flannery:1 gtl:1 amsterdam:1 teukolsky:1 viewed:1 kramer:1 careful:1 change:6 determined:1 infinite:2 uniformly:2 typical:1 wt:27 averaging:1 called:1 ew:1 almeida:1 latter:2 arises:2 confines:1 accelerated:1 investigator:1 incorporate:1 evaluate:2 tested:1 |
71 | 1,062 | Universal Approximation and Learning
of Trajectories Using Oscillators
Pierre Baldi*
Division of Biology
California Institute of Technology
Pasadena, CA 91125
pfbaldi@juliet.caltech.edu
Kurt Hornik
Technische Universitat Wien
Wiedner Hauptstra8e 8-10/1071
A-1040 Wien, Austria
Kurt.Hornik@tuwien.ac.at
Abstract
Natural and artificial neural circuits must be capable of traversing specific state space trajectories. A natural approach to this
problem is to learn the relevant trajectories from examples. Unfortunately, gradient descent learning of complex trajectories in
amorphous networks is unsuccessful. We suggest a possible approach where trajectories are realized by combining simple oscillators, in various modular ways. We contrast two regimes of fast
and slow oscillations. In all cases, we show that banks of oscillators
with bounded frequencies have universal approximation properties.
Open questions are also discussed briefly.
1 INTRODUCTION: TRAJECTORY LEARNING
The design of artificial neural systems, in robotics applications and others, often
leads to the problem of constructing a recurrent neural network capable of producing
a particular trajectory, in the state space of its visible units. Throughout evolution,
biological neural systems, such as central pattern generators, have also been faced
with similar challenges. A natural approach to tackle this problem is to try to
"learn" the desired trajectory, for instance through a process of trial and error
and subsequent optimization. Unfortunately, gradient descent learning of complex
trajectories in amorphous networks is unsuccessful. Here, we suggest a possible
approach where trajectories are realized, in a modular and hierarchical fashion, by
combining simple oscillators. In particular, we show that banks of oscillators have
universal approximation properties.
* Also with the Jet Propulsion Laboratory, California Institute of Technology.
To begin with, we can restrict ourselves to the simple case of a network with one^1 visible linear unit and consider the problem of adjusting the network parameters in a way that the output unit activity u(t) is equal to a target function f(t), over an interval of time [0, T]. The hidden units of the network may be non-linear and satisfy, for instance, one of the usual neural network charging equations such as
\frac{du_i}{dt} = -\frac{u_i}{\tau_i} + \sum_j w_{ij} f_j(u_j(t - \tau_{ij})),   (1)
where Ti is the time constant of the unit, the Tij represent interaction delays, and
the functions Ij are non-linear input/output functions, sigmoidal or other. In the
next section, we briefly review three possible approaches for solving this problem,
and some of their limitations. In particular, we suggest that complex trajectories
can be synthesized by proper combination of simple oscillatory components.
2 THREE DIFFERENT APPROACHES TO TRAJECTORY LEARNING
2.1 GRADIENT DESCENT APPROACHES
One obvious approach is to use a form of gradient descent for recurrent networks
(see [2] for a review), such as back-propagation through time, in order to modify any adjustable parameters of the networks (time constants, delays, synaptic
weights and/or gains) to reduce a certain error measure, constructed by comparing
the output u(t) with its target I(t). While conceptually simple, gradient descent
applied to amorphous networks is not a successful approach, except on the most
simple trajectories. Although intuitively clear, the exact reasons for this are not
entirely understood, and overlap in part with the problems that can be encountered
with gradient descent in simple feed-forward networks on regression or classification
tasks.
There is an additional set of difficulties with gradient descent learning of fixed points or trajectories, that is specific to recurrent networks, and that has to do with the bifurcations of the system being considered. In the case of a recurrent^2 network, as the parameters are varied, the system may or may not undergo a series of bifurcations, i.e., of abrupt changes in the structure of its trajectories and, in particular, of its attractors (fixed points, limit cycles, ...). This in turn may translate into abrupt discontinuities, oscillations or non-convergence in the corresponding learning curve.
At each bifurcation, the error function is usually discontinuous, and therefore the
gradient is not defined. Learning can be disrupted in two ways: when unwanted
abrupt changes occur in the flow of the dynamical system, or when desirable bifurcations are prevented from occurring. A classical example of the second type is the
case of a neural network with very small initial weights being trained to oscillate,
in a symmetric and stable fashion, around the origin. With small initial weights,
the network in general converges to its unique fixed point at the origin, with a large
error. If we slightly perturb the weights, remaining away from any bifurcation, the
network continues to converge to its unique fixed point which now may be slightly
displaced from the origin, and yield an even greater error, so that learning by gradient descent becomes impossible (the starting configuration of zero weights is a local
minimum of the error function).
^1 All the results to be derived can be extended immediately to the case of higher-dimensional trajectories.
^2 In a feed-forward network, where the transfer functions of the units are continuous, the output is a continuous function of the parameters and therefore there are no bifurcations.
Figure 1: A schematic representation of a 3 layer oscillator network for double figure
eight. Oscillators with period T in a given layer gate the corresponding oscillators,
with period T /2, in the previous layer.
2.2 DYNAMICAL SYSTEM APPROACH
In the dynamical system approach, the function f(t) is approximated in time, over [0, T], by a sequence of points y_0, y_1, .... These points are associated with the iterates of a dynamical system, i.e., y_{n+1} = F(y_n) = F^{n+1}(y_0), for some function F. Thus the network implementation requires mainly a feed-forward circuit that computes the function F. It has a simple overall recursive structure where, at time n, the output F(y_n) is calculated, and fed back into the input for the next iteration.
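As a toy illustration of this recursion (ours, with a hand-coded rotation map standing in for a learned F):

```python
import numpy as np

def rollout(F, y0, steps):
    """Iterate a map F to reproduce a trajectory: y_{n+1} = F(y_n).

    Our sketch; F would normally be a small feed-forward network trained
    on consecutive pairs (y_n, y_{n+1}) sampled from the target trajectory.
    """
    ys = [np.asarray(y0, dtype=float)]
    for _ in range(steps):
        ys.append(F(ys[-1]))
    return np.stack(ys)

# Toy example: a rotation map traces out a circle (a simple limit-cycle trajectory).
theta = 2 * np.pi / 50
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
trajectory = rollout(lambda y: R @ y, [1.0, 0.0], steps=200)
```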
While this approach is entirely general, it leaves open the problem of constructing
the function F. Of course, F can be learned from examples in a usual feed-forward
connectionist network. But, as usual, the complexity and architecture of such a
network are difficult to determine in general. Another interesting issue in trajectory
learning is how time is represented in the network, and whether some sort of clock is
needed. Although occasionally in the literature certain authors have advocated the
introduction of an input unit whose output is the time t, this explicit representation
is clearly not a suitable representation, since the problem of trajectory learning
reduces then entirely to a regression problem. The dynamical system approach
relies on one basic clock to calculate F and recycle it to the input layer. In the
next approach, an implicit representation of time is provided by the periods of the
oscillators.
2.3 OSCILLATOR APPROACH
A different approach was suggested in [1] where, loosely speaking, complex trajectories are realized using weakly pre-structured networks, consisting of shallow
hierarchical combinations of simple oscillatory modules. The oscillatory modules
can consist, for instance, of simple oscillator rings of units satisfying Eq. 1, with
two or three high-gain neurons, and an odd number of inhibitory connections ([3]).
To fix the ideas, consider the typical test problem of constructing a network capable
of producing a trajectory associated with a double figure eight curve (i.e., a set
of four loops joined at one point), see Fig. 1. In this example, the first level of
the hierarchy could contain four oscillator rings, one for each loop of the target
trajectory. The parameters in each one of these four modules can be adjusted, for
instance by gradient descent, to match each of the loops in the target trajectory.
The second level of the pyramid should contain two control modules. Each of these
modules controls a distinct pair of oscillator networks from the first level, so that
each control network in the second level ends up producing a simple figure eight .
Again, the control networks in level two can be oscillator rings and their parameters
can be adjusted . In particular, after the learning process is completed, they should
be operating in their high-gain regimes and have a period equal to the sum of the
periods of the circuits each one controls.
Finally, the third layer consists of another oscillatory and adjustable module which
controls the two modules in the second level, so as to produce a double figure
eight. The third layer module must also end up operating in its high-gain regime
with a period equal to four times the period of the oscillators in the first layer.
In general, the final output trajectory is also a limit cycle because it is obtained
by superposition of limit cycles in the various modules. If the various oscillators
relax to their limit cycles independently of one another, it is essential to provide
for adjustable delays between the various modules in order to get the proper phase
adjustments. In this way, a sparse network with 20 units or so can be constructed
that can successfully execute a double figure eight.
There are actually different possible neural network realizations depending on how
the action of the control modules is implemented. For instance, if the control units
are gating the connections between corresponding layers, this amounts to using
higher order units in the network. If one high-gain oscillatory unit, with activity c(t) always close to 0 or 1, gates the oscillatory activities of two units u_1(t) and u_2(t) in the previous layer, then the overall output can be written as
out(t) = c(t)u_1(t) + (1 - c(t))u_2(t).   (2)
The number of layers in the network then becomes a function of the order of the
units one is willing to use. This approach could also be described in terms of a
dynamic mixture of experts architecture, in its high gain regime. Alternatively,
one could assume the existence of a fast weight dynamics on certain connections
governed by a corresponding set of differential equations. Although we believe that
oscillators with limit cycles present several attractive properties (stability, short
transients, biological relevance, . . . ), one can conceivably use completely different
circuits as building blocks in each module.
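A minimal numerical illustration of Eq. (2) (ours; the two sinusoids are placeholders for the limit cycles of two first-level modules):

```python
import numpy as np

def gated_output(t, period):
    """Combine two unit-level oscillators with a high-gain control signal c(t).

    u1 and u2 stand in for the limit cycles of two first-level modules, and
    c(t) is a (nearly) binary oscillator with twice their period that selects
    which one reaches the output, as in Eq. (2).
    """
    u1 = np.sin(2 * np.pi * t / period)                 # placeholder loop 1
    u2 = np.sin(2 * np.pi * t / period + np.pi / 2)     # placeholder loop 2
    c = 1.0 / (1.0 + np.exp(-20 * np.sin(np.pi * t / period)))   # high-gain gate, near 0 or 1
    return c * u1 + (1 - c) * u2                        # Eq. (2)

t = np.linspace(0, 4, 2000)
y = gated_output(t, period=1.0)
```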
3 GENERALIZATION AND UNIVERSAL APPROXIMATION
We have just described an approach that combines a modular hierarchical architecture, together with some simple form of learning, enabling the synthesis of a neural
circuit suitable for the production of a double figure eight trajectory. It is clear that
the same approach can be extended to triple figure eight or, for that matter, to any
trajectory curve consisting of an arbitrary number of simple loops with a common
period and one common point. In fact it can be extended to any arbitrary trajectory. To see this, we can subdivide the time interval [0, T] into n equal intervals of duration \epsilon = T/n. Given a certain level of required precision, we can always find n oscillator networks with period T (or a fraction of T) and visible trajectory u_i(t), such that for each i, the i-th portion of the trajectory u(t) with i\epsilon \le t \le (i + 1)\epsilon can be well approximated by a portion of u_i(t), the trajectory of the i-th oscillator. The target trajectory can then be approximated as
u(t) \approx \sum_i c_i(t)\, u_i(t).   (3)
As usual, the control coefficient c_i(t) must also have period T and be equal to 1 for i\epsilon \le t \le (i + 1)\epsilon, and 0 otherwise. The control can be realized with one large high-gain oscillator, or, as in the case described above, by a hierarchy of control oscillators arranged, for instance, as a binary tree of depth m if n = 2^m, with the corresponding multiple frequencies.
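The binary-tree construction of the selector coefficients c_i(t) can be sketched as follows (our illustration; square waves stand in for the high-gain control oscillators):

```python
import numpy as np

def interval_selectors(t, T, m):
    """Binary-tree construction of the n = 2**m selector coefficients c_i(t).

    Level j of the tree contributes a square wave of period T / 2**j; multiplying
    the half-waves along the path of interval i yields c_i(t), which is 1 exactly
    on the i-th subinterval of length T / 2**m and 0 elsewhere.
    """
    n = 2 ** m
    phase = (t % T) / T                         # position within one period
    c = np.ones((n, t.size))
    for i in range(n):
        for j in range(m):                      # follow the path of interval i down the tree
            bit = (i >> (m - 1 - j)) & 1        # branch taken at level j
            wave = np.floor(phase * 2 ** (j + 1)).astype(int) % 2
            c[i] *= (wave == bit)
    return c

t = np.linspace(0, 2, 1000)
c = interval_selectors(t, T=1.0, m=3)           # 8 selectors
assert np.allclose(c.sum(axis=0), 1.0)          # exactly one selector is active at a time
```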
We can now turn to a slightly different oscillator approach, where trajectories are to
be approximated with linear combinations of oscillators, with constant coefficients.
What we would like to show again is that oscillators are universal approximators
for trajectories. In a sense, this is already a well-known result of Fourier theory
since, for instance, any reasonable function f with period T can be expanded in the form^3
f(t) = \sum_k \alpha_k e^{2\pi i \lambda_k t}, \qquad \lambda_k = k/T.   (4)
For sufficiently smooth target functions, without high frequencies in their spectrum,
it is well known that the series in Eq. 4 can be truncated. Notice, however, that both
Eqs. 3 and 4 require having component oscillators with relatively high frequencies,
compared to the final trajectory. This is not implausible in biological motor control,
where trajectories have typical time scales of a fraction of a second, and single
control neurons operate in the millisecond range. A rather different situation arises
if the component oscillators are "slow" with respect to the final product.
The Fourier representation requires in principle oscillations with arbitrarily large frequencies (0, 1/T, 2/T, \ldots, n/T, \ldots). Most likely, relatively small variations in the parameters (for instance gains, delays and/or synaptic weights) of an oscillator circuit can only lead to relatively small but continuous variations of the overall frequency. For instance, in [3] it is shown that the period T of an oscillator ring with n units obeying Eq. 1 must satisfy
Thus, we need to show that a decomposition similar in flavor to Eq. 4 is possible,
but using oscillators with frequencies in a bounded interval. Notice that by varying
the parameters of a basic oscillator, any frequency in the allowable frequency range
can be realized, see [3]. Such a linear combination is slightly different in spirit from
Eq. 2, since the coefficients are independent of time, and can be seen as a soft
mixture of experts. We have the following result.
Theorem 1. Let a < b be two arbitrary real numbers and let f be a continuous function on [0, T]. Then for any error level \epsilon > 0, there exist n and a function g_n of the form
g_n(t) = \sum_{k=1}^{n} \alpha_k e^{2\pi i \lambda_k t}, \qquad \lambda_k \in [a, b],
such that the uniform distance \|f - g_n\|_\infty is less than \epsilon.
In fact, it is not even necessary to vary the frequencies A. over a continuous band
[a, b]. We have the following.
Theorem 2. Let \{\lambda_k\} be an infinite sequence with a finite accumulation point, and let f be a continuous function on [0, T]. Then for any error level \epsilon > 0, there exist n and a function g_n(t) = \sum_{k=1}^{n} \alpha_k e^{2\pi i \lambda_k t} such that \|f - g_n\|_\infty < \epsilon.
^3 In what follows, we use the complex form for notational convenience.
Thus, we may even fix the oscillator frequencies as e.g. \lambda_k = 1/k without losing universal approximation capabilities. Similar statements can be made about mean-square approximation or, more generally, approximation in p-norm L^p(\mu), where 1 \le p < \infty and \mu is a finite measure on [0, T]:
Theorem 3. For all p and f in L^p(\mu) and for all \epsilon > 0, we can always find n and g_n as above such that \|f - g_n\|_{L^p(\mu)} < \epsilon.
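Before sketching the proof, the statements are easy to check numerically; the following least-squares sketch (ours) fits a non-smooth trajectory with a bank of oscillators whose frequencies are confined to a band:

```python
import numpy as np

def fit_oscillator_bank(f_samples, t, freqs):
    """Least-squares fit of a trajectory by a bank of fixed-frequency oscillators.

    freqs is any list of allowed frequencies (e.g. confined to a band [a, b],
    or lambda_k = 1/k); the coefficients of the cos/sin pairs (the real form
    of the complex exponentials) are obtained by ordinary least squares.
    """
    cols = [np.ones_like(t)]
    for lam in freqs:
        cols += [np.cos(2 * np.pi * lam * t), np.sin(2 * np.pi * lam * t)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, f_samples, rcond=None)
    return A @ coeffs, coeffs

t = np.linspace(0.0, 1.0, 400)
target = np.sign(np.sin(2 * np.pi * t))           # a non-smooth target trajectory
freqs = 1.0 + 0.1 * np.arange(30)                 # frequencies restricted to [1.0, 3.9]
approx, _ = fit_oscillator_bank(target, t, freqs)
print("max error:", np.max(np.abs(approx - target)))
```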
The proof of these results is surprisingly simple. Following the proofs in [4], if one of the above statements was not true, there would exist a nonzero, signed finite measure \sigma with support in [0, T] such that \int_{[0,T]} e^{2\pi i \lambda t}\, d\sigma(t) = 0 for all "allowed" frequencies \lambda. Now the function z \mapsto \int_{[0,T]} e^{2\pi i z t}\, d\sigma(t) is clearly analytic on the whole complex plane. Hence, by a well-known result from complex variables, if it vanishes along an infinite sequence with a finite accumulation point, it is identically zero. But then in particular the Fourier transform of \sigma vanishes, which in turn implies that \sigma is identically zero by the uniqueness theorem on Fourier transforms, contradicting the initial assumption.
Notice that the above results do not imply that f can exactly be represented as e.g. f(t) = \int e^{2\pi i \lambda t}\, d\nu(\lambda) for some signed finite measure \nu; such functions are not only band-limited, but also extremely smooth (they have an analytic extension to the whole complex plane).
Hence, one might even conjecture that the above approximations are rather poor in the sense that unrealistically many terms are needed for the approximation. However, this is not true: one can easily show that the rates of approximation cannot be worse than those for approximation with polynomials. Let us briefly sketch the argument, because it also shows how bounded-frequency oscillators could be constructed.
Following an idea essentially due to Stinchcombe & White [5], let, more generally, g be an analytic function in a neighborhood of the real line for which no derivative vanishes at the origin (above, we had g(t) = e^{2\pi i t}). Pick a nonnegative integer n and a polynomial p of degree not greater than n - 1 arbitrarily. Let us show that for any \epsilon > 0, we can always find a g_n of the form g_n(t) = \sum_{k=1}^{n} \alpha_k g(\lambda_k t) with \lambda_k arbitrarily small such that \|p - g_n\|_\infty < \epsilon. To do so, note that we can write
p(t) = \sum_{l=0}^{n-1} \delta_l t^l \quad and \quad g(\lambda t) = \sum_{l=0}^{n-1} \gamma_l (\lambda t)^l + r_n(\lambda t),
where r_n(\lambda t) is of the order of \lambda^n, as \lambda \to 0, uniformly for t in [0, T]. Hence,
\sum_{k=1}^{n} \alpha_k g(\lambda_k t) = \sum_{k=1}^{n} \alpha_k \left( \sum_{l=0}^{n-1} \gamma_l (\lambda_k t)^l + r_n(\lambda_k t) \right) = \sum_{l=0}^{n-1} \left( \sum_{k=1}^{n} \alpha_k \lambda_k^l \right) \gamma_l t^l + \sum_{k=1}^{n} \alpha_k r_n(\lambda_k t).
Now fix n distinct numbers e_1, \ldots, e_n, let \lambda_k = \lambda_k(\rho) = \rho e_k, and choose the \alpha_k = \alpha_k(\rho) such that \sum_{k=1}^{n} \alpha_k(\rho) \lambda_k(\rho)^l = \delta_l / \gamma_l for l = 0, \ldots, n - 1. (This is possible because, by assumption, all \gamma_l are non-zero.) It is readily seen that \alpha_k(\rho) is of the order of \rho^{1-n} as \rho \to 0 (in fact, the j-th row of the inverse of the coefficient matrix of the linear system is given by the coefficients of the polynomial \prod_{k \ne j} (\lambda - \lambda_k)/(\lambda_j - \lambda_k)). Hence, as \rho \to 0, the remainder term \sum_{k=1}^{n} \alpha_k(\rho) r_n(\lambda_k(\rho) t) is of the order of \rho, and thus \sum_{k=1}^{n} \alpha_k(\rho) g(\lambda_k(\rho) t) \to \sum_{l=0}^{n-1} \delta_l t^l = p(t) uniformly on [0, T].
?
Note that using the above method, the coefficients in the approximation grow quite
rapidly when the approximation error tends to 0. In some sense, this was to be
expected from the observation that the classes of small-band-limited functions are
rather "small". There is a fundamental tradeoff between the size of the frequencies,
and the size of the mixing coefficients. How exactly the coefficients scale with the
width of the allowed frequency band is currently being investigated.
4 CONCLUSION
The modular oscillator approach leads to trajectory architectures which are more
structured than fully interconnected networks, with a general feed-forward flow of
information and sparse recurrent connections to achieve dynamical effects. The
sparsity of units and connections are attractive features for hardware design; and
so is also the modular organization and the fact that learning is much more circumscribed than in fully interconnected systems. We have shown in different ways
that such architectures have universal approximation properties. In these architectures, however, some form of learning remains essential, for instance to fine tune
each one of the modules. This, in itself, is a much easier task than the one a fully
interconnected and random network would have been faced with. It can be solved
by gradient or random descent or other methods. Yet, fundamental open problems
remain in the overall organization of learning across modules, and in the origin of
the decomposition. In particular, can the modular architecture be the outcome of a
simple internal organizational process rather than an external imposition and how
should learning be coordinated in time and across modules (other than the obvious:
modules in the first level learn first, modules in the second level second, .. . )? How
successful is a global gradient descent strategy applied across modules? How can the
same modular architecture be used for different trajectories, with short switching
times between trajectories and proper phases along each trajectory?
Acknowledgments
The work of PB is in part supported by grants from the ONR and the AFOSR.
References
[1] Pierre Baldi. A modular hierarchical approach to learning. In Proceedings of the
2nd International Conference on Fuzzy Logic and Neural Networks, volume II,
pages 985-988, IIzuka, Japan, 1992.
[2] Pierre F. Baldi. Gradient descent learning algorithm overview: a general dynamic systems perspective. IEEE Transactions on Neural Networks, 6(1):182-195, January 1995.
[3] Pierre F. Baldi and Amir F. Atiya. How delays affect neural dynamics and
learning. IEEE Transactions on Neural Networks, 5(4):612-621, July 1994.
[4] Kurt Hornik. Some new results on neural network approximation. Neural Networks, 6:1069-1072,1993.
[5] Maxwell B. Stinchcombe and Halbert White. Approximating and learning unknown mappings using multilayer feedforward networks with bounded weights.
In International Joint Conference on Neural Networks, volume III, pages 7-16,
Washington, 1990. Lawrence Erlbaum, Hillsdale.
| 1062 |@word trial:1 briefly:3 polynomial:3 norm:1 nd:1 open:3 willing:1 meansquare:1 decomposition:2 pick:1 juliet:1 initial:3 configuration:1 series:2 kurt:3 comparing:1 yet:1 must:4 readily:1 written:1 fn:1 visible:3 subsequent:1 analytic:3 motor:1 leaf:1 amir:1 plane:2 short:2 iterates:1 sigmoidal:1 along:2 constructed:3 differential:1 consists:1 combine:1 baldi:7 expected:1 ol:1 tuwien:1 becomes:2 begin:1 provided:1 bounded:4 circuit:6 duo:1 what:2 kg:2 fuzzy:1 ti:1 tackle:1 unwanted:1 exactly:2 ro:1 control:13 unit:16 grant:1 yn:3 producing:3 understood:1 local:1 modify:1 tends:1 limit:5 switching:1 ak:9 signed:2 might:1 limited:2 range:2 unique:2 acknowledgment:1 recursive:1 block:1 universal:10 pre:1 suggest:3 get:1 convenience:1 close:1 cannot:1 impossible:1 ilf:3 accumulation:2 nit:1 starting:1 independently:1 duration:1 abrupt:3 immediately:1 stability:1 variation:2 pek:1 target:6 hierarchy:2 exact:1 losing:1 origin:5 circumscribed:1 approximated:4 satisfying:1 continues:1 module:18 solved:1 calculate:1 cycle:5 vanishes:3 complexity:1 ui:2 dynamic:4 trained:1 weakly:1 solving:1 division:1 completely:1 easily:1 joint:1 various:4 represented:2 distinct:2 fast:2 artificial:2 iilp:1 neighborhood:1 outcome:1 lcl:3 whose:1 modular:8 kai:1 quite:1 relax:1 otherwise:1 transform:1 itself:1 final:3 sequence:3 interaction:1 product:1 interconnected:3 remainder:1 relevant:1 combining:2 loop:4 realization:1 rapidly:1 translate:1 mixing:1 achieve:1 convergence:1 double:5 produce:1 converges:1 ring:4 depending:1 recurrent:5 ac:1 ij:1 odd:1 advocated:1 eq:6 implemented:1 implies:1 discontinuous:1 transient:1 hillsdale:1 require:1 fix:3 generalization:1 biological:3 adjusted:2 extension:1 pl:1 fil:3 around:1 considered:1 sufficiently:1 lawrence:1 mapping:1 vary:1 uniqueness:1 currently:1 superposition:1 successfully:1 clearly:2 always:4 rather:4 varying:1 derived:1 yo:2 notational:1 mainly:1 contrast:1 sense:3 el:1 nand:1 pasadena:1 hidden:1 overall:4 classification:1 issue:1 bifurcation:6 equal:5 having:1 washington:1 biology:1 lit:1 others:1 connectionist:1 phase:2 ourselves:1 consisting:2 organization:2 mixture:2 capable:3 necessary:1 traversing:1 tree:1 loosely:1 desired:1 halbert:1 instance:10 wiedner:1 soft:1 gn:5 organizational:1 technische:1 uniform:1 delay:5 successful:2 universitat:1 disrupted:1 fundamental:2 international:2 yl:1 together:1 synthesis:1 uiti:1 again:2 central:1 choose:1 worse:1 external:1 expert:2 derivative:1 japan:1 wien:2 coefficient:8 matter:1 satisfy:2 coordinated:1 try:1 portion:2 sort:1 capability:1 amorphous:3 il:3 yield:1 ofthe:1 conceptually:1 trajectory:41 oscillatory:6 implausible:1 synaptic:2 higherdimensional:1 frequency:15 obvious:2 associated:2 proof:2 gain:8 adjusting:1 austria:1 cj:1 actually:1 back:2 feed:5 maxwell:1 higher:1 dt:1 arranged:1 execute:1 just:1 implicit:1 clock:2 sketch:1 propagation:1 aj:1 believe:1 building:1 effect:1 contain:2 true:2 evolution:1 hence:4 symmetric:1 laboratory:1 nonzero:1 white:2 attractive:2 width:1 allowable:1 fi:1 common:2 overview:1 volume:2 jl:1 discussed:1 adp:1 synthesized:1 had:1 stable:1 operating:2 perspective:1 occasionally:1 certain:4 binary:1 arbitrarily:3 onr:1 approximators:1 caltech:1 seen:2 minimum:1 additional:1 greater:2 kit:1 converge:1 determine:1 period:12 july:1 ii:1 multiple:1 desirable:1 reduces:1 earlbaum:1 smooth:2 jet:1 match:1 prevented:1 schematic:1 regression:2 basic:2 multilayer:1 mayor:1 essentially:1 offixed:1 represent:1 iteration:1 pyramid:1 robotics:1 unrealistically:1 fine:1 
interval:4 grow:1 operate:1 undergo:1 flow:2 spirit:1 integer:1 feedforward:1 iii:1 identically:2 affect:1 fit:1 pfbaldi:1 architecture:8 restrict:1 reduce:1 idea:2 tradeoff:1 whether:1 ul:2 filt:1 speaking:1 oscillate:1 action:1 tij:2 generally:2 clear:2 tune:1 amount:1 transforms:1 band:4 hardware:1 atiya:1 exist:3 inhibitory:1 notice:3 millisecond:1 write:1 four:4 pb:1 fraction:2 sum:1 imposition:1 inverse:1 throughout:1 reasonable:1 oscillation:3 entirely:3 layer:10 encountered:1 nonnegative:1 activity:3 occur:1 fourier:4 argument:1 extremely:1 expanded:1 relatively:3 conjecture:1 structured:2 combination:4 recycle:1 poor:1 remain:1 slightly:4 across:3 lp:2 shallow:1 conceivably:1 intuitively:1 dv:1 equation:2 remains:1 turn:3 needed:2 fed:1 end:2 eight:7 hierarchical:4 away:1 pierre:4 ho:1 gate:2 subdivide:1 existence:1 remaining:1 completed:1 perturb:1 approximating:1 classical:1 question:1 realized:5 already:1 strategy:1 usual:4 gradient:13 distance:1 propulsion:1 reason:1 difficult:1 unfortunately:2 statement:2 design:2 implementation:1 proper:3 adjustable:3 unknown:1 neuron:2 displaced:1 observation:1 enabling:1 finite:5 descent:12 truncated:1 january:1 situation:1 extended:3 rn:3 varied:1 arbitrary:3 isl:1 pair:1 required:1 connection:5 california:2 learned:1 discontinuity:1 suggested:1 usually:1 pattern:1 dynamical:6 regime:4 sparsity:1 challenge:1 unsuccessful:2 charging:1 stinchcombe:2 overlap:1 suitable:2 natural:3 difficulty:1 technology:2 imply:1 faced:2 review:2 literature:1 tractor:1 afosr:1 fully:3 interesting:1 limitation:1 generator:1 triple:1 degree:1 principle:1 bank:2 production:1 row:1 course:1 surprisingly:1 supported:1 institute:2 sparse:2 curve:3 calculated:1 depth:1 computes:1 forward:5 author:1 made:1 transaction:2 logic:1 global:1 alternatively:1 spectrum:1 continuous:6 iizuka:1 lip:1 learn:3 transfer:1 ca:1 andlor:1 hornik:6 investigated:1 complex:8 cl:8 constructing:3 whole:2 contradicting:1 allowed:2 fig:1 en:1 fashion:2 slow:2 akt:2 precision:1 explicit:1 obeying:1 governed:1 third:2 tin:1 krn:1 theorem:4 specific:2 gating:1 consist:1 essential:2 occurring:1 flavor:1 easier:1 likely:1 ez:1 adjustment:1 joined:1 u2:2 relies:1 oscillator:36 change:2 typical:2 except:1 infinite:2 uniformly:2 internal:1 support:1 arises:1 relevance:1 |
72 | 1,063 | Learning Fine Motion by Markov
Mixtures of Experts
Marina Meilă
Dept. of Elec. Eng. and Computer Sci.
Massachusetts Inst. of Technology
Cambridge, MA 02139
mmp@ai.mit.edu
Michael I. Jordan
Dept. of Brain and Cognitive Sciences
Massachusetts Inst. of Technology
Cambridge, MA 02139
jordan@psyche.mit.edu
Abstract
Compliant control is a standard method for performing fine manipulation tasks, like grasping and assembly, but it requires estimation
of the state of contact (s.o.c.) between the robot arm and the objects involved. Here we present a method to learn a model of the
movement from measured data. The method requires little or no
prior knowledge and the resulting model explicitly estimates the
s.o.c. The current s.o.c. is viewed as the hidden state variable of
a discrete HMM. The control dependent transition probabilities
between states are modeled as parametrized functions of the measurement. We show that their parameters can be estimated from
measurements at the same time as the parameters of the movement
in each s.o.c. The learning algorithm is a variant of the EM procedure. The E step is computed exactly ; solving the M step exactly
is not possible in general. Here, gradient ascent is used to produce
an increase in likelihood .
1 INTRODUCTION
For a large class of robotics tasks , such as assembly tasks or manipulation of relatively light-weight objects, under appropriate damping of the manipulator the
dynamics of the objects can be neglected . For these tasks the main difficulty is in
having the robot achieve its goal despite uncertainty in its position relative to the
surrounding objects. Uncertainty is due to inaccurate knowledge of the geometric
shapes and positions of the objects, of their physical properties (surface friction
coefficients), or to positioning errors in the manipulator. The standard solution
to this problem is controlled compliance first introduced in (Mason, 1981). Under
compliant motion , the task is performed in stages; in each stage the robot arm
maintains contact with a selected surface or feature of the environment; the stage
ends when contact with the feature corresponding to the next stage is made.
Decomposing the given task into subtasks and specifying each goal or subgoal in
terms of contact constraints has proven to be a particularly fertile idea, from which
a fair number of approaches have evolved. But each of them have to face and solve
the problem of estimating the state of contact (i .e. checking if the contact with
the correct surface is achieved) , a direct consequence of dealing with noisy measurements . Additionally, most approaches assume prior geometrical and physical
knowledge of the environment .
In this paper we present a method to learn a model of the environment which will
serve to estimate the s.o.c. and to predict future positions from noisy measurements.
It associates to each state of contact the corresponding movement model (m.m.); that
is: a relationship between positions, nominal and actual velocities that holds over a
domain of the position-nominal velocity space. The current m.m. is viewed as the
hidden state variable of a discrete Hidden Markov Model (HMM) with transition
probabilities that are parametrized functions of the measurement. We call this
model Markov Mixture of Experts (MME) and show how its parameters can be
estimated. In section 2 the problem is defined, section 3 introduces the learning
algorithm, section 4 presents a simulated example and 5 discusses other aspects
relevant to the implementation.
2 REACHABILITY GRAPHS AND MARKOV MIXTURES OF EXPERTS
For any ensemble of objects, the space of all the relative degrees of freedom of the
objects in the ensemble is called the configuration space (C-space). Every possible configuration of the ensemble is represented by a unique point in the C-space
and movement in the real space maps into continuous trajectories in the C-space
(Lozano-Perez, 1983). The sets of points corresponding to each state of contact
create a partition over the C-space. Because trajectories are continuous, a point
can move from a s.o.c. only to a neighboring s.o.c. This can be depicted by a directed graph with vertices representing states of contact and arcs for the possible
transitions between them, called the reachability graph. If no constraints on the
velocities are imposed, then in the reachability graph each s.o.c. is connected to all
its neighbours. But if the range of velocities is restricted, the connectivity of the
graph decreases and the connections are generally non-symmetric. Figure 1 shows
an example of a C-space and its reachability graph for velocities with only positive
components.
Ideally, in the absence of noise, the states of contact can be perfectly observed
and every transition through the graph is thus deterministic. To deal with the
uncertainty in the measurements, we will attach probabilities to the arcs of the graph
in the following way: Let us denote by Q_i the set of configurations corresponding
to s.o.c. i and let the movement of a point x with uniform nominal velocity v for a
time ΔT be given by x(t + ΔT) = r(x, v, ΔT); both x and v are vectors of the same
dimension as the C-space. Now, let x', v' be the noisy measurements of the true
values x, v, x ∈ Q_j, and P[x, v | x', v', j] the posterior distribution of (x, v) given the
measurements and the s.o.c. Then, the probability of transition to a state i from a
given state j in time T_s can be expressed as:
P[i | x', v', j] = \int_{\{(x,v)\,:\; x \in Q_j,\; r(x,v,T_s) \in Q_i\}} P[x, v | x', v', j] \, dx \, dv \;=\; a_{ij}(x', v')        (1)
Defining the transition probability matrix A = [a_{ij}]_{i,j=1}^{m} and assuming measurement
Figure 1: A configuration space (a) and its reachability graph (b). The nodes
represent movement models: C is the free space, A and B are surfaces with static
and dynamic friction, G represents jamming in the corner. The velocity V has
positive components.
noise P[x' | q = i, x ∈ Q_i] leads to an HMM with output x having a continuous
emission probability distribution and where the s.o.c. plays the role of a hidden
state variable. Our main goal is to estimate this model from observed data.
To give a general statement of the problem we will assume that all the position,
velocity and force measurements are represented by the input vector u; the output
vector y of dimensionality n_y contains the future position (which our model will
learn to predict). Observations are made at moments which are integer multiples
of T_s, indexed by t = 0, 1, ..., T. If T_s is a constant sampling time the dependency of
the transition probability on T_s can be ignored. For the purpose of the parameter
estimation, the possible dependence between u(t) and y(t + 1) will also be ignored,
but it should be considered when the trained model is used for prediction.
Throughout the following section we will also assume that the input-output dependence is described by a Gaussian conditional density p(y(t) | u(t), q(t) = k) with
mean f(u(t), θ_k) and variance Σ = σ²I. This is equivalent to assuming that, given
the s.o.c., all noise is additive Gaussian output noise, which is obviously an approximation. But this approximation will allow us to derive certain quantities in closed
form in an effective way.
The function f(u, θ_k) is the m.m. associated with state of contact k (with θ_k its
parameter vector) and q is the selector variable representing it. Sometimes we will
find it useful to partition the domain of a m.m. into subdomains and to represent
it by a different function (i.e. a different set of parameters θ_k) on each of the
subdomains; then, the name movement model will be extended to them.
The evolution of q is controlled by a Markov chain which depends on u and of a set
of parameters W:
a_{ij}(u(t), W) = Pr[q(t+1) = i | q(t) = j, u(t)],    t = 0, 1, ...
with
\sum_i a_{ij}(u, W) = 1    for all u, W,  j = 1, ..., m.        (2)
Figure 2: The Markov Mixture of Experts architecture
Fig. 2 depicts this architecture. It can be easily seen that this model generalizes the
mixture of experts (ME) architecture (Jacobs et al., 1991), to which it reduces in
the case where a_{ij} are independent of j (the columns of A are all equal). It becomes
the model of (Bengio and Frasconi, 1995) when A and f are neural networks.
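To make the architecture of Fig. 2 concrete, here is a minimal NumPy sketch (not taken from the paper) of one possible parameterization: linear movement models f(u, θ_k) = Θ_k u and, for each previous state j, a softmax gating network producing column j of the input-dependent transition matrix A(u). All function and variable names are illustrative assumptions.

import numpy as np

def experts_predict(u, thetas):
    # thetas: list of (n_y, n_u) matrices; linear movement models f(u, theta_k) = Theta_k @ u
    return np.stack([Th @ u for Th in thetas])          # shape (m, n_y)

def transition_matrix(u, W):
    # W[j]: (m, n_u) weights of the gating network attached to previous state j;
    # column j of A is softmax(W[j] @ u), so each column sums to one.
    m = len(W)
    A = np.zeros((m, m))
    for j in range(m):
        z = W[j] @ u
        z -= z.max()                                     # for numerical stability
        A[:, j] = np.exp(z) / np.exp(z).sum()
    return A

def mme_predict(u, q_belief, thetas, W):
    # q_belief: current belief over states; one plausible way to blend expert outputs
    A = transition_matrix(u, W)
    q_next = A @ q_belief
    y_hat = q_next @ experts_predict(u, thetas)
    return q_next, y_hat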
3 AN EM ALGORITHM FOR MME
To estimate the values of the unknown parameters σ², W_k, θ_k, k = 1, ..., m, given
the sequence of observations {(u(t), y(t))}_{t=0}^{T}, T > 0, the Expectation Maximization
(EM) algorithm will be used. The states {q(t)}_{t=0}^{T} play the role of the unobserved
variables. More about EM can be found in (Dempster et al., 1977) while aspects
specific to this algorithm are in (Meila and Jordan, 1994).
The E step computes the probability of each state and of every transition to occur
at t ∈ {0, ..., T} given the observations and an initial parameter set. This can be
done efficiently by the forward-backward algorithm (Rabiner and Juang, 1986).
γ_k(t) = Pr[q(t) = k | {(u(t), y(t))}_{t=0}^{T}, W, θ, σ²]
ξ_{ij}(t) = Pr[q(t) = j, q(t+1) = i | {(u(t), y(t))}_{t=0}^{T}, W, θ, σ²]        (3)
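A sketch of this E step, assuming the isotropic Gaussian output density stated above and reusing transition_matrix from the earlier sketch; the recursions are the standard scaled forward-backward equations, with the transition matrix recomputed from u(t) at every step. Names and shapes are assumptions.

def e_step(U, Y, thetas, W, sigma2, prior):
    # U: (T+1, n_u), Y: (T+1, n_y); returns gamma (T+1, m) and xi (T, m, m)
    T1, m = len(U), len(thetas)
    # emission terms p(y(t) | u(t), q(t)=k); the Gaussian normalizing constant cancels
    B = np.array([[np.exp(-np.sum((Y[t] - Th @ U[t])**2) / (2 * sigma2)) for Th in thetas]
                  for t in range(T1)])
    A = [transition_matrix(U[t], W) for t in range(T1)]   # A[t][i, j] = Pr(i at t+1 | j at t, u(t))
    alpha = np.zeros((T1, m)); beta = np.ones((T1, m))
    alpha[0] = prior * B[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T1):
        alpha[t] = B[t] * (A[t-1] @ alpha[t-1]); alpha[t] /= alpha[t].sum()
    for t in range(T1 - 2, -1, -1):
        beta[t] = A[t].T @ (B[t+1] * beta[t+1]); beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((T1 - 1, m, m))
    for t in range(T1 - 1):
        xi[t] = (B[t+1] * beta[t+1])[:, None] * A[t] * alpha[t][None, :]
        xi[t] /= xi[t].sum()
    return gamma, xi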
In the M step the new estimates of the parameters are found by maximizing the
average complete log-likelihood J, which in our case has the form
J(θ, σ², W) = \sum_{t=0}^{T-1} \sum_{i,j=1}^{m} ξ_{ij}(t) \ln a_{ij}(u(t), W) + \sum_{t=0}^{T} \sum_{k=1}^{m} γ_k(t) \ln p(y(t) | u(t), q(t) = k)        (4)
Since each parameter appears in only one term of J the maximization is equivalent
to:
θ_k^{new} = \arg\min_{θ_k} \sum_{t=0}^{T} γ_k(t) \, \| y(t) - f(u(t), θ_k) \|^2        (5)
W^{new} = \arg\max_{W} \sum_{t=0}^{T-1} \sum_{i,j} ξ_{ij}(t) \ln\bigl(a_{ij}(u(t), W)\bigr)        (6)

(σ²)^{new} = \frac{1}{n_y(T+1)} \sum_{t=0}^{T} \sum_{k=1}^{m} γ_k(t) \, \| y(t) - f(u(t), θ_k) \|^2        (7)
There is no general closed form solution to (5) and (6). Their difficulty depends on
the form of f and a_{ij}. The complexity of the m.m. is determined by the geometrical
shape of the objects' surfaces. For planar surfaces and no rotational degrees of
freedom f is linear in θ_k. Then, (5) becomes a weighted least squares problem
which can be solved in closed form.
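For that linear case, a minimal sketch of the weighted least-squares solution of (5) and the closed-form variance update (7); shapes and names are assumptions (U is (T+1, n_u), Y is (T+1, n_y), gamma is (T+1, m)).

def m_step_theta(U, Y, gamma, k):
    # minimize sum_t gamma[t, k] * ||y(t) - Theta_k u(t)||^2 via the normal equations
    w = gamma[:, k]
    Uw = U * w[:, None]
    G = U.T @ Uw                       # sum_t w_t u u^T
    C = Y.T @ Uw                       # sum_t w_t y u^T
    return C @ np.linalg.pinv(G)       # Theta_k, shape (n_y, n_u)

def m_step_sigma2(U, Y, gamma, thetas):
    # closed-form update (7) for the shared output variance
    resid = 0.0
    for k, Th in enumerate(thetas):
        E = Y - U @ Th.T
        resid += np.sum(gamma[:, k] * np.sum(E**2, axis=1))
    return resid / (Y.shape[1] * Y.shape[0])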
The functions in A depend both on the movement and of the noise models. Because
the noise is propagated through non-linearities to the output, an exact form as in
(1) may be hard to compute analytically. Moreover, a correct noise model for
each of the possible uncertainties is rarely available (Eberman, 1995). A common
practical approach is to trade accuracy for computability and to parametrize A in
a form which is easy to update but deprived of physical meaning. In all the cases
where maximization cannot be performed exactly, one can resort to Generalized
EM by merely increasing J. In particular, gradient ascent in parameter space is
a technique which can replace maximization. This modification will not affect the
overall convergence of the EM iteration but can significantly reduce its speed.
Because EM only finds local maxima of the likelihood, the initialization is important.
If f(u, θ_k) correspond to physical movement models, good initial estimates for their
parameters can be available . The same applies to those components of W which
bear physical significance. A complementary approach is to reduce the number of
parameters by explicitly setting the probabilities of impossible transitions to O.
4 SIMULATION RESULTS
Simulations have been run on the C-space shown in fig . 1. The inputs were the
4-dimensional vectors of position (x, y) and nominal velocity (Vx , Vy); the output
was the predicted position. The coordinate range was [0, 10] and the admissible
velocities were confined to the upper right quadrant (V_max ≥ V_x, V_y ≥ V_min > 0).
The restriction in direction implied that the trajectories remain in the coordinate
domain; it also appeared in the topology of the reachability graph, which has no
transition to the free space from another state.
This model was implemented by a MME. The m.m. are linear in the parameters,
corresponding to the piecewise linearity of the true model. To implement the transition matrix A we used a bank of gating networks, one for each s.o.c., consisting
of 2-layer perceptrons with softmax¹ output. There are 230 free parameters in the
gating networks and 64 in the m.m.
The training set included N = 5000 data points, in sequences of length T ~ 6, all
starting in free space. The starting position of the sequence and the nominal velocities at each step were picked randomly. We found that a more uniform distribution
of the data points over the states of contact is necessary for successful learning.
Since this is not expected to happen in applications (where, e.g., sticking occurs
less often than sliding) , the obtained models were tested also on a distribution that
¹ The softmax function is given by: softmax_i(x) = exp(W_i^T x) / \sum_j exp(W_j^T x), i = 1, ..., m, with W_j, x vectors of the same dimension.
Table 1: Performance of MME versus ME

(a) Model prediction standard error (MSE)^{1/2}

                          Uniform V distribution           Training distribution
Test set noise level:     0     .1    .2    .3    .4       0     .1    .2    .3    .4
MME, σ = .2             .023   .11  .219  .327  .437     .024  .113  .222  .332  .443
MME, σ = 0              .010  .109  .218  .327  .435     .003  .114  .228  .343  .456
ME,  σ = .2             .044  .129  .247  .367  .488     .052  .133  .25   .37   .493
ME,  σ = 0              .034  .126  .245  .366  .488     .047  .131  .25   .37   .49

(b) State misclassification error [%]

                          Training distribution            Uniform V distribution
Test set noise level:     0     .1    .2    .3    .4       0     .1    .2    .3    .4
MME, σ = .2             5.15   5.2   5.5   5.9   6.4     3.45   3.5   3.8   4.2   4.6
MME, σ = 0               .78  1.40  2.35  3.25  4.13      .89  1.19  1.70  2.30  2.88
ME,  σ = .2             6.46  6.60  7.18  7.73  8.13     3.85  3.90  4.38  4.99  5.65
ME,  σ = 0              6.25  6.45  6.98  7.61  8.15     3.84  3.98  4.53  5.05  5.70
was uniform over velocities (and consequently, highly non-uniform over states of
contact). Gaussian noise with σ = 0.2 or 0 was added to the (x, y) training data.
In the M step, the parameters of the gating networks were updated by gradient
ascent. For the m.m., least squares estimation was used. To ensure that models and
gates are correctly coupled, initial values for θ are chosen around the true values.
As discussed in the previous section, this is not an unrealistic assumption . W was
initialized with small random values. Each simulation was run until convergence.
We used two criteria to measure the performance of the learning algorithm: square
root of prediction MSE and hidden state misclassification. The results are summarized in Table 1. The test set size is 50,000 in all cases. Input noise is Gaussian with
levels between 0 and 0.4. Comparisons were made with a ME model with the same
number of states.
The simulations show that the MME architecture is tolerant to input noise, although
it is not taking it into account explicitly. The MME consistently outperforms the
ME model in both prediction and state estimation accuracy.
5 DISCUSSION
An algorithm to estimate the parameters of composite movement models in the
presence of noisy measurements has been presented. The algorithm exploits the
physical decomposability of the problem and the temporal relationship between the
data points to produce estimates of both the model's parameters and the s.o.c. It
requires only imprecise initial knowledge about the geometry and physical properties
of the system.
Prediction via MME The trained model can be used either as an estimator for
the state of contact or as a forward model in predicting the next position. For
the former goal the forward part of the forward-backward algorithm can be used
to implement a recursive estimator or the methods in (Eberman, 1995) can be
used. The obtained γ_k(t), combined with the outputs of the movement models, will
produce a predicted output ŷ. An improved posterior estimate of y can be obtained
by combining ŷ with the current measurement.
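One plausible way to run the trained model online is sketched below, under the same Gaussian assumption and reusing the helper functions from the earlier sketches; the paper does not prescribe this exact recursion, and the final measurement-combination step is omitted here.

def recursive_estimate(belief, u, y_next, thetas, W, sigma2):
    # Propagate the belief over states of contact through the input-dependent chain,
    # then reweight it by how well each movement model explains the next measurement.
    A = transition_matrix(u, W)
    pred_belief = A @ belief
    lik = np.array([np.exp(-np.sum((y_next - Th @ u)**2) / (2 * sigma2)) for Th in thetas])
    post = pred_belief * lik
    post /= post.sum()
    y_hat = post @ np.stack([Th @ u for Th in thetas])   # posterior-blended prediction
    return post, y_hat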
Scaling issues. Simulations have shown that relatively large datasets are required
for training even for a small number of states. But, since the states represent
physical entities, the model will inherit the geometrical locality properties thereof.
Thus, the number of possible transitions from a state will be bounded by a small
constant when the number of states grows, keeping the data complexity linear in
m.
As a version of EM, our algorithm is batch. It follows that parameters are not
adapted on line. In particular, the discretization time T_s must be fixed prior to
training. But small changes in T_s can be accounted for by rescaling the velocities
V. For the other changes, inasmuch as they are local, relearning can be confined to
those components of the architecture which are affected.
References
Bengio, Y. and Frasconi, P . (1995). An input output HMM architecture. In G.
Tesauro, D. Touretzky, & T. Leen (Eds.), Neural Information Processing Sys.
tems 7, Cambridge, MA: MIT Press, pp. 427-435.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from
incomplete data via the EM algorithm. Journal of the Royal Statistical Society,
B, 39:1- 38.
Eberman, B. S. (1995). A sequential decision approach to sensing manipulation
contact features. PhD thesis, M.I.T., Dept. of Electrical Engineering.
Jacobs, R. A., Jordan, M. 1., Nowlan, S., & Hinton, G. E. (1991). Adaptive mixtures
of local experts. Neural Computation, 3, 1-12.
Lozano-Perez, T. (1983). Spatial planning: a configuration space approach. IEEE
Transactions on Computers.
Mason, M. T. (1981). Compliance and force control for computer controlled manipulation. IEEE Trans. on Systems, Man and Cybernetics.
Meila, M. and Jordan, M. 1. (1994). Learning the parameters of HMMs with auxilliary input. Technical Report 9401, MIT Computational Cognitive Science,
Cambridge, MA.
Rabiner, R. L. and Juang, B. H. (1986). An introduction to hidden Markov models.
ASSP Magazine, 3(1):4-16.
| 1063 |@word version:1 simulation:5 eng:1 jacob:2 moment:1 initial:4 configuration:5 contains:1 outperforms:1 current:3 discretization:1 nowlan:1 yet:1 must:1 additive:1 happen:1 partition:2 shape:2 meilii:1 update:1 selected:1 vmin:1 sys:1 node:1 tems:1 direct:1 dan:1 expected:1 planning:1 ol:1 brain:1 little:1 actual:1 increasing:1 becomes:2 estimating:1 linearity:2 moreover:1 bounded:1 evolved:1 argmin:1 unobserved:1 temporal:1 every:3 exactly:3 jamming:1 control:3 positive:2 engineering:1 local:3 consequence:1 despite:1 feu:1 initialization:1 specifying:1 hmms:1 range:2 directed:1 unique:1 practical:1 recursive:1 implement:2 procedure:1 aji:1 significantly:1 composite:1 imprecise:1 quadrant:1 cannot:1 vlx:2 impossible:1 restriction:1 equivalent:2 map:1 imposed:1 deterministic:1 maximizing:1 starting:2 estimator:2 coordinate:2 updated:1 nominal:5 play:2 ilq:1 exact:1 magazine:1 auxilliary:1 associate:1 velocity:13 particularly:1 observed:2 role:2 solved:1 electrical:1 wj:1 connected:1 grasping:1 movement:11 decrease:1 trade:1 yk:1 environment:3 dempster:2 complexity:2 ideally:1 dynamic:2 neglected:1 trained:2 depend:1 solving:1 serve:1 mme:11 easily:1 represented:2 surrounding:1 elec:1 effective:1 solve:1 ability:1 noisy:4 laird:1 obviously:1 sequence:3 net:1 neighboring:1 relevant:1 combining:1 achieve:1 sticking:1 juang:2 convergence:2 produce:3 object:8 derive:1 measured:1 ij:2 wtx:1 eq:1 implemented:1 predicted:2 reachability:5 qd:1 direction:1 correct:2 vx:2 hold:1 around:1 considered:1 predict:2 jexp:1 purpose:1 estimation:4 trammg:2 create:1 weighted:1 mit:4 gaussian:4 emission:1 ily:1 consistently:1 likelihood:4 inst:2 dependent:1 inaccurate:1 hidden:6 iu:1 overall:1 issue:1 spatial:1 softmax:3 equal:1 having:2 frasconi:2 sampling:1 represents:1 future:2 report:1 piecewise:1 randomly:1 neighbour:1 argmax:1 consisting:1 geometry:1 freedom:2 highly:1 introduces:1 mixture:9 light:1 perez:2 chain:1 necessary:1 damping:1 indexed:1 incomplete:1 initialized:1 column:1 maximization:4 vertex:1 decomposability:1 uniform:4 successful:1 rex:1 dependency:1 combined:1 density:1 compliant:2 michael:1 connectivity:1 thesis:1 cognitive:2 corner:1 expert:9 resort:1 rescaling:1 account:1 summarized:1 wk:1 coefficient:1 explicitly:3 depends:2 performed:2 root:1 picked:1 closed:3 liy:1 maintains:1 square:3 accuracy:2 variance:1 efficiently:1 ensemble:3 t3:1 rabiner:2 correspond:1 lu:1 trajectory:3 cybernetics:1 reach:1 touretzky:1 ed:1 pp:1 involved:1 thereof:1 associated:1 static:1 propagated:1 knowledge:4 mella:3 dimensionality:1 appears:1 ok:7 planar:1 improved:1 subgoal:1 done:1 leen:1 stage:4 until:1 grows:1 manipulator:2 name:1 true:3 lozano:2 evolution:1 analytically:1 former:1 symmetric:1 deal:1 criterion:1 generalized:1 complete:1 motion:5 geometrical:3 meaning:1 common:1 physical:8 discussed:1 he:1 measurement:12 cambridge:4 ai:1 meila:2 robot:3 surface:6 posterior:2 tesauro:1 manipulation:4 certain:1 seen:1 sliding:1 multiple:1 rj:1 reduces:1 positioning:1 technical:1 marina:1 controlled:3 qi:1 prediction:5 variant:1 expectation:1 iteration:1 represent:3 sometimes:1 robotics:1 achieved:1 confined:2 fine:5 ascent:3 compliance:2 fertile:1 jordan:7 call:1 integer:1 presence:1 bengio:2 easy:1 affect:1 architecture:6 perfectly:1 topology:1 reduce:2 idea:1 qj:1 ignored:2 useful:1 generally:1 vy:2 estimated:2 correctly:1 discrete:2 affected:1 backward:2 computability:1 graph:10 merely:1 run:2 uncertainty:4 throughout:1 decision:1 scaling:1 layer:1 adapted:1 occur:1 constraint:2 aspect:2 speed:1 
friction:2 performing:1 relatively:2 remain:1 em:9 psyche:1 modification:1 deprived:1 restricted:1 pr:3 discus:1 end:1 generalizes:1 decomposing:1 available:2 parametrize:1 appropriate:1 massachussetts:2 inasmuch:1 batch:1 gate:1 subdomains:2 ensure:1 assembly:2 exploit:1 society:1 contact:15 implied:1 move:1 added:1 quantity:1 occurs:1 dependence:2 gradient:3 sci:1 simulated:1 hmm:4 parametrized:2 entity:1 me:8 assuming:2 length:1 modeled:1 relationship:2 rotational:1 statement:1 implementation:1 unknown:1 upper:1 observation:3 markov:10 datasets:1 arc:2 t:2 defining:1 extended:1 hinton:1 assp:1 subtasks:1 introduced:1 required:1 connection:1 trans:1 appeared:1 royal:1 unrealistic:1 misclassification:1 difficulty:2 force:2 attach:1 predicting:1 arm:2 representing:2 technology:2 coupled:1 prior:3 geometric:1 checking:1 relative:2 bear:1 proven:1 versus:1 degree:2 rubin:1 bank:1 accounted:1 free:4 keeping:1 aij:6 allow:1 face:1 taking:1 dimension:2 transition:12 computes:1 forward:4 made:3 adaptive:1 vmax:1 transaction:1 selector:1 dealing:1 tolerant:1 continuous:3 table:2 additionally:1 learn:3 mse:2 domain:3 inherit:1 significance:1 main:2 noise:12 fair:1 complementary:1 fig:2 depicts:1 ny:2 position:11 mmp:1 lq:1 admissible:1 specific:1 gating:3 sensing:1 mason:2 ih:1 sequential:1 phd:1 relearning:1 locality:1 depicted:1 ilx:1 eij:1 expressed:1 applies:1 ma:4 conditional:1 viewed:2 goal:4 consequently:1 replace:1 absence:1 man:1 hard:1 change:2 included:1 determined:1 called:2 ew:1 rarely:1 perceptrons:1 dept:3 tested:1 |
73 | 1,064 | Estimating the Bayes Risk from Sample Data
Robert R. Snapp? and Tong Xu
Computer Science and Electrical Engineering Department
University of Vermont
Burlington, VT 05405
Abstract
A new nearest-neighbor method is described for estimating the Bayes risk
of a multiclass pattern classification problem from sample data (e.g., a
classified training set). Although it is assumed that the classification problem can be accurately described by sufficiently smooth class-conditional
distributions, neither these distributions, nor the corresponding prior probabilities of the classes are required. Thus this method can be applied to
practical problems where the underlying probabilities are not known. This
method is illustrated using two different pattern recognition problems.
1 INTRODUCTION
An important application of artificial neural networks is to obtain accurate solutions to
pattern classification problems. In this setting, each pattern, represented as an n-dimensional
feature vector, is associated with a discrete pattern class, or state of nature (Duda and Hart,
1973). Using available information (e.g., a statistically representative set of labeled feature
vectors {(x_i, ℓ_i)}, where x_i ∈ R^n denotes a feature vector and ℓ_i ∈ L = {ω_1, ω_2, ..., ω_C},
its correct pattern class), one desires a function (e.g., a neural network classifier) that assigns
new feature vectors to pattern classes with the smallest possible misclassification cost.
If the classification problem is stationary, such that the patterns from each class are generated
according to known probability distributions, then it is possible to construct an optimal
classifier that assigns each pattern to a class with minimal expected risk. Although our
method can be generalized to problems in which different types of classification errors
incur different costs, we shall simplify our discussion by assuming that all errors are equal.
In this case, a Bayes classifier assigns each feature vector to a class with maximum posterior
probability. The expected risk of this classifier, or Bayes risk, then reduces to the probability
of error
R_B = \int_S \bigl[ 1 - \sup_{ℓ \in L} P(ℓ | x) \bigr] f(x) \, dx,        (1)

* E-mail: snapp@emba.uvm.edu
(Duda and Hart, 1973). Here, P(ℓ | x) denotes the posterior probability of class ℓ conditioned
on observing the feature vector x, f(x) denotes the unconditional mixture density of the
feature vector x, and S ⊂ R^n denotes the probability-one support of f.
Knowing how to estimate the value of the Bayes risk of a given classification problem with
a specific input representation, may facilitate the design of more accurate classifiers. For
example, since the value of RB depends upon the set of features chosen to represent each
pattern (e.g., the significance of the input units in a neural network classifier), one might
compare estimates of the Bayes risk for a number of different feature sets, and then select
the representation that yields the smallest value. Unfortunately, it is necessary to know the
explicit probability distributions to evaluate (1). Thus with the possible exception of trivial
examples, the Bayes risk cannot be determined exactly for practical classification problems.
Lacking the means to evaluate the Bayes risk exactly, motivates the development of statistical
estimators of RB. In this paper, we use a recent asymptotic analysis of the finite-sample
risk of the k-nearest-neighbor classifier to obtain a new procedure for estimating the Bayes
risk from sample data. Section 2 describes the k-nearest-neighbor algorithm, and briefly
describes how estimates of its finite-sample risk have been used to estimate RB. Section 3
describes how a recent asymptotic analysis of the finite-sample risk can be applied to obtain
new statistical estimators of the Bayes risk. In Section 4 the k-nearest-neighbor algorithm
is used to estimate the Bayes risk of two example problems. Section 5 contains some
concluding remarks.
2 THE k-NEAREST-NEIGHBOR CLASSIFIER
Due to its analytic tractability, and its nearly optimal performance in the large sample limit,
the k-nearest-neighbor classifier has served as a useful framework for estimating the Bayes
risk from classified samples. Recall that the k-nearest-neighbor algorithm (Fix and Hodges,
1951) classifies an n-dimensional feature vector x by consulting a reference sample of m
correctly classified feature vectors X_m = {(x_i, ℓ_i) : i = 1, ..., m}. First, the algorithm
identifies the k nearest neighbors of x, i.e., the k feature vectors within X_m that lie closest
to x with respect to a given metric. Then, the classifier assigns x to the most frequent class
label represented by the k nearest neighbors. (A variety of procedures can be used to resolve
ties.) In the following, C denotes the number of pattern classes.
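A minimal sketch of the k-nearest-neighbor rule just described, assuming a Euclidean metric and a simple deterministic tie-breaking rule; names are illustrative.

import numpy as np
from collections import Counter

def knn_classify(x, X_ref, labels, k):
    # X_ref: (m, n) reference feature vectors; labels: their pattern classes
    d = np.sum((X_ref - x)**2, axis=1)           # squared Euclidean distances
    nearest = np.argsort(d)[:k]                  # indices of the k nearest neighbors
    votes = Counter(labels[i] for i in nearest)
    top = max(votes.values())
    return min(c for c, v in votes.items() if v == top)   # break ties by smallest label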
The finite-sample risk of this algorithm, R_m, equals the probability that the k-nearest-neighbor classifier assigns x to an incorrect class, averaged over all input vectors x, and
all m-samples, X_m. The following properties have been shown to be true under weak
assumptions:
Property 1 (Cover and Hart, 1967): For fixed k,
R_m → R_∞(k)  as m → ∞,
with
R_B ≤ R_∞(1) ≤ R_B \Bigl( 2 - \frac{C}{C-1} R_B \Bigr).        (2)
Property 2 (Devroye, 1981): If k ≥ 5, and C = 2, then there exist universal constants
a = 0.3399... and β = 0.9749... such that R_∞(k) is bounded by
R_B ≤ R_∞(k) ≤ (1 + a_k) R_B,        (3)
where
a_k = \frac{a\sqrt{β}}{k - 3.25} (1 + \cdots).
More generally, if C = 2, then
R_B ≤ R_∞(k) ≤ \bigl( 1 + \sqrt{2/k} \bigr) R_B.
By the latter property, this algorithm is said to be Bayes consistent in that for any ε > 0, it
is possible to construct a k-nearest-neighbor classifier such that |R_m − R_B| < ε if m and
k are sufficiently large. Bayes consistency is also evident in other nonparametric pattern
classifiers.
Several methods for estimating RB from sample data have previously been proposed, e.g.,
(Devijver, 1985), (Fukunaga, 1985), (Fukunaga and Hummels, 1987), (Garnett and Yau,
1977), and (Loizou and Maybank, 1987). Typically, these methods involve constructing
sequences of k-nearest neighbor classifiers, with increasing values of k and m. The misclassification rates are estimated using an independent test sample, from which upper and
lower bounds to RB are obtained. Because these experiments are necessarily performed
with finite reference samples, these bounds are often imprecise. This is especially true for
problems in which Rm converges to Roo(k) at a slow rate. In order to remedy this deficiency,
it is necessary to understand the manner in which the limit in Property 1 is achieved. In the
next section we describe how this information can be used to construct new estimators for
the Bayes risk of sufficiently smooth classification problems.
3 NEW ESTIMATORS OF THE BAYES RISK
For a subset of multiclass classification problems that can be described by probability
densities with uniformly bounded partial derivatives up through order N + 1 (with N ≥ 2),
the finite-sample risk of a k-nearest-neighbor classifier that uses a weighted L_p metric can
be represented by the truncated asymptotic expansion
R_m = R_∞(k) + \sum_{j=2}^{N} c_j m^{-j/n} + O\bigl( m^{-(N+1)/n} \bigr),        (4)
(Psaltis, Snapp, and Venkatesh, 1994), and (Snapp and Venkatesh, 1995). In the above,
n equals the dimensionality of the feature vectors, and R_∞(k), c_2, ..., c_N are the expansion coefficients that depend upon the probability distributions that define the pattern
classification problem.
This asymptotic expansion provides a parametric description of how the finite-sample risk
Rm converges to its infinite sample limit Roo(k). Using a large sample of classified data,
one can obtain statistical estimates of the finite-sample risk flm for different values of
m. Specifically, let {md denote a sequence of M different sample sizes, and select fixed
values for k and N. For each value of mi, construct an ensemble of k-nearest-neighbor
classifiers, i.e., for each classifier construct a random reference sample X mi by selecting
mi patterns with replacement from the original large sample. Estimate the empirical risk
of each classifier in the ensemble with an independently drawn set of "test" vectors. Let
flmi denote the average empirical risk of the i-th ensemble. Then, using the resulting set
of data points {(mi, RmJ}, find the values of the coefficients Roo(k), and C2 through CN,
that minimizes the sum of the squares:
S = \sum_{i=1}^{M} \Bigl( R̂_{m_i} - R_∞(k) - \sum_{j=2}^{N} c_j m_i^{-j/n} \Bigr)^2.        (5)
Several inequalities can then be used to obtain approximations of R_B from the estimated value
of R_∞(k). For example, if k = 1, then Cover and Hart's inequality in Property 1 implies
that
R_∞(1)/2 ≤ R_B ≤ R_∞(1).
To enable an estimate of R_B with precision ε, choose k > 2/ε², and estimate R_∞(k) by the
above method. Then Devroye's inequality (3) implies
R_∞(k) − ε ≤ R_∞(k)(1 − ε) ≤ R_B ≤ R_∞(k).
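A sketch of this estimation procedure, under assumed variable names: fit the truncated expansion (4) to the empirical risks by linear least squares, then bracket R_B using the inequalities above (ε is the precision chosen via k > 2/ε²).

import numpy as np

def fit_risk_expansion(m_sizes, risks, n, N):
    # least-squares fit of R_m ≈ R_inf + sum_{j=2}^{N} c_j m^{-j/n}  (eq. 5)
    m = np.asarray(m_sizes, dtype=float)
    X = np.column_stack([np.ones_like(m)] + [m**(-j / n) for j in range(2, N + 1)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(risks, dtype=float), rcond=None)
    return coef[0], coef[1:]           # estimated R_inf(k) and (c_2, ..., c_N)

def bayes_risk_bracket(R_inf, eps=None):
    # k = 1: Cover-Hart bracket; otherwise use the precision eps chosen via k > 2/eps**2
    if eps is None:
        return R_inf / 2.0, R_inf
    return R_inf * (1.0 - eps), R_inf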
4 EXPERIMENTAL RESULTS
The above procedure for estimating RB was applied to two pattern recognition problems.
First consider the synthetic, two-class problem with prior probabilities P_1 = P_2 = 1/2, and
normally distributed, class-conditional densities

f_ℓ(x) = \frac{1}{(2\pi)^{n/2}} \exp\Bigl( -\tfrac{1}{2} \bigl[ (x_1 + (-1)^{ℓ})^2 + \sum_{κ=2}^{n} x_κ^2 \bigr] \Bigr),

for ℓ = 1 and 2. Pseudorandom labeled feature vectors (x, ℓ) were numerically generated in
accordance with the above for dimensions n = 1 and n = 5. Twelve sample sizes between
10 and 3000 were examined. For each dimension and sample size the risks Rm of many
independent k-nearest-neighbor classifiers with k = 1, 7, and 63 were empirically estimated.
(Because the asymptotic expansion (4) does not accurately describe the very small sample
behavior of the k-nearest-neighbor classifier, sample sizes smaller than 2k were not included
in the fit.)
Estimates of the coefficients in (5) for six different fits appear in the first equation of each cell
in the third and fourth columns of Table 1. For reference, the second column contains the
values of R_∞(k) that were obtained by numerically evaluating an exact integral expression
(Cover and Hart, 1967). Estimates of the Bayes risk appear in the second equation of each
cell in the third and fourth columns. Cover and Hart's inequality (2) was used for the
experiments that assumed k = 1, and Devroye's inequality (3) was used if k ≥ 7. For this
problem, formula (1) evaluates to R_B = (1/2) erfc(1/√2) = 0.15865.
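This value can be checked directly, e.g.:

from math import erfc, sqrt
print(0.5 * erfc(1 / sqrt(2)))   # 0.15865525393145707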
Table 1: Estimates of the model coefficients and Bayes error for a classification problem
with two normal classes.
k    R_∞(k)     n = 1 (N = 2)                       n = 5 (N = 6)
1    0.2248     R_m = 0.2287 + 0.6536/m^2           R_m = 0.2287 + 0.0222/m^{2/5} + 0.1121/m^{4/5} + 0.2001/m^{6/5}
                R_B = 0.172 ± 0.057                 R_B = 0.172 ± 0.057
7    0.1746     R_m = 0.1744 + 4.842/m^2            R_m = 0.1700 + 0.2218/m^{2/5} + 1.005/m^{4/5} + 3.782/m^{6/5}
                R_B = 0.152 ± 0.023                 R_B = 0.148 ± 0.022
63   0.1606     R_m = 0.1606 + 20.23/m^2            R_m = 0.1595 + 0.1002/m^{2/5} + 1.426/m^{4/5} + 10.96/m^{6/5}
                R_B = 0.157 ± 0.004                 R_B = 0.156 ± 0.004
The second pattern recognition problem uses natural data; thus the underlying probability
distributions are not known. A pool of 222 classified multispectral pixels was extracted
from a seven-band satellite image. Each pixel was represented by five spectral components,
x = (x_1, ..., x_5), each in the range 0 ≤ x_κ ≤ 255. (Thus, n = 5.) The class label of
each pixel was determined by one of the remaining spectral components, 0 ≤ y ≤ 255.
Two pattern classes were then defined: ω_1 = {y < θ}, and ω_2 = {y ≥ θ}, where θ was a
predetermined threshold. (This particular problem was chosen to test the feasibility of this
method. In future work, we will examine more interesting pixel classification problems.)
Table 2: Coefficients that minimize the squared error fit for different N. Note that c_3 = 0
and c_5 = 0 in (4) if n ≥ 4 (Psaltis, Snapp, and Venkatesh, 1994).

N    R_∞(1)       c_2          c_4          c_6
2    0.0757133    0.126214     --           --
4    0.0757846    0.124007     0.0132804    --
6    0.0766477    0.0785847    0.689242     -2.68818
With k = 1, a large number of Bernoulli trials (e.g., 2~1000) were performed for each
value of m_i. Each trial began by constructing a reference sample of m_i classified pixels
chosen at random from the pool. The risk of each reference sample was then estimated by
classifying t pixels with the nearest-neighbor algorithm under a Euclidean metric. Here,
the t pixels, with 2000 ≤ t ≤ 20000, were chosen independently, with replacement, from
the pool. The risk R̂_{m_i} was then estimated as the average risk of each reference sample
of size m_i. (The number of experiments performed for each value of m_i, and the values
of t, were chosen to ensure that the variance of R̂_{m_i} was sufficiently small, less than 10^{-4}
in this case.) This process was repeated for M = 33 different values of m_i in the range
100 ≤ m_i ≤ 15000. Results of these experiments are displayed in Table 2 and Figure 1
for three different values of N. Note that the robustness of the fit begins to dissolve, for this
data, at N = 6, either the result of overfitting, or insufficient smoothness in the underlying
probability distributions. However, the estimate for R_∞(1) appears to be stable. For this
classification problem, we thus obtain R_B = 0.0568 ± 0.0190.
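A sketch of the resampling loop just described, reusing the knn_classify routine sketched in Section 2; all sizes and names are illustrative.

import numpy as np

def estimate_risk_curve(X_pool, labels, m_sizes, n_trials, t_test, k=1, rng=None):
    rng = np.random.default_rng(rng)
    curve = []
    for m in m_sizes:
        errs = []
        for _ in range(n_trials):
            ref = rng.choice(len(X_pool), size=m, replace=True)        # random reference sample
            test = rng.choice(len(X_pool), size=t_test, replace=True)  # independent test pixels
            wrong = sum(knn_classify(X_pool[i], X_pool[ref], labels[ref], k) != labels[i]
                        for i in test)
            errs.append(wrong / t_test)
        curve.append(np.mean(errs))     # average empirical risk for this sample size
    return np.array(curve)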
5 CONCLUSION
The described method for estimating the Bayes risk is based on a recent asymptotic analysis
of the finite-sample risk of the k-nearest-neighbor classifier (Snapp and Venkatesh, 1995).
Representing the finite-sample risk as a truncated asymptotic series enables an efficient
estimation of the infinite-sample risk Roo(k) from the classifier's finite-sample behavior.
The Bayes risk can then be estimated by the Bayes consistency of the k-nearest-neighbor
algorithm. Because such finite-sample analyses are difficult, and consequently rare, this
new method has the potential to evolve into a useful algorithm for estimating the Bayes risk.
Further improvements in efficiency may be obtained by incorporating principles of optimal
experimental deSign, cf., (Elfving, 1952) and (Federov, 1972).
It is important to emphasize, however, that the validity of (4) rests on several rather strong
smoothness assumptions, including a high-degree of differentiability of the class-conditional
probability densities. For problems that do not satisfy these conditions, other finite-sample
descriptions need to be constructed before this method can be applied. Nevertheless, there
is much evidence that nature favors smoothness. Thus, these restrictive assumptions may
still be applicable to many important problems.
Acknowledgments
The work reported here was supported in part by the National Science Foundation under
Grant No. NSF OSR-9350540 and by Rome Laboratory, Air Force Material Command,
USAF, under grant number F30602-94-1-OOlO.
Figure 1: The best fourth-order (N = 4) fit of Eqn. (5) to 33 empirical estimates of R̂_m
for a pixel classification problem obtained from a multispectral Landsat image. Using
R_∞(1) = 0.0758, the fourth-order fit, R_m = 0.0758 + 0.124 m^{-2/5} + 0.0133 m^{-4/5}, is plotted
on a log-log scale to reveal the significance of the j = 2 term.
References
T. M. Cover and P. E. Hart, "Nearest neighbor pattern classification," IEEE Trans. Inform.
Theory,vol.IT-13,1967,pp.21-27.
P. A. Devijver, "A multiclass, k - N N approach to Bayes risk estimation," Pattern Recognition Letters, vol. 3, 1985, pp. 1-6.
L. Devroye, "On the asymptotic probability of error in nonparametric discrimination," Annals Of Statistics, vol. 9, 1981, pp. 1320-1327.
R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. New York, New York:
John Wiley & Sons, 1973.
G. Elfving, "Optimum allocation in linear regression theory," Ann. Math. Statist., vol. 23,
1952,pp.255-262.
V. V. Federov, Theory Of Optimal Experiments, New York, New York: Academic Press,
1972.
E. Fix and J. L. Hodges, "Discriminatory Analysis: Nonparametric Discrimination: Consistency Properties," from Project 21-49-004, Report Number 4, USAF School of Aviation
Medicine, Randolph Field, Texas, 1951, pp. 261-279.
K. Fukunaga, "The estimation of the Bayes error by the k-nearest neighbor approach," in L.
N. Kanal and A. Rosenfeld (ed.), Progress in Pattern Recognition, vol. 2, Elesvier Science
Publishers B.V. (North Holland), 1985, pp. 169-187.
K. Fukunaga and D. Hummels, "Bayes error estimation using Parzen and k-NN procedures,"
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, 1987, pp. 634-643.
J. M. Garnett, III and S. S. Yau, "Nonparametric estimation of the Bayes error of feature
extractors using ordered nearest neighbor sets," IEEE Transactions on Computers, vol. 26,
1977,pp.46-54.
G. Loizou and S. J. Maybank, "The nearest neighbor and the Bayes error rate," IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 9, 1987, pp. 254-262.
D. Psaltis, R. R. Snapp, and S. S. Venkatesh, "On the finite sample performance of the
nearest neighbor classifier," IEEE Trans. Inform. Theory, vol. IT-40, 1994, pp. 820--837.
R. R. Snapp and S. S. Venkatesh, "k Nearest Neighbors in Search of a Metric," 1995,
(submitted).
| 1064 |@word trial:2 briefly:1 duda:3 series:1 contains:2 selecting:1 dx:1 john:1 predetermined:1 analytic:1 enables:1 discrimination:2 stationary:1 intelligence:2 provides:1 math:1 consulting:1 five:1 c2:2 constructed:1 incorrect:1 manner:1 expected:2 behavior:2 nor:1 examine:1 ol:1 resolve:1 increasing:1 begin:1 estimating:11 bounded:2 underlying:3 classifies:1 project:1 minimizes:1 tie:1 exactly:2 classifier:22 rm:10 unit:1 normally:1 grant:2 appear:2 before:1 engineering:1 accordance:1 limit:3 ak:2 might:1 nearestneighbor:1 examined:1 discriminatory:1 range:2 statistically:1 averaged:1 practical:2 acknowledgment:1 procedure:4 universal:1 empirical:3 imprecise:1 cannot:1 risk:39 cjm:2 independently:2 assigns:5 m2:1 estimator:4 annals:1 exact:1 us:2 recognition:5 vermont:1 labeled:2 electrical:1 loizou:2 depend:1 usaf:1 incur:1 upon:2 efficiency:1 represented:4 describe:2 artificial:1 roo:21 favor:1 statistic:1 rosenfeld:1 sequence:2 frequent:1 description:2 fel:1 optimum:1 satellite:1 converges:2 nearest:24 school:1 progress:1 strong:1 p2:1 c:1 implies:2 correct:1 enable:1 material:1 fix:3 sufficiently:4 normal:1 smallest:2 estimation:5 applicable:1 label:2 psaltis:3 wl:2 hummels:2 weighted:1 rather:1 command:1 improvement:1 bernoulli:1 landsat:1 nn:1 typically:1 pixel:8 classification:16 development:1 equal:3 construct:5 field:1 nearly:1 future:1 simplify:1 national:1 m4:1 replacement:2 mixture:1 unconditional:1 accurate:2 integral:1 partial:1 necessary:2 euclidean:1 irm:1 plotted:1 minimal:1 column:3 cover:5 rxi:1 cost:2 tractability:1 subset:1 rare:1 reported:1 synthetic:1 density:4 twelve:1 oolo:1 pool:3 parzen:1 squared:1 hodges:2 choose:1 yau:2 derivative:1 potential:1 north:1 coefficient:5 satisfy:1 depends:1 performed:3 observing:1 dissolve:1 bayes:30 ifc:1 multispectral:2 minimize:1 square:1 air:1 variance:1 ensemble:3 yield:1 weak:1 accurately:2 served:1 classified:6 submitted:1 inform:2 ed:1 evaluates:1 pp:10 associated:1 mi:10 recall:1 dimensionality:1 appears:1 eqn:1 reveal:1 facilitate:1 validity:1 true:2 remedy:1 laboratory:1 illustrated:1 x5:1 generalized:1 evident:1 image:2 fi:1 began:1 empirically:1 rl:1 numerically:2 maybank:2 smoothness:3 consistency:3 stable:1 j:1 posterior:2 closest:1 recent:3 inequality:5 vt:1 reduces:1 smooth:2 academic:1 hart:8 feasibility:1 regression:1 ifn:1 metric:4 represent:1 achieved:1 cell:2 rmj:1 publisher:1 w2:2 rest:1 iii:1 m6:1 hb:1 variety:1 rbi:1 fit:6 cn:3 knowing:1 multiclass:3 texas:1 six:1 expression:1 york:4 remark:1 useful:2 generally:1 involve:1 repon:1 nonparametric:4 band:1 statist:1 differentiability:1 exist:1 nsf:1 estimated:6 correctly:1 rb:23 discrete:1 shall:1 vol:9 threshold:1 nevertheless:1 drawn:1 neither:1 sum:1 letter:1 fourth:4 bound:2 deficiency:1 scene:1 osr:1 fukunaga:4 concluding:1 pseudorandom:1 department:1 according:1 elfving:2 describes:3 smaller:1 hmo:1 son:1 lp:1 equation:2 previously:1 know:1 b_:1 available:1 v2:1 spectral:2 robustness:1 original:1 denotes:5 remaining:1 ensure:1 cf:1 medicine:1 f30602:1 restrictive:1 especially:1 erfc:1 parametric:1 md:1 said:1 seven:1 mail:1 trivial:1 assuming:1 devroye:4 insufficient:1 difficult:1 unfortunately:1 robert:1 design:2 motivates:1 upper:1 fin:1 finite:14 displayed:1 truncated:2 rome:1 rn:2 venkatesh:6 required:1 c3:1 flm:1 trans:2 pattern:23 xm:2 oft:1 including:1 misclassification:2 natural:1 force:1 representing:1 identifies:1 hm:1 prior:2 evolve:1 asymptotic:8 lacking:1 interesting:1 allocation:1 foundation:1 degree:1 consistent:1 principle:1 
classifying:1 pi:1 supported:1 understand:1 neighbor:23 distributed:1 dimension:2 xn:1 evaluating:1 transaction:3 emphasize:1 overfitting:1 assumed:2 xi:3 search:1 table:4 nature:2 kanal:1 expansion:4 necessarily:1 constructing:2 garnett:2 significance:2 snapp:11 repeated:1 xu:4 representative:1 slow:1 tong:1 wiley:1 precision:1 explicit:1 xl:2 lie:1 third:2 burlington:1 extractor:1 formula:1 specific:1 evidence:1 incorporating:1 conditioned:1 demba:1 desire:1 ordered:1 holland:1 devijver:2 extracted:1 conditional:3 consequently:1 ann:1 included:1 determined:2 infinite:2 uniformly:1 specifically:1 aviation:1 experimental:2 exception:1 select:2 support:1 latter:1 evaluate:2 |
74 | 1,065 | A Unified Learning Scheme:
Bayesian-Kullback Ying-Yang Machine
Lei Xu
1. Computer Science Dept., The Chinese University of HK, Hong Kong
2. National Machine Perception Lab, Peking University, Beijing
Abstract
A Bayesian-Kullback learning scheme, called Ying-Yang Machine,
is proposed based on the two complement but equivalent Bayesian
representations for joint density and their Kullback divergence.
Not only the scheme unifies existing major supervised and unsupervised learnings, including the classical maximum likelihood or
least square learning, the maximum information preservation, the
EM & em algorithm and information geometry, the recent popular
Helmholtz machine, as well as other learning methods with new
variants and new results; but also the scheme provides a number
of new learning models.
1 INTRODUCTION
Many different learning models have been developed in the literature. We may
come to an age of searching a unified scheme for them. With a unified scheme,
we may understand deeply the existing models and their relationships, which may
cause cross-fertilization on them to obtain new results and variants; We may also be
guided to develop new learning models, after we get better understanding on which
cases we have already studied or missed, which deserve to be further explored.
Recently, a Bayesian-Kullback scheme, called the YING-YANG Machine, has been
proposed as such an effort (Xu, 1995a). It is based on the Kullback divergence and two
complementary but equivalent Bayesian representations for the joint distribution of the
input space and the representation space, instead of merely using Kullback divergence for matching un-structuralized joint densities in information geometry type
learnings (Amari, 1995a&b; Byrne, 1992; Csiszar, 1975). The two representations
consist of four different components. The different combinations of choices of each
component lead the YING-YANG Machine into different learning models. Thus,
it acts as a general learning scheme for unifying the existing major unsupervised
and supervised learnings. As shown in Xu(1995a), its one special case reduces to
the EM algorithm (Dempster et aI, 1977; Hathaway, 1986; Neal & Hinton , 1993)
and the closely related Information Geometry theory and the em algorithm (Amari,
1995a&b), to MDL autoencoder with a "bits-back" argument by Hinton & Zemel
(1994) and its alternative equivalent form that minimizes the bits of uncoded residual errors and the unused bits in the transmission channel's capacity (Xu, 1995d),
as well as to Multisets modeling learning (Xu, 1995e)- a unified learning framework
for clustering, PCA-type learnings and self-organizing map. It other special case
reduces to maximum information preservation (Linsker, 1989; Atick & Redlich,
1990; Bell & Sejnowski, 1995). More interestingly its another special case reduces
to Helmholtz machine (Dayan et al,1995 ; Hinton, 1995) with new understandings.
Moreover , the YING-YANG machine includes also maximum likelihood or least
square learning.
Furthermore, the YING- YANG Machine has also been extended to temporal patterns with a number of new models for signal modeling. Some of them are the
extensions of Helmholtz machine or maximum information preservation learning to
temporal processing. Some of them include and extend the Hidden Markov Model
(HMM), AMAR and AR models (Xu, 1995b). In addition, it has also been shown in
Xu(1995a&c, 1996a) that one special case of the YING-YANG machine can provide
us three variants for clustering or VQ, particularly with criteria and an automatic
procedure developed for solving how to select the number of clusters in clustering
analysis or Gaussian mixtures - a classical problem that remains open for decades .
In this paper, we present a deep and systematical further study. Section 2 redescribes the unified scheme on a more precise and systematical basis via discussing
the possible marital status of the two Bayesian representations for joint density.
Section 3 summarizes and explains those existing models under the unified scheme,
particularly we have clarified some confusion made in the previous papers (Xu,
1995a&b) on maximum information preservation learning. Section 4 proposed and
summarizes a number of possible new models suggested by the unified scheme.
2 BAYESIAN-KULLBACK YING-YANG MACHINE
As argued in Xu (1995a), unsupervised and supervised learning problems can be
summarized into the problem of estimating joint density P(x, y) of patterns in
the input space X and the representation space Y, as shown in Fig.I. Under the
Bayesian framework, we have two representations for P(x, y). One is P_{M_1}(x, y) =
P_{M_1}(y|x) P_{M_1}(x), implemented by a model M_1 called the YANG/(male) part since it
performs the task of transferring a pattern/(a real body) into a code/(a seed). The
other is P_{M_2}(x, y) = P_{M_2}(x|y) P_{M_2}(y), implemented by a model M_2 called the YING
part since it performs the task of generating a pattern/(a real body) from a code/(a
seed). They are complementary to each other and together implement an entire circle
x → y → x. This corresponds to the ancient Chinese YING-YANG philosophy.
Here we have four components P_{M_1}(x), P_{M_1}(y|x), P_{M_2}(x|y) and P_{M_2}(y). The
P_{M_1}(x) can be fixed at some density estimate on the input data; e.g., we have at least
two choices, the Parzen window estimate P_h(x) or the empirical estimate P_0(x):
P_h(x) = \frac{1}{N h^d} \sum_{i=1}^{N} K\Bigl( \frac{x - x_i}{h} \Bigr),    P_0(x) = \lim_{h \to 0} P_h(x) = \frac{1}{N} \sum_{i=1}^{N} δ(x - x_i).        (1)
For P_{M_1}(y|x), P_{M_2}(x|y), each can have three choices: (1) from a parametric family specified by model M_1 or M_2; (2) free of model with P_{M_1}(y|x) = P(y|x) or
P_{M_2}(x|y) = P(x|y); (3) broken channel P_{M_1}(y|x) = P_{M_1}(y) or P_{M_2}(x|y) = P_{M_2}(x).
Finally, P_{M_2}(y), with its y consistent with P_{M_1}(y|x), can also be from a parametric
family or free of model. Any combination of the choices of the four components
forms a potential YING-YANG pair. We have at least 2 x 3 x 3 x 2 = 36 pairs.
A YING-YANG pair has four types of marital status: (a) marry, i.e., YING and
Figure 1 The joint spaces X, Y and the YING-YANG Machine
YANG match each other; (b) divorce, i.e., YING and YANG go away from each
other; (c) YING chases YANG, YANG escapes; (d) YANG chases YING, but YING
escapes. The four types can be described by a combination of minimization (chasing) and maximization (escaping) on one of the two Kullback divergences below:
K(M_1, M_2) = \int_{x,y} P_{M_1}(y|x) P_{M_1}(x) \log \frac{P_{M_1}(y|x) P_{M_1}(x)}{P_{M_2}(x|y) P_{M_2}(y)} \, dx \, dy        (2a)

K(M_2, M_1) = \int_{x,y} P_{M_2}(x|y) P_{M_2}(y) \log \frac{P_{M_2}(x|y) P_{M_2}(y)}{P_{M_1}(y|x) P_{M_1}(x)} \, dx \, dy        (2b)
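For finite X and Y, the divergence (2a) reduces to a sum over the two joint tables; a minimal sketch under assumed array conventions is given below. K(M_2, M_1) of (2b) is the same computation with the two joints swapped.

import numpy as np

def kl_ying_yang(P1_y_given_x, P1_x, P2_x_given_y, P2_y, eps=1e-12):
    # P1_y_given_x: (|X|, |Y|), rows sum to 1;  P1_x: (|X|,);  P2_x_given_y: (|X|, |Y|);  P2_y: (|Y|,)
    J1 = P1_y_given_x * P1_x[:, None]           # Yang joint P_M1(x, y)
    J2 = P2_x_given_y * P2_y[None, :]           # Ying joint P_M2(x, y)
    return float(np.sum(J1 * (np.log(J1 + eps) - np.log(J2 + eps))))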
We can replace K(M_1, M_2) by K(M_2, M_1) in the table. The 2nd & 3rd columns are for
(c) (d) respectively; each has two cases depending on who starts the act, and the two
are usually not equivalent. Their results are undefined, depending on the initial condition for
M_1, M_2, except for two special cases: (i) Free P_{M_1}(y|x) and parametric P_{M_2}(x|y), with
min_{M_2} max_{M_1} K being the same as (b) with broken P_{M_1}(y|x), and with max_{M_2} min_{M_1} K
defined but useless. (ii) Free P_{M_2}(x|y) and parametric P_{M_1}(y|x), with min_{M_1} max_{M_2} K
the same as case (a) with broken P_{M_2}(x|y), and with max_{M_1} min_{M_2} K defined but useless.
Therefore, we will focus on the status marry and divorce. Even so, not all of the
above mentioned 2 x 3 x 3 x 2 = 36 YING-YANG pairs provide sensible learning
models although min_{M_1,M_2} K and max_{M_1,M_2} K are always well defined. Fortunately,
quite a number of them indeed lead us to useful learning models, as will be shown
in the subsequent sections.
We can implement min_{M_1,M_2} K(M_1, M_2) by the following Alternative Minimization
(ALTMIN) procedure:
Step 1: Fix M_2 = M_2^{old}, to get M_1^{new} = arg min_{M_1} K(M_1, M_2^{old}).
Step 2: Fix M_1 = M_1^{new}, to get M_2^{new} = arg min_{M_2} K(M_1^{new}, M_2).
The ALTMIN iteration will finally converge to a local minimum of K(M_1, M_2). We can
have a similar procedure for max_{M_1,M_2} K(M_1, M_2) via replacing Min by Max.
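A generic sketch of the ALTMIN loop, with the two exact minimizations abstracted as user-supplied callables; all names are assumptions.

def altmin(K, update_M1, update_M2, M1, M2, n_iters=100, tol=1e-8):
    # alternating minimization of K(M1, M2): each update_* returns the argmin over its argument
    prev = K(M1, M2)
    for _ in range(n_iters):
        M1 = update_M1(M2)          # Step 1: fix M2, minimize over M1
        M2 = update_M2(M1)          # Step 2: fix M1, minimize over M2
        cur = K(M1, M2)
        if prev - cur < tol:
            break
        prev = cur
    return M1, M2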
Since the above scheme bases on the two complement YING and YANG Bayesian
representations and their Kullback divergence for their marital status, we call it
Bayesian-Kullback YING- YANG learning scheme. Furthermore, under this scheme
we call each obtained YING-YANG pair that is sensible for learning purpose as a
Bayesian-Kullback YING- YANG Machine or YING- YANG machine shortly.
3 UNIFIED EXISTING LEARNINGS
Let P_{M_1}(x) = P_0(x) by eq. (1) and put it into eq. (2); through certain mathematics
we can get K(M_1, M_2) = h_{M_1} - ha_{M_1} - q_{M_1,2} + D, with D independent of M_1, M_2,
and h_{M_1}, ha_{M_1}, q_{M_1,2} given by Eqs. (E1), (E2) & (E4) in Tab. 2 respectively. The larger
is the h_{M_1}, the more discriminative or separable are the representations in Y for the
input data set. The larger is the ha_{M_1}, the more concentrated the representations
in Y. The larger is the q_{M_1,2}, the better P_{M_2}(x|y) fits the input data.
Therefore, min_{M_1,M_2} K(M_1, M_2) consists of (1) best fitting of P_{M_2}(x|y) on the input
data via max q_{M_1,2}, which is desirable, (2) producing more concentrated representations in Y to occupy less resource, which is also desirable and is the reason behind
solving the problem of selecting the cluster number in clustering analysis Xu(1995a&c,
1996a), (3) but with the cost of less discriminative representations in Y for the input
data. Inversely, max_{M_1,M_2} K(M_1, M_2) consists of (1) producing the most discriminative
or separable representation P_{M_1}(y|x) in Y for the input data set, which is desirable,
at the cost of (2) producing a more uniform representation in Y to fully occupy the
resource, and (3) causing P_{M_2}(x|y) to move away from fitting the input data.
Shown in Table 2 are the unified existing unsupervised learnings. For the case H-f-W, we have h_{M1} = h, ha_{M1} = ha, q_{M1,2} = q_{M2}, and min_{M1,M2} K(M1, M2) results in P_{M2}(y) = P_{M1}(y) = α_y and P_{M2}(x|y) P_{M2}(y) = P_{M2}(x) P_{M1}(y|x) with P_{M2}(x) = Σ_y P_{M2}(x|y) P_{M2}(y). In turn, we get K(M1, M2) = -L_{M2} + D with L_{M2} being the likelihood given by eq.(E5); i.e., we get maximum likelihood estimation on a mixture model. In fact, the ALTMIN given in Tab. 2 leads us to exactly the EM algorithm by Dempster et al. (1977). Also, here P_{M1}(x,y), P_{M2}(x,y) are equivalent to the data submanifold D and model submanifold M in the Information Geometry theory (Amari, 1995a&b), with the ALTMIN being the em algorithm. As shown in Xu (95a), the cases also include the MDL auto-encoder (Hinton & Zemel, 1994) and Multi-sets modeling (Xu, 1995e).
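Since the H-f-W case reduces ALTMIN to the EM algorithm on a mixture model, a minimal sketch of that special case (our own illustration, here a one-dimensional Gaussian mixture, not the paper's code) is:

    import numpy as np

    def em_gmm_1d(x, k, iters=50):
        """EM for a 1-D Gaussian mixture: E-step = posterior P(y|x_i); M-step = update alpha, mu, var."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        rng = np.random.default_rng(0)
        mu = rng.choice(x, k)
        var = np.full(k, x.var() + 1e-6)
        alpha = np.full(k, 1.0 / k)
        for _ in range(iters):
            dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            post = alpha * dens                       # proportional to P(y|x_i)
            post /= post.sum(axis=1, keepdims=True)
            nk = post.sum(axis=0) + 1e-12
            alpha = nk / n                            # mixing weights alpha_y
            mu = (post * x[:, None]).sum(axis=0) / nk
            var = (post * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        return alpha, mu, var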
For the case Single-M, the hMl - haMl is actually the information transmitted by
the YANG part from x to y. In this case, its minimization produces a non-sensible
model for learning. However, its maximization is exactly the Informax learning
scheme (Linsker, 1989; Atick & Redlich, 1990; Bell & Sejnowski, 1995). Here, we
clear up a confusion made in Xu(95a&b) where the minimization was mistakenly
considered.
For the case H-m-W, the h_{M1} - ha_{M1} - q_{M1,2} is just the -F(d; B, Q) used by Dayan et al. (1995) and Hinton et al. (1995) for the Helmholtz machine. We can set up the detailed correspondence that (i) here P_{M1}(y|x_i) is their Q_α; (ii) log P_{M2}(x, y) is their -E_α; and (iii) their P_α is P_{M2}(y|x) = P_{M2}(x|y) P_{M2}(y) / Σ_y P_{M2}(x|y) P_{M2}(y). So, we get a new perspective on the Helmholtz machine. Moreover, we know that K(M1, M2) becomes a negative likelihood only when P_{M2}(x|y) P_{M2}(y) = P_{M2}(x) P_{M1}(y|x), which is usually not true when the YANG and YING parts are both parametric. So the Helmholtz machine is not equivalent to maximum likelihood learning in general, with a gap depending on P_{M2}(x|y) P_{M2}(y) - P_{M2}(x) P_{M1}(y|x). The equivalence is approximately acceptable only when the family of P_{M2}(x|y) or/and P_{M1}(y|x) is large enough, or M2, M1 are both linear with Gaussian density.
In Tab. 4, the case Single-M under K(M2, M1) is the classical maximum likelihood (ML) learning for supervised learning, which includes the least-squares learning by back propagation (BP) for a feedforward net as a special case. Moreover, its counterpart for a backward net as an inverse mapping is the case Single-F under K(M1, M2).
4
NEW LEARNING MODELS
First, a number of variants of the above existing models are given in Table 2.
Second, a particular new model can be obtained from the case H-m-W by changing min_{M1,M2} into max_{M1,M2}. That is, we have max_{M1,M2} [h_{M1} - ha_{M1} - q_{M1,2}], shortly
Table 2: BKC-YY Machine for Unsupervised Learning (Part I): K(M1, M2).
Given data {x_i}_{i=1}^N, fix P_{M1}(x) = P_0(x) by eq.(1), and thus K(M1, M2) = K_b + D, with D irrelevant to M1, M2 and K_b given by the following formulae and table:
h = -(1/N) Σ_{i,y} P(y|x_i) log P(y|x_i),   h_{M1} = -(1/N) Σ_{i,y} P_{M1}(y|x_i) log P_{M1}(y|x_i)     (E1)
ha_{M1} = Σ_y α_y^{M1} log α_y^{M1},  α_y^{M1} = (1/N) Σ_i P_{M1}(y|x_i);   ha = Σ_y α_y log α_y,  α_y = (1/N) Σ_i P(y|x_i)     (E2)
P(y|x_i) = α_y P_{M2}(x_i|y) / Σ_y α_y P_{M2}(x_i|y)     (E3)
q_{M1,2} = (1/N) Σ_{i,y} P_{M1}(y|x_i) log P_{M2}(x_i|y),   q_{M2} = (1/N) Σ_{i,y} P(y|x_i) log P_{M2}(x_i|y)     (E4)
L'_{M2} = (1/N) Σ_{i,y} α_y log P_{M2}(x_i|y),   L_{M2} = (1/N) Σ_i log Σ_y α_y P_{M2}(x_i|y)     (E5)
[Table 2 body garbled in extraction. For each marital status — H-f-W, Single-M, Single-F, H-m-W, W-f-H — the table lists the condition on P_{M1}(y|x) and P_{M2}(x|y) (free, broken, or parametric), the resulting K_b (e.g. h - ha - q_{M2} = -L_{M2} for H-f-W, h_{M1} - ha_{M1} for Single-M, h_{M1} - ha_{M1} - q_{M1,2} for H-m-W), whether K_b is minimized or maximized, the ALTMIN steps, the related existing models (ML on mixtures and the EM algorithm (Dem77); the em algorithm of information geometry (Amari95); Informax / maximum mutual information (Lin89, Ati90, Bel95); the Helmholtz machine (Day95, Hin95); the MDL auto-encoder (Hin94); multi-sets modeling (Xu94, 95); PCA-related models), and new results. New results for the H-f-W type: three VQ variants when P_{M2}(x|y) is Gaussian, plus criteria for selecting the correct k for VQ or clustering (Xu95a&c). For the H-m-W type: robust PCA plus a criterion for determining the subspace dimension (Xu, 95c). Further variants: (1) a smoother P_{M1}(x) given by a Parzen window estimate; (2) factorial coding P_{M2}(y) = Π_j P_{M2}(y_j) with binary y = [y_1, ..., y_m]; (3) a factorial coding of P_{M1}(y|x) over binary [y_1, ..., y_m]; (4) replacing 'Σ_y' in all of the above by '∫ dy' for real-valued y.]
Note: H-Husband, W-Wife, f-follows, M-Male, F-Female, m-matches. X-f-Y stands for the X part being free; Single-X stands for the other part being broken; H-m-W stands for both parts being parametric. '(min)' stands for min K_b and '(max)' stands for max K_b.
Table 3: BKC-YY Machine for Unsupervised Learning (Part II): K(M2, M1).
Given data {x_i}_{i=1}^N, fix P_{M1}(x) = P_0(x) by eq.(1), and thus K(M2, M1) = K_b + D, with D irrelevant to M1, M2 and K_b given by the following formulae and table:
[Formulae (E6)-(E10) and the Table 3 body are garbled in extraction. The conditions are the same as in Table 1. For each marital status the table gives the K_b obtained for K(M2, M1) — entries of the form ha_{M2} - h_{M2} - L_{M1,2}, ha_{M1} - L_{M1}, and h_{M2} + ha_{M1} - q_{M2,1}, each marked (min) or (max) — together with ALTMIN steps that alternately fix one of M1, M2 and update the other (e.g. via max L_{M1,2}, max L_{M1}, or min of the corresponding K_b). None of the cases corresponds to an existing model; all are new, and the variants are similar to those in Table 2.]
Table 4: BKC-YY Machine for Supervised Learning.
Given data {x_i, y_i}_{i=1}^N, fix P_{M1}(x) = P_0(x) by eq.(1).
h'_{M1} = -(1/N) Σ_i P_{M1}(y_i|x_i) log P_{M1}(y_i|x_i),   h'_{M2} = -(1/N) Σ_i P_{M2}(x_i|y_i) log P_{M2}(x_i|y_i)     (E11)
q'_{M1,2} = -(1/N) Σ_i P_{M1}(y_i|x_i) log P_{M2}(x_i|y_i),   q'_{M2,1} = -(1/N) Σ_i P_{M2}(x_i|y_i) log P_{M1}(y_i|x_i)     (E12)
L'_{M1} = -(1/N) Σ_i log P_{M1}(y_i|x_i),   L'_{M2} = -(1/N) Σ_i log P_{M2}(x_i|y_i)     (E13)
[Table 4 body garbled in extraction. For K(M1, M2) = K_b + D: Single-M gives K_b = h'_{M1} (max), a minimum-entropy (ME) F-net, new; Single-F gives K_b = -L'_{M2} (min), i.e. ML learning of a B-net (BP on a backward net); H-m-W gives K_b = h'_{M1} - q'_{M1,2} (min), a mixed F-B net, new. For K(M2, M1) = K_b + D: Single-M gives K_b = -L'_{M1} (min), i.e. ML learning of an F-net (BP on a feedforward net); Single-F gives K_b = h'_{M2} (max), a minimum-entropy B-net, new; H-m-W gives K_b = h'_{M2} - q'_{M2,1} (min), a mixed B-F net, new.]
denoted by H-m-W-Max. This model is a dual to the Helmholtz machine in order to focus on getting the most discriminative or separable representations P_{M1}(y|x) in Y instead of the best fit of P_{M2}(x|y) to the input data.
Third, by replacing K(M1, M2) with K(M2, M1), in Table 3 we can obtain new models that are the counterparts of those given in Table 2. For the case H-f-W, its max_{M1,M2} gives a minimum entropy estimate of P_{M2}(x) instead of the maximum likelihood estimate of P_{M2}(x) in Table 2. For the case Single-M, it functions similarly to the case Single-F in Table 2, but with minimum entropy on P_{M1}(y|x) in Table 2 replaced by maximum likelihood on P_{M1}(y|x) here. For the case H-m-W, the focus shifts from getting the best fit of P_{M2}(x|y) to the input data to getting the most discriminative representations P_{M1}(y|x) in Y, which is similar to the just mentioned H-m-W-Max, but with minimum entropy on P_{M1}(y|x) replaced by maximum likelihood on P_{M1}(y|x). The other two cases in Table 3 have also been changed similarly from those in Table 2.
Fourth, several new models have also been proposed in Table 4 for supervised learning. Instead of maximum likelihood, the new models suggest learning by minimum entropy or a mix of maximum likelihood and minimum entropy.
Finally, further studies on the other statuses in Table 1 are needed. Heuristically, we can also treat the case H-m-W by two separated steps. We first get M1 by max[h_{M1} - ha_{M1}], and then get M2 by max q_{M1,2}; or we first get M2 by min[h - ha - q_{M2}] and then get M1 by min[h_{M1} - ha_{M1} - q_{M1,2}]. The two algorithms attempt to get both a good discriminative representation by P_{M1}(y|x) and a good fit of P_{M2}(x|y) to the input data. However, whether they work well needs to be tested experimentally.
We are currently conducting experiments comparing several of the above new models against their existing counterparts.
Acknowledgements: The work was supported by the HK RGC Earmarked Grant CUHK250/94E.
References
Amari , S(1995a) [Amari95] " Information geometry of the EM and em algorithms for neural networks",
Neural Networks 8, to appear.
Amari, S. (1995b), Neural Computation 7, pp13-18.
Atick, J.J. & Redlich, A.N. (1990) [Ati90], Neural Computation Vol.2, No.3, pp308-320.
Bell, A.J. & Sejnowski, T.J. (1995) [Bel95], Neural Computation Vol.7, No.6, 1129-1159.
Byrne, W. (1992), IEEE Trans. Neural Networks 3, pp612-620.
Csiszar, I. (1975), Annals of Probability 3, pp146-158.
Dayan, P., Hinton, G.E., & Neal, R.N. (1995) [Day95], Neural Computation Vol.7, No.5, 889-904.
Dempster, A.P., Laird, N.M., & Rubin, D.B. (1977) [Dem77], J. Royal Statist. Society, B39, 1-38.
Hathaway, R.J. (1986), Statistics & Probability Letters 4, pp53-56.
Hinton, G.E., et al. (1995) [Hin95], Science 268, pp1158-1160.
Hinton, G.E. & Zemel, R.S. (1994) [Hin94], Advances in NIPS 6, pp3-10.
Linsker, R. (1989) [Lin89], Advances in NIPS 1, pp186-194.
Neal, R.N. & Hinton, G.E. (1993), A new view of the EM algorithm that justifies incremental and other variants, preprint.
Xu, L. (1996), "How Many Clusters?: A YING-YANG Machine Based Theory For A Classical Open Problem In Pattern Recognition", to appear in Proc. IEEE ICNN96.
Xu , L. (1995a), "YING-YANG Machine: a Bayesian-Kullback scheme for unified learnings and new
results on vector quantization" , Keynote talk, Proc. Inti Conf. on Neural Information Processing
(ICONIP95), Oct 30 - Nov . 3, 1995 , pp977-988 .
Xu , L.(1995b), "YING-YANG Machine for Temporal Signals", Keynote talk, Proc IEEE inti Conf.
Neural Networks & Signal Processing, Vol.I, pp644-651, Nanjing, 10-13 , 1995.
Xu , L . (1995c) , "New Advances on The YING- YANG Machine", Invited paper, Proc. of 1995 IntI.
Symposium on Artificial Neural Networks, ppIS07-12 , Dec. 18-20 , Taiwan .
Xu , L . (1995d), "Cluster Number Selection, Adaptive EM Algorithms and Competitive Learnings",
Invited paper, Proc . Inti Conf. on Neural Information Processing (ICONIP95), Oct 30 - Nov. 3, 1995 ,
Vol. II , ppI499-1502.
Xu , L . (1995e), Invited paper, Proc. WCNN95, Vol.I, pp35-42. Also, Invited paper, Proc . IEEE ICNN
1994, ppI315-320 .
Xu , L. , & Jordan, M.I . (1993). Proc . of WCNN '93, Portland, OR, Vol. II, 431-434 .
| 1065 |@word kong:1 nd:1 rint:1 open:2 heuristically:1 initial:1 xiy:2 selecting:2 interestingly:1 existing:9 si:7 fertilization:1 update:2 item:1 provides:1 clarified:1 symposium:1 consists:2 fitting:5 indeed:1 multi:1 window:2 munder:1 becomes:1 estimating:1 moreover:3 p02:1 minimizes:1 developed:2 unified:13 temporal:3 act:2 exactly:2 qm:10 k2:3 ly:1 grant:1 appear:2 producing:3 el2:1 local:1 treat:1 encoding:1 approximately:1 studied:1 equivalence:1 implement:2 chasing:1 procedure:3 empirical:1 bell:3 matching:1 suggest:1 get:26 divorce:2 nanjing:1 selection:1 put:1 equivalent:6 map:1 go:1 m2:46 searching:1 annals:1 pa:1 helmholtz:7 recognition:1 particularly:2 keynote:2 wcnn:1 deeply:1 mentioned:2 dempster:3 broken:4 solving:2 basis:1 qml:1 po:7 joint:6 talk:2 separated:1 cod:1 sejnowski:3 artificial:1 zemel:3 sc:1 quite:1 larger:3 amari:5 encoder:2 statistic:1 amar:1 laird:1 chase:2 net:10 causing:1 organizing:1 marital:3 getting:3 probabil:1 cluster:4 transmission:1 produce:1 generating:1 incremental:1 depending:3 develop:1 eq:7 soc:1 implemented:2 come:1 xly:22 guided:1 closely:1 aml:1 correct:1 kb:5 explains:1 argued:1 fix:10 icnn:1 equiv:1 extension:1 mm:1 marriage:2 considered:1 seed:2 mapping:1 elo:1 major:2 jx:1 purpose:1 estimation:1 proc:8 currently:1 maxm:6 minimization:4 jxi:2 gaussian:3 always:1 e7:2 l0:3 focus:3 yo:1 portland:1 likelihood:12 hk:2 dayan:3 el:2 entire:1 transferring:1 hidden:1 arg:2 dual:1 denoted:1 special:6 ell:1 unsupervised:6 linsker:3 escape:2 national:1 divergence:5 replaced:2 geometry:6 n1:2 attempt:1 xjy:1 mdl:3 male:2 mixture:3 undefined:1 behind:1 csiszar:2 fu:1 pm2:38 hm2:4 ancient:1 circle:1 inm:1 column:1 modeling:4 ence:1 ar:1 logp:1 maximization:2 cost:2 uniform:2 submanifold:2 density:6 decoding:1 tip:1 together:1 parzen:2 iy:3 e9:1 lmi:4 conf:3 potential:1 ety:1 summarized:1 coding:2 includes:3 tion:1 view:1 lab:1 tab:4 start:1 competitive:1 square:3 who:1 conducting:1 pm1:1 bayesian:13 unifies:1 minm:4 informax:2 husband:1 against:1 ty:1 e2:4 mi:21 popular:1 limh:1 ut:1 actually:1 back:2 ea:1 supervised:6 furthermore:2 just:1 atick:3 lent:1 mistakenly:1 replacing:2 ei:4 propagation:1 lei:1 yrn:2 true:1 byrne:2 counterpart:3 neal:3 self:1 ulti:1 hong:1 criterion:3 confusion:2 performs:2 marry:2 recently:1 holtz:1 yil:1 extend:1 m1:5 ai:4 compliment:1 automatic:1 rd:1 pmi:2 pm:48 mathematics:1 similarly:2 base:2 recent:1 female:1 perspective:1 systematical:2 irrelevant:2 forcing:1 certain:1 binary:3 discussing:1 yi:3 transmitted:1 minimum:6 dxdy:2 fortunately:1 mr:1 minl:1 converge:1 signal:3 preservation:4 ii:5 desirable:3 mix:1 reduces:3 smooth:1 match:2 cross:1 jy:2 peking:1 variant:6 iteration:1 rgg:1 dec:1 addition:1 invited:4 jyi:1 jordan:1 call:2 mw:1 yang:33 unused:1 iii:1 enough:1 fit:1 escaping:1 shift:1 whether:1 pca:3 effort:1 e3:2 cause:1 deep:1 useful:1 clear:1 detailed:1 ylx:35 factorial:2 ph:2 concentrated:2 statist:1 occupy:2 computat:1 yy:3 vol:4 four:5 changing:1 backward:1 merely:1 beijing:1 wife:1 inverse:1 letter:1 fourth:1 family:3 missed:1 dy:1 summarizes:2 acceptable:1 bit:3 ki:1 correspondence:1 bp:1 argument:1 min:19 separable:3 combination:3 em:11 pml:16 s1:1 pr:1 inti:4 resource:2 vq:3 remains:1 turn:1 needed:1 know:1 lm2:5 away:2 altmin:6 apace:1 cated:1 alternative:2 shortly:2 lxi:1 clustering:5 include:1 unifying:1 bkc:3 chinese:2 classical:4 already:1 parametric:6 subspace:1 capacity:1 hmm:1 sensible:3 me:1 reason:1 taiwan:1 code:2 useless:2 relationship:1 ying:32 ngle:2 negative:1 hml:11 markov:1 hinton:9 
extended:1 precise:1 xily:2 sequent:1 complement:4 pair:5 oph:1 specified:1 kl:3 baysian:2 nip:2 qa:1 deserve:1 trans:1 suggested:1 below:1 perception:1 pattern:4 usually:2 including:1 max:20 royal:1 ation:2 residual:1 scheme:19 inversely:1 uncoded:1 multisets:1 hm:1 autoencoder:1 auto:1 literature:1 understanding:2 acknowledgement:1 determining:1 fully:1 mixed:2 age:1 consistent:1 rubin:1 changed:1 repeat:6 supported:1 free:7 understand:1 dimension:1 stand:5 made:2 adaptive:1 nov:2 kullback:15 status:8 ml:20 xi:4 discriminative:6 un:1 decade:1 table:19 channel:2 mj:1 robust:1 pyx:1 e5:2 qm2:6 s2:6 xu:24 body:2 fig:1 redlich:3 xl:1 third:1 ix:3 e4:2 formula:2 explored:1 consist:1 quantization:1 gap:1 entropy:7 hathaway:2 oct:2 replace:2 experimentally:1 except:1 vo1:3 called:4 e:1 la:1 ew:2 select:1 e6:1 philosophy:1 dept:1 tested:1 |
75 | 1,066 | On Neural Networks with Minimal
Weights
J ehoshua Bruck
Vasken Bohossian
California Institute of Technology
Mail Code 136-93
Pasadena, CA 91125
E-mail: {vincent, bruck}@paradise.caltech.edu
Abstract
Linear threshold elements are the basic building blocks of artificial
neural networks. A linear threshold element computes a function
that is a sign of a weighted sum of the input variables. The weights
are arbitrary integers; actually, they can be very big integers-exponential in the number of the input variables. However, in
practice, it is difficult to implement big weights. In the present
literature a distinction is made between the two extreme cases:
linear threshold functions with polynomial-size weights as opposed
to those with exponential-size weights. The main contribution of
this paper is to fill up the gap by further refining that separation.
Namely, we prove that the class of linear threshold functions with
polynomial-size weights can be divided into subclasses according
to the degree of the polynomial. In fact, we prove a more general
result- that there exists a minimal weight linear threshold function
for any arbitrary number of inputs and any weight size. To prove
those results we have developed a novel technique for constructing
linear threshold functions with minimal weights.
1
Introduction
Human brains are by far superior to computers for solving hard problems like combinatorial optimization and image and speech recognition, although their basic building blocks are several orders of magnitude slower. This observation has boosted
interest in the field of artificial neural networks [Hopfield 82]' [Rumelhart 82]. The
latter are built by interconnecting multiple artificial neurons (or linear threshold
gates), whose behavior is inspired by that of biological neurons . Artificial neural
networks have found promising applications in pattern recognition, learning and
other data processing tasks. However most of the research has been oriented towards the practical aspect of neural networks, simulating or building networks for
particular tasks and then comparing their performance with that of more traditional
methods for those particular tasks. To compare neural networks to other computational models one needs to develop the theoretical settings in which to estimate
their capabilities and limitations.
1.1
Linear Threshold Gate
The present paper focuses on the study of a single linear threshold gate (artificial
neuron) with binary inputs and output as well as integer weights (synaptic coefficients). Such a gate is mathematically described by a linear threshold function.
Definition 1 (Linear Threshold FUnction)
A linear threshold function of n variables is a Boolean function f : {-1, 1}^n → {-1, 1} that can be written as
f(x) = sgn(F(x)) = { 1, for F(x) ≥ 0; -1, otherwise },
for any x ∈ {-1, 1}^n and a fixed w ∈ Z^n, where F(x) = w · x = Σ_{i=1}^n w_i x_i.
[Raghavan 88) that for a, binary input neuron, one needs O( n log n) bits per weight,
where n is the number of inputs. So in the rest ofthe paper, we will assume without
loss of generality that all weights are integers.
1.2
Motivation
Many experimental results in the area of neural networks have indicated that the
magnitudes of the coefficients in the linear threshold elements grow very fast with
the size of the inputs and therefore limit the practical use of the network. One
natural question to ask is the following. How limited is the computational power of
the network if one limits oneself to threshold elements with only "small" growth in
the size of the coefficients? To answer that question we have to define a measure of
the magnitudes of the weights. Note that, given a function I, the weight vector tV
is not unique (see Example 1 below).
Definition 2 (Weight Space)
Given a linear threshold function f we define W as the set of all weights that satisfy Definition 1, that is, W = {w ∈ Z^n : ∀x ∈ {-1, 1}^n, sgn(w · x) = f(x)}.
Here follows a measure of the size of the weights.
Definition 3 (Minimal Weight Size)
We define the size of a weight vector as the sum of the absolute values of the weights.
The minimal weight size of a linear threshold function is defined as :
n
S[j)
= ~ia/L IWi I)
,=1
The particular vector that achieves the minimum is called a minimal weight vector.
Naturally, S[f) is a function of n.
It has been shown [Hastad 94], [Myhill 61], [Shawe-Taylor 92], (Siu 91] that there
exists a linear threshold function that can be implemented by a single threshold
element with exponentially growing weights, S[f] ~ 2^n, but cannot be implemented by a threshold element with smaller, polynomially growing weights, S[f] ~ n^d, d
constant. In light of that result the above question was dealt with by defining a
class within the set of linear threshold functions: the class of functions with "small"
(Le. polynomialy growing) weights [Siu 91]. Most of the recent research focuses on
the power of circuits with small weights, relative to circuits with arbitrary weights
[Goldmann 92], [Goldman 93]. Rather than dealing with circuits we are interested
in studying a single threshold gate. The main contribution of the present paper is
to further refine the division of small versus arbitrary weights. We separate the set
of functions with small weights into classes indexed by d, the degree of polynomial
growth and show that all of them are non-empty. In particular, we develop a
technique for proving that a weight vector is minimal. We use that technique to
construct a function of size S[j] = s for an arbitrary s.
1.3
Approach
The main difficulty in analyzing the size of the weights of a threshold element is due
to the fact that a single linear threshold function can be implemented by different
sets of weights as shown in the following example.
Example 1 (A Threshold FUnction with Minimal Weights)
Consider the following two sets of weights (weight vectors).
w_1 = (1 2 4), F_1(x) = x_1 + 2 x_2 + 4 x_3
w_2 = (2 4 8), F_2(x) = 2 x_1 + 4 x_2 + 8 x_3
They both implement the same threshold function
f(x) = sgn(F_2(x)) = sgn(2 F_1(x)) = sgn(F_1(x)).
A closer look reveals that f(x) = sgn(x_3), implying that none of the above weight vectors has minimal size. Indeed, the minimal one is w_3 = (0 0 1) and S[f] = 1.
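A brute-force check of this example (our illustration, not from the paper) confirms that all three weight vectors realize the same function f(x) = sgn(x_3):

    from itertools import product

    def sgn(z):
        return 1 if z >= 0 else -1

    def lt(w, x):                      # linear threshold function f(x) = sgn(w . x)
        return sgn(sum(wi * xi for wi, xi in zip(w, x)))

    weights = [(1, 2, 4), (2, 4, 8), (0, 0, 1)]
    for x in product((-1, 1), repeat=3):
        assert len({lt(w, x) for w in weights}) == 1   # all three agree on every input
    print("all three weight vectors implement f(x) = sgn(x3)")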
It is in general difficult to determine if a given set of weights is minimal [Amaldi 93],
[Willis 63]. Our technique consists of limiting the study to only a particular subset
of linear threshold functions, a subset for which it is possible to prove that a given
weight vector is minimal. That subset is loosely defined by the requirement that
there exist input vectors for which f(x) = f( -x). The existence of such a vector,
called a root of f, puts a constraint on the weight vector used to implement f. The
larger the set of roots - the larger the constraint on the set of weight vectors, which
in turn helps determine the minimal one. A detailed description of the technique is
given in Section 2.
1.4
Organization
Here follows a brief outline of the rest of the paper. Section 2 mathematically defines
the setting of the problem as well as derives some basic results on the properties
of functions that admit roots. Those results are used as bUilding blocks for the
proof of the main results in Section 3. It also introduces a construction method
for functions with minimal weights. Section 3 presents the main result: for any
weight size, s, and any number of inputs, n, there exists an n-input linear threshold function that requires weights of size S[f] = s. Section 4 presents some applications
of the result of Section 3 and indicates future research directions.
2
Construction of Minimal Threshold Functions
The present section defines the mathematical tools used to construct functions with
minimal weights.
2.1
Mathematical setting
We are interested in constructing functions for which the minimal weight is easily
determined. Finding the minimal weight involves a search, we are therefore interested in finding functions with a constrained weight spaces. The following tools
allows us to put constraints on W.
Definition 4 (Root Space of a Boolean Function)
A vector v ∈ {-1, 1}^n such that f(v) = f(-v) is called a root of f. We define the root space, R, as the set of all roots of f.
Definition 5 (Root Generator Matrix)
For a given weight vector w ∈ W and a root v ∈ R, the root generator matrix, G = (g_ij), is a (k x n)-matrix with entries in {-1, 0, 1} whose rows g are orthogonal to w and equal to v at all non-zero coordinates, namely,
1. G w = 0
2. g_ij = 0 or g_ij = v_j for all i and j.
Example 2 (Root Generator Matrix)
Suppose that we are given a linear threshold function specified by a weight vector w = (1,1,2,4,1,1,2,4). By inspection we determine one root v = (1,1,1,1,-1,-1,-1,-1). Notice that w_1 + w_2 - w_7 = 0, which can be written as g · w = 0, where g = (1,1,0,0,0,0,-1,0) is a row of G. Set r = v - 2g. Since g is equal to v at all non-zero coordinates, r ∈ {-1,1}^n. Also r · w = v · w - 2 g · w = 0. We have generated a new root: r = (-1,-1,1,1,-1,-1,1,-1).
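The computation in Example 2 can be checked directly (our sketch, not the paper's code); a vector v is a root exactly when w · v = 0, and flipping v at the non-zero coordinates of g preserves that property:

    import numpy as np

    w = np.array([1, 1, 2, 4, 1, 1, 2, 4])
    v = np.array([1, 1, 1, 1, -1, -1, -1, -1])
    g = np.array([1, 1, 0, 0, 0, 0, -1, 0])    # encodes w1 + w2 - w7 = 0

    assert v @ w == 0 and g @ w == 0
    r = v - 2 * g                               # flip v where g is non-zero
    assert set(r.tolist()) <= {-1, 1} and r @ w == 0
    print("new root:", r)                       # (-1, -1, 1, 1, -1, -1, 1, -1)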
Lemma 6 (Orthogonality of G and W)
For a given weight vector w ∈ W and a root v ∈ R,
u G^T = 0
holds for any weight vector u ∈ W.
Proof. For an arbitrary u ∈ W and an arbitrary row, g_i, of G, let r = v - 2 g_i. By definition of g_i, r ∈ {-1,1}^n and r · w = 0. That implies f(r) = f(-r): r is a root of f. For any weight vector u ∈ W, sgn(u · r) = sgn(-u · r). Therefore u · (v - 2 g_i) = 0 and finally, since v · u = 0, we get u · g_i = 0. □
Lemma 7 (Minimality)
For a given weight vector w ∈ W and a root v ∈ R, if rank(G) = n - 1 (i.e. G has n - 1 independent rows) and |w_i| = 1 for some i, then w is the minimal weight vector.
Proof. From Lemma 6 any weight vector u satisfies u G^T = 0. rank(G) = n - 1 implies that dim(W) = 1, i.e. all possible weight vectors are integer multiples of each other. Since |w_i| = 1, all vectors are of the form u = k w, for k ≥ 1. Therefore w has the smallest size. □
We complete Example 2 with an application of Lemma 7.
Example 3 (Minimality)
Given w = (1,1,2,4,1,1,2,4) and v = (1,1,1,1,-1,-1,-1,-1) we can construct:
G = [7 x 8 generator matrix garbled in extraction; its rows encode the relations w_i = w_{i+4} for i = 1,...,4 together with further relations such as w_1 + w_2 = w_7 and w_1 + w_2 + w_3 = w_8]
It is easy to verify that rank(G) = n - 1 = 7 and therefore, by Lemma 7, w is minimal and S[f] = 16.
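Because the printed G did not survive extraction, the check can be reproduced numerically; the seven rows below are our reconstruction of relations that hold for w = (1,1,2,4,1,1,2,4) and are consistent with Example 2, not necessarily the paper's exact matrix:

    import numpy as np

    w = np.array([1, 1, 2, 4, 1, 1, 2, 4])
    G = np.array([
        [1, 0, 0, 0, -1,  0,  0,  0],   # w1 = w5
        [0, 1, 0, 0,  0, -1,  0,  0],   # w2 = w6
        [0, 0, 1, 0,  0,  0, -1,  0],   # w3 = w7
        [0, 0, 0, 1,  0,  0,  0, -1],   # w4 = w8
        [1, 0, 0, 0,  0, -1,  0,  0],   # w1 = w6
        [1, 1, 0, 0,  0,  0, -1,  0],   # w1 + w2 = w7
        [1, 1, 1, 0,  0,  0,  0, -1],   # w1 + w2 + w3 = w8
    ])
    assert not (G @ w).any()                        # every row is orthogonal to w
    assert np.linalg.matrix_rank(G) == len(w) - 1   # so W is one-dimensional (Lemma 7)
    print("S[f] =", int(np.abs(w).sum()))           # 16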
2.2
Construction of minimal weight vectors
In Example 3 we saw how, given a weight vector, one can show that it is minimal.
In this section we present an example of a linear threshold function with minimal
weight size, with an arbitrary number of input variables.
We would like to construct a weight vector and show that it is minimal. Let
the number of inputs, n, be even. Let w consist of two identical blocks: (w_1, w_2, ..., w_{n/2}, w_1, w_2, ..., w_{n/2}). Clearly, v = (1, 1, ..., 1, -1, -1, ..., -1) is a root
and G is the corresponding generator matrix.
G = [matrix garbled in extraction; the displayed rows are of the form e_i - e_{i+n/2}, encoding the relations w_i = w_{i+n/2} for i = 1, ..., n/2]
3
The Main Result
The following theorem states that given an integer s and a number of variables n
there exists a function of n variables and minimal weight size s.
Theorem 8 (Main Result)
For any pair (s, n) that satisfies
1. n ≤ s ≤ 2^{n/2} , for n even (the corresponding bound for n odd is garbled in the source)
2. s even
there exists a linear threshold function of n variables, f, with minimal weight size S[f] = s.
Proof. Given a pair (s, n) that satisfies the above conditions, we first construct a weight vector w that satisfies Σ_{i=1}^n |w_i| = s, then show that it is the minimal weight vector of the function f(x) = sgn(w · x). The proof is shown only for n even.
CONSTRUCTION.
1. Define (a_1, a_2, ..., a_{n/2}) = (1, 1, ..., 1).
2. If Σ_{i=1}^{n/2} a_i < s/2 then increase by one the smallest a_i such that a_i < 2^{i-2}. (In the case of a tie take the a_i with smallest index i.)
3. Repeat the previous step until Σ_{i=1}^{n/2} a_i = s/2 or (a_1, a_2, ..., a_{n/2}) = (1, 1, 2, 4, ..., 2^{n/2 - 2}).
4. Set w = (a_1, a_2, ..., a_{n/2}, a_1, a_2, ..., a_{n/2}).
Because we increase the size by one unit at a time, the algorithm will converge to the desired result for any integer s that satisfies n ≤ s ≤ 2^{n/2}. We have a construction for any valid (s, n) pair. Let us show that w is minimal.
MINIMALITY. Given that w = (a_1, a_2, ..., a_{n/2}, a_1, a_2, ..., a_{n/2}) we find a root v = (1, 1, ..., 1, -1, -1, ..., -1) and n/2 rows of the generator matrix G corresponding to the equations w_i = w_{i+n/2}. To form additional rows note that the first k a_i's are powers of two (where k depends on s and n). Those can be written as a_i = Σ_{j<i} a_j and generate k - 1 rows. And finally note that all other a_i, i > k, are smaller than 2^{k+1}. Hence, they can be written as a binary expansion a_i = Σ_{j=1}^{k} α_{ij} a_j where α_{ij} ∈ {0, 1}. There are n/2 - k such weights. G has a total of n - 1 independent rows: rank(G) = n - 1 and w_1 = 1, therefore by Lemma 7, w is minimal and S[f] = s. □
Example 4 (A Function of 10 variables and size S[f] = 26)
We start with a = (1,1,1,1,1). We iterate: (1,1,2,1,1), (1,1,2,2,1), (1,1,2,2,2),
(1,1,2, 3,2), (1,1,2,3,3) , (1,1,2,4,3), (1,1,2,4,4), and finally (1,1 , 2,4,5). The
construction algorithm converges to a = (1,1,2,4,5). We claim that w = (a, a) = (1,1,2,4,5,1,1,2,4,5) is minimal. Indeed, v = (1,1,1,1,1,-1,-1,-1,-1,-1) and
G = [9 x 10 generator matrix garbled in extraction; its rows encode w_i = w_{i+5} for i = 1,...,5 together with w_2 = w_1, w_3 = w_1 + w_2, w_4 = w_1 + w_2 + w_3, and w_5 = w_1 + w_4]
is a matrix of rank 9.
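A sketch of the construction in the proof of Theorem 8 (our reading of the garbled Step 2, with the cap a_i < 2^(i-2) inferred so that the iteration reproduces Example 4):

    def construct_weights(s, n):
        """Steps 1-4: build w = (a, a) with sum of |w_i| equal to s (n even, s even, n <= s <= 2**(n//2))."""
        m = n // 2
        a = [1] * m                                        # Step 1
        cap = [1, 1] + [2 ** j for j in range(1, m - 1)]   # a_i may grow up to 2**(i-2), 1-based i
        while sum(a) < s // 2:                             # Steps 2-3
            eligible = [i for i in range(m) if a[i] < cap[i]]
            if not eligible:
                break                                      # saturated at (1, 1, 2, 4, ..., 2**(m-2))
            i = min(eligible, key=lambda j: (a[j], j))     # smallest a_i, ties broken by smallest index
            a[i] += 1
        return a + a                                       # Step 4

    print(construct_weights(26, 10))    # [1, 1, 2, 4, 5, 1, 1, 2, 4, 5], as in Example 4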
Example 5 (Functions with Polynomial Size)
This example shows an application of Theorem 8. We define LT^(d) as the set of linear threshold functions for which S[f] ≤ n^d. The Theorem states that for any even n there exists a function f of n variables and minimum weight S[f] = n^d. The implication is that for all d, LT^(d-1) is a proper subset of LT^(d).
4
Conclusions
We have shown that for any reasonable pair of integers (n, s), where s is even, there
exists a linear threshold function of n variables with minimal weight size S[J} = s.
We have developed a novel technique for constructing linear threshold functions
with minimal weights that is based on the existence of root vectors. An interesting
application of our method is the computation of a lower bound on the number
of linear threshold functions [Smith 66}. In addition, our technique can help in
studying the trade-offs between a number of important parameters associated with
linear threshold (neural) circuits, including the number of elements, the number of
layers, the fan-in, fan-out and the size of the weights.
Acknowledgements
This work was supported in part by the NSF Young Investigator Award CCR9457811, by the Sloan Research Fellowship, by a grant from the IBM Almaden
Research Center, San Jose, California, by a grant from the AT&T Foundation and
by the center for Neuromorphic Systems Engineering as a part of the National
Science Foundation Engineering Research Center Program; and by the California
Trade and Commerce Agency, Office of Strategic Technology.
References
[Amaldi 93] E. Amaldi and V. Kann. The complexity andapproximabilityoffinding
maximum feasible subsystems of linear relations. Ecole Polytechnique Federale
De Lausanne Technical Report, ORWP 93/11, August 1993.
[Goldmann 92] M. Goldmann, J. Hastad, and A. Razborov. Majority gates vs. general weighted threshold gates. Computational Complexity, (2):277-300, 1992.
[Goldman 93] M. Goldmann and M. Karpinski. Simulating threshold circuits by
majority circuits. In Proc. 25th ACM STOC, pages pp. 551- 560, 1993.
[Hastad 94] .1. Hastad. On the size of weights for threshold gates. SIAM. J. Disc.
Math., 7:484-492, 1994.
[Hopfield 82) .1. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. of the USA National Academy of Sciences,
79:2554- 2558, 1982.
[Muroga 71) M. Muroga. Threshold Logic and its Applications. Wiley-Interscience,
1971.
[Myhill 61) J. Myhill and W. H. Kautz. On the size of weights required for linearinput switching functions. IRE Trans. Electronic Computers, (EClO):pp. 288290, 1961.
[Raghavan 88] P. Raghavan. Learning in threshold networks: a computational
model and applications. Technical Report RC 13859, IBM Research, July
1988.
[Rumelhart 82] D. Rumelhart and J. McClelland. Parallel distributed processing:
Explorations in the microstructure of cognition. MIT Press, 1982.
[Shawe-Taylor 92] J. S. Shawe-Taylor, M. H. G. Anthony, and W. Kern. Classes
of feedforward neural networks and their circuit complexity. Neural Networks,
Vol. 5:pp. 971- 977, 1992.
[Siu 91] K. Siu and J. Bruck. On the power of threshold circuits with small weights.
SIAM J. Disc. Math., Vol. 4(No. 3):pp. 423-435, August 1991.
[Smith 66] D. R. Smith. Bounds on the number of threshold functions. IEEE
Transactions on Electronic Computers, June 1966.
[Willis 63] D. G. Willis. Minimum weights for threshold switches. In Switching
Theory in Space Techniques. Stanford University Press, Stanford, Calif., 1963.
| 1066 |@word polynomial:5 ecole:1 comparing:1 written:4 v:1 implying:1 inspection:1 smith:3 ire:1 math:2 mathematical:2 rc:1 prove:4 consists:1 interscience:1 indeed:2 behavior:1 growing:3 brain:1 inspired:1 goldman:2 circuit:8 developed:2 finding:2 subclass:1 growth:2 tie:1 unit:1 grant:2 engineering:2 limit:2 switching:2 analyzing:1 lausanne:1 limited:1 practical:2 unique:1 commerce:1 practice:1 block:4 implement:3 x3:3 otis:1 area:1 kern:1 get:1 cannot:1 subsystem:1 cal:1 put:2 center:3 fill:1 proving:1 coordinate:2 razborov:1 limiting:1 construction:6 suppose:1 element:8 rumelhart:3 recognition:2 trade:2 agency:1 ui:1 complexity:3 solving:1 division:1 f2:2 easily:1 hopfield:3 emergent:1 myhill:3 fast:1 artificial:5 ehoshua:1 whose:2 larger:2 stanford:2 ability:1 gi:5 academy:1 description:1 empty:1 requirement:1 converges:1 help:2 develop:2 ij:2 odd:1 implemented:3 involves:1 implies:2 direction:1 exploration:1 human:1 sgn:9 raghavan:3 vx:1 sand:1 microstructure:1 biological:1 mathematically:2 hold:1 cognition:1 claim:1 achieves:1 smallest:3 a2:6 proc:2 combinatorial:1 saw:1 ilg:1 wl:2 tool:2 weighted:2 mit:1 clearly:1 rather:1 boosted:1 office:1 focus:2 refining:1 june:1 rank:5 indicates:1 tech:1 dim:1 pasadena:1 relation:1 interested:3 ill:3 almaden:1 constrained:1 field:1 construct:5 equal:2 identical:1 kw:1 look:1 muroga:3 amaldi:3 future:1 report:2 oriented:1 ve:1 national:2 organization:1 interest:1 introduces:1 extreme:1 light:1 implication:1 closer:1 orthogonal:1 indexed:1 taylor:3 loosely:1 calif:1 desired:1 theoretical:1 minimal:35 federale:1 boolean:2 hastad:4 zn:2 neuromorphic:1 strategic:1 subset:4 entry:1 siu:4 answer:1 siam:2 minimality:3 opposed:1 admit:1 de:1 coefficient:3 satisfy:1 sloan:1 depends:1 root:19 start:1 capability:1 kautz:1 parallel:1 iwi:2 contribution:2 il:10 ofthe:2 dealt:1 vincent:1 disc:2 none:1 synaptic:1 definition:7 pp:4 naturally:1 proof:5 associated:1 wixi:1 ask:1 wh:1 actually:1 kann:1 generality:1 until:1 defines:2 aj:1 indicated:1 usa:1 building:4 verify:1 hence:1 gw:1 outline:1 complete:1 willis:3 polytechnique:1 fj:1 image:1 novel:2 fi:3 superior:1 physical:1 exponentially:1 ai:6 shawe:3 recent:1 binary:3 minimum:3 additional:1 bohossian:4 determine:3 converge:1 july:1 multiple:2 technical:2 divided:1 award:1 vat:2 basic:3 karpinski:1 addition:1 fellowship:1 grow:1 w2:4 rest:2 integer:8 feedforward:1 easy:1 wn:2 iterate:1 switch:1 w3:1 oneself:1 speech:1 detailed:1 mcclelland:1 generate:1 exist:1 nsf:1 notice:1 sign:1 per:1 ledgements:1 vol:2 threshold:44 sum:2 wand:2 jose:1 reasonable:1 electronic:2 separation:1 bit:1 bound:2 layer:1 fan:2 refine:1 constraint:3 orthogonality:1 x2:2 aspect:1 lineal:1 tv:6 according:1 smaller:2 wi:5 equation:1 turn:1 studying:2 goldmann:4 simulating:2 slower:1 gate:8 existence:2 question:3 traditional:1 separate:1 majority:2 seven:1 mail:2 w7:1 code:1 index:1 iwil:2 difficult:2 stoc:1 acknow:1 proper:1 collective:1 observation:1 neuron:4 defining:1 arbitrary:8 august:2 tih:1 namely:2 required:1 specified:1 pair:4 california:3 distinction:1 trans:1 below:1 pattern:1 program:1 built:1 including:1 power:4 ia:1 natural:1 difficulty:1 bruck:6 technology:2 brief:1 literature:1 relative:1 loss:1 interesting:1 limitation:1 versus:1 generator:5 foundation:2 degree:2 ibm:2 row:8 repeat:1 supported:1 aij:1 allow:1 institute:1 absolute:1 distributed:1 valid:1 fred:1 computes:1 made:1 san:1 far:1 transaction:1 logic:1 dealing:1 reveals:1 xi:1 search:1 promising:1 ca:1 expansion:1 constructing:3 anthony:1 vj:1 main:7 
big:2 motivation:1 wiley:1 interconnecting:1 exponential:2 xl:1 young:1 theorem:4 derives:1 exists:7 polynomialy:2 consist:1 magnitude:3 gap:1 lt:2 satisfies:5 acm:1 towards:1 feasible:1 hard:1 determined:1 lemma:6 called:3 gij:1 total:1 experimental:1 latter:1 investigator:1 |
76 | 1,067 | SPERT-II: A Vector Microprocessor
System and its Application to Large
Problems in Backpropagation Training
John Wawrzynek, Krste Asanovic, & Brian Kingsbury
University of California at Berkeley
Department of Electrical Engineering and Computer Sciences
Berkeley, CA 94720-1776
{johnw ,krste,bedk }@cs.berkeley.edu
James Beck, David Johnson, & Nelson Morgan
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94704-1105
{beck,davidj,morgan}@icsi.berkeley.edu
Abstract
We report on our development of a high-performance system for
neural network and other signal processing applications. We have
designed and implemented a vector microprocessor and packaged it as an attached processor for a conventional workstation.
We present performance comparisons with commercial workstations on neural network backpropagation training. The SPERT-II
system demonstrates significant speedups over extensively handoptimization code running on the workstations.
1
Introduction
We are working on pattern recognition problems using neural networks with a large
number of parameters. Because of the large computational requirements of our area
of research, we set out to design an integrated circuit that would serve as a good
building block for our systems. Initially we considered designing extremely specialized chips, as this would maximize performance for a particular algorithm. However,
the algorithms we use undergo considerable change as our research progresses. Still,
we needed to provide some specialization if our design was to offer significant improvement over commercial workstation systems. Competing with workstations is
a challenge to anyone designing custom programmable processors, but as will be
shown in this paper, one can still provide a performance advantage by focusing on
one general class of computation.
Our solution was to design a vector microprocessor, TO, optimized for fixed-point
computations, and to package this as an inexpensive workstation accelerator board.
In this manner, we gain a considerable performance/cost advantage for neural network and other signal processing algorithms, while leveraging the commercial workstation environment for software development and I/O services.
In this paper, we focus on the neural network applications of the SPERT-II system. We are also investigating other applications in the areas of human-machine interfaces
and multimedia processing, as we believe vector microprocessors show promise in
providing the flexible, cost-effective, high-performance computing required.
Section 2 discusses the design of the hardware, followed in Section 3 by a discussion
of the software environment we are developing and a discussion of related systems
in Section 4. In Section 5 we discuss how we map a backpropagation training task
to the system and in Section 6 we compare the resulting performance with two
commercial workstation systems.
2
SPERT- II System
SPERT-II is a double-slot SBus card for use in Sun-compatible workstations and is
shown in Figure 1. The board contains a TO vector microprocessor and its memory,
a Xilinx FPGA device for interfacing with the host, and various system support
devices.
[Figure 1 block diagram: the SPERT-II board contains the TO chip, 8 MB of SRAM data memory, and a Xilinx FPGA that connects to the host workstation.]
Figure 1: SPERT-II System Organization
2.1
The TO vector microprocessor
Development of the TO vector microprocessor follows our earlier work on the original
SPERT VLIW /SIMD neuro-microprocessor (Wawrzynek, 1993). The most significant change we have made to the architecture is to move to a vector instruction set
architecture (ISA), based on the industry standard MIPS RISC scalar ISA (Kane,
1992) extended with vector coprocessor instructions. The resulting ISA, which we
call Torrent, offers important advantages over our previous design. We gain access to
existing software tools for the MIPS architecture, including optimizing e compilers,
assemblers, linkers, and debuggers. VLIW machines expose details of the hardware
implementation at the instruction set level, and so must change instruction sets
when scaling to higher degrees of on-chip parallelism. In contrast, vector ISAs provide a simple abstraction of regular data parallelism that enables different hardware
implementations to make different trade-offs between cost and performance while
remaining software compatible. Compared with the VLIW /SIMD design, the vector
ISA reduces requirements on instruction cache space and fetch bandwidth. It also
makes it easier to write optimized library routines in assembly language, and these
library routines will still run well on future devices with greater on-chip parallelism.
In the design of the TO vector microprocessor, the main technique we employ to
improve cost-performance over a commercial general purpose processor is to integrate multiple fixed-point datapaths with a high-bandwidth memory system. Fast
digital arithmetic units, multipliers in particular, require chip area proportional to
the square of the number of operand bits. In modern microprocessors and digital
signal processors a single floating-point unit takes up a significant portion ofthe chip
area. High-precision arithmetic units also requires high memory bandwidth to move
large operands. However, for a wide class of problems, full-precision floating-point,
or even high-precision fixed-point arithmetic, is not needed. Studies by ourselves
and others have shown that for error back-propagation training of neural networks,
16-bit weights and 8-bit activation values provide similar training performance to
IEEE single-precision floating-point (Asanovic, 1991).
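For illustration only (our sketch, not SPERT-II library code, and with an arbitrary Q3.12 format chosen here), converting floating-point weights to saturating 16-bit fixed point looks like:

    import numpy as np

    def to_fixed16(x, frac_bits=12):
        """Round to signed 16-bit fixed point with saturation; frac_bits sets the binary point."""
        scaled = np.round(np.asarray(x, dtype=np.float64) * (1 << frac_bits))
        return np.clip(scaled, -32768, 32767).astype(np.int16)

    w = np.random.randn(4, 3).astype(np.float32)
    w_q = to_fixed16(w)                       # 16-bit weights
    err = np.abs(w - w_q.astype(np.float32) / (1 << 12)).max()
    print("max quantization error:", err)     # about 2**-13 unless saturation occurs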
However, fast fixed-point multiply-adds alone are not sufficient to increase performance on a wide range of problems. Other components of a complete application
may dominate total compute time if only multiply-add operations are accelerated.
Our processor integrates a fast general-purpose RISC core, and includes general
purpose operations in its vector instruction set to obtain a balanced design.
The TO processor is a complete single-chip implementation of the Torrent architecture. It was fabricated in Hewlett-Packard's CMOS26B process using 1.0 μm scalable CMOS design rules and two layers of metal. The die measures 16.75 mm x
16.75mm, and contains 730,701 transistors. TO runs at an internal clock rate of
40MHz.
The main components of TO are the MIPS-II compatible RISC CPU with an onchip instruction cache, a vector unit coprocessor, a 128-bit wide external memory
interface, and an 8-bit wide serial host interface (TSIP) and control unit. The
external memory interface supports up to 4 GB of memory over a 128-bit wide data
bus. The current SPERT-II board uses 16, 4 Mb SRAM parts to provide 8 MB of
mam memory.
At the core of the TO processor is a MIPS-II compatible 32-bit integer RISC processor with a 1 KB instruction cache. The system coprocessor provides a 32-bit
counter/timer and registers for host synchronization and exception handling.
The vector unit contains a vector register file with 16 vector registers, each holding
32 elements of 32 bits each, and three vector functional units, VPO, VP1, and
VMP. VPO and VPl are vector arithmetic functional units. With the exception of
multiplies, that must execute in VPO, either pipeline can execute any arithmetic
operation. The multipliers perform 16-bit x 16-bit multiplies producing 32-bit
results. All other arithmetic, logical and shift functions operate on 32 bits. VMP
is the vector memory unit, and it handles all vector load/store operations, scalar
load/store operations, and the vector insert/extract operations.
All three vector functional units are composed of 8 parallel pipelines, and so can
each produce up to 8 results per cycle. The TO memory interface has a single
memory address port, therefore non-unit stride and indexed memory operations are
limited to a rate of one element per cycle.
The elements of a vector register are striped across all 8 pipelines. With the maximum vector length of 32 , a vector functional unit can accept a new instruction
every 4 cycles. TO can saturate all three vector functional units by issuing one
instruction per cycle to each, leaving a single issue slot every 4 cycles for the scalar
unit. In this manner, TO can sustain up to 24 operations per cycle. Several important library routines, such as matrix-vector and matrix-matrix multiplies, have
been written which achieve this level of performance. All vector pipeline hazards
are fully interlocked in hardware, and so instruction scheduling is only required to
improve performance, not to ensure correctness.
3
SPERT-II Software Environment
The primary design goal for the SPERT-II software environment was that it should
appear as similar as possible to a conventional workstation environment. This
should ease the task of porting existing workstation applications, as well as provide
a comfortable environment for developing new code.
The Torrent instruction set architecture is based on the MIPS-II instruction set,
with extra coprocessor instructions added to access the vector unit functionality.
This compatibility allows us to base our software environment on the GNU tools
which already include support for MIPS based machines. We have ported the
gee C/C++ compiler, modified the gdb symbolic debugger to debug TO programs
remotely from the host, enhanced the gas assembler to understand the new vector
instructions and to schedule code to avoid interlocks, and we also employ the GNU
linker and other library management utilities.
Currently, the only access to the vector unit we provide is either through library
routines or directly via the scheduling assembler. We have developed an extensive
set of optimized vector library routines including fixed-point matrix and vector
operations, function approximation through linear interpolation, and IEEE single
precision floating-point emulation. The majority of the routines are written in
Torrent assembler, although a parallel set of functions have been written in ANSI
C to allow program development and execution on workstations. Finally, there is a
standard C library containing the usual utility, I/O and scalar math routines.
After compilation and linking, a TO executable is run on the SPERT-II board by
invoking a "server" program on the host. The server loads a small operating system
"kernel" into TO memory followed by the TO executable. While the TO application
runs, the server services I/O requests on behalf of the TO process.
4
Related Systems
Several programmable digital neurocomputers have been constructed, most notably
systems based on the CNAPS chip from Adaptive Solutions (Hammerstrom, 1990)
and the SYNAPSE-I, based on the MA-16 chip from Siemens (Ramacher, 1991).
The Adaptive Solutions CNAPS-I064 chip contains a SIMD array with 64 16-bit
processing elements (PEs) per chip. Systems require an external microcode sequencer. The PEs have 16-bit datapaths with a single 32-bit accumulator, and are
less flexible than the TO datapaths. This chip provides on-chip memory for 128K
16-bit weights, distributed among the individual PEs. Off-chip memory bandwidth
is limited by an 8-bit port. In contrast, TO integrates an on-chip CPU that acts as
controller, and provides fast access to a external memory equally accessible by all
datapaths thereby increasing the range of applications that can be run efficiently.
Like SPERT-II, the SYNAPSE-l leverages commercial memory parts. It features
an array of MA-16 chips connected to interleaved DRAM memory banks. The MA16 chips require extensive external circuitry, including 68040 CPUs with attached
arithmetic pipelines, to execute computations not supported by the MA-16 itself.
The SYNAPSE-l system is a complex and expensive multi-board design, containing several different control streams that must be carefully orchestrated to run an
application. However, for some applications the MA-16 could potentially provide
greater throughput than TO as the former's more specialized architecture permits
more multiply-add units on each chip.
5
Mapping Backpropagation to TO
One artificial neural network (ANN) training task that we have done is taken from
a speaker-independent continuous speech recognition system. The ANN is a simple
feed-forward multi-layer perceptron (MLP) with three layers. Typical MLPs have
between 100-400 input units. The input layer is fully connected to a hidden layer of
100-4000 hidden units. The hidden layer is fully connected to an output layer that
contains one output per phoneme, typically 56-61. The hidden units incorporate
a standard sigmoid activation function. The output units compute a "soft-max"
activation function. All training is "on-line", with the weight matrices updated
after each pattern presentation.
All of the compute-intensive sections can be readily vectorized on TO.
Three operations are performed on the weight matrices: forward propagation, error
back-propagation, and weight update. These operations are available as three standard linear algebra routines in the TO library: vector-matrix multiply, matrix-vector
multiply, and scaled outer-product accumulation, respectively.
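A floating-point sketch of one on-line training step written in terms of exactly those three kernels (our illustration; the SPERT-II library routines perform the same operations in 16-bit fixed point):

    import numpy as np

    def online_step(x, t, W1, W2, lr=0.01):
        """x: input vector, t: one-hot target, W1: in x hidden, W2: hidden x out."""
        h = 1.0 / (1.0 + np.exp(-(x @ W1)))        # forward: vector-matrix multiply + sigmoid
        z = h @ W2
        y = np.exp(z - z.max()); y /= y.sum()      # soft-max output layer
        d_out = y - t                              # output-layer error
        d_hid = (W2 @ d_out) * h * (1.0 - h)       # error back-propagation: matrix-vector multiply
        W2 -= lr * np.outer(h, d_out)              # weight update: scaled outer-product accumulation
        W1 -= lr * np.outer(x, d_hid)
        return W1, W2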
TO can sustain one multiply-add per cycle in each of the 8 datapath slices, and
can support this with one 16-bit memory access per cycle to each datapath slice
provided that vector accesses have unit stride. The loops for the matrix operations
are rearranged to perform only unit-stride memory accesses, and memory bandwidth
requirements are further reduced by tiling matrix accesses and reusing operands
from the vector registers whenever possible.
There are a number of other operations required while handling input and output
vectors and activation values. While these require only O(n) computation versus
the O(n 2 ) requirements of the matrix operations, they would present a significant
overhead on smaller networks if not vectorized.
The sigmoid activation function is implemented using a library piecewise-linear
function approximation routine. The function approximation routine makes use
of the vector indexed load operations to perform the table lookups. Although TO
can only execute vector indexed operations at the rate of one element transfer
per cycle, the table lookup routine can simultaneously perform all the arithmetic
operations for index calculation and linear interpolation in the vector arithmetic
units, achieving a rate of one 16-bit sigmoid result every 2 cycles. Similarly, a
table based vector logadd routine is used to implement the soft-max function, also
producing one result every 2 cycles.
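A floating-point stand-in for the table-based sigmoid (our sketch; the table range of [-8, 8] and the 257 entries are assumptions, and the fixed-point index arithmetic is omitted):

    import numpy as np

    XS = np.linspace(-8.0, 8.0, 257)          # breakpoints of the piecewise-linear approximation
    YS = 1.0 / (1.0 + np.exp(-XS))            # sampled sigmoid values
    STEP = XS[1] - XS[0]

    def sigmoid_pwl(x):
        """Table lookup plus linear interpolation between adjacent entries."""
        x = np.clip(x, XS[0], XS[-1])
        idx = np.minimum(((x - XS[0]) / STEP).astype(int), len(XS) - 2)
        frac = (x - XS[idx]) / STEP
        return YS[idx] + frac * (YS[idx + 1] - YS[idx])

    x = np.array([-3.3, -0.7, 0.1, 2.5])
    print(np.abs(sigmoid_pwl(x) - 1.0 / (1.0 + np.exp(-x))).max())   # small interpolation error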
To simplify software porting, the MLP code uses standard IEEE single-precision
floating-point for input and output values. Vector library routines convert formats
to the internal fixed-point representation. These conversion routines operate at the
rate of up to 1 conversion every 2 cycles.
6
Performance Evaluation
We chose two commercial RISC workstations against which to compare the performance of the SPERT-II system. The first is a SPARCstation-20/61 containing a
single 60 MHz SuperSPARC+ processor with a peak performance of 60 MFLOPS, 1
MB of second level cache, and 128 MB of DRAM main memory. The SPARCstation20/61 is representative of a current mid-range workstation. The second is an IBM
RS/6000-590, containing the RIOS-2 chipset running at 72 MHz with a peak performance of 266 MFLOPS, 256 KB of primary cache, and 768 MB of DRAM main
memory. The RS/6000 is representative of a current high-end workstation.
The workstation version of the code performs all input and output and all computation using IEEE single precision floating-point arithmetic. The matrix and vector
operations within the back prop algorithm have been extensively hand optimized,
using manual loop unrolling together with register and cache blocking.
The SPERT-II numbers are obtained for a single TO processor running at 40 MHz
with 8 MB of SRAM main memory. The SPERT-II version of the application maintains the same interface, with input and output in IEEE single precision floatingpoint format, but performs all MLP computation using saturating fixed-point arithmetic with 16-bit weights, 16-bit activation values, and 32-bit intermediate results.
The SPERT-II timings below include the time for conversion between floating-point
and fixed-point for input and output.
Figure 2 shows the performance of the three systems for a set of three-layer networks on both backpropagation training and forward propagation. For ease of
presentation we use networks with the same number of units per layer . Table 1
presents performance results for two speech network architectures. The general
trend we observe in these evaluations is that for small networks the three hardware
systems exhibit similar performance, while for larger network sizes the SPERT-II
system demonstrates a significant performance advantage. For large networks the
SPERT-II system demonstrates roughly 20-30 times the performance of a SPARC20
workstation and 4-6 times the performance of the IBM RS/6000-590 workstation.
Acknowledgements
Thanks to Jerry Feldman for his contribution to the design of the SPERT-II system,
Bertrand Irrisou for his work on the TO chip, John Hauser for Torrent libraries, and
John Lazzaro for his advice on chip and system building. Primary support for this
work was from the ONR, URI Grant N00014-92-J-1617 and ARPA contract number
N0001493-C0249. Additional support was provided by the NSF and ICS!.
[Figure 2 residue: two panels plot training speed (MCUPS) and forward-pass speed (MCPS) against layer size, from 0 to 1,000 units per layer, for SPERT-II, the IBM RS/6000, and the SPARC20/61.]
Figure 2: Performance Evaluation Results (all layers the same size).
Table 1: Performance Evaluation for Selected Net Sizes.
net type           net size (in x hidden x out)   SPERT-II   SPARC20   IBM RS/6000-590
Forward Pass (MCPS)
small speech net   153 x 200 x 56                 181        17.6      43.0
large speech net   342 x 4000 x 61                276        11.3      45.1
Training (MCUPS)
small speech net   153 x 200 x 56                 55.8       7.00      16.7
large speech net   342 x 4000 x 61                78.7       4.18      17.2
(The extraction scrambled the table cells; the column assignment above is reconstructed from the speedup ratios quoted in the text.)
References
Krste Asanovic and Nelson Morgan. Experimental Determination of Precision Requirements for Back-Propagation Training of Artificial Neural Networks. In Proc.
2nd Inti. Conf. on Microelectronics for Neural Networks, Munich, Oct. 1991.
D. Hammerstrom . A VLSI architecture for High-Performance, Low-Cost, On-Chip
Learning. In Proc. Intl. Joint Cant on Neural Networks, pages 11-537-543, 1990.
G . Kane, and Heinrich, J . MIPS RISC Architecture. Prentice Hall, 1992.
U. Ramacher, J. Beichter, W. Raab, J. Anlauf, N. Bruls, M. Hachmann, and
M. Wesseling. Design of a 1st Generation Neurocomputer. In VLSI Design of
Neural Networks. Kluwer Academic, 1991.
J. Wawrzynek, K. Asanovic, and N. Morgan. The Design ofa Neuro-Microprocessor.
IEEE Journal on Neural Networks, 4(3), 1993.
77 | 1,068 | A Neural Network Model of 3-D
Lightness Perception
Luiz Pessoa
Federal Univ. of Rio de Janeiro
Rio de Janeiro, RJ, Brazil
pessoa@cos.ufrj.br
William D. Ross
Boston University
Boston, MA 02215
bill@cns.bu.edu
Abstract
A neural network model of 3-D lightness perception is presented
which builds upon the FACADE theory (Boundary Contour System/Feature Contour System) of Grossberg and colleagues. Early
ratio encoding by retinal ganglion neurons as well as psychophysical results on constancy across different backgrounds (background
constancy) are used to provide functional constraints to the theory
and suggest a contrast negation hypothesis which states that ratio
measures between coplanar regions are given more weight in the
determination of lightness of the respective regions. Simulations
of the model address data on lightness perception, including the
coplanar ratio hypothesis, the Benary cross, and White's illusion.
1
INTRODUCTION
Our everyday visual experience includes surface color constancy. That is, despite 1)
variations in scene lighting and 2) movement or displacement across visual contexts,
the color of an object appears to a large extent to be the same. Color constancy
refers, then, to the fact that surface color remains largely constant despite changes
in the intensity and composition of the light reflected to the eyes from both the
object itself and from surrounding objects. This paper discusses a neural network
model of 3D lightness perception - i.e., only the achromatic or black to white
dimension of surface color perception is addressed. More specifically, the problem
of background constancy (see 2 above) is addressed and mechanisms to accomplish
it in a system exhibiting illumination constancy (see 1 above) are proposed.
A landmark result in the study of lightness was an experiment reported by Wallach (1948) who showed that for a disk-annulus pattern, lightness is given by the
ratio of disk and annulus luminances (i.e., independent of overall illumination); the
so-called ratio principle. In another study, Whittle and Challands (1969) had subjects perform brightness matches in a haploscopic display paradigm. A striking
result was that subjects always matched decrements to decrements , or increments
to increments, but never increments to decrements. Whittle and Challands' (1969)
results provide psychophysical support to the notion that the early visual system
codes luminance ratios and not absolute luminance. These psychophysical results
are in line with results from neurophysiology indicating that cells at early stages
of the visual system encode local luminance contrast (Shapley and Enroth-Cugell,
1984). Note that lateral inhibition mechanisms are sensitive to local ratios and can
be used as part of the explanation of illumination constancy.
Despite the explanatory power of the ratio principle, and the fact that the early
stages of the visual system likely code contrast, several experiments have shown that,
in general, ratios are insufficient to account for surface color perception. Studies
of background constancy (Whittle and Challands, 1969; Land and McCann, 1971;
Arend and Spehar, 1993), of the role of 3-D spatial layout and illumination arrangement on lightness perception (e.g. , Gilchrist, 1977) as well as many other effects,
argue against the sufficiency of local contrast measures (e.g., the Benary cross, White's 1979 illusion). The neural network model presented here addresses these data using
several fields of neurally plausible mechanisms of lateral inhibition and excitation.
2
FROM LUMINANCE RATIOS TO LIGHTNESS
The coplanar ratio hypothesis (Gilchrist, 1977) states that the lightness of a given
region is determined predominantly in relation to other coplanar surfaces, and not
by equally weighted relations to all retinally adjacent regions. We propose that in
the determination of lightness, contrast measures between non-coplanar adjacent
surfaces are partially negated in order to preserve background constancy.
Consider the Benary Cross pattern (input stimulus in Fig. 2). If the gray patch on
the cross is considered to be at the same depth as the cross , while the other gray
patch is taken to be at the same depth as the background (which is below the cross),
the gray patch on the cross should look lighter (since its lightness is determined
in relation to the black cross), and the other patch darker (since its lightness is
determined in relation to the white background) . White's (1979) illusion can be
discussed in similar terms (see the input stimulus in Fig. 3).
The mechanisms presented below implement a process of partial contrast negation in
which the initial retinal contrast code is modulated by depth information such that
the retinal contrast consistent with the depth interpretation is maintained while the
retinal contrast not supported by depth is negated or attenuated.
3
A FILLING-IN MODEL OF 3-D LIGHTNESS
Contrast/Filling-in models propose that initial measures of boundary contrast followed by spreading of neural activity within filling-in compartments produce a response profile isomorphic with the percept (Gerrits & Vendrik, 1970; Cohen &
Grossberg, 1984; Grossberg & Todorovic, 1988; Pessoa, Mingolla, & Neumann,
1995). In this paper we develop a neural network model of lightness perception in
the tradition of contrast/filling-in theories. The neural network developed here is an
extension of the Boundary Contour System/Feature Contour System (BCS/FCS)
proposed by Cohen and Grossberg (1984) and Grossberg and Mingolla (1985) to
explain 3- D lightness data.
A fundamental idea of the BCS/FCS theory is that lateral inhibition achieves illumination constancy but requires the recovery of lightness by the filling-in, or diffusion ,
of featural quality ("lightness" in our case) . The final diffused activities correspond
to lightness, which is the outcome of interactions between boundaries and featural
quality, whereby boundaries control the process of filling-in by forming gates of
variable resistance to diffusion .
How can the visual system construct 3-D lightness percepts from contrast measures
obtained by retinotopic lateral inhibition? A mechanism that is easily instantiated in
a neural model and provides a straightforward modification to the contrast/filling-in proposal of Grossberg and Todorovic (1988) is the use of depth-gated filling-in.
This can be accomplished through a pathway that modulates boundary strength
for boundaries between surfaces or objects across depth. The use of permeable
or "leaky" boundaries was also used by Grossberg and Todorovic (1988) for 2-D
stimuli. In the current usage, permeability is actively increased at depth boundaries
to partially negate the contrast effect - since filling-in proceeds more freely - and
thus preserve lightness constancy across backgrounds. Figure 1 describes the four
computational stages of the system.
Blocks: ON/OFF Filtering, Boundaries, Filling-in, Depth Map.
Figure 1: Model components.
Stage 1: Contrast Measurement. At this stage both ON and OFF neural fields
with lateral inhibitory connectivity measure the strength of contrast at image regions - in uniform regions a contrast measurement of zero results . Formally, the
ON field is given by
\frac{dy^{+}_{ij}}{dt} = -\alpha\, y^{+}_{ij} + (\beta - y^{+}_{ij})\, C^{+}_{ij} - (y^{+}_{ij} + \gamma)\, E^{+}_{ij}   (1)

where \alpha, \beta and \gamma are constants; C^{+}_{ij} is the total excitatory input to y^{+}_{ij} and E^{+}_{ij} is the total inhibitory input to y^{+}_{ij}. These terms denote discrete convolutions of the input I_{ij} with Gaussian weighting functions, or kernels. An analogous equation specifies y^{-}_{ij} for the OFF field. Figure 2 shows the ON-contrast minus the OFF-contrast.
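As a concrete illustration of Stage 1 (this sketch is not from the paper; the shunting constants and the Gaussian kernel widths are assumed values), the equilibrium solution of Equation 1 can be computed with center and surround convolutions:

import numpy as np
from scipy.ndimage import gaussian_filter

def on_off_contrast(image, alpha=1.0, beta=1.0, gamma=1.0, sigma_c=1.0, sigma_e=3.0):
    # Narrow excitatory and broad inhibitory Gaussian convolutions of the input I_ij.
    C = gaussian_filter(image.astype(float), sigma_c)
    E = gaussian_filter(image.astype(float), sigma_e)
    # Equilibrium (dy/dt = 0) of Equation 1: y = (beta*C - gamma*E) / (alpha + C + E).
    y_on = (beta * C - gamma * E) / (alpha + C + E)
    # OFF field: the analogous equation with excitation and inhibition exchanged.
    y_off = (beta * E - gamma * C) / (alpha + C + E)
    return y_on, y_off

With beta equal to gamma, uniform regions yield zero response in this sketch, matching the contrast-measurement property described above.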
Stage 2: 2-D Boundary Detection. At Stage 2, oriented odd-symmetric boundary detection cells are excited by the oriented sampling of the ON and OFF Stage 1
cells. Responses are maximal when ON activation is strong on one side of a cell's
receptive field and OFF activation is strong on the opposite side. In other words,
the cells are tuned to ON/OFF contrast co-occurrence, or juxtaposition (see Pessoa
et aI., 1995). The output at this stage is the sum of the activations of such cells at
each location for all orientations. The output responses are sharpened and localized
through lateral inhibition across space; an equation similar to Equation 1 is used .
The final output of Stage 2 is given by the signals Zij (see Fig. 2, Boundaries).
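A minimal stand-in for the Stage 2 cells (an illustration only, not the model's actual oriented odd-symmetric filters) checks for ON/OFF juxtaposition at a small offset along each orientation and sums over orientations and contrast polarities:

import numpy as np

def boundary_signals(y_on, y_off, offsets=((1, 0), (0, 1))):
    # Each offset stands for one orientation (vertical, horizontal).
    z = np.zeros_like(y_on)
    for dy, dx in offsets:
        for a, b in ((y_on, y_off), (y_off, y_on)):   # both contrast polarities
            side1 = np.maximum(np.roll(a, (dy, dx), axis=(0, 1)), 0)
            side2 = np.maximum(np.roll(b, (-dy, -dx), axis=(0, 1)), 0)
            z += side1 * side2    # strong where ON and OFF activity flank the pixel
    return z

A sharpening step (lateral inhibition across space, as in Equation 1) would then produce the localized signals z_ij.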
Stage 3: Depth Map. In the current implementation a simple scheme was employed for the determination of the depth configuration. Initially, four types of
847
A Neural Network Model of 3-D Lightness Perception
T-junction cells detect such configurations in the image. For example,
I_{ij} = Z_{i-d,j} \times Z_{i+d,j} \times Z_{i,j+d}   (2)
where d is a constant, detects T-junctions, where left , right, and top positions of the
boundary stage are active; similar cells detect T-junctions of different orientations.
The activities of the T-junction cells are then used in conjunction with boundary
signals to define complete boundaries. Filling-in within these depth boundaries
results in a depth map (see Fig. 2, Depth Map).
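The T-junction detector of Equation 2 amounts to a product of shifted boundary maps; one orientation is sketched below (the offset d and the use of products are as stated, the rest is an assumed implementation):

import numpy as np

def t_junctions(z, d=2):
    # Left, right, and top positions of the boundary stage must all be active
    # (Equation 2); the other three T-junction orientations are obtained by
    # rotating the offsets.
    left = np.roll(z, d, axis=1)
    right = np.roll(z, -d, axis=1)
    top = np.roll(z, d, axis=0)
    return left * right * top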
Stage 4: Depth-modulated Filling-in . In Stage 4, the ON and OFF contrast
measures are allowed to diffuse across space within respective filling-in regions . Diffusion is blocked by boundary activations from Stage 2 (see Grossberg & Todorovic,
1988, for details). The diffusion process is further modulated by depth information.
The depth map provides this information; different activities code different depths .
In a full blown implementation of the model, depth information would be obtained
by the depth segmentation of the image supported by both binocular disparity and
monocular depth cues.
Depth-modulated filling-in is such that boundaries across depths are reduced in
strength. This allows a small percentage of the contrast on either side of the boundary to leak across it, resulting in partial contrast negation, or reduction, at these
boundaries. ON and OFF filling-in domains are used which receive the corresponding
ON and OFF contrast activities from Stage 1 as inputs (see Fig. 2, Filled-in).
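The following sketch of Stage 4 (an illustration only; the diffusion rate, the amount of depth leak, the number of iterations, and the toroidal edge handling are all assumed) diffuses a contrast map within boundary-gated compartments, with boundaries on depth edges made leaky:

import numpy as np

def fill_in(source, boundaries, depth_map, steps=500, rate=0.2, depth_leak=0.5):
    # Boundaries lying on a depth edge are weakened, so some contrast leaks
    # across them (partial contrast negation); within-depth boundaries gate
    # diffusion as usual.
    gy, gx = np.gradient(depth_map.astype(float))
    depth_edge = (np.abs(gy) + np.abs(gx)) > 0
    gates = boundaries * np.where(depth_edge, 1.0 - depth_leak, 1.0)
    perm = 1.0 / (1.0 + gates)                 # high boundary -> low permeability
    s = source.astype(float).copy()
    for _ in range(steps):
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            neighbour = np.roll(s, shift, axis=axis)
            p = np.minimum(perm, np.roll(perm, shift, axis=axis))
            s += rate * p * (neighbour - s)    # nearest-neighbour diffusion step
    return s

Applied separately to the ON and OFF maps, the difference of the two filled-in fields corresponds to the lightness values plotted in the simulations below.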
4
SIMULATIONS
The present model can account for several important phenomena, including 2-D effects of lightness constancy and contrast (see Grossberg and Todorovic, 1988). The simulations that follow address 3-D lightness effects.
4.1
Benary Cross
Figure 2 shows the simulation for the Benary Cross . The plotted gray level values
for filling-in reflect the activities of the ON filling-in domain minus the OFF domain.
The model correctly predicts that the patch on the cross appears lighter than the
patch on the background. This result is a direct consequence of contrast negation.
The depth relationships are such that the patch on the cross is at the same depth as
the cross and the patch on the background is at the same depth as the background
(see Fig. 2, Depth Map) . Therefore, the ratio of the background to the patch on
the cross (across a depth boundary) and the ratio of the cross to the patch on
the background (also across a depth boundary), are given a smaller weight in the
lightness computation. Thus, the background will have a stronger effect on the
appearance of the patch on the background, which will appear darker. At the same
time, the cross will have a greater effect on the appearance of the patch on the
cross , which will appear lighter.
4.2
White's Illusion
White 's (1979) illusion (Fig. 3) is such that the gray patches on the black stripes
appear lighter than the gray patches on the white stripes. This effect is considered
a puzzling violation of simultaneous contrast since the contour length of the gray
patches is larger for the stripes they do not lie on . Simultaneous contrast would
predict that the gray patches on the black stripes appear darker than the ones on
white.
Panels: Stimulus, Boundaries, Depth Map, ON-OFF Contrast, Filled-in.
Figure 2: Benary Cross. The filled-in values of the gray patch on the cross are higher
than the ones for the gray patch on the background. Gray levels code intensity;
darker grays code lower values, lighter grays code higher values.
Figure 3 shows the result of the model for White's effect . The T-junction information in the stimulus determines that the gray patches are coplanar with the
stripes they lie on. Therefore, their appearance will be determined in relation to
the contrast of their respective backgrounds. This is obtained, again, through contrast modulation, where the contrast of, say, the gray patch on a black stripe is
preserved, while the contrast of the same patch with the white is partially negated
(due to the depth arrangement).
4.3
Coplanar Hypothesis
Gilchrist (1977) showed that the perception of lightness is not determined by retinal
adjacency, and that depth configuration and spatial layout help specify lightness.
More specifically, it was proposed that the ratio of coplanar surfaces, not necessarily
retinally adjacent, determines lightness, the so-called coplanar ratio hypothesis.
Gilchrist was able to convincingly demonstrate this by comparing the perception of
lightness in two equivalent displays (in terms of luminance values), aside from the
perceived depth relationships in the displays.
Figure 4 shows computer simulations of the coplanar ratio effect. The same stimulus
is given as input in two simulations with different depth specifications. In one
(Depth Map 1), the depth map specifies that the rightmost patch is at a different
depth than the two leftmost patches which are coplanar. In the other (Depth Map
2), the two rightmost patches are coplanar and at a different depth than the leftmost
patch. In all, the depth organization alters the lightness of the central region, which
should appear darker in the configuration of Depth Map 1 than the one for Depth
Map 2. For Depth Map 1, since the middle patch is coplanar with a white patch, this
patch is darkened by simultaneous contrast. For Depth Map 2, the middle patch
will be lightened by contrast since it is coplanar with a black patch. It should be
noted that the depth maps for the simulations shown in Fig . 4 were given as input.
Panels: Stimulus, Boundaries, ON-OFF Contrast, Filled-in.
Figure 3: White's effect. The filled-in values of the gray patches on the black stripes
are higher than the ones for the gray patches on white stripes.
The current implementation cannot recover depth through binocular disparity and
only employs monocular cues as in the previous simulations.
5
CONCLUSIONS
In this paper, data from experiments on lightness perception were used to extend
the BCSjFCS theory of Grossberg and colleagues to account for several challenging
phenomena. The model is an initial step towards providing an account that can
take into consideration the complex factors involved in 3-D vision - see Grossberg
(1994) for a comprehensive account of 3-D vision.
Acknowledgements
The authors would like to thank Alan Gilchrist and Fred Bonato for their suggestions
concerning this work. L. P. was supported in part by Air Force Office of Scientific
Research (AFOSR F49620-92-J-0334) and Office of Naval Research (ONR N00014-91-J-4100); W. R. was supported in part by HNC SC-94-001.
References
Arend , L., & Spehar, B. (1993) Lightness, brightness, and brightness contrast : 2.
Reflectance variation. Perception & Psychophysics 54:457-468.
Cohen, M., & Grossberg, S. (1984) Neural dynamics of brightness perception:
Features, boundaries, diffusion, and resonance. Perception & Psychophysics
36:428-456.
Gerrits, H. & Vendrik, A. (1970) Simultaneous contrast, filling-in process and information processing in man's visual system. Experimental Brain Research
11:411-430.
Panels: Stimulus, Depth Map 1, Filled-in 1, Filled-in 2.
Figure 4: Gilchrist's coplanarity. The Filled-in values for the middle patch on top
are higher than on bottom.
Gilchrist, A. (1977) Perceived lightness depends on perceived spatial arrangement.
Science 195:185-187.
Grossberg, S. (1994) 3-D vision and figure-ground separation by visual cortex. Perception & Psychophysics 55:48-120 .
Grossberg, S., & Mingolla, E . (1985) Neural dynamics of form perception: Boundary
completion, illusory figures, and neon color spreading. Psychological Review
92:173-211.
Grossberg, S., & Todorovic. D. (1988). Neural dynamics of 1-D and 2-D brightness
perception: A unified model of classical and recent phenomena. Perception &
Psychophysics 43:241-277 .
Land, E., & McCann, J . (1971). Lightness and retinex theory. Journal of the Optical
Society of America 61:1-11.
Pessoa, L., Mingolla, E., & Neumann, H. (1995) A contrast- and luminance-driven
multiscale network model of brightness perception. Vision Research 35:22012223.
Shapley, R., & Enroth-Cugell, C. (1984) Visual adaptation and retinal gain controls.
In N. Osborne and G. Chader (eds.), Progress in Retinal Research, pp. 263-346. Oxford: Pergamon Press.
Wallach, H. (1948) Brightness constancy and the nature of achromatic colors. Journal of Experimental Psychology 38: 310-324.
White, M. (1979) A new effect of pattern on perceived lightness. Perception 8:413416 .
Whittle, P., & Challands, P. (1969) The effect of background luminance on the
brightness of flashes . Vision Research 9:1095-1110.
78 | 1,069 | How Perception Guides Production in Birdsong Learning
Christopher L. Fry
cfry@cogsci.ucsd.edu
Department of Cognitive Science
University of California at San Diego
La Jolla, CA 92093-0515
Abstract
A computational model of song learning in the song sparrow (Melospiza melodia) learns to categorize the different syllables of
a song sparrow song and uses this categorization to train itself to
reproduce song. The model fills a crucial gap in the computational
explanation of birdsong learning by exploring the organization of
perception in songbirds. It shows how competitive learning may
lead to the organization of a specific nucleus in the bird brain,
replicates the song production results of a previous model (Doya
and Sejnowski, 1995), and demonstrates how perceptual learning
can guide production through reinforcement learning.
1
INTRODUCTION
The passeriformes or songbirds make up more than half of all bird species and
are divided into two groups: the oscines, which learn their songs, and the sub-oscines,
which do not. Oscines raised in isolation sing degraded species typical songs similar
to wild song. Deafened oscines sing completely degraded songs (Konishi, 1965) ,
while deafened sub-oscines develop normal songs (Kroodsma and Konishi, 1991)
indicating that auditory feedback is crucial in oscine song learning.
Innate structures in the bird brain regulate song learning. For example, song sparrows show innate preferences for their own species' songs and song structure (Marler, 1991). Innate preferences are thought to be encoded in an auditory template
which limits the sounds young birds may copy. According to the auditory template hypothesis birds go through two phases during song learning , a memorization phase and a motor phase. In the memorization phase, which lasts
from approximately 20 to 50 days after birth in the song sparrow, the bird selects
which sounds to copy based on an innate template and refines the template based
Figure 1: A simplified sketch of a sagittal section of the songbird brain. Field L (Field
L) receives auditory input and projects to the production pathway: HVc (formerly the
caudal nucleus of the hyperstriatum), RA (robust nucleus of archistriatum), nXIIts (hypoglossal nerve), the syrinx (vocal organ) and the learning pathway: X (area X), DLM
(medial nucleus of the dorsolateral thalamus), LMAN (lateral magnocellular nucleus of
the anterior neostriatum), RA (Konishi, 1989; Vicario, 1994). V is the lateral ventricle.
on the sounds it hears . In the motor phase (from approximately 272 to 334 days
after birth) the template provides feedback during singing. Learning to sing the
memorized, template song is a gradual process of refining the produced song to
match memory (Marler, 1991).
A song is made up of phrases, phrases of syllables and syllables of notes. Syllables,
usually separated by periods of silence, are the main units of analysis. Notes typically last from 10-100 msecs and are used to construct syllables (100-200 msecs)
which are reused to produce trills and other phrases.
2
NEUROBIOLOGY OF SONG
The two main neural pathways that govern song are the motor and learning pathways seen in figure 1 (Konishi , 1989). Lesions to the motor pathway interrupt
singing throughout life while lesions to the learning pathway disrupt early song
learning. Although these pathways seem to have segregated functions , recordings
of neurons during song playback have shown that cells throughout the song system
respond to song (Konishi, 1989).
Studies of song perception have shown the best auditory stimulus that will evoke a
response in the song system is the bird's own song (Margoliash , 1986) . The song
specific neurons in HV c of the white-crowned sparrow often require a sequence of
two syllables to respond (Margoliash , 1986 ; Margoliash and Fortune , 1992) and are
made up of two main types in HV c . One type is sensitive to temporal combinations
of stimuli while the other is sensitive to harmonic characteristics (Margoliash and
Fortune, 1992) .
3
COMPUTATION
Previous computational work on birdsong learning predicted individual neural responses using back-propagation (Margoliash and Bankes , 1993) and modelled motor
mappings for song production (Doya and Sejnowski, 1995). The current work de-
Labels: Kohonen neurons 1-8, input layer, sliding window over the input.
Figure 2: Perceptual network input encoding. The song is converted into frequency bins
which are presented to the Kohonen layer over four time steps.
The current work develops a model of birdsong syllable perception which extends Doya and Sejnowski's
(1995) model of birdsong learning. Birdsong syllable segmentation is accomplished
using an unsupervised system and this system is used to train the network to reproduce its input using reinforcement learning.
The model implements the two phases of the auditory template hypothesis, memorization and motor. In the first phase the template song is segmented into
syllables by an unsupervised Kohonen network (Kohonen, 1984). In the second
phase the syllables are reproduced using a reinforcement learning paradigm based
on Doya and Sejnowski (1995).
The model extends previous work in three ways: 1) a self-organizing network picks
out syllables in the song; 2) the self-organizing network provides feedback during
song production; and 3) a more biologically plausible model of the syrinx is used to
generate song.
3.1
Perception
Recognizing a syllable involves identifying a short sequence of notes. Kohonen
networks use an unsupervised learning method to categorize an input space based
on similar neural responses. Thus a Kohonen network is a natural candidate for
identifying the syllables in a song.
One song from the repertoire of a song sparrow was chosen as the training song
for the network . The song was encoded by passing a sliding window across the
training waveform (sampled at 22 .255 kHz) of the selected song. At each time step,
a non-overlapping 256 point (~ .011 sec) fast fourier transform (FFT) was used to
generate a power spectrum (figure 2). The power spectrum was divided into 8 bins.
Each bin was mapped to a real number using a gaussian summation procedure with
the peak of the gaussian at the center of each frequency bin. Four time-steps were
passed to each Kohonen neuron.
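A sketch of this encoding is given below (the Gaussian bin width is an assumed value; the paper specifies only the 256-point FFT, the eight bins, and the four-step window):

import numpy as np

def encode_song(waveform, fft_size=256, n_bins=8, window_steps=4):
    # Non-overlapping power spectra, one per time step.
    n_frames = len(waveform) // fft_size
    spectra = np.abs(np.fft.rfft(
        waveform[:n_frames * fft_size].reshape(n_frames, fft_size), axis=1))
    # Map each spectrum to n_bins values with Gaussians centred on each bin.
    freqs = np.arange(spectra.shape[1])
    centers = np.linspace(0, spectra.shape[1] - 1, n_bins + 2)[1:-1]
    width = (centers[1] - centers[0]) / 2.0            # assumed bin width
    weights = np.exp(-0.5 * ((freqs[None, :] - centers[:, None]) / width) ** 2)
    binned = spectra @ weights.T                       # shape (n_frames, n_bins)
    # Each training vector concatenates four consecutive time steps.
    return np.array([binned[t:t + window_steps].ravel()
                     for t in range(n_frames - window_steps + 1)])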
The network's task was to identify similar syllables in the input song. The input
song was broken down into syllables by looking for points where the power at all
Axes: frequency (kHz) vs. time; neuron rows n1-n8.
Figure 3: Categorization of song syllables by a Kohonen network. The power-spectrum of
the training song is at the top. The responses of the Kohonen neurons are at the bottom.
For each time-step the winning neuron is shown with a vertical bar. The shaded areas
indicate the neuron that fired the most during the presentation of the syllable.
frequencies dropped below a threshold . A syllable was defined as sound of duration
greater than .011 seconds bounded by two low-power points . The network was not
trained on the noise between syllables. The song was played for the network ten
times (1050 training vectors), long enough for a stable response pattern to emerge.
The activation of a neuron was Net_j = \sum_i x_i W_{ij}, where Net_j is the output of neuron j, W_{ij} the weight connecting input i to neuron j, and x_i is input i. The Kohonen network was trained by initializing the connection weights to 1/\sqrt{\text{number of neurons}} plus a small random component (r \le .01), normalizing the inputs, and updating the weights of the winning neuron by the rule W_{new} = W_{old} + \alpha(x - W_{old}), where \alpha is the training rate = .20. If the same neuron won twice in a row the training rate was decreased by 1/2. Only the winning neuron was reinforced, resulting in a non-localized feature map.
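The training procedure above can be transcribed almost directly (the only assumption in this sketch is that the halved training rate is restored once a different neuron wins):

import numpy as np

def train_kohonen(vectors, n_neurons=8, lr=0.20, seed=0):
    rng = np.random.default_rng(seed)
    w = 1.0 / np.sqrt(n_neurons) + rng.uniform(0, 0.01, (n_neurons, vectors.shape[1]))
    prev_winner, alpha = None, lr
    for x in vectors:
        x = x / (np.linalg.norm(x) + 1e-12)      # normalize the input
        winner = int(np.argmax(w @ x))           # Net_j = sum_i x_i W_ij
        if winner == prev_winner:
            alpha *= 0.5                         # same neuron won twice in a row
        else:
            alpha = lr                           # assumed: rate resets otherwise
        w[winner] += alpha * (x - w[winner])     # only the winner is updated
        prev_winner = winner
    return w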
3.1.1
Perceptual Results
The Kohonen network was able to assign a unique neuron to each type of syllable
(figure 3). Of the eight neurons in the network, the one that fired the most frequently
during the presentation of a syllable uniquely identified the type of syllable. The
first four syllables of the input song sound alike, contain similar frequencies , and
are coded by the first neuron (N1). The last three syllables sound alike, contain
similar frequencies , and are coded by the fourth neuron (N4). Syllable five was
coded by neuron six (N6) , syllable six by neuron two (N2) and syllable seven by
neuron eight (N8).
Figure 4 shows the frequency sensitivity of each neuron (1-8, figure 3) plotted against
each time step (1-4). This plot shows the harmonic and temporally sensitive neurons that developed during the learning phase of the Kohonen network. Neuron 2
is sensitive to only one frequency at approximately 6-7 kHz , indicated by the solid
white band across the 6-7 kHz frequency range in figure 4. Neuron 4 is sensitive
to mid-range frequencies of short duration . Note that in figure 4 N4 responds
Axes: Time Step (1-4); rows for neurons N1-N8.
Figure 4: The values of the weights mapping frequency bins and time steps to Kohonen
neurons. White is maximum , Black is minimum .
maximally to mid-range frequencies only in the first two time steps. It uses this
temporal sensitivity to distinguish between the last three syllables and the fifth syllable (figure 3) by keying off the length of time mid-range frequencies are present.
Contrast this early response sensitivity with neuron 6, which is sensitive to midrange frequencies of long duration , but responds only after one time step . It uses
this temporal sensitivity to respond to the long sustained frequency of syllable four .
Considered together, neurons 2, 4, 6 and 8 illustrate the two types of neurons (temporal and harmonic) found in HVc by Margoliash and Fortune (1992). Competitive learning may underlie the formation of these neurons in HVc.
3.2
Production
After competitive learning trains the perceptual part of the network to categorize
the song into syllables , the perceptual network can be used to train the production
side of the network to sing.
The first step in modelling song production is to create a model of the avian vocal apparatus , the syrinx. In the syrinx sounds arise when air flows through the
syringeal passage and causes the tympanic membrane to vibrate. The frequency is
controlled by the tension of the membrane controlled by the syringeal musculature.
The amplitude is dependent on the area of the syringeal orifice which is dependent
on the tension of the labium. The interactions of this system were modelled by
modulated sine waves. Four parameters governed the fundamental frequency(p) ,
frequency modulation (t_m), amplitude (\alpha), and frequency of amplitude modulation (l). The range of the parameters was set according to calculations in Greenwalt (1968). The parameters were combined in the following equation (based on Greenwalt, 1968): f(\alpha, l, p, t_m, t) = \alpha \cos(2\pi l t)\, \cos(2\pi p t + \cos(2\pi t_m t)).
Using this equation song can be generated over time by making assumptions about
the response properties of neurons in RA . Following Doya and Sejnowski (1995) it
was assumed that pools of RA neurons have different temporal response profiles.
Syllable-like temporal responses can be generated by modifying the weights from the Kohonen layer (HVc) to the production layer (RA).
Panels: Training Song; Network Song trained with Spectrogram Target; Network Song trained with Neural Activation Target (frequency in kHz vs. time).
Figure 5: Training song and two songs produced with different representations of the
training song.
The production side of the network was trained using the reinforcement learning
paradigm described in Doya and Sejnowski (1995). Each syllable was presented in
the order it occurred in the training song to the Kohonen layer, which turned on a
single neuron. A random vector was added to the weights from the Kohonen layer to
the output layer and a syllable was produced. The produced syllable was compared
to the stored representation of the template song which was used to generate an
error signal and an estimate of the gradient. If the evaluation of the produced
syllable was better than a threshold the weights were kept, otherwise they were
discarded .
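In outline, the reinforcement step looks like the sketch below (the perturbation size and the form of the evaluation function are assumptions; the accept/reject rule is as described):

import numpy as np

def reinforce_syllable(weights, produce, evaluate, threshold, noise=0.05, trials=1000, seed=0):
    # produce(weights) -> synthesized syllable; evaluate(syllable) -> score,
    # e.g. cosine with the template spectrum or distance to stored neural responses.
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        candidate = weights + rng.normal(0.0, noise, weights.shape)
        if evaluate(produce(candidate)) > threshold:
            weights = candidate          # keep the perturbed weights
        # otherwise the random perturbation is discarded
    return weights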
Two experiments were done using different representations of the template song.
In the first experiment the template song was the stored power spectrum of each
syllable and the error signal was the cosine of the angle between the power spectrum
of the produced syllable and the template syllable. In the second experiment the
template song was the stored neural responses to song (recorded during the memorization phase) and the error signal was the Euclidean distance between neural
responses to the produced syllable and the neural responses to the template song.
3.2.1
Production Results
Figure 5 shows the output of the production network after training with different
representations of the training song. The network was able to replicate the major
frequency components of the training song to a high degree of accuracy. The song
trained with the spectrogram target was learned to a 90% average cosine between
the spectrograms of the produced song and the training song on each syllable with
the best syllable learned to 100% accuracy and the worst to 85% after 1000 trials. A
crucial aspect to achieving performance was smoothing the template spectrogram.
The third song shows that the network was able to learn the template song using the
neural responses of the perceptual system to generate the reinforcement signal. The
average distance between the initial randomly produced syllables and the training
song was reduced by 50%.
4
DISCUSSION
This work fills a crucial gap in the computational explanation of song learning left
by prior work . Doya and Sejnowski (1995) showed how song could be produced
but left unanswered the questions of how song is perceived and how the perceptual
system provides feedback during song production. This study shows a time-delay
Kohonen network can learn to categorize the syllables of a sample song and this
network can train song production with no external teacher. The Kohonen network
explains how neurons sensitive to temporal and harmonic structure could arise in
the songbird brain through competitive learning. Taken as a whole , the model
presents a concrete proposal of the computational principles governing the Auditory Template Hypothesis and how a song is memorized and used to train song
production. Future work will flesh out the effects of innate structure on learning by
examining how the settings of the initial weights on the network affect song learning
and predict experimental effects of deafening and isolation .
Acknowledgements
Thanks to S. Vehrencamp for providing the song data, J . Batali, J. Elman, J. Bradbury and T. Sejnowski for helpful comments , and K. Doya for advice on replicating
his model.
References
Doya, K . and Sejnowski, T .J. (1995). A novel reinforcement model of bird song vocalization
learning. In Tesauro, G ., Touretzky, D. S. and Leen , T.K., editors, Advances in Neural
Information Processing Systems 7. MIT Press, Cambridge, MA.
Greenwalt, C.H. (1968). Bird Song: Acoustics and Physiology. Smithsonian Institution
Press. Wash., D.C.
Kohonen, T . (1984). Self-organization and Associative Memory, Vol. 8. Springer-Verlag,
Berlin.
Konishi, M. (1965). The role of auditory feedback in the control of vocalization in the
white-crowned sparrow. Zeitschrift für Tierpsychologie, 22, 770-783.
Konishi, M. (1989). Birdsong for Neurobiologists. Neuron , 3, 541-549.
Kroodsma, D.E. and Konishi, M. (1991). A suboscine bird (eastern phoebe, Sayornis
phoebe) develops normal song without auditory feedback. Animal Behavior, 42, 477-487.
Marler , P. (1991). The instinct to learn. In The Epigenesis of Mind: Essays on Biology
and Cognition, eds. S. Carey and R. Gelman. Lawrence Erlbaum Associates.
Margoliash , D . (1986). Preference for autogenous song by auditory neurons in a song
system nucleus of the white-crowned sparrow . Journal of Neuroscience, 6,1643-1661.
Margoliash, D . and Bankes, S.C. (1993) . Computations in the Ascending Auditory Pathway in Songbirds Related to Song Learning. American Zoologist, 33 , 94-103.
Margoliash , D. and Fortune, E . (1992). Temporal and Harmonic Combination-Sensitive
Neurons in the Zebra Finch's HVc. Journal of Neuroscience, 12, 4309-4326.
Vicario , D. (1994). Motor Mechanisms Relevant to Auditory-Vocal Interactions in Songbirds. Bra in, Behavior and Evolution,44, 265-278 .
79 | 107 | 323
NEURAL NETWORK RECOGNIZER FOR
HAND-WRITTEN ZIP CODE DIGITS
J. S. Denker, W. R. Gardner, H. P. Graf, D. Henderson, R. E. Howard,
W. Hubbard, L. D. Jackel, H. S. Baird, and I. Guyon
AT &T Bell Laboratories
Holmdel, New Jersey 07733
ABSTRACT
This paper describes the construction of a system that recognizes hand-printed
digits, using a combination of classical techniques and neural-net methods. The
system has been trained and tested on real-world data, derived from zip codes seen
on actual U.S. Mail. The system rejects a small percentage of the examples as
unclassifiable, and achieves a very low error rate on the remaining examples. The
system compares favorably with other state-of-the art recognizers. While some of
the methods are specific to this task, it is hoped that many of the techniques will
be applicable to a wide range of recognition tasks.
MOTIVATION
The problem of recognizing hand-written digits is of enormous practical and theoretical interest [Kahan, Pavlidis, and Baird 1987; Watanabe 1985; Pavlidis 1982].
This project has forced us to formulate and deal with a number of questions ranging from the basic psychophysics of human perception to analog integrated circuit
design.
This is a topic where "neural net" techniques are expected to be relevant, since
the task requires closely mimicking human performance, requires massively parallel
processing, involves confident conclusions based on low precision data, and requires
learning from examples. It is also a task that can benefit from the high throughput
potential of neural network hardware.
Many different techniques were needed. This motivated us to compare various classical techniques as well as modern neural-net techniques. This provided valuable
information about the strengths, weaknesses, and range of applicability of the numerous methods.
The overall task is extremely complex, so we have broken it down into a great
number of simpler steps. Broadly speaking, the recognizer is divided into the preprocessor and the classifier. The two main ideas behind the preprocessor are (1) to
remove meaningless variations (i.e. noise) and (2) to capture meaningful variations
(i.e . salient features).
Most of the results reported in this paper are based on a collection of digits taken
from hand-written Zip Codes that appeared on real U.S. Mail passing through the
Figure 1: Typical Data
Buffalo, N.Y. post office. Details will be discussed elsewhere [Denker et al., 1989].
Examples of such images are shown in figure 1. The digits were written by many
different people, using a great variety of writing styles and instruments, with widely
varying levels of care.
Important parts of the task can be handled nicely by our lab's custom analog
neural network VLSI chip [Graf et al., 1987; Graf & deVegvar, 1987], allowing us
to perform the necessary computations in a reasonable time. Also, since the chip
was not designed with image processing in mind, this provided a good test of the
chips' versatility.
THE PREPROCESSOR
Acquisition
The first step is to create a digital version of the image. One must find where on
the envelope the zip code is, which is a hard task in itself [Wang and Srihari 1988].
One must also separate each digit from its neighbors. This would be a relatively
simple task if we could assume that a character is contiguous and is disconnected
from its neighbors, but neither of these assumptions holds in practice. It is also
common to find that there are meaningless stray marks in the image.
Acquisition, binarization, location, and preliminary segmentation were performed
by Postal Service contractors. In some images there were extraneous marks, so we
developed some simple heuristics to remove them while preserving, in most cases,
all segments of a split character.
Scaling and Deskewing
At this point, the size of the image is typically 40 x 60 pixels, although the scaling
routine can accept images that are arbitrarily large, or as small as 5 x 13 pixels. A
translation and scale factor are then applied to make the image fit in a rectangle
20 x 32 pixels. The character is centered in the rectangle, and just touches either
the horizontal or vertical edges, whichever way fits. It is clear that any extraneous
marks must be removed before this step, lest the good part of the image be radically
compressed in order to make room for some wild mark. The scaling routine changes
the horizontal and vertical size of the image by the same factor, so the aspect ratio
of the character is preserved.
As shown in figure 1, images can differ greatly in the amount of skew, yet be
considered the same digit. This is an extremely significant noise source. To remove
this noise, we use the methods of [Casey 1970]; see also [Naylor 1971]. That is, we
calculate the XY and YY moments of the image, and apply a linear transformation
that drives the XY moment to zero. The transformation is a pure shear, not a
rotation, because we find that rotation is much less common than skew.
The operations of scaling and deskewing are performed in a single step. This yields
a speed advantage, and, more importantly, eliminates the quantization noise that
would be introduced by storing the intermediate images as pixel maps, were the
calculation carried out in separate steps.
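For reference, the moment-based shear alone looks like the following sketch (the combined scale-and-deskew transform of the actual system is not reproduced here; coordinates of ON pixels are assumed as input):

import numpy as np

def deskew(points):
    # points: (N, 2) array of (x, y) coordinates of ON pixels.
    x = points[:, 0].astype(float) - points[:, 0].mean()
    y = points[:, 1].astype(float) - points[:, 1].mean()
    shear = (x * y).mean() / (y * y).mean()       # this slope drives the XY moment to zero
    return np.stack([x - shear * y, y], axis=1)   # pure shear, no rotation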
Skeletonization
For the task of digit recognition, the width of the pen used to make the characters is
completely meaningless, and is highly variable. It is important to remove this noise source by deleting pixels at the boundaries of thick strokes. After a few iterations of this process, each stroke will be as thin as possible. The idea is to remove as
many pixels as possible without breaking the connectivity. Connectivity is based
on the 8 nearest neighbors.
This can be formulated as a pattern matching problem - we search the image
looking for situations in which a pixel should be deleted. The qecisions can be
expressed as a convolution, using a rather small kernel, since the identical decision
process is repeated for each location in the image, and the decision depends on the
configuration of the pixel's nearest and next-nearest neighbors.
Figure 2 shows an example of a character before (e) and after (f) skeletonization.
It also shows some of the templates we use for skeletonization, together with an
indication of where (in the given image) that template was active. To visualize the
convolution process, imagine taking a template, laying it over the image in each
possible place, and asking if the template is "active" in that place. (The template
is the convolution kernel; we use the two terms practically interchangeably.) The
portrayal of the template uses the following code: Black indicates that if the corresponding pixel in the image is ON, it will contribute +1 to the activity level of
this template. Similarly, gray indicates that the corresponding pixel, if ON, will
contribute -5, reducing the activity of this template. The rest of the pixels don't
matter. If the net activity level exceeds a predetermined threshold, the template
is considered active at this location. The outputs of all the skeletonizer templates
Figure 2: Skeletonization.
are combined in a giant logical OR; that is, whenever any template is active, we conclude that the pixel presently under the center of the template should be deleted.
The skeletonization computation involves six nested loops:
for each iteration I
  for all X in the image (horizontal coordinate)
    for all Y in the image (vertical coordinate)
      for all T in the set of template shapes
        for all P in the template (horizontal)
          for all Q in the template (vertical)
            compare image element(X+P, Y+Q)
            with template(T) element(P, Q)
The inner three loops (the loops over T, P, and Q) are performed in parallel, in a single cycle of our special-purpose chip. The outer three loops (I, X, and Y) are performed serially, calling the chip repeatedly. The X and Y loops could be performed in parallel with no change in the algorithms. The additional parallelism would require a proportionate increase in hardware.
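In software, one parallel thinning pass over the whole image can be sketched as follows (the threshold value is assumed; the +1/-5 weighting is as described above):

import numpy as np
from scipy.ndimage import correlate

def skeletonize_pass(image, templates, threshold):
    # image: boolean pixel map; templates: list of 5x5 integer kernels with
    # +1 for "should be ON" and -5 for "should be OFF" entries.
    delete = np.zeros_like(image, dtype=bool)
    for kernel in templates:
        activity = correlate(image.astype(int), kernel, mode='constant')
        delete |= (activity >= threshold) & image   # this template is active here
    return image & ~delete                          # giant OR over templates, then delete

Iterating this pass corresponds to the outermost loop above.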
The purpose of template a is to detect pixels at the top edge of a thick horizontal
line. The three "should be OFF" (light grey shade in figure 2) template elements
enforce the requirement that this should be a boundary, while the three "should be
ON" (solid black shade in figure 2) template elements enforce the requirement that
the line be at least two pixels wide.
Template b is analogous to template a, but rotated 90 degrees. Its purpose is to
detect pixels at the left edge of a thick vertical line.
Template c is similar to, but not exactly the same as, template a rotated 180 degrees.
The distinction is necessary because all templates are applied in parallel. A stroke
that is only two pixels thick must not be attacked from both sides at once, lest it be
removed entirely, changing the connectivity of the image . Previous convolutional
line-thinning schemes [Naccache 1984] used templates of size 3 x 3, and therefore
had to use several serial sub-stages. For parallel operation at least 3 x 4 kernels are
needed, and 5 x 5 templates are convenient, powerful, and flexible.
Feature Maps
Having removed the main sources of meaningless variation, we turn to the task of
extracting the meaningful information. It is known from biological studies [Hubel
and Wiesel 1962] that the human vision system is sensitive to certain features that
occur in images, particularly lines and the ends of lines. We therefore designed
detectors for such features. Previous artificial recognizers [Watanabe 1985] have
used similar feature extractors.
Once again we use a convolutional method for locating the features of interest - we
check each location in the image to see if each particular feature is present there.
Figure 3 shows some of the templates we use, and indicates where they become
active in an example image. The feature extractor templates are 7 x 7 pixels slightly larger than the skeletonizer templates.
Feature b is designed to detect the right-hand end of (approximately) horizontal
strokes. This can be seen as follows: in order for the template to become active
at a particular point, the image must be able to touch the "should be ON" pixels
at the center of the template without touching the surrounding horseshoe-shaped
collection of "'must be OFF" pixels. Essentially the only way this can happen is at
the right-hand end of a stroke. (An isolated dot in the image will also activate this
template, but the images, at this stage, are not supposed to contain dots). Feature
d detects (approximately) horizontal strokes.
There are 49 different feature extractor templates. The output of each is stored
separately. These outputs are called feature maps, since they show what feature(s)
occurred where in the image. It is possible, indeed likely, that several different
features will occur in the same place.
Whereas the outputs of all the skeletonizer templates were combined in a very simple
way (a giant OR), the outputs of the feature extractor templates are combined in
various artful ways. For example, feature b and a similar one are ORed to form a
single combined feature that responds to right-hand ends in general. Certain other
features are ANDed to form detectors for arcs (long curved strokes). There are 18
combined features, and these are what is passed to the next stage.
[Figure: panels a), b), and c) show feature extractor templates and indicate where each becomes active in an example image.]
Figure 3: Feature Extraction
We need to create a compact representation, but starting from the skeletonized
image, we have, instead, created 18 feature maps of the same size. Fortunately, we
can now return to the theme of removing meaningless variation.
If a certain image contains a particular feature (say a left-hand stroke end) in the
upper left corner, it is not really necessary to specify the location of that feature
with great precision. To recognize the shape of the feature required considerable
precision at the input to the convolution, but the position of the feature does not
require so much precision at the output of the convolution. We call this Coarse
Blocking or Coarse Coding of the feature maps. We find that 3 x 5 is sufficient
resolution.
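One way to picture the combination and coarse-blocking steps is the sketch below: feature maps are ORed or ANDed element-wise, and each combined map is then reduced to a 3 x 5 grid by asking whether the feature occurred anywhere inside each block. The particular pairings shown are invented for illustration; only the 3 x 5 output resolution comes from the text.

```python
import numpy as np

def coarse_block(feature_map, out_rows=3, out_cols=5):
    """Coarse coding: a coarse cell is on if the feature occurred anywhere
    inside the corresponding block of the full-resolution map."""
    H, W = feature_map.shape
    coarse = np.zeros((out_rows, out_cols), dtype=np.uint8)
    for i in range(out_rows):
        for j in range(out_cols):
            block = feature_map[i * H // out_rows:(i + 1) * H // out_rows,
                                j * W // out_cols:(j + 1) * W // out_cols]
            coarse[i, j] = 1 if block.size and block.max() else 0
    return coarse

def combine_and_block(maps):
    """Illustrative combination stage (the real 18 combinations differ):
    OR maps 0 and 1 together, AND maps 2 and 3, then coarse-block each."""
    ored = np.maximum(maps[0], maps[1])
    anded = np.minimum(maps[2], maps[3])
    return np.stack([coarse_block(m) for m in (ored, anded)])
```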
CLASSIFIERS
If the automatic recognizer is unable to classify a particular zip code digit, it may
be possible for the Post Office to determine the correct destination by other means.
This is costly, but not nearly so costly as a misclassification (substitution error) that
causes the envelope to be sent to the wrong destination. Therefore it is critically
important for the system to provide estimates of its confidence, and to reject digits
rather than misclassify them.
The objective is not simply to maximize the number of classified digits, nor to
minimize the number of errors . The objective is to minimize the cost of the whole
operation, and this involves a tradeoff between the rejection rate and the error rate.
Preliminary Investigations
Several different classifiers were tried, including Parzen Windows, K nearest neighbors, highly customized layered networks, expert systems, matrix associators, feature spins, and adaptive resonance. We performed preliminary studies to identify
the most promising methods. We determined that the top three methods in this
list were significantly better suited to our task than the others, and we performed
systematic comparisons only among those three.
Classical Clustering Methods
We used two classical clustering techniques, Parzen Windows (PW) and K Nearest Neighbors (KNN), which are nicely described in Duda and Hart [1973]. In
this application, we found (as expected) that they behaved similarly, although PW
consistently outperformed KNN by a small margin. These methods have many
advantages, not the least of which is that they are well motivated and easily understood in terms of standard Bayesian inference theory. They are well suited to
implementation on parallel computers and/or custom hardware. They provide excellent confidence information.
Unlike modern adaptive network methods, PW and KNN require no "learning
time". Furthermore, the performance was reproducible and responded smoothly to
improvements in the preprocessor and increases in the size of the training set. This
is in contrast to the "noisy" performance of typical layered networks. This is convenient, indeed crucial, during exploratory work.
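For concreteness, a k-nearest-neighbor classifier over the feature vectors, with a simple vote-based confidence and a reject option, might look like the sketch below; the distance metric, the value of k, the confidence measure, and the rejection threshold are all assumptions, since the text does not specify them.

```python
import numpy as np

def knn_classify(x, train_X, train_y, k=5, n_classes=10):
    """Majority vote among the k nearest training vectors.

    train_y : integer class labels 0..n_classes-1.
    Returns (label, confidence), where confidence is the fraction of the
    k votes won by the winning class (an assumed confidence measure)."""
    dists = np.linalg.norm(train_X - x, axis=1)      # Euclidean distance assumed
    nearest = np.argsort(dists)[:k]
    votes = np.bincount(train_y[nearest], minlength=n_classes)
    label = int(np.argmax(votes))
    return label, votes[label] / float(k)

def classify_with_reject(x, train_X, train_y, threshold=0.8):
    label, conf = knn_classify(x, train_X, train_y)
    return label if conf >= threshold else None      # None means "reject"
```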
Adaptive Network Methods
In the early phases of the project, we found that neural network methods gave
rather mediocre results. Later, with a high-performance preprocessor, plus a large
training database, we found that a layered network gave the best results, surpassing
even Parzen Windows. We used a network with two stages of processing (i.e., two
layers of weights), with 40 hidden units and using a one-sided objective function (as
opposed to LMS) as described in [Denker and Wittner 1987]. The main theoretical
advantage of the layered network over the classical methods is that it can form
"higher order" features - conjunctions and disjunctions of the features provided
by our feature extractor. Once the network is trained, it has the advantage that the
classification of each input is very rapid compared to PW or KNN. Furthermore,
the weights represent a compact distillation of the training data and thus have a
smaller memory requirement. The network provides confidence information that is
just as good as the classical methods. This is obtained by comparing the activation
level of the most active output against the runner-up unit(s).
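The classification and confidence rule described above can be sketched as follows; the 40 hidden units and the two layers of weights come from the text, while the activation function, the input dimension, and the rejection margin are assumptions of this sketch.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Two stages of processing: input -> 40 hidden units -> 10 output units."""
    hidden = np.tanh(W1 @ x + b1)       # activation function assumed
    return W2 @ hidden + b2             # one output unit per digit class

def classify(x, W1, b1, W2, b2, margin=0.2):
    """Pick the most active output; reject when the runner-up is too close."""
    out = forward(x, W1, b1, W2, b2)
    order = np.argsort(out)[::-1]
    best, runner_up = order[0], order[1]
    if out[best] - out[runner_up] < margin:
        return None                     # low confidence: reject the digit
    return int(best)

# Example with an input of 18 coarse-blocked feature maps of size 3 x 5 = 270 values.
rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.standard_normal((40, 270)), np.zeros(40)
W2, b2 = 0.01 * rng.standard_normal((10, 40)), np.zeros(10)
print(classify(rng.standard_normal(270), W1, b1, W2, b2))
```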
To check on the effectiveness of the preprocessing stages, we applied these three
classification schemes (PW, KNN, and the two-layer network) on 256-bit vectors
consisting of raw bit maps of the images - with no skeletonization and no feature
extraction. For each classification scheme, we found the error rate on the raw bit
maps was at least a factor of 5 greater than the error rate on the feature vectors,
thus clearly demonstrating the utility of feature extraction.
TESTING
It is impossible to compare the performance of recognition systems except on identical databases. Using highly motivated "friendly" writers, it is possible to get a
dataset that is so clean that practically any algorithm would give outstanding results. On the other hand, if the writers are not motivated to write clearly, the result
will be not classifiable by machines of any sort (nor by humans for that matter).
It would have been much easier to classify digits that were input using a mouse or
bitpad, since the lines in such an image have zero thickness, and stroke-order
information is available. It would also have been much easier to recognize digits
from a single writer.
The most realistic test data we could obtain was provided by the US Postal Service.
It consists of approximately 10,000 digits (1000 in each category) obtained from the
zip codes on actual envelopes. The data we received had already been binarized
and divided into images of individual digits, rather than multi-digit zip codes, but
no further processing had been done.
On this data set, our best performance is as follows: if 14% of the images are rejected
as unclassifiable, only 1% of the remainder are misclassified. If no images are rejected, approximately 6% are misclassified. Other groups are working with the same
dataset, but their results have not yet been published. Informal communications
indicate that our results are among the best.
CONCLUSIONS
We have obtained very good results on this very difficult task. Our methods include
low-precision and analog processing, massively parallel computation, extraction of
biologically-motivated features, and learning from examples. We feel that this is,
therefore, a fine example of a Neural Information Processing System. We emphasize that old-fashioned engineering, classical pattern recognition, and the latest
learning-from-examples methods were all absolutely necessary. Without the careful
engineering, a direct adaptive network attack would not succeed, but by the same
token, without learning from a very large database, it would have been excruciating
to engineer a sufficiently accurate representation of the probability space.
Acknowledgements
It is a pleasure to acknowledge useful discussions with Patrick Gallinari and technical assistance from Roger Epworth. We thank Tim Barnum of the U.S. Postal
Service for making the Zip Code data available to us.
References
1. R. G. Casey, "Moment Normalization of Handprinted Characters", IBM J.
Res. Develop., 548 (1970)
2. J. S. Denker et al., "Details of the Hand-Written Character Recognizer", to
be published (1989)
3. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis,
John Wiley and Sons (1973)
4. E. Gullichsen and E. Chang, "Pattern Classification by Neural Network: An
Experimental System for Icon Recognition", Proc. IEEE First Int. Conf. on
Neural Networks, San Diego, IV, 725 (1987)
5. H. P. Graf, W. Hubbard, L. D. Jackel, P.G.N. deVegvar, "A CMOS Associative
Memory Chip", Proc. IEEE First Int. Conf. on Neural Networks, San Diego,
III, 461 (1987)
6. H. P. Graf and P. deVegvar, "A CMOS Implementation of a Neural Network
Model", Proc. 1987 Stanford Conf. Advanced Res. VLSI, P. Losleben (ed.)
MIT Press, 351 (1987)
7. D. H. Hubel and T. N. Wiesel, "Receptive fields, binocular interaction and
functional architecture in the cat's visual cortex", J. Physiology 160, 106
(1962)
8. S. Kahan, T. Pavlidis, and H. S. Baird, "On the Recognition of Printed Characters of Any Font and Size", IEEE Transactions on Pattern Analysis and
Machine Intelligence, PAMI-9, 274 (1987)
9. N. J. Naccache and R. Shinghal, "SPTA: A Proposed Algorithm for Thinning
Binary Patterns", IEEE Trans. Systems, Man, and Cybernetics, SMC-14,
409 (1984)
10. W. C. Naylor, "Some Studies in the Interactive Design of Character Recognition Systems", IEEE Transactions on Computers, 1075 (1971)
11. T. Pavlidis, Algorithms for Graphics and Image Processing, Computer
Science Press (1982)
12. C. Y. Suen, M. Berthod, and S. Mori, "Automatic Recognition of Handprinted
Characters - The State of the Art", Proceedings of the IEEE 68(4), 469
(1980).
13. C-H. Wang and S. N. Srihari, "A Framework for Object Recognition in a Visually Complex Environment and its Application to Locating Address Blocks
on Mail Pieces", IntI. J. Computer Vision 2, 125 (1988)
14. S. Watanabe, Pattern Recognition, John Wiley and Sons, New York (1985)
A Bound on the Error of Cross Validation Using
the Approximation and Estimation Rates, with
Consequences for the Training-Test Split
Michael Kearns
AT&T Research
1 INTRODUCTION
We analyze the performance of cross validation 1 in the context of model selection and
complexity regularization. We work in a setting in which we must choose the right number
of parameters for a hypothesis function in response to a finite training sample, with the goal
of minimizing the resulting generalization error. There is a large and interesting literature
on cross validation methods, which often emphasizes asymptotic statistical properties, or
the exact calculation of the generalization error for simple models. Our approach here is
somewhat different, and is primarily inspired by two sources. The first is the work of Barron
and Cover [2], who introduced the idea of bounding the error of a model selection method
(in their case, the Minimum Description Length Principle) in terms of a quantity known as
the index of resolvability. The second is the work of Vapnik [5], who provided extremely
powerful and general tools for uniformly bounding the deviations between training and
generalization errors.
We combine these methods to give a new and general analysis of cross validation performance. In the first and more formal part of the paper, we give a rigorous bound on the error
of cross validation in terms of two parameters of the underlying model selection problem:
the approximation rate and the estimation rate. In the second and more experimental part
of the paper, we investigate the implications of our bound for choosing γ, the fraction of
data withheld for testing in cross validation. The most interesting aspect of this analysis is
the identification of several qualitative properties of the optimal γ that appear to be invariant
over a wide class of model selection problems:
• When the target function complexity is small compared to the sample size, the
performance of cross validation is relatively insensitive to the choice of γ.
• The importance of choosing γ optimally increases, and the optimal value for γ
decreases, as the target function becomes more complex relative to the sample
size.
• There is nevertheless a single fixed value for γ that works nearly optimally for a
wide range of target function complexity.
2 THE FORMALISM
We consider model selection as a two-part problem: choosing the appropriate number of
parameters for the hypothesis function, and tuning these parameters. The training sample
is used in both steps of this process. In many settings, the tuning of the parameters is
determined by a fixed learning algorithm such as backpropagation, and then model selection
reduces to the problem of choosing the architecture. Here we adopt an idealized version of
this division of labor. We assume a nested sequence of function classes H_1 ⊂ ··· ⊂ H_d ⊂ ···,
called the structure [5], where H_d is a class of boolean functions of d parameters, each
¹Perhaps in conflict with accepted usage in statistics, here we use the term "cross validation" to
mean the simple method of saving out an independent test set to perform model selection. Precise
definitions will be stated shortly.
function being a mapping from some input space X into {0, 1}. For simplicity, in this
paper we assume that the Vapnik-Chervonenkis (VC) dimension [6, 5] of the class H_d is
O(d). To remove this assumption, one simply replaces all occurrences of d in our bounds by
the VC dimension of H_d. We assume that we have in our possession a learning algorithm
L that on input any training sample S and any value d will output a hypothesis function
h_d ∈ H_d that minimizes the training error over H_d - that is, ε_t(h_d) = min_{h∈H_d} {ε_t(h)},
where ε_t(h) is the fraction of the examples in S on which h disagrees with the given label.
In many situations, training error minimization is known to be computationally intractable,
leading researchers to investigate heuristics such as backpropagation. The extent to which
the theory presented here applies to such heuristics will depend in part on the extent to
which they approximate training error minimization for the problem under consideration.
Model selection is thus the problem of choosing the best value of d. More precisely, we
assume an arbitrary target function f (which may or may not reside in one of the function
classes in the structure H_1 ⊂ ··· ⊂ H_d ⊂ ···), and an input distribution P; f and P together
define the generalization error function ε_g(h) = Pr_{x∈P}[h(x) ≠ f(x)]. We are given a
training sample S of f, consisting of m random examples drawn according to P and labeled
by f (with the labels possibly corrupted by a noise process that randomly complements each
label independently with probability η < 1/2). The goal is to minimize the generalization
error of the hypothesis selected.
In this paper, we will make the rather mild but very useful assumption that the structure has
the property that for any sample size m, there is a value d_max(m) such that ε_t(h_{d_max(m)}) =
0 for any labeled sample S of m examples. We call the function d_max(m) the fitting
number of the structure. The fitting number formalizes the simple notion that with enough
parameters, we can always fit the training data perfectly, a property held by most sufficiently
powerful function classes (including multilayer neural networks). We typically expect the
fitting number to be a linear function of m, or at worst a polynomial in m. The significance
of the fitting number for us is that no reasonable model selection method should choose h_d
for d ≥ d_max(m), since doing so simply adds complexity without reducing the training
error.
In this paper we concentrate on the simplest version of cross validation. We choose a
parameter γ ∈ [0, 1], which determines the split between training and test data. Given the
input sample S of m examples, let S' be the subsample consisting of the first (1 − γ)m
examples in S, and S'' the subsample consisting of the last γm examples. In cross validation,
rather than giving the entire sample S to L, we give only the smaller sample S', resulting in
the sequence h_1, ..., h_{d_max((1−γ)m)} of increasingly complex hypotheses. Each hypothesis
is now obtained by training on only (1 − γ)m examples, which implies that we will only
consider values of d smaller than the corresponding fitting number d_max((1 − γ)m); let
us introduce the shorthand d^γ_max for d_max((1 − γ)m). Cross validation chooses the h_d
satisfying ε''(h_d) = min_{i∈{1,...,d^γ_max}} {ε''(h_i)}, where ε''(h_i) is the error of h_i on the subsample
S''. Notice that we are not considering multifold cross validation, or other variants that
make more efficient use of the sample, because our analyses will require the independence
of the test set. However, we believe that many of the themes that emerge here may apply to
these more sophisticated variants as well.
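In code, the version of cross validation analyzed here amounts to the sketch below; the learning algorithm L, the hypothesis representation, and the fitting number are supplied by the caller, and 0 < γ < 1 is assumed.

```python
import numpy as np

def cross_validation_select(xs, ys, gamma, train_in_class, d_max):
    """Train/test-split model selection.

    train_in_class(d, xs, ys) -> a hypothesis h (a callable) minimizing
    training error within H_d (a stand-in for the learning algorithm L);
    d_max is the fitting number of the structure, evaluated at the
    training-set size.  Returns the hypothesis whose error on the held-out
    last gamma*m examples is smallest.
    """
    m = len(xs)
    n_train = int(round((1.0 - gamma) * m))
    x_tr, y_tr = xs[:n_train], ys[:n_train]       # first (1 - gamma) m examples
    x_te, y_te = xs[n_train:], ys[n_train:]       # last  gamma m      examples

    best_h, best_err = None, np.inf
    for d in range(1, d_max(n_train) + 1):
        h = train_in_class(d, x_tr, y_tr)
        test_err = np.mean([h(x) != y for x, y in zip(x_te, y_te)])
        if test_err < best_err:
            best_h, best_err = h, test_err
    return best_h
```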
We use ε_cv(m) to denote the generalization error ε_g(h_d) of the hypothesis h_d chosen by cross
validation when given as input a sample S of m random examples of the target function.
Obviously, ε_cv(m) depends on S, the structure, f, P, and the noise rate. When bounding
ε_cv(m), we will use the expression "with high probability" to mean with probability 1 − δ
over the sample S, for some small fixed constant δ > 0. All of our results can also be
stated with δ as a parameter at the cost of a log(1/δ) factor in the bounds, or in terms of the
expected value of ε_cv(m).
3 THE APPROXIMATION RATE
It is apparent that any nontrivial bound on ε_cv(m) must take account of some measure of the
"complexity" of the unknown target function f. The correct measure of this complexity is
less obvious. Following the example of Barron and Cover's analysis of MDL performance
in the context of density estimation [2], we propose the approximation rate as a natural
measure of the complexity of f and P in relation to the chosen structure H_1 ⊂ ··· ⊂ H_d ⊂ ···.
Thus we define the approximation rate function ε_g(d) to be ε_g(d) = min_{h∈H_d} {ε_g(h)}. The
function ε_g(d) tells us the best generalization error that can be achieved in the class H_d,
and it is a nonincreasing function of d. If ε_g(s) = 0 for some sufficiently large s, this
means that the target function f, at least with respect to the input distribution, is realizable
in the class H_s, and thus s is a coarse measure of how complex f is. More generally, even
if ε_g(d) > 0 for all d, the rate of decay of ε_g(d) still gives a nice indication of how much
representational power we gain with respect to f and P by increasing the complexity of
our models. Still missing, of course, is some means of determining the extent to which this
representational power can be realized by training on a finite sample of a given size, but
this will be added shortly. First we give examples of the approximation rate that we will
examine following the general bound on ε_cv(m).
The Intervals Problem. In this problem, the input space X is the real interval [0,1], and
the class Hd of the structure consists of all boolean step functions over [0,1] of at most
d steps; thus, each function partitions the interval [0, 1] into at most d disjoint segments
(not necessarily of equal width), and assigns alternating positive and negative labels to
these segments. The input space is one-dimensional, but the structure contains arbitrarily
complex functions over [0, 1]. It is easily verified that our assumption that the VC dimension
of H_d is O(d) holds here, and that the fitting number obeys d_max(m) ≤ m. Now suppose
that the input density P is uniform, and suppose that the target function f is the function
of s alternating segments of equal width 1/s, for some s (thus, f lies in the class H_s).
We will refer to these settings as the intervals problem. Then the approximation rate is
ε_g(d) = (1/2)(1 − d/s) for 1 ≤ d < s and ε_g(d) = 0 for d ≥ s (see Figure 1).
The Perceptron Problem. In this problem, the input space X is R^N for some large
natural number N. The class H_d consists of all perceptrons over the N inputs in which
at most d weights are nonzero. If the input density is spherically symmetric (for instance,
the uniform density on the unit ball in R^N), and the target function is the function in H_s
with all s nonzero weights equal to 1, then it can be shown that the approximation rate
is ε_g(d) = (1/π) cos⁻¹(√(d/s)) for d < s [4], and of course ε_g(d) = 0 for d ≥ s (see
Figure 1).
Power Law Decay. In addition to the specific examples just given, we would also like
to study reasonably natural parametric forms of Eg( d), to determine the sensitivity of our
theory to a plausible range of behaviors for the approximation rate. This is important,
because in practice we do not expect to have precise knowledge of Eg(d), since it depends
on the target function and input distribution. Following the work of Barron [1], who shows
a c/dbound on Eg(d) for the case of neural networks with one hidden layer under a squared
error generalization measure (where c is a measure of target function complexity in terms
of a Fourier transform integrability condition) 2, we can consider approximation rates of
the form Eg(d) = (c/d)a + Emin, where Emin ~ 0 is a parameter representing the "degree
of unreal izability" of I with respect to the structure, and c, a > 0 are parameters capturing
the rate of decay to Emin (see Figure 1).
4 THE ESTIMATION RATE
For a fixed f, P and H_1 ⊂ ··· ⊂ H_d ⊂ ···, we say that a function ρ(d, m) is an estimation rate
bound if for all d and m, with high probability over the sample S we have |ε_t(h_d) − ε_g(h_d)| ≤
ρ(d, m), where as usual h_d is the result of training error minimization on S within H_d.
Thus ρ(d, m) simply bounds the deviation between the training error and the generalization
error of h_d. Note that the best such bound may depend in a complicated way on all of
the elements of the problem: f, P and the structure. Indeed, much of the recent work
the elements of the problem: I, P and the structure. Indeed, much of the recent work
on the statistical physics theory of learning curves has documented the wide variety of
behaviors that such deviations may assume [4, 3]. However, for many natural problems
²Since the bounds we will give have straightforward generalizations to real-valued function learning under squared error, examining behavior for ε_g(d) in this setting seems reasonable.
it is both convenient and accurate to rely on a universal estimation rate bound provided
by the powerful theory of uniform convergence: Namely, for any f, P and any structure,
the function ρ(d, m) = √((d/m) log(m/d)) is an estimation rate bound [5]. Depending
upon the details of the problem, it is sometimes appropriate to omit the log(m/d) factor,
and often appropriate to refine the √(d/m) behavior to a function that interpolates smoothly
between d/m behavior for small ε_t to √(d/m) for large ε_t. Although such refinements are
both interesting and important, many of the qualitative claims and predictions we will make
are invariant to them as long as the deviation |ε_t(h_d) − ε_g(h_d)| is well-approximated by a
power law (d/m)^α (α > 0); it will be more important to recognize and model the cases in
which power law behavior is grossly violated.
Note that this universal estimation rate bound holds only under the assumption that the
training sample is noise-free, but straightforward generalizations exist. For instance, if the
training data is corrupted by random label noise at rate 0 ≤ η < 1/2, then ρ(d, m) =
√((d/((1 − 2η)²m)) log(m/d)) is again a universal estimation rate bound.
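For reference, the universal estimation rate bound and its noisy-label variant can be evaluated with a couple of lines of code:

```python
import math

def rho(d, m):
    """Universal estimation rate bound sqrt((d/m) * log(m/d)), for m > d."""
    return math.sqrt((d / m) * math.log(m / d))

def rho_noisy(d, m, eta):
    """Variant under random label noise at rate 0 <= eta < 1/2."""
    return math.sqrt((d / ((1.0 - 2.0 * eta) ** 2 * m)) * math.log(m / d))

print(rho(10, 10000), rho_noisy(10, 10000, 0.3))
```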
5 THE BOUND
Theorem 1 Let H_1 ⊂ ··· ⊂ H_d ⊂ ··· be any structure, where the VC dimension of H_d is
O(d). Let f and P be any target function and input distribution, let ε_g(d) be the approximation rate function for the structure with respect to f and P, and let ρ(d, m) be an
estimation rate bound for the structure with respect to f and P. Then for any m, with high
probability

ε_cv(m) ≤ min_{1≤d≤d^γ_max} {ε_g(d) + ρ(d, (1 − γ)m)} + O(√(log(d^γ_max)/(γm)))    (1)

where γ is the fraction of the training sample used for testing, and d^γ_max is the fitting number
d_max((1 − γ)m). Using the universal estimation rate bound and the rather weak assumption
that d_max(m) is polynomial in m, we obtain that with high probability

ε_cv(m) ≤ min_{1≤d≤d^γ_max} {ε_g(d) + √((d/((1 − γ)m)) log((1 − γ)m/d))} + O(√(log((1 − γ)m)/(γm)))    (2)

Straightforward generalizations of these bounds for the case where the data is corrupted
by classification noise can be obtained, using the modified estimation rate bound given in
Section 4.³
We delay the proof of this theorem to the full paper due to space considerations. However,
the central idea is to appeal twice to uniform convergence arguments: once within each class
Hd to bound the generalization error of the resulting training error minimizer hd E Hd, and
a second time to bound the generalization error of the hd minimizing the error on the test
set of ,m examples.
In the bounds given by (1) and (2), the min{·} expression is analogous to Barron and Cover's
index of resolvability [2]; the final term in the bounds represents the error introduced by
the testing phase of cross validation. These bounds exhibit tradeoff behavior with respect
to the parameter γ: as we let γ approach 0, we are devoting more of the sample to training
the h_d, and the estimation rate bound term ρ(d, (1 − γ)m) is decreasing. However, the
test error term O(√(log(d^γ_max)/(γm))) is increasing, since we have less data to accurately
estimate the ε_g(h_d). The reverse phenomenon occurs as we let γ approach 1.
While we believe Theorem 1 to be enlightening and potentially useful in its own right,
we would now like to take its interpretation a step further. More precisely, suppose we
³The main effect of classification noise at rate η is the replacement of occurrences in the bound of
the sample size m by the smaller "effective" sample size (1 − η)²m.
assume that the bound is an approximation to the actual behavior of ε_cv(m). Then in
principle we can optimize the bound to obtain the best value for γ. Of course, in addition
to the assumptions involved (the main one being that ρ(d, m) is a good approximation to
the training-generalization error deviations of the h_d), this analysis can only be carried out
given information that we should not expect to have in practice (at least in exact form);
in particular, the approximation rate function ε_g(d), which depends on f and P. However,
we argue in the coming sections that several interesting qualitative phenomena regarding
the choice of γ are largely invariant to a wide range of natural behaviors for ε_g(d).
6 A CASE STUDY: THE INTERVALS PROBLEM
We begin by performing the suggested optimization of γ for the intervals problem. Recall
that the approximation rate here is ε_g(d) = (1/2)(1 − d/s) for d < s and ε_g(d) = 0 for
d ≥ s, where s is the complexity of the target function. Here we analyze the behavior
obtained by assuming that the estimation rate term actually behaves as ρ(d, (1 − γ)m) =
√(d/((1 − γ)m)) (so we are omitting the log factor from the universal bound), and to simplify
the formal analysis a bit (but without changing the qualitative behavior) we replace the
term √(log((1 − γ)m)/(γm)) by the weaker √(log(m)/(γm)). Thus, if we define the function
F(d, m, γ) = ε_g(d) + √(d/((1 − γ)m)) + √(log(m)/(γm)), then following Equation (1), we
are approximating ε_cv(m) by ε_cv(m) ≈ min_{1≤d≤d^γ_max} {F(d, m, γ)}.⁴
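The analysis that follows can also be reproduced numerically; the sketch below evaluates this approximation to ε_cv(m) on a grid of γ values for the intervals problem, with all hidden constants set to 1 as in the text.

```python
import math

def eps_g_intervals(d, s):
    """Approximation rate for the intervals problem."""
    return 0.0 if d >= s else 0.5 * (1.0 - d / s)

def F(d, m, gamma, s):
    """eps_g(d) + sqrt(d / ((1 - gamma) m)) + sqrt(log(m) / (gamma m))."""
    return (eps_g_intervals(d, s)
            + math.sqrt(d / ((1.0 - gamma) * m))
            + math.sqrt(math.log(m) / (gamma * m)))

def predicted_cv_error(m, gamma, s):
    d_max = int((1.0 - gamma) * m)      # fitting number is at most (1 - gamma) m here
    return min(F(d, m, gamma, s) for d in range(1, d_max + 1))

m, s = 10000, 100
gammas = [g / 100.0 for g in range(1, 100)]
best_gamma = min(gammas, key=lambda g: predicted_cv_error(m, g, s))
print(best_gamma, predicted_cv_error(m, best_gamma, s))
```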
The first step of the analysis is to fix a value for γ and differentiate F(d, m, γ) with respect
to d to discover the minimizing value of d; the second step is to differentiate with respect to
γ. It can be shown (details omitted) that the optimal choice of γ under the assumptions is
γ_opt = (log(m)/s)^{1/3}/(1 + (log(m)/s)^{1/3}). It is important to remember at this point that
despite the fact that we have derived a precise expression for γ_opt, due to the assumptions
and approximations we have made in the various constants, any quantitative interpretation
of this expression is meaningless. However, we can reasonably expect that this expression
captures the qualitative way in which the optimal γ changes as the amount of data m
changes in relation to the target function complexity s. On this score the situation initially
appears rather bleak, as the function (log(m)/s)^{1/3}/(1 + (log(m)/s)^{1/3}) is quite sensitive
to the ratio log(m)/s, which is something we do not expect to have the luxury of knowing
in practice. However, it is both fortunate and interesting that γ_opt does not tell the entire
story. In Figure 2, we plot the function F(s, m, γ) as a function of γ for m = 10000 and for
several different values of s (note that for consistency with the later experimental plots, the
x axis of the plot is actually the training fraction 1 − γ). Here we can observe four important
qualitative phenomena, which we list in order of increasing subtlety: (A) When s is small
compared to m, the predicted error is relatively insensitive to the choice of γ: as a function
of γ, F(s, m, γ) has a wide, flat bowl, indicating a wide range of γ yielding essentially the
same near-optimal error. (B) As s becomes larger in comparison to the fixed sample size
m, the relative superiority of γ_opt over other values for γ becomes more pronounced. In
particular, large values for γ become progressively worse as s increases. For example, the
plots indicate that for s = 10 (again, m = 10000), even though γ_opt = 0.524..., the choice
γ = 0.75 will result in error quite near that achieved using γ_opt. However, for s = 500,
γ = 0.75 is predicted to yield greatly suboptimal error. Note that for very large s, the bound
predicts vacuously large error for all values of γ, so that the choice of γ again becomes
irrelevant. (C) Because of the insensitivity to γ for s small compared to m, there is a fixed
value of γ which seems to yield reasonably good performance for a wide range of values
for s. This value is essentially the value of γ_opt for the case where s is large but nontrivial
generalization is still possible, since choosing the best value for γ is more important there
than for the small s case. (D) The value of γ_opt is decreasing as s increases. This is slightly
difficult to confirm from the plot, but can be seen clearly from the precise expression for
γ_opt.
⁴Although there are hidden constants in the O(·) notation of the bounds, it is the relative weights
of the estimation and test error terms that is important, and choosing both constants equal to 1 is a
reasonable choice (since both terms have the same Chernoff bound origins).
In Figure 3, we plot the results of experiments in which labeled random samples of size
m = 5000 were generated for a target function of s equal width intervals, for s = 10,100
and 500. The samples were corrupted by random label noise at rate η = 0.3. For each
value of γ and each value of d, (1 − γ)m of the sample was given to a program performing
training error minimization within H_d; the remaining γm examples were used to select the
best h_d according to cross validation. The plots show the true generalization error of the
h_d selected by cross validation as a function of γ (the generalization error can be computed
exactly for this problem). Each point in the plots represents an average over 10 trials.
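For reference, data of the kind used in these experiments can be generated as in the sketch below: uniform inputs on [0, 1], an alternating s-interval target, and labels flipped independently with probability 0.3.

```python
import numpy as np

def intervals_target(x, s):
    """Alternating target of s equal-width intervals on [0, 1]:
    the label is the parity of the interval containing x."""
    return int(np.floor(x * s)) % 2

def make_sample(m, s, noise=0.3, seed=0):
    rng = np.random.default_rng(seed)
    xs = rng.uniform(0.0, 1.0, size=m)
    ys = np.array([intervals_target(x, s) for x in xs])
    flips = rng.uniform(size=m) < noise        # random label noise
    return xs, np.where(flips, 1 - ys, ys)

xs, ys = make_sample(m=5000, s=100)
```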
While there are obvious and significant quantitative differences between these experimental
plots and the theoretical predictions of Figure 2, the properties (A), (B) and (C) are rather
clearly borne out by the data: (A) In Figure 3, when s is small compared to m, there
is a wide range of acceptable γ; it appears that any choice of γ between 0.10 and 0.50
yields nearly optimal generalization error. (B) By the time s = 100, the sensitivity to γ is
considerably more pronounced. For example, the choice γ = 0.50 now results in clearly
suboptimal performance, and it is more important to have γ close to 0.10. (C) Despite these
complexities, there does indeed appear to be a single value of γ, approximately 0.10, that performs nearly optimally for the entire range of s examined.
The property (D) - namely, that the optimal γ decreases as the target function complexity
is increased relative to a fixed m - is certainly not refuted by the experimental results,
but any such effect is simply too small to be verified. It would be interesting to verify
this prediction experimentally, perhaps on a different problem where the predicted effect is
more pronounced.
7 CONCLUSIONS
For the cases where the approximation rate ε_g(d) obeys either power law decay or is that
derived for the perceptron problem discussed in Section 3, the behavior of ε_cv(m) as a
function of γ predicted by our theory is largely the same (for example, see Figure 4). In the
full paper, we describe some more realistic experiments in which cross validation is used
to determine the number of backpropagation training epochs. Figures similar to Figures 2
through 4 are obtained, again in rough accordance with the theory.
In summary, our theory predicts that although significant quantitative differences in the
behavior of cross validation may arise for different model selection problems, the properties
(A), (B), (C) and (D) should be present in a wide range of problems. At the very least,
the behavior of our bounds exhibits these properties for a wide range of problems. It
would be interesting to try to identify natural problems for which one or more of these
properties is strongly violated; a potential source for such problems may be those for which
the underlying learning curve deviates from classical power law behavior [4, 3].
Acknowledgements: I give warm thanks to Yishay Mansour, Andrew Ng and Dana Ron
for many enlightening conversations on cross validation and model selection. Additional
thanks to Andrew Ng for his help in conducting the experiments.
References
[1] A. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE
Transactions on Information Theory, 19:930-944, 1991.
[2] A. R. Barron and T. M. Cover. Minimum complexity density estimation. IEEE Transactions on
Information Theory, 37:1034-1054, 1991.
[3] D. Haussler, M. Kearns, H. S. Seung, and N. Tishby. Rigorous learning curve bounds from
statistical mechanics. In Proceedings of the Seventh Annual ACM Conference on Computational
Learning Theory, pages 76-87, 1994.
[4] H. S. Seung, H. Sompolinsky, and N. Tishby. Statistical mechanics of learning from examples.
Physical Review, A45:6056-6091, 1992.
[5] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, New York,
1982.
[6] V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of
events to their probabilities. Theory of Probability and its Applications, 16(2):264-280, 1971.
Figure 1: Plots of three approximation rates: for the intervals problem with target complexity s = 250 intervals (linear plot intersecting the d-axis at 250), for the perceptron problem with target complexity s = 150 nonzero weights (nonlinear plot intersecting the d-axis at 150), and for power law decay asymptoting at ε_min = 0.05.
Figure 2: Plot of the predicted generalization error of cross validation for the intervals model selection problem, as a function of the fraction 1 − γ of data used for training. (In the plot, the fraction of training data is 0 on the left (γ = 1) and 1 on the right (γ = 0).) The fixed sample size m = 10,000 was used, and the 6 plots show the error predicted by the theory for target function complexity values s = 10 (bottom plot), 50, 100, 250, 500, and 1000 (top plot).
Figure 3: Experimental plots of cross validation generalization error in the intervals problem as a function of training set size (1 − γ)m. Experiments with the three target complexity values s = 10, 100 and 500 (bottom plot to top plot) are shown. Each point represents performance averaged over 10 trials.
Figure 4: Plot of the predicted generalization error of cross validation for the power law case ε_g(d) = c/d, as a function of the fraction 1 − γ of data used for training. The fixed sample size m = 25,000 was used, and the 6 plots show the error predicted by the theory for target function complexity values c = 1 (bottom plot), 25, 50, 75, 100, and 150 (top plot).
A Model of Spatial Representations in
Parietal Cortex Explains Hemineglect
Alexandre Pouget
Dept of Neurobiology
UCLA
Los Angeles, CA 90095-1763
alex@salk.edu
Terrence J. Sejnowski
Howard Hughes Medical Institute
The Salk Institute
La Jolla, CA 92037
terry@salk.edu
Abstract
We have recently developed a theory of spatial representations in
which the position of an object is not encoded in a particular frame
of reference but, instead, involves neurons computing basis functions of their sensory inputs. This type of representation is able
to perform nonlinear sensorimotor transformations and is consistent with the response properties of parietal neurons. We now ask
whether the same theory could account for the behavior of human
patients with parietal lesions. These lesions induce a deficit known
as hemineglect that is characterized by a lack of reaction to stimuli
located in the hemispace contralateral to the lesion. A simulated
lesion in a basis function representation was found to replicate three
of the most important aspects of hemineglect: i) The models failed
to cross the leftmost lines in line cancellation experiments, ii) the
deficit affected multiple frames of reference and, iii) it could be
object centered. These results strongly support the basis function
hypothesis for spatial representations and provide a computational
theory of hemineglect at the single cell level.
1 Introduction
According to current theories of spatial representations, the positions of objects
are represented in multiple modules throughout the brain, each module being specialized for a particular sensorimotor transformation and using its own frame of
reference. For instance, the lateral intraparietal area (LIP) appears to encode the
location of objects in oculocentric coordinates, presumably for the control of saccadic eye movements. The ventral intraparietal cortex (VIP) and the premotor
cortex, on the other hand, seem to use head-centered coordinates and might be
involved in the control of hand movements toward the face.
[Figure: panel A shows the left and right stimuli relative to the fixation point (FP) in conditions C1 and C2; panel B shows the target and distractors in conditions C1, C2, and C3.]
Figure 1: A. Retinotopic neglect modulated by egocentric position. B. Stimulus-centered neglect
This modular theory of spatial representations is not fully consistent with the behavior of patients with parietal or frontal lesions. Such lesions cause a syndrome
known as hemineglect which is characterized by a lack of response to sensory stimuli appearing in the hemispace contralateral to the lesion [3]. According to the
modular view, the deficit should be behavior dependent, e.g., oculocentric for eye
movements, head-centered for reaching. However, experimental and clinical studies
show that this is not the case. Instead, neglect affects multiple frames of reference
simultaneously, and to a first approximation, independently of the task.
This point is particularly clear in an experiment by Karnath et al (1993) (Figure 1A). Subjects were asked to identify a stimulus that can appear on either side
of the fixation point. In order to test whether the position of the stimuli with
respect to the body affects performance, two conditions were tested: a control condition with head straight ahead (C1), and a second condition with head rotated
20 degrees on the right (or equivalently, with the trunk rotated 20 degrees on the
left, see figure) (C2). In C2, both stimuli appeared further to the right ofthe trunk
while being at the same location with respect to the head and retina than in Cl.
Moreover, the trunk-centered position of the left stimulus in C2 was the same than
the trunk-centered position of the right stimulus in C1.
As expected, subjects with right parietal lesions performed better on the right
stimulus in the control condition, a result consistent with both, retinotopic and
trunk-centered neglect. To distinguish between the two frames of reference, one
needs to compare performance across conditions.
If the deficit is purely retinocentric, the results should be identical in both conditions, since the retinotopic location of the stimuli does not vary. If, on the other
hand, the deficit is purely trunk-centered, the performance on the left stimulus
should improve when the head is turned right since the stimulus now appears further toward the right of the trunk-centered hemispace. Furthermore, performance
on the right stimulus in the control condition should be the same as performance on
the left stimulus in the rotated condition, since they share the same trunk-centered
position in both cases.
Neither of these hypotheses can fully account for the data. As expected from a
retinotopic neglect, subjects always performed better on the right stimulus in both
conditions. However, performance on the left stimulus improved when the head
was turned right (C2), though not sufficiently to match the level of performance on
the right stimulus in the control condition (C1). Therefore, these results suggest a
retinotopic neglect modulated by trunk-centered factors.
In addition, Karnath et al (1991) tested patients on a similar experiment in which
subjects were asked to generate a saccade toward the target. The analysis of reaction
time revealed the same type of results as the one found in the identification
task, thereby demonstrating that the spatial deficit is, to a first approximation,
independent of the task.
An experiment by Arguin and Bub (1993) suggests that neglect can be object-centered as well. As shown in figure 1B, they found that reaction times were faster
when the target appeared on the right of a set of distractors (C2), as opposed
to the left (C1), even though the target is at the same retinotopic location in
both conditions. Interestingly, moving the target further to the right leads to even
faster reaction times (C3), showing that hemineglect is not only object-centered but
retinotopic as well in this task.
These results strongly support the existence of spatial representations using multiple
frames of reference simultaneously shared by several behaviors. We have recently
developed a theory [6] which has precisely these properties and we ask here whether
a simulated lesion would lead to a deficit similar to hemineglect. Our theory posits
that parietal neurons compute basis functions (BF) of sensory signals, such as visual or auditory inputs, and posture signals, such as eye or head position. The
resulting representation, which we called a basis function map, can be used for performing nonlinear transformations of the sensory inputs, the type of transformations
required for sensorimotor coordination.
2 Model Organization
The model contains two distinct parts: a network for performing sensorimotor transformations and a selection mechanism.
2.1 Network Architecture
We implemented a network using basis function units in the intermediate layer
to perform a transformation from a visual retinotopic map to two motor maps
in, respectively, head-centered and oculocentric coordinates (Figure 2). The input
contains a retinotopic visual map analog to the one found in the early stages of
visual processing, and a set of units encoding eye position, similar to the neurons
found in the intralaminar nucleus of the thalamus. These input units project to a
set of intermediate units shared by both transformations. Each intermediate unit
computes a gaussian of the retinal location of the object, r_x, multiplied by a sigmoid of
eye position, e_x:

o_i = exp(−(r_x − r_{x_i})² / (2σ²)) · 1/(1 + e^{−(e_x − e_{x_i})})    (1)

where r_{x_i} and e_{x_i} denote the preferred retinal and eye positions of unit i.
These units are organized in a map covering all possible combinations of retinal
and eye position selectivities. As we have shown elsewhere [6], this type of response
function is consistent with the response of single parietal neurons found in area 7a.
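A minimal numerical sketch of such a basis function map is given below; the grids of preferred retinal and eye positions, the Gaussian width, and the sigmoid slope are free parameters chosen arbitrarily here, not values taken from the model.

```python
import numpy as np

def bf_map(r_x, e_x, r_prefs, e_prefs, sigma=6.0, slope=8.0):
    """Activity of a grid of basis function units for a stimulus at retinal
    position r_x while the eyes are at position e_x.

    Unit (i, j) computes a Gaussian of retinal position centered on its
    preferred location r_prefs[i], multiplied by a sigmoid of eye position
    with inflection point e_prefs[j]."""
    gauss = np.exp(-(r_x - r_prefs[:, None]) ** 2 / (2.0 * sigma ** 2))
    sigm = 1.0 / (1.0 + np.exp(-(e_x - e_prefs[None, :]) / slope))
    return gauss * sigm            # shape (len(r_prefs), len(e_prefs))

r_prefs = np.linspace(-40.0, 40.0, 17)   # preferred retinal positions (degrees)
e_prefs = np.linspace(-20.0, 20.0, 9)    # preferred eye positions (degrees)
activity = bf_map(r_x=5.0, e_x=10.0, r_prefs=r_prefs, e_prefs=e_prefs)
```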
[Figure: panel A shows the network architecture: a retinotopic map (V1) and eye position cells (thalamus) feed a basis function map (area 7a), which projects to a retinotopic map (superior colliculus) for saccadic eye movements and to a head-centered map (premotor cortex) for reaching; panel B shows unit activity plotted against retinal position and head-centered position.]
Figure 2: A. Network architecture B. Typical pattern of activity
The resulting map forms a basis function map which encodes the location of objects
in head-centered and retinotopic coordinates simultaneously.
The activity of the unit in the output maps is computed by a simple linear combination of the BF unit activities. Appropriate values of the weights were found by
using linear regression techniques.
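The read-out described here is an ordinary least-squares fit; the sketch below shows one way it could be set up. It is an illustration rather than the original implementation, and the array sizes and the random training targets are placeholders.

```python
# Minimal sketch of the linear read-out: each output unit is a linear
# combination of BF activities, with weights fitted by least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 196))   # 500 training patterns x 196 BF units (14 x 14 map)
Y = rng.random((500, 28))    # desired activities of 28 output-map units (placeholder)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # read-out weights, shape (196, 28)
outputs = X @ W              # linear combination of BF unit activities
```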
This architecture mimics the pattern of projections of parietal area 7a, which is
known to project to both the superior colliculus and the premotor cortex (via the
ventral intraparietal area, VIP), in which neurons have, respectively, retinotopic and
head-centered visual receptive fields. Figure 2B shows a typical pattern of activity
in the network when two stimuli are presented simultaneously while the eye fixated
10 degrees toward the right.
2.2 Hemispheric Biases and Lesion Model
Neurophysiological data indicate that both hemispheres contain neurons with all
possible combinations of retinal and eye position selectivities, but with a contralateral bias. Hence, most neurons in the right parietal cortex (resp. left) have their
retinal receptive field on the left hemiretina (resp. right). The bias for eye position
is much weaker, but a trend has been reported in several studies [1].
Therefore, spatial representations in a patient with a right parietal lesion are biased
toward the right side of space. We modeled such a lesion by using a similar bias in
the intermediate layer of our network: the BF map simply has more neurons tuned
to right retinal and eye positions. We found that the exact profile of the neuronal
gradient across the basis function map did not matter as long as it was monotonic
and contralateral for both eye position and retinal location.
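One simple way to express such a lesion in code is to scale unit density (or gain) by a monotonic function of the preferred positions. The linear gradient below is an assumption used only for illustration; as noted above, only monotonicity and contralaterality matter.

```python
# Sketch of the lesion model: a right-hemisphere lesion is simulated by a
# monotonic, contralateral gradient over preferred retinal and eye positions.
import numpy as np

r_prefs = np.linspace(-40, 40, 14)
e_prefs = np.linspace(-40, 40, 14)

def neuron_density(pref, slope=0.01):
    # more neurons (higher gain) for rightward preferred positions
    return np.clip(0.5 + slope * pref, 0.1, 1.0)

gain = np.outer(neuron_density(r_prefs), neuron_density(e_prefs))  # (14, 14)
bf_activity = np.ones((14, 14))          # placeholder BF map activity
lesioned_activity = gain * bf_activity   # rightward-biased representation
```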
2.3 Selection model
We also developed a selection mechanism to model the behavior of patients when
presented with several stimuli simultaneously. The simultaneous presentation of
14
A. POUGET, T. J. SEJNOWSKI
stimuli induces multiple hills of activity in the network (see for instance the pattern
of activity shown in figure 2B for two visual stimuli). Our selection mechanism
operates on the peak values of these hills.
At each time step, the most active stimulus is selected according to a winner-take-all and its corresponding activity is set to zero (inhibition of return). At the next
time step, the second most active stimulus is selected while the previously selected item
is allowed to recover slowly. This procedure ensures that the most active item is
not selected twice in a row, but because of the recovery process, a stimulus with high
activity might be selected again if displayed long enough.
This mechanism is such that the probability of selecting an item is proportional
to two factors : the absolute amount of activity associated with the item, and the
relative activity with respect to other competing items.
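The selection procedure described above can be sketched directly; the recovery rate and the number of steps below are assumed values, not parameters taken from the original simulations.

```python
# Sketch of the selection mechanism: winner-take-all over peak activities,
# inhibition of return, and slow recovery of previously selected items.
import numpy as np

def run_selection(peaks, n_steps=10, recovery=0.2):
    """peaks: array of peak activities, one per stimulus; returns selection order."""
    current = peaks.astype(float).copy()
    suppressed = np.zeros_like(current)      # inhibited activity awaiting recovery
    order = []
    for _ in range(n_steps):
        winner = int(np.argmax(current))     # winner-take-all
        order.append(winner)
        suppressed[winner] = current[winner] # inhibition of return
        current[winner] = 0.0
        current += recovery * suppressed     # previously selected items recover slowly
        suppressed *= (1.0 - recovery)
        current = np.minimum(current, peaks) # never exceed the original peak
    return order

print(run_selection(np.array([0.9, 0.5, 0.7])))
```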
2.4 Evaluating network performance
We used this model to simulate several experiments in which patient performance
was evaluated according to reaction time or percent of correct response.
Reaction time in the model was taken to be proportional to the number of time
steps required by our selection mechanism to select a particular target. Performance
on identification task was assumed to be proportional to the strength of the activity
generated by the stimuli in the BF map.
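These two read-outs of the model are simple proportionalities; the short sketch below makes them explicit. The constants are arbitrary and only illustrate the mapping from model quantities to behavioral measures.

```python
# Sketch of the two performance measures: reaction time proportional to the
# number of selection steps needed to reach the target, and identification
# performance proportional to the activity evoked by the stimulus.
def reaction_time(selection_order, target_index, ms_per_step=50.0):
    return ms_per_step * (selection_order.index(target_index) + 1)

def identification_performance(stimulus_activity, scale=1.0):
    return scale * stimulus_activity

print(reaction_time([0, 2, 1], target_index=1))   # 150.0 with the assumed scale
```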
3 Results
3.1 Line cancellation
We first tested the network on the line cancellation test, a test in which patients are
asked to cross out short line segments uniformly spread over a page. To simulate
this test, we presented the display shown in figure 3A and we ran the selection
mechanism to determine which lines get selected by the network . As illustrated in
figure 3A, the network crosses out only the lines located in the right half of the
display, just as left neglect patients do in the same task. The rightward gradient
introduced by the lesion biases the selection mechanism in favor of the most active
lines, i.e., the ones on the right. As a result, the rightmost lines win the competition
over and over, preventing the network from selecting the left lines.
3.2 Mixture of frames of reference
Next, we sought to determine the frame of reference of neglect in the model. Since
Karnath et al (1993) manipulated head position, we simulated their experiment
by using a BF map integrating visual inputs with head position, rather than eye
position. We show in figure 3B the pattern of activity obtained in the retinotopic
output layer of the network in the various experimental conditions (the other maps
behaved in a similar way). In both conditions, head straight ahead (dotted lines) or
turned on the side (solid lines), the right stimulus is associated with more activity
than the left stimulus. This is the consequence of the larger number of cells in
the basis function map for rightward position. In addition, the activity for the left
stimulus increases when the head is turned to the right. This effect is related to the
larger number of cells in the basis function maps tuned to right head positions.
Since network performance is proportional to activity strength, the overall pattern
of performance was found to be similar to what has been reported in human patients
[Figure 3 appears here. Panel A: the line cancellation display, with only the right-hand lines crossed out. Panels B and C: activity profiles in the retinotopic output layer for the Left Stimulus and Right Stimulus (conditions C1, C2) and for the Target among Distractors relative to the fixation point FP (conditions C1-C3).]
Figure 3: Network behavior in line cancellation task (A). Activity patterns in the
retinotopic output layer when simulating the experiments by Karnath et al (1993)
(B) and Arguin et al (1993) (C)
(figure 1A), namely: the right stimulus was better processed than the left stimulus,
and performance on the left stimulus increases when the head is rotated toward the
right. Therefore, just as in human patients, neglect in the model is neither retinocentric
nor trunk-centered alone, but both at the same time.
3.3 Object-centered effect
When simulating the Arguin et al (1993) experiments, the network reaction times were
found to follow the same trends as those of human patients. Figure 3C illustrates the
patterns of activity in the retinotopic output layer of the network when simulating
the three conditions of the Arguin experiment. Notice that the absolute activity associated with the target (solid lines) in conditions 1 and 2 is the same, but the activity
of the distractors (dotted lines) differs in the two conditions. In condition 1, they
have higher relative activity and thereby strongly delay the detection of the target
by the selection mechanism. In condition 2, the distractors are now less active than
the target and do not delay target processing as much as they do in condition 1.
The reaction time decreases even more in condition 3, due to a higher absolute
activity associated with the target . Therefore, the network exhibits retinocentric
and object-centered neglect, just like parietal patients [2].
4 Discussion
The model of parietal cortex presented here was originally developed by considering the response properties of parietal neurons and the computational constraints
inherent in sensorimotor transformations. It was not designed to model neglect, so
its ability to account for a wide range of deficits is additional evidence in favor of
the basis function hypothesis .
As we have shown, our model captures three essential aspects of the neglect syndrome: 1) it reproduces the pattern of line crossing reported in patients in line-cancellation experiments, 2) the deficit coexists in multiple frames of reference simultaneously, and 3) the model accounts for some of the object-based effects.
We can account for a very large number of studies beyond the ones we have considered here, using very similar computational principles. We can reproduce, in
particular, the behavior of patients in line-bisection experiments and we can explain why neglect affects multiple cartesian frames of reference such as retinotopic,
head-centered, trunk-centered, environment-centered (i.e. with respect to gravity),
and object-centered.
It must be emphasized that these results have been obtained without using explicit representations of these various cartesian frames of reference (except for the
retinotopy of the BF map). In fact, this is precisely because the lesion affected
noncartesian representations that we have been able to reproduce these results. We
have assumed that the lesion affects the functional space in which the basis functions
are defined. This functional space shares common dimensions with cartesian spaces,
but cannot be reduced to the latter. Hence, a basis function map integrating retinal
location and head position is retinotopic, but not solely retinotopic. Consequently,
any attempts to determine the cartesian space in which hemineglect operates is
bound to lead to inconclusive results in which cartesian frames of reference appear
to be mixed.
This study and previous research [6] suggests that the parietal cortex represents
the position of objects by computing basis functions of the sensory and posture
inputs. It would now be interesting to see if this hypothesis could also account for
sensorimotor adaptation, such as learning to reach properly when wearing visual
prisms. We predict that adaptation takes place in several frames of reference simultaneously, a prediction that is testable and would provide further support for the
basis function framework.
References
[1] R.A. Andersen, C. Asanuma, G. Essick, and R.M. Siegel. Corticocortical connections of anatomically and physiologically defined subdivisions within the inferior
parietal lobule. Journal of Comparative Neurology, 296(1):65-113,1990.
[2] M. Arguin and D.N. Bub. Evidence for an independent stimulus-centered reference frame from a case of visual hemineglect. Cortex, 29:349-357, 1993.
[3] K.M. Heilman, R.T. Watson, and E. Valenstein. Neglect and related disorders.
In K.M. Heilman and E. Valenstein, editors, Clinical Neuropsychology, pages
243-294. Oxford University Press, New York, 1985.
[4] H.O. Karnath, K. Christ, and W. Hartje. Decrease of contralateral neglect by
neck muscle vibration and spatial orientation of trunk midline. Brain, 116:383-396, 1993.
[5] H.O. Karnath, P. Schenkel, and B. Fischer. Trunk orientation as the determining factor of the 'contralateral' deficit in the neglect syndrome and as the physical anchor of the internal representation of body orientation in space. Brain,
114:1997-2014, 1991.
[6] A. Pouget and T.J. Sejnowski. Spatial representations in the parietal cortex
may use basis functions. In G. Tesauro, D.S. Touretzky, and T.K. Leen, editors, Advances in Neural Information Processing Systems, volume 7. MIT Press,
Cambridge, MA, 1995.
82 | 1,072 | Dynamics of On-Line Gradient Descent
Learning for Multilayer Neural Networks
David Saad"
Dept. of Comp o Sci. & App. Math.
Aston University
Birmingham B4 7ET, UK
Sara A. Solla t
CONNECT, The Niels Bohr Institute
Blegdamsdvej 17
Copenhagen 2100, Denmark
Abstract
We consider the problem of on-line gradient descent learning for
general two-layer neural networks. An analytic solution is presented and used to investigate the role of the learning rate in controlling the evolution and convergence of the learning process.
Learning in layered neural networks refers to the modification of internal parameters
{J} which specify the strength of the interneuron couplings, so as to bring the map
f_J implemented by the network as close as possible to a desired map f̃. The
degree of success is monitored through the generalization error, a measure of the
dissimilarity between f_J and f̃.
Consider maps from an N-dimensional input space onto a scalar ζ, as arise in
the formulation of classification and regression tasks. Two-layer networks with an
arbitrary number of hidden units have been shown to be universal approximators
[1] for such N-to-one dimensional maps. Information about the desired map f̃ is
provided through independent examples (ξ^μ, ζ^μ), with ζ^μ = f̃(ξ^μ) for all μ. The
examples are used to train a student network with N input units, K hidden units,
and a single linear output unit; the target map f̃ is defined through a teacher
network of similar architecture except for the number M of hidden units. We
investigate the emergence of generalization ability in an on-line learning scenario
[2], in which the couplings are modified after the presentation of each example so
as to minimize the corresponding error. The resulting changes in {J} are described
as a dynamical evolution; the number of examples plays the role of time .
In this paper we limit our discussion to the case of the soft-committee machine
[2], in which all the hidden units are connected to the output unit with positive
couplings of unit strength, and only the input-to-hidden couplings are adaptive.
*D.Saad@aston.ac.uk
tOn leave from AT&T Bell Laboratories, Holmdel, NJ 07733, USA
Consider the student network: hidden unit i receives information from input unit
r through the weight J_ir, and its activation under presentation of an input pattern
ξ = (ξ_1, ..., ξ_N) is x_i = J_i · ξ, with J_i = (J_i1, ..., J_iN) defined as the vector of
incoming weights onto the i-th hidden unit. The output of the student network is
σ(J, ξ) = Σ_{i=1}^K g(J_i · ξ), where g is the activation function of the hidden units,
taken here to be the error function g(x) ≡ erf(x/√2), and J ≡ {J_i}, 1 ≤ i ≤ K, is the set
of input-to-hidden adaptive weights.
Training examples are of the form (ξ^μ, ζ^μ). The components of the independently
drawn input vectors ξ^μ are uncorrelated random variables with zero mean and
unit variance. The corresponding output ζ^μ is given by a deterministic teacher
whose internal structure is the same as for the student network but may differ in
the number of hidden units. Hidden unit n in the teacher network receives input
information through the weight vector B_n = (B_n1, ..., B_nN), and its activation
under presentation of the input pattern ξ^μ is y_n^μ = B_n · ξ^μ. The corresponding
output is ζ^μ = Σ_{n=1}^M g(B_n · ξ^μ). We will use indices i, j, k, l, ... to refer to units
in the student network, and n, m, ... for units in the teacher network.
The error made by a student with weights J on a given input ξ is given by the
quadratic deviation

    ε(J, ξ) = (1/2) [ σ(J, ξ) − ζ ]² = (1/2) [ Σ_{i=1}^K g(J_i · ξ) − Σ_{n=1}^M g(B_n · ξ) ]² .        (1)
Performance on a typical input defines the generalization error ε_g(J) ≡ ⟨ε(J, ξ)⟩_ξ
through an average over all possible input vectors ξ, to be performed implicitly through averages over the activations x = (x_1, ..., x_K) and
y = (y_1, ..., y_M). Note that both ⟨x_i⟩ = ⟨y_n⟩ = 0; second order correlations are
given by the overlaps among the weight vectors associated with the various hidden
units: ⟨x_i x_k⟩ = J_i · J_k ≡ Q_ik, ⟨x_i y_n⟩ = J_i · B_n ≡ R_in, and ⟨y_n y_m⟩ =
B_n · B_m ≡ T_nm. Averages over x and y are performed using the resulting multivariate Gaussian probability distribution, and yield an expression for the generalization
error in terms of the parameters Q_ik, R_in, and T_nm [3]. For g(x) ≡ erf(x/√2) the
result is:
    ε_g = (1/π) { Σ_{i,k} arcsin[ Q_ik / ( √(1 + Q_ii) √(1 + Q_kk) ) ]
                − 2 Σ_{i,n} arcsin[ R_in / ( √(1 + Q_ii) √(1 + T_nn) ) ]
                + Σ_{n,m} arcsin[ T_nm / ( √(1 + T_nn) √(1 + T_mm) ) ] } .        (2)
The parameters Tnm are characteristic of the task to be learned and remain fixed.
The overlaps Qik and Rin, which characterize the correlations among the various
student units and their degree of specialization towards the implementation of the
desired task, are determined by the student weights J and evolve during training.
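A direct numerical transcription of Eq. (2) is convenient for checking the dynamics. The sketch below is our illustration, not code from the paper; it evaluates ε_g from the overlap matrices and checks that it vanishes at the optimal solution for K = M = 3.

```python
# Sketch of Eq. (2): generalization error from the order parameters Q, R, T
# for g(x) = erf(x / sqrt(2)).
import numpy as np

def generalization_error(Q, R, T):
    """Q: (K,K) student-student overlaps, R: (K,M) student-teacher overlaps,
    T: (M,M) teacher-teacher overlaps."""
    qd = np.sqrt(1.0 + np.diag(Q))
    td = np.sqrt(1.0 + np.diag(T))
    e = np.sum(np.arcsin(Q / np.outer(qd, qd)))
    e += np.sum(np.arcsin(T / np.outer(td, td)))
    e -= 2.0 * np.sum(np.arcsin(R / np.outer(qd, td)))
    return e / np.pi

K = M = 3
print(generalization_error(np.eye(K), np.eye(K), np.eye(M)))   # 0.0 at the optimum
```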
A gradient descent rule for the update of the student weights results in
J_i^{μ+1} = J_i^μ + (η/N) δ_i^μ ξ^μ,
where the learning rate η has been scaled with the input size N, and

    δ_i^μ ≡ g'(x_i^μ) [ Σ_{n=1}^M g(y_n^μ) − Σ_{j=1}^K g(x_j^μ) ]        (3)
is defined in terms of both the activation function g and its derivative g'. The time
evolution of the overlaps R_in and Q_ik can be explicitly written in terms of similar
difference equations. In the large N limit the normalized number of examples
α = p/N can be interpreted as a continuous time variable, leading to the equations
of motion

    dR_in/dα = η ⟨δ_i y_n⟩ ,        dQ_ik/dα = η ⟨δ_i x_k + δ_k x_i⟩ + η² ⟨δ_i δ_k⟩ ,        (4)
to be averaged over all possible ways in which an example can be chosen at a given
time step. The dependence on the current input is only through the activations
x and y; the corresponding averages can be performed analytically for g(x) =
erf(x/√2), resulting in a set of coupled first-order differential equations [3]. These
dynamical equations are exact, and provide a novel tool used here to analyze the
learning process for a general soft-committee machine with an arbitrary number K
of hidden units, trained to implement a task defined through a teacher of similar
architecture except for the number M of hidden units. In what follows we focus on
uncorrelated teacher vectors of unit length, T_nm = δ_nm.
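The on-line rule (3) is straightforward to simulate directly and compare with the averaged equations of motion. The sketch below is our illustration, not the authors' code; the network sizes, learning rate, and run length are arbitrary choices, and the run is far too short to escape the symmetric plateau visible in Fig. 1.

```python
# Minimal simulation of the on-line update (3) for a soft committee machine
# learning an (approximately) isotropic teacher, tracking the overlaps Q and R.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)
N, K, M, eta = 1000, 3, 3, 0.1
g = lambda u: erf(u / np.sqrt(2.0))
g_prime = lambda u: np.sqrt(2.0 / np.pi) * np.exp(-u ** 2 / 2.0)

B = rng.standard_normal((M, N))
B /= np.linalg.norm(B, axis=1, keepdims=True)   # unit-length teacher vectors, T ~ identity
J = 0.001 * rng.standard_normal((K, N))         # small random student vectors

for p in range(20 * N):                         # alpha = p / N up to 20 (short demo run)
    xi = rng.standard_normal(N)
    x, y = J @ xi, B @ xi
    delta = g_prime(x) * (g(y).sum() - g(x).sum())   # Eq. (3)
    J += (eta / N) * np.outer(delta, xi)

Q, R = J @ J.T, J @ B.T                         # overlaps after training
print(np.round(Q, 3), np.round(R, 3))
```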
The time evolution of the overlaps R_in and Q_ik follows from integrating the equations of motion (4) from initial conditions determined by a random initialization of
the student vectors {J_i}, 1 ≤ i ≤ K. Random initial norms Q_ii for the student vectors
are taken here from a uniform distribution in the [0, 0.5] interval. Overlaps Q_ik
between independently chosen student vectors J_i and J_k, or R_in between J_i and
an unknown teacher vector B_n, are small numbers, of order 1/√N for N ≫ K, M,
and taken here from a uniform distribution in the [0, 10^-12] interval.
We show in Fig. 1a-c the evolution of the overlaps and generalization error for a
realizable case: K = M = 3 and η = 0.1. This example illustrates the successive regimes of the learning process. The system quickly evolves into a symmetric
subspace controlled by an unstable suboptimal solution which exhibits no differentiation among the various student hidden units. Trapping in the symmetric subspace
prevents the specialization needed to achieve the optimal solution, and the generalization error remains finite, as shown by the plateau in Fig. 1c. The symmetric
solution is unstable, and the perturbation introduced through the random initialization of the overlaps R_in eventually takes over: the student units become specialized
and the matrix R of student-teacher overlaps tends towards the matrix T, except
for a permutational symmetry associated with the arbitrary labeling of the student
hidden units. The generalization error plateau is followed by a monotonic decrease
towards zero once the specialization begins and the system evolves towards the
optimal solution. The evolution of the overlaps and generalization error for the unrealizable case K < M is characterized by qualitatively similar stages, except that
the asymptotic behavior is controlled by a suboptimal solution which reflects the
differences between student and teacher architectures.
Curves for the time evolution of the generalization error for different values of η
shown in Fig. 1d for K = M = 3 identify trapping in the symmetric subspace
as a small η phenomenon. We therefore consider the equations of motion (4) in
the small η regime. The term proportional to η² is neglected and the resulting
truncated equations of motion are used to investigate a phase characterized by
students of similar norms: Q_ii = Q for all 1 ≤ i ≤ K, similar correlations among
themselves: Q_ik = C for all i ≠ k, and similar correlations with the teacher vectors:
R_in = R for all 1 ≤ i ≤ K, 1 ≤ n ≤ M. The resulting dynamical equations exhibit
a fixed point solution at
    Q* = C* = M ( M − K² + √(K⁴ − K² + M²) ) / ( K² (2M − 1) )    and    R* = √(C*/M)        (5)
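The closed form shown in (5) is reconstructed here from a partially legible display, so it is worth checking numerically. The snippet below (our illustration) verifies that it reduces to the realizable-case values quoted in (6) when K = M.

```python
# Numerical check of the symmetric fixed point (5)-(6) as reconstructed above.
import numpy as np

def C_star(K, M):
    return M * (M - K ** 2 + np.sqrt(K ** 4 - K ** 2 + M ** 2)) / (K ** 2 * (2 * M - 1))

def R_star(K, M):
    return np.sqrt(C_star(K, M) / M)

K = M = 3
print(np.isclose(C_star(K, M), 1.0 / (2 * K - 1)))                  # True, Eq. (6)
print(np.isclose(R_star(K, M), 1.0 / np.sqrt(K * (2 * K - 1))))     # True, Eq. (6)
```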
[Figure 1 appears here: four panels (a)-(d); the horizontal axes show α up to 8000, and panel (d) compares learning rates η = 0.1, 0.3, 0.5, 0.7.]
Figure 1: Dependence of the overlaps and the generalization error on the normalized number of examples α for a three-node student learning a three-node teacher
characterized by T_nm = δ_nm. Results for η = 0.1 are shown for (a) student-student
overlaps Q_ik and (b) student-teacher overlaps R_in. The generalization error is shown
in (c), and again in (d) for different values of the learning rate.
for the general case, which reduces to

    Q* = C* = 1/(2K − 1)    and    R* = √(C*/K) = 1/√(K(2K − 1))        (6)

in the realizable case (K = M), where the corresponding generalization error is given by

    ε_g* = (K/π) { π/6 − K arcsin( 1/(2K) ) } .        (7)
A simple geometrical picture explains the relation Q* = C* = M(R*)² at the
symmetric fixed point. The learning process confines the student vectors {J_i} to
the subspace S_B spanned by the set of teacher vectors {B_n}. For T_nm = δ_nm
the teacher vectors form an orthonormal set: B_n = e_n, with e_n · e_m = δ_nm for
1 ≤ n, m ≤ M, and provide an expansion for the weight vectors of the trained
student: J_i = Σ_n R_in e_n. The student-teacher overlaps R_in are independent of i in
the symmetric phase and independent of n for an isotropic teacher: R_in = R* for
all 1 ≤ i ≤ K and 1 ≤ n ≤ M. The expansion J_i = R* Σ_n e_n for all i results in
Q* = C* = M(R*)².
The length of the symmetric plateau is controlled by the degree of asymmetry in the
initial conditions [2] and by the learning rate η. The small η analysis predicts trapping times inversely proportional to η, in quantitative agreement with the shrinking
plateau of Fig. 1d. The increase in the height of the plateau with decreasing η is
a second order effect, as the truncated equations of motion predict a unique value:
ε_g* = 0.0203 for K = M = 3. The mechanism for the second order effect is revealed by an examination of Fig. 1a: the student-student overlaps do agree with
the prediction C* = 0.2 of the small η analysis for K = M = 3, but the norms of
the student vectors remain larger, at Q = Q* + Δ. The gap Δ between diagonal
and off-diagonal elements is observed numerically to increase with increasing η, and
is responsible for the excess generalization error. A first order expansion in Δ at
R = R*, C = C*, and Q = Q* + Δ yields
    ε_g = (K/π) { π/6 − K arcsin( 1/(2K) ) + √( (2K − 1)/(2K + 1) ) Δ } ,        (8)
in agreement with the trend observed in Fig. 1d for the realizable case.
The excess norm Δ of the student vectors corresponds to a residual component in
J_i not confined to the subspace S_B. The weight vectors of the trained student can
be written as J_i = R* Σ_n e_n + J_i^⊥, with J_i^⊥ · e_n = 0 for all 1 ≤ n ≤ M. Student
weight vectors are not constrained to be identical; they differ through orthogonal
components J_i^⊥ which are typically uncorrelated: J_i^⊥ · J_k^⊥ = 0 for i ≠ k. Correlations
Q_ik = C do satisfy C = C* = M(R*)², but norms Q_ii = Q are given by Q = Q* + Δ,
with Δ = ||J^⊥||². Learning at very small η tends to eliminate J^⊥ and confine the
student vectors to S_B.
Escape from the symmetric subspace signals the onset of hidden unit specialization.
As shown in Fig. 1b, the process is driven by a breaking of the uniformity of the
student-teacher correlations: each student node becomes increasingly specialized to
a specific teacher node, while its overlap with the remaining teacher nodes decreases
and eventually decays to its asymptotic value. In the realizable case this asymptotic
value is zero, while in the unrealizable case two different non-zero asymptotic values
distinguish weak overlaps with teacher nodes imitated by other student vectors from
more significant overlaps with those teacher nodes not specifically imitated by any
of the student vectors.
The matrix of student-teacher overlaps can no longer be characterized by a unique
parameter, as we need to distinguish between a dominant overlap R between a
given student node and the teacher node it begins to imitate, secondary overlaps S
between the same student node and the teacher nodes to which other student nodes
are being assigned, and residual overlaps U with the remaining teacher nodes. The
student hidden nodes can be relabeled so as to bring the matrix of student-teacher
overlaps to the form R_in = R δ_in + S(1 − δ_in)Θ(K − n) + U(1 − Θ(K − n)), where
the step function Θ is 0 for negative arguments and 1 otherwise. The emerging
differentiation among student vectors results in a decrease of the overlaps Q_ik = C
for i ≠ k, while their norms Q_ii = Q increase. The matrix of student-student
overlaps takes the form Q_ik = Q δ_ik + C(1 − δ_ik).
Here we limit our description of the onset of specialization to the realizable case, for
which R_in = R δ_in + S(1 − δ_in). The small η analysis is extended to allow for S ≠ R in
order to describe the escape from the symmetric subspace. The resulting dynamical
equations are linearized around the fixed point solution at Q* = C* = 1/(2K − 1)
and R* = S* = 1/√(K(2K − 1)), and the generalization error is expanded around its
fixed point value (7) to first order in the corresponding deviations q, c, r, and s. The
analysis identifies a relevant perturbation with q = c = 0 and s = −r/(K − 1), which
Figure 2: Dependence of the two leading decay eigenvalues on the learning rate η in the realizable case: λ1 (curved line) and λ2 (straight line) are shown for M = K = 3.
leaves the generalization error unchanged and explains the behavior illustrated in
Fig. 1a-b. It is the differentiation between R and S which signals the escape from the
symmetric subspace; the differentiation between Q and C occurs for larger values
of α. The relevant perturbation corresponds to an enhancement of the overlap
R = R* + r between a given student node and the teacher node it is learning to
imitate, while the overlap S = S* + s between the same student node and the
remaining teacher nodes is weakened. The time constant associated with this mode
is τ = (π/2K)(2K − 1)^{1/2}(2K + 1)^{3/2}, with τ ≈ 2πK in the large K limit.
It is in the subsequent convergence to an asymptotic solution that the realizable
and unrealizable cases exhibit fundamental differences . We examine first the realizable scenario, in which the system converges to an optimal solution with perfect
generalization .
As the specialization continues, the dominant overlaps R grow, and the secondary
overlaps S decay to zero. Further specialization involves the decay to zero of the
student-student correlations C and the growth of the norms Q of the student vectors.
To investigate the convergence to the optimal solution we linearize the equations
of motion around the asymptotic fixed point at S* = C* = 0, R* = Q* = 1,
with ε_g* = 0. We describe convergence to the optimal solution by applying the full
equations of motion (4) to a phase characterized by R_in = R δ_in + S(1 − δ_in) and
Q_ik = Q δ_ik + C(1 − δ_ik).
Linearization of the full equations of motion around the asymptotic fixed point
results in four eigenvalues; the dependence of the two largest eigenvalues on η is
shown in Fig. 2 for M = K = 3. An initially slow mode corresponds to the
eigenvalue λ2, which remains negative for all values of η, while the eigenvalue λ1
for the initially fast mode becomes positive as η exceeds η_max, given by

    η_max = (π/K) (75 − 42√3) / (25√3 − 42)        (9)

to first order in 1/K. The optimal solution with ε_g* = 0 is not accessible for
η > η_max. Exponential convergence of R, S, C, and Q to their optimal values
is guaranteed for all learning rates in the range (0, η_max); in this regime the generalization error decays exponentially to ε_g* = 0, with a rate controlled by the slowest
decay mode. An expansion of ε_g in terms of r = 1 − R, s, c, and q = 1 − Q reveals
that of the leading modes whose eigenvalues are shown in Fig. 2 only the mode associated with λ1 contributes to the decay of the linear term, while the decay of the
second order term is controlled by the mode associated with λ2 and dominates the
convergence if 2λ2 < λ1. The learning rate η_opt which guarantees the fastest asymptotic
decay of the generalization error follows from λ1(η_opt) = 2λ2(η_opt).
The asymptotic convergence of unrealizable learning is an intrinsically more complicated process that cannot be described in closed analytic form. The asymptotic
values of the order parameters and the generalization error depend on the learning rate η; convergence to an optimal solution with minimal generalization error
requires η → 0 as α → ∞. Optimal values for the order parameters follow from a
small η analysis, equivalent to neglecting J^⊥ and assuming student vectors confined
to S_B. The resulting expansion J_i = Σ_{n=1}^M R_in e_n, with R_ii = R, R_in = S for
1 ≤ n ≤ K, n ≠ i, and R_in = U for K + 1 ≤ n ≤ M, leads to

    Q = R² + (K − 1)S² + (M − K)U² ,    C = 2RS + (K − 2)S² + (M − K)U² .        (10)
The equations of motion for the remaining parameters R, S, and U exhibit a fixed
point solution which controls the asymptotic behavior. This solution cannot be
obtained analytically, but numerical results are well approximated to order 1/K³ by

    R* ≈ 1 − ((6√3 − 3)/8) (L/K²) (1 − 1/K) ,  with corresponding expressions for S* and U* ,        (11)
where L ≡ M − K. The corresponding fixed point values Q* and C* follow from
Eq. (10). Note that R* is lower than for the realizable case, and that correlations
U* (significant) and S* (weaker) between student vectors and the teacher vectors
they do not imitate are not eliminated. The asymptotic generalization error is given
by
(12)
to order 1/K². Note its proportionality to the mismatch L between teacher and
student architectures.
Learning at fixed and sufficiently small η results in exponential convergence to
an asymptotic solution whose fixed point coordinates are shifted from the values
discussed above. The solution is suboptimal; the resulting increase in ε_g over its
optimal value (12) is easily obtained to first order in η, and it is also proportional
to L. We have investigated convergence to the optimal solution (12) for schedules
of the form η(α) = η_0/(α − α_0)^z for the decay of the learning rate. A constant rate
η_0 is used for α ≤ α_0; the monotonic decrease of η for α > α_0 is switched on after
specialization begins. Asymptotic convergence requires 0 < z ≤ 1; the fastest decay of
the generalization error is achieved for z = 1/2.
Specialization as described here and illustrated in Fig.l is a simultaneous process in
which each student node acquires a strong correlation with a specific teacher node
while correlations to other teacher nodes decrease. Such synchronous escape from
the symmetric phase is characteristic of learning scenarios where the target task is
defined through an isotropic teacher. In the case of a graded teacher we find that
specialization occurs through a sequence of escapes from the symmetric subspace,
ordered according to the relevance of the corresponding teacher nodes [3].
Acknowledgement The work was supported by the EU grant CHRX-CT92-0063.
References
[1] G. Cybenko, Math. Control Signals and Systems 2, 303 (1989).
[2] M. Biehl and H. Schwarze, J. Phys. A 28, 643 (1995).
[3] D. Saad and S. A. Solla, Phys. Rev. E 52, 4225 (1995).
83 | 1,073 | Improving Elevator Performance Using
Reinforcement Learning
Robert H. Crites
Computer Science Department
University of Massachusetts
Amherst, MA 01003-4610
crites@cs.umass.edu
Andrew G. Barto
Computer Science Department
University of Massachusetts
Amherst, MA 01003-4610
barto@cs.umass.edu
Abstract
This paper describes the application of reinforcement learning (RL)
to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most
RL research to date. Elevator systems operate in continuous state
spaces and in continuous time as discrete event dynamic systems.
Their states are not fully observable and they are nonstationary
due to changing passenger arrival rates. In addition, we use a team
of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which
appears noisy to each agent due to the effects of the actions of the
other agents, the random nature of the arrivals and the incomplete
observation of the state. In spite of these complications, we show
results that in simulation surpass the best of the heuristic elevator
control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic
optimization problem of practical utility.
1 INTRODUCTION
Recent algorithmic and theoretical advances in reinforcement learning (RL) have
attracted widespread interest. RL algorithms have appeared that approximate dynamic programming (DP) on an incremental basis. Unlike traditional DP algorithms, these algorithms can perform with or without models of the system, and
they can be used online as well as offline, focusing computation on areas of state
space that are likely to be visited during actual control. On very large problems,
they can provide computationally tractable ways of approximating DP. An example of this is Tesauro's TD-Gammon system (Tesauro, 1992; 1994; 1995), which
used RL techniques to learn to play strong master-level backgammon. Even the
best human experts make poor teachers for this class of problems since they do not
always know the best actions. Even if they did, the state space is so large that
it would be difficult for experts to provide sufficient training data. RL algorithms
are naturally suited to this class of problems, since they learn on the basis of their
own experience. This paper describes the application of RL to elevator dispatching,
another problem where classical DP is completely intractable. The elevator domain
poses a number of difficulties that were not present in backgammon. In spite of
these complications, we show results that surpass the best of the heuristic elevator
control algorithms of which we are aware. The following sections describe the elevator dispatching domain, the RL algorithm and neural network architectures that
were used, the results, and some conclusions.
2 THE ELEVATOR SYSTEM
The particular elevator system we examine is a simulated 10-story building with
4 elevator cars (Lewis, 1991; Bao et al, 1994). Passenger arrivals at each floor are
assumed to be Poisson, with arrival rates that vary during the course of the day.
Our simulations use a traffic profile (Bao et al, 1994) which dictates arrival rates for
every 5-minute interval during a typical afternoon down-peak rush hour. Table 1
shows the mean number of passengers arriving at each floor (2-10) during each
5-minute interval who are headed for the lobby. In addition, there is inter-floor
traffic which varies from 0% to 10% of the traffic to the lobby.
Table 1: The Down-Peak Traffic Profile
The system dynamics are approximated by the following parameters:
? Floor time (the time to move one floor at the maximum speed): 1.45 secs.
? Stop time (the time needed to decelerate, open and close the doors, and
accelerate again): 7.19 secs.
? Turn time (the time needed for a stopped car to change direction): 1 sec.
? Load time (the time for one passenger to enter or exit a car): random
variable from a 20th order truncated Erlang distribution with a range from
0.6 to 6.0 secs and a mean of 1 sec.
? Car capacity: 20 passengers.
The state space is continuous because it includes the elapsed times since any hall
calls were registered. Even if these real values are approximated as binary values,
the size of the state space is still immense. Its components include 218 possible
combinations of the 18 hall call buttons (up and down buttons at each landing
except the top and bottom), 240 possible combinations of the 40 car buttons, and
184 possible combinations of the positions and directions of the cars (rounding off
to the nearest floor). Other parts of the state are not fully observable, for example,
the desired destinations of the passengers waiting at each floor. Ignoring everything
except the configuration of the hall and car call buttons and the approximate position and direction of the cars, we obtain an extremely conservative estimate of the
size of a discrete approximation to the continuous state space:
2^18 · 2^40 · 18^4 ≈ 10^22 states.
Each car has a small set of primitive actions. Ifit is stopped at a floor, it must either
"move up" or "move down". If it is in motion between floors, it must either "stop
at the next floor" or "continue past the next floor". Due to passenger expectations,
there are two constraints on these actions: a car cannot pass a floor if a passenger
wants to get off there and cannot turn until it has serviced all the car buttons in its
present direction. We have added three additional action constraints in an attempt
to build in some primitive prior knowledge: a car cannot stop at a floor unless
someone wants to get on or off there, it cannot stop to pick up passengers at a floor
if another car is already stopped there, and given a choice between moving up and
down, it should prefer to move up (since the down-peak traffic tends to push the
cars toward the bottom of the building). Because of this last constraint, the only
real choices left to each car are the stop and continue actions. The actions of the
elevator cars are executed asynchronously since they may take different amounts of
time to complete.
The performance objectives of an elevator system can be defined in many ways. One
possible objective is to minimize the average wait time, which is the time between
the arrival of a passenger and his entry into a car. Another possible objective is
to minimize the average 6y6tem time, which is the sum of the wait time and the
travel time. A third possible objective is to minimize the percentage of passengers
that wait longer than some dissatisfaction threshold (usually 60 seconds). Another
common objective is to minimize the sum of 6quared wait times. We chose this
latter performance objective since it tends to keep the wait times low while also
encouraging fair service.
3 THE ALGORITHM AND NETWORK ARCHITECTURE
Elevator systems can be modeled as discrete event systems, where significant events
(such as passenger arrivals) occur at discrete times, but the amount of time between
events is a real-valued variable. In such systems, the constant discount factor γ
used in most discrete-time reinforcement learning algorithms is inadequate. This
problem can be approached using a variable discount factor that depends on the
amount of time between events (Bradtke & Duff, 1995). In this case, returns are
defined as integrals rather than as infinite sums: the discrete-time return Σ_{t=0}^∞ γ^t r_t
becomes ∫_0^∞ e^{−βτ} r_τ dτ, where r_t is the immediate cost at discrete time t, r_τ is the
instantaneous cost at continuous time τ (e.g., the sum of the squared wait times of all
waiting passengers), and β controls the rate of exponential decay.
Calculating reinforcements here poses a problem in that it seems to require knowledge of the waiting times of all waiting passengers. There are two ways of dealing
with this problem. The simulator knows how long each passenger has been waiting.
It could use this information to determine what could be called omniscient reinforcements. The other possibility is to use only information that would be available
to a real system online. Such online reinforcements assume only that the waiting
time of the first passenger in each queue is known (which is the elapsed button
time). If the Poisson arrival rate λ for each queue is estimated as the reciprocal of
the last inter-button time for that queue, the Gamma distribution can be used to
estimate the arrival times of subsequent passengers. The time until the nth subsequent arrival follows the Gamma distribution Γ(n, λ). For each queue, subsequent
arrivals will generate the following expected penalties during the first b seconds after
the hall button has been pressed:
    Σ_{n=1}^∞ ∫_0^b (prob. nth arrival occurs at time τ) · (penalty given arrival at time τ) dτ .
This integral can be solved by parts to yield expected penalties. We found that
using online reinforcements actually produced somewhat better results than using
omniscient reinforcements, presumably because the algorithm was trying to learn
average values anyway.
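The paper does not spell out the closed-form result of the integration by parts, but the quantity being estimated can be sketched numerically. The snippet below is an illustration under stated assumptions: the per-passenger penalty is taken to be the squared wait accumulated up to time b, and the expectation over Gamma-distributed arrival times is approximated by sampling.

```python
# Numerical sketch (not the paper's closed form) of the online-reinforcement
# estimate: expected squared-wait penalties of future arrivals within the first
# b seconds, given an estimated Poisson rate lambda for the queue.
import numpy as np

def expected_future_penalty(lam, b, n_terms=20, n_samples=20000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    total = 0.0
    for n in range(1, n_terms + 1):
        arrival = rng.gamma(shape=n, scale=1.0 / lam, size=n_samples)  # nth arrival time
        wait = np.clip(b - arrival, 0.0, None)   # nonzero only if it arrives before b
        total += np.mean(wait ** 2)              # assumed squared-wait penalty
    return total

print(expected_future_penalty(lam=1.0 / 30.0, b=60.0))   # e.g. one arrival per 30 s
```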
Because elevator system events occur randomly in continuous time, the branching
factor is effectively infinite, which complicates the use of algorithms that require
explicit lookahead. Therefore, we employed a team of discrete-event Q-learning
agents, where each agent is responsible for controlling one elevator car. Q(x, a)
is defined as the expected infinite discounted return obtained by taking action a
in state x and then following an optimal policy (Watkins, 1989). Because of the
vast number of states, the Q-values are stored in feedforward neural networks. The
networks receive some state information as input, and produce Q-value estimates
as output. We have tested two architectures. In the parallel architecture, the agents
share a single network, allowing them to learn from each other's experiences and
forcing them to learn identical policies. In the fully decentralized architecture, the
agents have their own networks, allowing them to specialize their control policies.
In either case, none of the agents have explicit access to the actions of the other
agents. Cooperation has to be learned indirectly via the global reinforcement signal.
Each agent faces added stochasticity and nonstationarity because its environment
contains other learning agents. Other work on team Q-learning is described in
(Markey, 1994).
The algorithm calls for each car to select its actions probabilistically using the
Boltzmann distribution over its Q-value estimates, where the temperature is lowered gradually during training. After every decision, error backpropagation is used
to train the car's estimate of Q(x, a) toward the following target output:

    ∫_{t_x}^{t_y} e^{−β(τ − t_x)} r_τ dτ + e^{−β(t_y − t_x)} min_{a'} Q(y, a') ,

where action a is taken by the car from state x at time t_x, the next decision by
that car is required from state y at time t_y, and r_τ and β are defined as above.
The factor e^{−β(t_y − t_x)} acts as a variable discount factor that depends on the amount of time
between events. The learning rate parameter was set to 0.01 or 0.001 and β was set
to 0.01 in the experiments described in this paper.
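A compact sketch of this learning step is given below. It is an illustration, not the authors' implementation: the neural network is replaced by a plain lookup of two Q-values, `accumulated_cost` stands for the integral of e^{−β(τ − t_x)} r_τ between decisions, and the use of exp(−Q/T) in the Boltzmann rule (so that lower-cost actions are preferred) is an assumption consistent with Q-values being costs.

```python
# Sketch of Boltzmann action selection and the discrete-event Q-learning target
# with the time-dependent discount factor exp(-beta * dt).
import math, random

BETA, LR = 0.01, 0.01

def boltzmann_choice(q_stop, q_continue, temperature):
    # Q-values are expected costs, so smaller is better.
    prefs = [math.exp(-q_stop / temperature), math.exp(-q_continue / temperature)]
    z = sum(prefs)
    return "stop" if random.random() < prefs[0] / z else "continue"

def q_target(accumulated_cost, dt, next_q_values):
    return accumulated_cost + math.exp(-BETA * dt) * min(next_q_values)

def q_update(q_value, accumulated_cost, dt, next_q_values):
    return q_value + LR * (q_target(accumulated_cost, dt, next_q_values) - q_value)

q = {"stop": 50.0, "continue": 55.0}
a = boltzmann_choice(q["stop"], q["continue"], temperature=10.0)
q[a] = q_update(q[a], accumulated_cost=12.0, dt=8.0, next_q_values=(48.0, 52.0))
```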
After considerable experimentation, our best results were obtained using networks
for pure down traffic with 47 input units, 20 hidden sigmoid units, and two linear
output units (one for each action value). The input units are as follows (a sketch of this encoding appears after the list):
? 18 units: Two units encode information about each of the nine down hall
buttons. A real-valued unit encodes the elapsed time if the button has
been pushed and a binary unit is on if the button has not been pushed.
Improving Elevator Performance Using Reinforcement Learning
1021
? 16 units: Each of these units represents a possible location and direction
for the car whose decision is required. Exactly one of these units will be on
at any given time.
? 10 units: These units each represent one of the 10 floors where the other cars
may be located. Each car has a "footprint" that depends on its direction
and speed. For example, a stopped car causes activation only on the unit
corresponding to its current floor, but a moving car causes activation on
several units corresponding to the floors it is approaching, with the highest
activations on the closest floors.
? 1 unit: This unit is on if the car whose decision is required is at the highest
floor with a waiting passenger.
? 1 unit: This unit is on if the car whose decision is required is at the floor
with the passenger that has been waiting for the longest amount of time.
? 1 unit: The bias unit is always on.
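The sketch below assembles a 47-dimensional input vector from the quantities listed above. It is an illustration only: the exact footprint values for moving cars and the scaling of elapsed times are not specified in detail here, so the placeholders are assumptions.

```python
# Sketch of the 47-unit input representation described above.
import numpy as np

def encode_state(down_button_elapsed, car_location_unit, other_car_footprints,
                 highest_waiting_floor, longest_waiting_floor):
    units = []
    for elapsed in down_button_elapsed:          # 9 down hall buttons -> 18 units
        units += [elapsed if elapsed is not None else 0.0,
                  1.0 if elapsed is None else 0.0]
    loc = np.zeros(16)                           # location/direction of deciding car
    loc[car_location_unit] = 1.0
    units += list(loc)
    units += list(other_car_footprints)          # 10 floor units for the other cars
    units += [float(highest_waiting_floor),      # 1 unit
              float(longest_waiting_floor),      # 1 unit
              1.0]                               # bias unit
    return np.array(units)                       # length 47

x = encode_state([None] * 8 + [12.3], 5, np.zeros(10), 1.0, 1.0)
print(x.shape)   # (47,)
```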
4 RESULTS
Since an optimal policy for the elevator dispatching problem is unknown, we measured the performance of our algorithm against other heuristic algorithms, including
the best of which we were aware. The algorithms were: SECTOR, a sector-based
algorithm similar to what is used in many actual elevator systems; DLB, Dynamic
Load Balancing, attempts to equalize the load of all cars; HUFF, Highest Unanswered Floor First, gives priority to the highest floor with people waiting; LQF,
Longest Queue First, gives priority to the queue with the person who has been
waiting for the longest amount of time; FIM, Finite Intervisit Minimization, a receding horizon controller that searches the space of admissible car assignments to
minimize a load function; ESA, Empty the System Algorithm, a receding horizon
controller that searches for the fastest way to "empty the system" assuming no new
passenger arrivals. ESA uses queue length information that would not be available
in a real elevator system. ESA/nq is a version of ESA that uses arrival rate information to estimate the queue lengths. For more details, see (Bao et al, 1994). These
receding horizon controllers are very sophisticated, but also very computationally
intensive, such that they would be difficult to implement in real time. RLp and
RLd denote the RL controllers, parallel and decentralized. The RL controllers were
each trained on 60,000 hours of simulated elevator time, which took four days on a
100 MIPS workstation. The results are averaged over 30 hours of simulated elevator
time. Table 2 shows the results for the traffic profile with down traffic only.
| Algorithm   | AvgWait | SquaredWait | SystemTime | Percent>60 secs |
| SECTOR      | 21.4    | 674         | 47.7       | 1.12            |
| DLB         | 19.4    | 658         | 53.2       | 2.74            |
| BASIC HUFF  | 19.9    | 580         | 47.2       | 0.76            |
| LQF         | 19.1    | 534         | 46.6       | 0.89            |
| HUFF        | 16.8    | 396         | 48.6       | 0.16            |
| FIM         | 16.0    | 359         | 47.9       | 0.11            |
| ESA/nq      | 15.8    | 358         | 47.7       | 0.12            |
| ESA         | 15.1    | 338         | 47.1       | 0.25            |
| RLp         | 14.8    | 320         | 41.8       | 0.09            |
| RLd         | 14.7    | 313         | 41.7       | 0.07            |
Table 2: Results for Down-Peak Profile with Down Traffic Only
Table 3 shows the results for the down-peak traffic profile with up and down traffic,
including an average of 2 up passengers per minute at the lobby. The algorithm
was trained on down-only traffic, yet it generalizes well when up traffic is added
and upward moving cars are forced to stop for any upward hall calls.
| Algorithm   | AvgWait | SquaredWait | SystemTime | Percent>60 secs |
| SECTOR      | 27.3    | 1252        | 54.8       | 9.24            |
| DLB         | 21.7    | 826         | 54.4       | 4.74            |
| BASIC HUFF  | 22.0    | 756         | 51.1       | 3.46            |
| LQF         | 21.9    | 732         | 50.7       | 2.87            |
| HUFF        | 19.6    | 608         | 50.5       | 1.99            |
| ESA         | 18.0    | 524         | 50.0       | 1.56            |
| FIM         | 17.9    | 476         | 48.9       | 0.50            |
| RLp         | 16.9    | 476         | 42.7       | 1.53            |
| RLd         | 16.9    | 468         | 42.7       | 1.40            |
Table 3: Results for Down-Peak Profile with Up and Down Traffic
Table 4 shows the results for the down-peak traffic profile with up and down traffic,
including an average of 4 up passengers per minute at the lobby. This time there is
twice as much up traffic, and the RL agents generalize extremely well to this new
situation.
Algorithm     AvgWait  SquaredWait  SystemTime  Percent>60 secs
SECTOR          30.3      1643          59.5         13.50
HUFF            22.8       884          55.3          5.10
DLB             22.6       880          55.8          5.18
LQF             23.5       877          53.5          4.92
BASIC HUFF      23.2       875          54.7          4.94
FIM             20.8       685          53.4          3.10
ESA             20.1       667          52.3          3.12
RLd             18.8       593          45.4          2.40
RLp             18.6       585          45.7          2.49

Table 4: Results for Down-Peak Profile with Twice as Much Up Traffic
One can see that both the RL systems achieved very good performance, most notably as measured by system time (the sum of the wait and travel time), a measure
that was not directly being minimized. Surprisingly, the decentralized RL system
was able to achieve as good a level of performance as the parallel RL system. Better performance with nonstationary traffic profiles may be obtainable by providing
the agents with information about the current traffic context as part of their input
representation. We expect that an additional advantage of RL over heuristic controllers may be in buildings with less homogeneous arrival rates at each floor, where
RL can adapt to the idiosyncrasies of their individual traffic patterns.
5 CONCLUSIONS
These results demonstrate the utility of RL on a very large scale dynamic optimization problem. By focusing computation onto the states visited during simulated
trajectories, RL avoids the need of conventional DP algorithms to exhaustively
sweep the state set. By storing information in artificial neural networks, it avoids
the need to maintain large lookup tables. To achieve the above results, each RL
system experienced 60,000 hours of simulated elevator time, which took four days
of computer time on a 100 MIPS processor. Although this is a considerable amount
of computation, it is negligible compared to what any conventional DP algorithm
would require. The results also suggest that approaches to decentralized control
using RL have considerable promise. Future research on the elevator dispatching
problem will investigate other traffic profiles and further explore the parallel and
decentralized RL architectures.
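To make the contrast with lookup-table DP concrete, the sketch below shows a generic one-step temporal-difference update applied to a small feedforward Q-network. It is deliberately simplified and is not the authors' algorithm: the paper's controllers work in continuous time, with discounted squared-wait costs and a team of agents, none of which is modeled here; costs are minimized (hence the min over next actions), and all shapes, names, and hyperparameters are assumptions.

import numpy as np

class QNet:
    # Tiny two-layer network mapping a state encoding to Q-values for the
    # two actions available at a decision point ("stop", "continue").
    def __init__(self, n_in, n_hidden=20, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W2 = rng.normal(0.0, 0.1, (2, n_hidden))
        self.lr = lr

    def forward(self, x):
        h = np.tanh(self.W1 @ x)
        return self.W2 @ h, h

    def update(self, x, action, target):
        q, h = self.forward(x)
        err = target - q[action]                 # TD error for the chosen action
        # Gradient descent on 0.5 * err**2, chain rule written out by hand.
        self.W2[action] += self.lr * err * h
        dh = err * self.W2[action] * (1.0 - h * h)
        self.W1 += self.lr * np.outer(dh, x)
        return err

def td_target(cost, gamma, qnet, next_x):
    # One-step target: immediate cost plus discounted minimum next Q-value
    # (a minimum because waiting costs are being minimized, not maximized).
    next_q, _ = qnet.forward(next_x)
    return cost + gamma * np.min(next_q)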
Acknowledgements
We thank John McNulty, Christos Cassandras, Asif Gandhi, Dave Pepyne, Kevin
Markey, Victor Lesser, Rod Grupen, Rich Sutton, Steve Bradtke, and the ANW
group for assistance with the simulator and for helpful discussions. This research
was supported by the Air Force Office of Scientific Research under grant F49620-93-1-0269.
References
G. Bao, C. G. Cassandras, T. E. Djaferis, A. D. Gandhi, and D. P. Looze. (1994) Elevator Dispatchers for Down Peak Traffic. Technical Report, ECE Department, University of Massachusetts, Amherst, MA.
S. J. Bradtke and M. O. Duff. (1995) Reinforcement Learning Methods for Continuous-Time Markov Decision Problems. In: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge, MA.
J. Lewis. (1991) A Dynamic Load Balancing Approach to the Control of Multiserver Polling Systems with Applications to Elevator System Dispatching. PhD thesis, University of Massachusetts, Amherst, MA.
K. L. Markey. (1994) Efficient Learning of Multiple Degree-of-Freedom Control Problems with Quasi-independent Q-agents. In: M. C. Mozer, P. Smolensky, D. S. Touretzky, J. L. Elman and A. S. Weigend, eds., Proceedings of the 1993 Connectionist Models Summer School. Erlbaum Associates, Hillsdale, NJ.
G. Tesauro. (1992) Practical Issues in Temporal Difference Learning. Machine Learning 8:257-277.
G. Tesauro. (1994) TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play. Neural Computation 6:215-219.
G. Tesauro. (1995) Temporal Difference Learning and TD-Gammon. Communications of the ACM 38:58-68.
C. J. C. H. Watkins. (1989) Learning from Delayed Rewards. PhD thesis, Cambridge University.
84 | 1,074 | Visual gesture-based robot guidance
with a modular neural system
E. Littmann,
A. Drees, and H. Ritter
Abt. Neuroinformatik, Fak. f. Informatik
Universitat Ulm, D-89069 Ulm, FRG
enno@neuro.informatik.uni-ulm.de
AG Neuroinformatik, Techn. Fakultat
Univ. Bielefeld, D-33615 Bielefeld, FRG
andrea,helge@techfak.uni-bielefeld.de
Abstract
We report on the development of the modular neural system "SEE-EAGLE" for the visual guidance of robot pick-and-place actions.
Several neural networks are integrated into a single system that visually recognizes human hand pointing gestures from stereo pairs
of color video images. The output of the hand recognition stage is
processed by a set of color-sensitive neural networks to determine
the cartesian location of the target object that is referenced by the
pointing gesture. Finally, this information is used to guide a robot
to grab the target object and put it at another location that can
be specified by a second pointing gesture. The accuracy of the current system allows to identify the location of the referenced target
object to an accuracy of 1 cm in a workspace area of 50x50 cm. In
our current environment, this is sufficient to pick and place arbitrarily positioned target objects within the workspace. The system
consists of neural networks that perform the tasks of image segmentation, estimation of hand location, estimation of 3D-pointing
direction, object recognition, and necessary coordinate transforms.
Drawing heavily on the use of learning algorithms, the functions of
all network modules were created from data examples only.
1 Introduction
The rapidly developing technology in the fields of robotics and virtual reality requires the development of new and more powerful interfaces for configuration and
control of such devices. These interfaces should be intuitive for the human advisor
and comfortable to use. Practical solutions so far require the human to wear a
device that can transfer the necessary information. One typical example is the data
glove [14, 12]. Clearly, in the long run solutions that are contactless will be much
more desirable, and vision is one of the major modalities that appears especially
suited for the realization of such solutions.
In the present paper, we focus on a still restricted but very important task in robot
control, the guidance of robot pick-and-place actions by unconstrained human pointing gestures in a realistic laboratory environment. The input of target locations by
pointing gestures provides a powerful, very intuitive and comfortable functionality
for a vision-based man-machine interface for guiding robots and extends previous
work that focused on the detection of hand location or the discrimination of a small,
discrete number of hand gestures only [10, 1, 2, 8]. Besides two color cameras, no
special device is necessary to evaluate the gesture of the human operator.
A second goal of our approach is to investigate how to build a neural system for
such a complex task from several neural modules. The development of advanced
artificial neural systems challenges us with the task of finding architectures for the cooperation of multiple functional modules such that part of the structure of the overall system can be designed at a useful level of abstraction, but at the same time learning can be used to create or fine-tune the functionality of parts of the system on the basis of suitable training examples.
Approaching this goal requires shifting the focus from exploring the properties of single networks to exploring the properties of entire systems of neural networks. The work on "mixtures of experts" [3, 4] is one important contribution along these lines. While this is a widely applicable and powerful approach, there clearly is a need to go beyond the exploration of strictly hierarchical systems and to gain experience with architectures that admit more complex types of information flow, as required e.g. by the inclusion of features such as control of focal attention or reentrant processing branches. The need for such features arose very naturally in the context of the task described above, and in the following section we will report our results with a system architecture that is crucially based on the exploitation of such elements.
2 System architecture
Our system, described in fig. 1, is situated in a complex laboratory environment. A
robot arm with manipulator is mounted at one side of a table with several objects
of different color placed on it. A human operator is positioned at the next side to
the right of the robot. This scenery is watched by two cameras from the other two
sides from high above. The cameras yield a stereo color image of the scene (images I0). The operator points with one hand at one of the objects on the table. On the
basis of the image information, the object is located and the robot grabs it. Then,
the operator points at another location, where the robot releases the object. 1
The system consists of several hardware components: a PUMA 560 robot arm with six axes and a three-fingered manipulator²; two single-chip PULNIX color cameras; two ANDRox vision boards with software for data acquisition and processing; a work space consisting of a table with a black grid on a yellow surface. Robot and person refer to the same work space. Both cameras must show both the human hand and the table with the objects. Within this constraint, the position of the cameras can be chosen freely as long as they yield significantly different views.
An important prerequisite for the recognition of the pointing direction is the segmentation of the human hand from the background scenery. This task is solved by
a LLM network (S1) trained to yield a probability value for each image pixel to belong to the hand region. The training is based on the local color information. This procedure has been investigated in [7].
An important feature of the chosen method is the great reliability and robustness of both the classification performance and the localization accuracy of the searched object. Furthermore, the performance is quite constant over a wide range of image resolutions. This allows a fast two-step procedure: first, the images are segmented in low resolution (S1: I1 → A1) and the hand position is extracted. Then, a small
1 In analogy to the sea eagle who watches its prey from high above, shoots down to grab
the prey, and then flies to a safe place to feed, we nicknamed our system "SEE-EAGLE".
2 Development by Prof. Pfeiffer, TU Munich
Fig. 1: System architecture. From two color camera images I0 we extract the hand position (I1 → S1 → A1 (pixel coord.) → P1 → cartesian hand coord.). In a subframe centered on the hand location (I2) we determine the pointing direction (I2 → S2 → A2 (pixel coord.) → G → D → pointing angles). Pointing direction and hand location define a cartesian target location that is mapped to image coord. that define the centers of object subframes (I0 → P2 → I3). There we determine the target object (I3 → S3 → A3) and map the pixel coord. of its centers to world coord. (A3 → P3 → world target loc.). These coordinates are used to guide the robot R to the target object.
subframe (I2) around the estimated hand position is processed in high resolution by another dedicated LLM network (S2: I2 → A2). For details of the segmentation
process, refer to [6].
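A rough sketch of this two-step, coarse-to-fine localization follows. The LLM networks S1 and S2 are not reimplemented; any pixel-wise color classifier returning hand probabilities can stand in for them here, and the subsampling factor, window size, and all names are assumptions.

import numpy as np

def center_of_mass(prob):
    # prob: 2-D array of per-pixel hand probabilities.
    ys, xs = np.indices(prob.shape)
    total = prob.sum() + 1e-9
    return float((ys * prob).sum() / total), float((xs * prob).sum() / total)

def locate_hand(image, pixel_classifier, coarse=8, window=64):
    # image: H x W x 3 color image; pixel_classifier maps an (N, 3) array of
    # colors to hand probabilities and stands in for the LLM networks S1/S2.
    # Step 1: coarse segmentation on a subsampled image.
    small = image[::coarse, ::coarse]
    p_small = pixel_classifier(small.reshape(-1, 3)).reshape(small.shape[:2])
    cy, cx = center_of_mass(p_small)
    cy, cx = int(cy * coarse), int(cx * coarse)
    # Step 2: fine segmentation in a window around the coarse estimate.
    y0, x0 = max(cy - window // 2, 0), max(cx - window // 2, 0)
    sub = image[y0:y0 + window, x0:x0 + window]
    p_sub = pixel_classifier(sub.reshape(-1, 3)).reshape(sub.shape[:2])
    fy, fx = center_of_mass(p_sub)
    return y0 + fy, x0 + fx, p_sub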
The extraction of hand information by LLMs on the basis of Gabor masks has
already been studied for hand posture [9] and orientation [5]. The method is based
on a segmented image containing the hand only (A2). This image is filtered by 36
Gabor masks that are arranged on a 3x3 grid with 4 directions per grid position
and centered on the hand. The filter kernels have a radius of 10 pixels, the distance
between the grid points is 20 pixels. The 36 filter responses (G) form the input
vector for a LLM network (D). Further details of the processing are reported in [6].
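A minimal sketch of the 36-dimensional feature vector G described above (a 3x3 grid of sample points, 4 orientations each, kernel radius 10 pixels, grid spacing 20 pixels). The Gabor wavelength and envelope width are assumptions, since the paper fixes only the radius, spacing, and number of directions; the image is assumed to be the 2-D segmented hand image with the hand center far enough from the border.

import numpy as np

def gabor_kernel(radius, theta, wavelength=10.0, sigma=5.0):
    # Simple real-valued Gabor patch; wavelength and sigma are assumptions.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_features(image, center, radius=10, spacing=20):
    # 3x3 grid of sample points around the hand center, 4 orientations each:
    # 36 responses, matching the input dimensionality of network D.
    cy, cx = center                       # integer pixel coordinates
    thetas = [k * np.pi / 4.0 for k in range(4)]
    feats = []
    for dy in (-spacing, 0, spacing):
        for dx in (-spacing, 0, spacing):
            patch = image[cy + dy - radius: cy + dy + radius + 1,
                          cx + dx - radius: cx + dx + radius + 1]
            for theta in thetas:
                feats.append(float((patch * gabor_kernel(radius, theta)).sum()))
    return np.array(feats)                # shape (36,)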
The network yields the pointing direction of the hand (D: I2 → G → pointing direction). Together with the hand position, which is computed by a parametrized self-organizing map ("PSOM", see below and [11, 13]) (P1: A1 → cartesian hand position), a (cartesian) target location in the workspace can be calculated. This location can be retransformed by the PSOM into pixel coordinates (P2: cartesian target location → target pixel coordinates). These coordinates define the center of an "attention region" (I3) that is searched for a set of predefined target objects. This object recognition is performed by a set of LLM color segmentation networks (S3: I3 → A3), each previously trained for one of the defined targets. A ranking procedure is used to determine the target object. The pixel coordinates of the target in the segmented image are mapped by the PSOM to world coordinates (P3: A3 → cartesian target position). The robot R now moves to above these world coordinates,
moves vertically down, grabs whatever is there, and moves upward again. Now, the
system evaluates a second pointing gesture that specifies the place where to place
the object. This time, the world coordinates calculated on the basis of the pointing
direction from network D and the cartesian hand location from PSOM P1 serve
directly as target location for the robot.
For our processing we must map corresponding pixels in the stereo images to cartesian world coordinates. For these transformations, training data was generated
with aid of the robot on a precise sampling grid. We automatically extract the
pixel coordinates of a LED at the tip of the robot manipulator from both images.
The seven-dimensional feature vector serves as training input for an PSOM network [11]. By virtue of its capability to represent a transformation in a symmetric,
"multiway" -fashion, this offers the additional benefit that both the camera-to-world
mapping and its inverse can be obtained with a single network trained only once on
a data set of 27 calibration positions of the robot. A detailed description for such
a procedure can be found in [13].
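Putting the modules of Fig. 1 together, the control flow of one pick action can be sketched as below. All module names and interfaces are hypothetical stand-ins for the trained networks; the sketch only mirrors the order of processing described in this section.

def pick_step(images, modules, robot):
    # images: dict of stereo color views, e.g. {'A': ..., 'B': ...}; modules is
    # a dict of trained components standing in for the networks of Fig. 1
    # (S1/S2/S3 segmentation, D pointing direction, P1/P2/P3 PSOM mappings).
    hand_px = {c: modules['segment_hand'](img) for c, img in images.items()}
    hand_xyz = modules['psom_pixels_to_world'](hand_px)          # P1
    direction = modules['pointing_direction'](images, hand_px)   # S2 -> G -> D
    target_xyz = intersect_with_table(hand_xyz, direction)
    target_px = modules['psom_world_to_pixels'](target_xyz)      # P2
    object_xyz = modules['recognize_object'](images, target_px)  # S3 -> P3
    robot.grasp_at(object_xyz)

def intersect_with_table(origin, direction, table_z=0.0):
    # Ray-plane intersection with the horizontal table surface; assumes the
    # pointing vector is not parallel to the table.
    ox, oy, oz = origin
    dx, dy, dz = direction
    t = (table_z - oz) / dz
    return (ox + t * dx, oy + t * dy, table_z)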
3 Results
3.1 System performance
The accuracy of the current system allows the pointing target to be estimated to an accuracy of 1 ± 0.4 cm (average over N = 7 objects at randomly chosen locations in the workspace) in a workspace area of 50x50 cm. In our current environment, this is sufficient to pick and place any of the seven defined target objects at any location in the workspace. This accuracy can only be achieved if we use the object recognition module described in sec. 2. The output of the pointing direction module approximates the target location with a considerably lower accuracy of 3.6 ± 1.6 cm.
3.2 Image segmentation
The problem of evaluating these preprocessing steps has been discussed previously [7], especially the relation of specificity and sensitivity of the network for the given task.
As the pointing recognition is based on a subframe centered on the hand center, it
is very sensitive to deviations from this center so that a good localization accuracy
is even more important than the classification rate. The localization accuracy is
calculated by measuring the pixel distance between the centers determined manually on the original image and as the center of mass in the image obtained after
application of the neural network. Table 1 provides quantitative results.
On the whole, the two-step cascade of LLM networks yields, for 399 out of 400 images, an activity image precisely centered on the human hand. Only in one image did the first LLM net miss the hand completely, due to a second hand in the image that could be clearly seen in this view. This image was excluded from further processing and from the evaluation of the localization accuracy.
            Camera A                        Camera B
            Pixel deviation   NRMSE         Pixel deviation   NRMSE
Person A    0.8 ± 1.2         0.03 ± 0.06   0.8 ± 2.2         0.03 ± 0.09
Person H    1.3 ± 1.4         0.06 ± 0.11   2.2 ± 2.8         0.11 ± 0.21
Table 1: Estimation error of the hand localization on the test set. Absolute error in pixels
and normalized error for both persons and both camera images.
3.3 Recognition performance
One major problem in recognizing human pointing gestures is the variability of these
gestures and their measurement for the acquisition of reliable training information.
Different persons follow different strategies where and how to point (fig. 2 (center)
and (right?. Therefore) we calculate this information indirectly. The person is
told to point at a certain grid position with known world coordinates. From the
camera images we extract the pixel positions of the hand center and map them to
world coordinates using the PSOM net (PI in fig . 1). Given these coordinates the
angles of the intended pointing vector with the basis vectors of the world coordinate
system can be calculated trigonometrically. These angles form the target vector for
the supervised training of a LLM network (D in fig. 1).
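A small sketch of how such angle targets can be obtained trigonometrically from the hand position and the known grid-point coordinates (both in world coordinates). The use of radians and the exact angle convention are assumptions; the function name is invented.

import numpy as np

def pointing_angle_targets(hand_xyz, grid_xyz):
    # Angles between the intended pointing vector (hand -> grid point) and
    # the basis vectors of the world coordinate system; these serve as the
    # supervised targets for network D.
    v = np.asarray(grid_xyz, dtype=float) - np.asarray(hand_xyz, dtype=float)
    v /= np.linalg.norm(v)
    return np.arccos(np.clip(v, -1.0, 1.0))   # one angle per world axis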
After training, the output of the net is used to calculate the point where the pointing vector intersects the table surface. For evaluation of the network performance we measure the Euclidean distance between this point and the actual grid point at which the person intended to point. Fig. 3 (left) shows the mean euclidean error MEE of the estimated target position as a function of the number of learning steps. The error on the training set can be considerably reduced, whereas on the test set the improvement stagnates after some 500 training steps. If we perform even more training steps the performance might actually suffer from overfitting. The graph compares training and test results achieved on images obtained by two different ways of determining the hand center. The "manual" curves show the performance that can be achieved if the Gabor masks are manually centered on the hand. For the "neuronal" curves, the center of mass calculated in the fine-segmented and postprocessed subframe was used. This allows us to study the influence of the error of the segmentation and localization steps on the pointing recognition. This influence is rather small: the MEE increases from 17 mm for the optimal method to 19 mm for the neural method, which is hardly visible in practice.
The curves in fig. 3 (center) are obtained if we apply the networks to images of
another person. The MEE is considerably larger, but a detailed analysis shows that part of this deviation is due to systematic differences in the pointing strategy, as shown in fig. 2 (right). Over a wide range, the number of nodes used for the LLM network has only minor influence on the performance. While obviously the performance on the training set can be arbitrarily improved by spending more nodes, the differences in the MEE on the test set are negligible in a range of 5 to 15 nodes. Using more nodes is problematic as the training data consists of 50 examples only. If not indicated otherwise, we use LLM networks with 10 nodes. Further results,
Fig. 2: The table grid points can be reconstructed according to the network output. The
target grid is dotted . Reconstruction of training grid (left) and test grid (center) for one
person, and of the test grid for another person (right).
[Fig. 3 (plots): mean euclidean error MEE in mm versus the number of training iterations (roughly 100 to 5000), for manual and neuronal preprocessing on training and test images (left) and for the test set of the unknown person (center).]
Fig. 3: The euclidean error of the estimated target point calculated using the network output depends on the preprocessing (left), and the person (center).
comparing the pointing recognition based on only one of the camera images, indicate
that the method works better if the camera takes a lateral view rather than a frontal view. All evaluations were done for both persons. The performance was always very
similar.
4 Discussion
While we begin to understand many properties of neural networks at the single
network level, our insight into principled ways of how to build neural systems is
still rather limited. Due to the complexity of this task, theoretical progress is
(and probably will continue to be) very slow. What we can do in the mean time,
however, is to experiment with different design strategies for neural systems and
try to "evolve" useful approaches by carefully chosen case studies.
The current work is an effort along these lines. It is focused on a challenging,
practically important vision task with a number of generic features that are shared
with vision tasks for which biological vision systems were evolved.
One important issue is how to achieve robustness at the different processing levels
of the system. There are only very limited possibilities to study this issue in simulations, since practically nothing is known about the statistical properties of the
various sources of error that occur when dealing with real world data. Thus, a real
implementation that works with actual data is practically the only way to study
the robustness issue in a realistic fashion. Therefore, the demonstrated integration
of several functional modules that we had developed previously in more restricted
settings [7, 6] was a non-trivial test of the feasibility of having these functions
cooperate in a larger, modular system. It also gives confidence that the scaling
problem can be dealt with successfully if we apply modular neural nets.
A related and equally important issue was the use of a processing strategy in which
earlier processing stages incrementally restrict the search space for the subsequent
stages. Thus, the responsibility for achieving the goal is not centralized in any single
module and subsequent modules have always the chance to compensate for limited
errors of earlier stages. This appears to be a generally useful strategy for achieving
robustness and for cutting computational costs that is related to the use of "focal
attention" , which is clearly an important element of many biological vision systems.
A third important point is the extensive use of learning to build the essential constituent functions of the system from data examples. We are not yet able to train
the assembled system as a whole. Instead, different modules are trained separately
and are integrated only later. Still, the experience gained with assembling a complex system via this "engineering-type" of approach will be extremely valuable for
gradually developing the capability of crafting larger functional building blocks by
learning methods.
We conclude that carefully designed experiments with modular neural systems that
are based on the use of real world data and that focus on similar tasks for which
also biological neural systems were evolved can make a significant contribution in
tackling the challenge that lies ahead of us: to develop a reliable technology for the
construction of large-scale artificial neural systems that can solve complex tasks in
real world environments.
Acknowledgements
We want to thank Th. Wengerek (robot control), J. Walter (PSOM implementation), and
P. Ziemeck (image acquisition software). This work was supported by BMFT Grant No.
ITN9104AO.
References
[1] T. J. Darell and A. P. Pentland. Classifying hand gestures with a view-based distributed representation. In J . D. Cowan, G. Tesauro, and J. Alspector, editors, Neural
Information Processing Systems 6, pages 945-952. Morgan Kaufman, 1994.
[2] J. Davis and M. Shah. Recognizing hand gestures. In J.-O. Eklundh, editor, Computer
Vision - ECCV '94, volume 800 of Lecture Notes in Computer Science, pages 331-340. Springer-Verlag, Berlin Heidelberg New York, 1994.
[3] R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3:79-87, 1991.
[4] M.I. Jordan and R.A. Jacobs. Hierarchical mixtures of experts and the EM algorithm.
Neural Computation, 6(2):181-214, 1994.
[5] F. Kummert, E. Littmann, A. Meyering, S. Posch, H. Ritter, and G. Sagerer. A
hybrid approach to signal interpretation using neural and semantic networks. In
Mustererkennung 1993, pages 245-252. Springer, 1993.
[6] E. Littmann, A. Drees, and H. Ritter. Neural recognition of human pointing gestures
in real images. Submitted to Neural Processing Letters, 1996.
[7] E. Littmann and H. Ritter. Neural and statistical methods for adaptive color segmentation - a comparison. In G. Sagerer, S. Posch, and F. Kummert, editors,
Mustererkennung 1995, pages 84-93. Springer-Verlag, Heidelberg, 1995.
[8] C. Maggioni. A novel device for using the hand as a human-computer interface. In
Proceedings HC1'93 - Human Control Interface, Loughborough, Great Britain, 1993.
[9] A. Meyering and H. Ritter. Learning 3D shape perception with local linear maps. In
Proc. of the IJCNN, volume IV, pages 432-436, Baltimore, MD, 1992.
[10] Steven J. Nowlan and John C. Platt. A convolutional neural network hand tracker.
In Neural Information Processing Systems 7. Morgan Kaufman Publishers, 1995.
[11] H. Ritter. Parametrized self-organizing maps for vision learning tasks. In P. Morasso,
editor, ICANN '94. Springer-Verlag, Berlin Heidelberg New York, 1994.
[12] K. Väänänen and K. Böhm. Gesture driven interaction as a human factor in virtual
environments - an approach with neural networks. In R. Earnshaw, M. Gigante, and
H. Jones, editors, Virtual reality systems, pages 93-106. Academic Press, 1993.
[13] J. Walter and H. Ritter. Rapid learning with parametrized self-organizing maps.
Neural Computing, 1995. Submitted.
[14] T. G. Zimmermann, J. Lanier, C. Blanchard, S. Bryson, and Y. Harvill. A hand
gesture interface device. In Proc. CHI+GI, pages 189-192, 1987.
85 | 1,075 | Improving Committee Diagnosis with
Resampling Techniques
Bambang Parmanto
Department of Information Science
University of Pittsburgh
Pittsburgh, PA 15260
parmanto@li6.pitt.edu
Paul W. Munro
Department of Information Science
University of Pittsburgh
Pittsburgh, PA 15260
munro@li6.pitt.edu
Howard R. Doyle
Pittsburgh Transplantation Institute
3601 Fifth Ave, Pittsburgh, PA 15213
doyle@vesaliw.tu.med.pitt.edu
Abstract
Central to the performance improvement of a committee relative to
individual networks is the error correlation between networks in the
committee. We investigated methods of achieving error independence between the networks by training the networks with different
resampling sets from the original training set. The methods were
tested on the sinwave artificial task and the real-world problems of
hepatoma (liver cancer) and breast cancer diagnoses.
1 INTRODUCTION
The idea of a neural net committee is to combine several neural net predictors
to perform collective decision making, instead of using a single network (Perrone,
1993). The potential of a committee in improving classification performance has
been well documented. Central to this improvement is the extent to which the
errors tend to coincide. Committee errors occur where the misclassification sets of
individual networks overlap. On the one hand, if all errors of committee members
coincide, using a committee does not improve performance. On the other hand, if
errors do not coincide, performance of the committee dramatically increases and
asymptotically approaches perfect performance. Therefore, it is beneficial to make
the errors among the networks in the committee less correlated in order to improve
the committee performance.
One way of making the networks less correlated is to train them with different sets
of data. Decreasing the error correlation by training members of the committee
using different sets of data is intuitively appealing. Networks trained with different
data sets have a higher probability of generalizing differently and tend to make
errors in different places in the problem space.
The idea is to split the data used in the training into several sets. The sets are not necessarily mutually exclusive; they may share part of the data (overlap). This
idea resembles resampling methods such as cross-validation and bootstrap known
in statistics for estimating the error of a predictor from limited sets of available
data. In the committee framework, these techniques are recast to construct different
training sets from the original training set. David Wolpert (1992) has put forward
a general framework of training the committee using different partitions of the
data, known as stacked generalization. This approach has been adapted to the regression setting and is called stacked regression (Breiman, 1992). Stacked
regression uses cross-validation to construct different sets of regression functions.
A similar idea of using a bootstrap method to construct different training sets has
been proposed by Breiman (1994) for classification and regression trees predictors.
2 THE ALGORITHMS
2.1 BOOTSTRAP COMMITTEE (BOOTC)
Consider a total of N items available for training. The approach is to generate K replicates from the original set, each containing the same number of items as the original set. The replicates are obtained from the original set by drawing at random with replacement. See Efron & Tibshirani (1993) for background on bootstrapping. Each replicate is then used to train one network in the committee.
Using this bootstrap procedure, each replicate is expected to include roughly 36
% duplicates (due to replacement during sampling). Only the distinct fraction is
used for training and the leftover fraction for early stopping, if necessary (notice
slight difference from the standard bootstrapping and from Breiman's bagging).
Early stopping usually requires a fraction of the data to be taken from the original
training set, which might degrade the performance of the neural network. The
advantage of a BOOTC is that the leftover sample is already available.
Algorithm:
1. Generate bootstrap replicates L1, ..., LK from the original set.
2. For each bootstrap replicate, collect the unsampled items into a leftover sample set, giving l*1, ..., l*K.
3. For each Lk, train a network, using the leftover set l*k as a validation stopping criterion if necessary. This gives K neural net predictors f(x; Lk).
4. Build a committee from the bootstrap networks using a simple averaging procedure: fcom(x) = (1/K) Σ_{k=1}^{K} f(x; Lk).
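A compact sketch of this procedure, assuming the data are held in numpy arrays and that a user-supplied train_net function (a stand-in for the network training code, not shown) returns an object with a .predict method:

import numpy as np

def train_bootc(X, y, train_net, K=20, seed=0):
    # X, y: numpy arrays holding the original training set.  train_net takes
    # a training set and a leftover (validation) set and returns a fitted
    # predictor.  K = 20 replicates, as in the experiments described below.
    rng = np.random.default_rng(seed)
    n = len(X)
    nets = []
    for _ in range(K):
        idx = rng.integers(0, n, size=n)             # draw with replacement
        leftover = np.setdiff1d(np.arange(n), idx)   # unsampled items l*k
        nets.append(train_net(X[idx], y[idx], X[leftover], y[leftover]))
    return nets

def committee_predict(nets, X):
    # Simple averaging: fcom(x) = (1/K) * sum_k f(x; Lk).
    return np.mean([net.predict(X) for net in nets], axis=0)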
There is no rule as to how many bootstrap replicates should be used to achieve a
good performance. In error estimation, the number ranges from 20 to 200. It is
beneficial to keep the number of replicates, hence the number of networks, small to
reduce training time. Unless the networks are trained on a parallel machine, training
time increases proportionally to the number of networks in the committee. In this
experiment, 20 bootstrap training replicates were constructed for 20 networks in
the committee. Twenty replicates were chosen since beyond this number there is
no significant improvement in performance.
2.2 CROSS-VALIDATION COMMITTEE (CVC)
The algorithm is quite similar to the procedure used in prediction error estimation.
First, generate replicates from the original training set by removing a fraction of
the data. Let D denote the original data, and D^{-v} denote the data with subset v removed. The procedure revolves so that each item is in the removed fraction at least once. Generate replicates D^{-v1}, ..., D^{-vK} and train each network in the committee with one replicate.
An important issue in the CVC is the degree of data overlap between the replicates. The degree of overlap depends on the number of replicates and on the size of the fraction removed from the original sample. For example, if the committee consists of 5 networks and 0.5 of the data are removed for each replicate, the fraction of overlap between two training replicates ranges from a minimum of 0 (when the removed halves are complementary) to a maximum of 0.5 (when two replicates retain the same half of the data).
Algorithm:
1. Divide the data into v fractions d1, ..., dv.
2. Leave out one fraction dk and train network fk with the rest of the data (D − dk).
3. Use dk as a validation stopping criterion, if necessary.
4. Build a committee from the networks using a simple averaging procedure.
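A corresponding sketch for the CVC, shown for the simplest case in which exactly one fraction is removed per network; larger removed fractions, which change the amount of overlap as discussed above, would be handled analogously. train_net is the same stand-in as in the BOOTC sketch.

import numpy as np

def train_cvc(X, y, train_net, n_folds=5, seed=0):
    # Each network is trained with one fraction removed; the removed fraction
    # doubles as the validation stopping set.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    nets = []
    for k in range(n_folds):
        held_out = folds[k]
        rest = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        nets.append(train_net(X[rest], y[rest], X[held_out], y[held_out]))
    return nets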
The fraction of data overlap determines the trade-off between the individual network
performance and error correlation between the networks. Lower correlation can be
expected if the networks train with less overlapped data, which means a larger
removed fraction and smaller fraction for training. The smaller the training set
size, the lower the individual network performance that can be expected.
We investigated the effect of data overlap on the error correlations between the
networks and the committee performance. We also studied the effect of training
size on the individual performance. The goal was to find an optimal combination
of data overlap and individual training size.
3 THE BASELINE & PERFORMANCE EVALUATION
To evaluate the improvement of the proposed methods on the committee performance, they should be compared with existing methods as the baseline. The common method for constructing a committee is to train an ensemble of networks
independently. The networks in the committee are initialized with different sets
of weights. This type of committee has been reported as achieving significant improvement over individual network performances in regression (Hashem, 1993) and
classification tasks (Perrone, 1993; Parmanto et al., 1994).
The baseline, BOOTC, and CVC were compared using exactly the same architecture
and using the same pair of training-test sets. Performance evaluation was conducted
using 4-fold exhaustive cross-validation where 0.25 fraction of the original data is
used for the test set and the remainder of the data is used for the training set. The
procedure was repeated 4 times so that all items were once on the test set. The
performance was calculated by averaging the results of 4 test sets. The simulations
were conducted several times using different initial weights to exclude the possibility
that the improvement was caused by chance.
4 EXPERIMENTS
4.1 SYNTHETIC DATA: SINWAVE CLASSIFICATION
The sinwave task is a classification problem with two classes, a negative class represented as 0 and a positive class represented as 1. The data consist of two input
variables, x = (x1, x2). The entire space is divided equally into two classes, with the separation line determined by the sine curve x2 = sin(2π x1). The upper half of the
rectangle is the positive class, while the lower half is the negative one (see Fig. 1).
Gaussian noise along the perfect boundary with variance of 0.1 is introduced to
the clean data and is presented in Fig. 1 (middle). Let z be a vector drawn from the Gaussian distribution with variance η; the classification rule is then given by equation (1).
A similar artificial problem is used to analyze the bias-variance trade-offs by Geman
et al. (1992).
Figure 1: Complete, clean data without noise (top), complete data with noise
(middle), and a small fraction used for training (bottom).
The population contains 3030 data items, since a grid of 0.1 is used for both x1 and x2. In the real world, we usually have no access to the entire population. To mimic
this situation, the training set contained only a small fraction of the population.
Fig. 1 (bottom) visualizes a training set that contains 200 items with 100 items for
each class. The training set is constructed by randomly sampling the population.
The performance of the predictor is measured with respect to the test set. The
population (3030 items) is used as the test set.
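A sketch of how such a sinwave data set can be generated. The input ranges and the period of the sine boundary are assumptions, chosen so that a 0.1 grid yields the stated 3030 population items (101 x 30); the way the boundary noise is applied is also an interpretation, since the paper fixes only the grid spacing, the noise variance of 0.1, and the class sizes of the training sample.

import numpy as np

def sinwave_population(noise_sd=np.sqrt(0.1), seed=0):
    rng = np.random.default_rng(seed)
    x1 = np.round(np.arange(0.0, 10.0 + 1e-9, 0.1), 1)      # 101 values (assumed range)
    x2 = np.round(np.arange(-1.45, 1.45 + 1e-9, 0.1), 2)    #  30 values (assumed range)
    g1, g2 = np.meshgrid(x1, x2)
    X = np.column_stack([g1.ravel(), g2.ravel()])            # 3030 items
    boundary = np.sin(2.0 * np.pi * X[:, 0])                 # assumed period
    y = (X[:, 1] + rng.normal(0.0, noise_sd, len(X)) > boundary).astype(int)
    return X, y

def training_sample(X, y, n_per_class=100, seed=1):
    # 200-item training set with 100 items per class, sampled from the population.
    rng = np.random.default_rng(seed)
    idx = np.concatenate([rng.choice(np.where(y == c)[0], n_per_class,
                                     replace=False) for c in (0, 1)])
    return X[idx], y[idx]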
4.2 HEPATOMA DETECTION
Hepatoma is a very important clinical problem in patients who are being considered for liver transplantation because of its high probability of recurrence. Early hepatoma
detection may improve the ultimate outlook of the patients since special treatment
can be carried out. Unfortunately, early detection using non-invasive procedures
can be difficult, especially in the presence of cirrhosis. We have been developing
neural network classifiers as a detection system with minimum imaging or invasive
studies (Parmanto et al., 1994).
The task is to detect the presence or absence (binary output) of a hepatoma given
variables taken from an individual patient. Each data item consists of 16 variables,
7 of which are continuous variables and the rest are binary variables, primarily
blood measurements.
For this experiment, 1172 data items with their associated diagnoses are available.
Out of 1172 items, 693 items are free from missing values, 309 items contain missing
values only on the categorical variables, and 170 items contain missing values on
both types of variables. For this experiment, only the fraction without missing
values and the fraction with missing values on the categorical variables were used,
giving the total item of 1002. Out of the 1002 items, 874 have negative diagnoses
and the remaining 128 have positive diagnoses.
4.3 BREAST CANCER
The task is to diagnose if a breast cytology is benign or malignant based on cytological characteristics. Nine input variables have been established to differentiate
between the benign and malignant samples which include clump thickness, marginal
adhesion, the uniformity of cell size and shape, etc.
The data set was originally obtained from the University of Wisconsin Hospitals
and currently stored at the UCI repository for machine learning (Murphy & Aha,
1994). The current size of the data set is 699 examples.
5 THE RESULTS
[Figure 2 (plots): four panels showing, as a function of the number of hidden units (4 to 16), the individual network performance and the committee performance (top row), and the error correlation and percent improvement (bottom row), for the baseline, BOOTC, and CVC methods.]
Figure 2: Results on the sinwave classification task. Performances of individual nets
and the committee (top); error correlation and committee improvement (bottom).
Figure 2 (top) and Table 1 show that the performance of the committee is always better than the average performance of the individual networks in all three committees.
Task                 Methods   Indiv. Nets  Error  Committee  Improv.     Improv.
                               % error      Corr   % error    to Indiv.   to baseline
Sinwave (2 vars)     Baseline  13.31        .87    11.8       11 %        -
                     BOOTC     12.85        .57     8.36      35 %        29 %
                     CVC       15.72        .33     9.79      38 %        17 %
Cancer (9 vars)      Baseline   2.7         .96     2.5        5 %        -
                     BOOTC      3.14        .83     2.0       34 %        20 %
                     CVC        3.2         .80     1.63      49 %        35 %
Hepatoma (16 vars)   Baseline  25.95        .89    23.25      10.5 %      -
                     BOOTC     26.00        .70    19.72      24 %        15.2 %
                     CVC       26.90        .55    19.05      29 %        18 %
Table 1: Error rate, correlation, and performance improvement calculated based on the best architecture for each method. Reduction of misclassification rates is computed relative to the baseline committee.
[Figure 3 (plot): error correlation plotted against the fraction of data overlap in the training sets.]
Figure 3: Error correlation and fraction of overlap in training data (results from
the sinwave classification task).
The CVC and BOOTC are always better than the baseline even when the individual
network performance is worse. Figure 2 (bottom) and the table show that the
improvement of a committee over individual networks is inversely related to the error
correlation between the networks in the committee. The CVC consistently produces
significant improvement over its individual network performance due to the low error
correlation, while the baseline committee only produces modest improvement. This
result confirms the basic assumption of this research: committee performance can
be improved by decorrelating the errors made by the networks.
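The two quantities compared here, committee error and error correlation, can be measured on a test set as in the sketch below. How the correlation in Table 1 was computed is not spelled out in this excerpt; the sketch takes it to be the mean pairwise correlation of the members' per-item error indicators, which should be read as one plausible choice rather than the authors' definition.

import numpy as np

def committee_diagnostics(preds, y_true):
    # preds: (K, N) array of 0/1 class predictions from the K member networks
    # on N test items; y_true: length-N array of true labels.
    preds = np.asarray(preds)
    y_true = np.asarray(y_true)
    errors = (preds != y_true).astype(float)            # K x N error indicators
    committee = (preds.mean(axis=0) > 0.5).astype(int)  # simple averaging vote
    committee_error = float((committee != y_true).mean())
    K = preds.shape[0]
    corrs = [np.corrcoef(errors[i], errors[j])[0, 1]
             for i in range(K) for j in range(i + 1, K)]
    return committee_error, float(np.mean(corrs))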
The performance of a committee depends on two factors: individual performance of
the networks and error correlation between the networks. The gain of using BOOTC
or CVC depends on how much the algorithms can reduce the error correlations while still maintaining individual performance comparable to that of the baseline networks. The BOOTC produced impressive improvement (29 %) over the baseline
on the sinwave task due to the lower correlation and good individual performance.
The performances of the BOOTC on the other two tasks were not as impressive
due to the modest reduction of error correlation and slight decrease in individual
performance. The performances were still significantly better than the baseline
committee. The CVC, on the other hand, consistently reduced the correlation and
improved the committee performance. The improvement on the sinwave task was
not as good as the BOOTC due to the low individual performance.
The individual performance of the CVC and BOOTC is in general worse than that of the baseline. The individual performance of CVC is 18 % and 19 % lower than the
baseline on the sinwave and cancer tasks respectively, while the BOOTC suffered
significant reduction of individual performance only on the cancer task (16 %). The
degradation of individual performance is due to the smaller training set for each
network on the CVC and the BOOTC. The detrimental effect of a small training
set, however, is compensated by low correlation between the networks. The effect
of a smaller training set depends on the size of the original training set. If the data
size is large, using a smaller set may not be harmful. On the contrary, if the data set
is small, using an even smaller data set can significantly degrade the performance.
Another interesting finding of this experiment is the relationship between the error
correlation and the overlap fraction in the training set. Figure 3 shows that small
data overlap causes the networks to have low correlation with each other.
6 SUMMARY
Training committees of networks using different sets of data resampled from the
original training set can improve committee performance by reducing the error correlation among the networks in the committee. Even when the individual network
performances of the BOOTC and CVC degrade from the baseline networks, the
committee performance is still better due to the lower correlation.
Acknowledgement
This study is supported in part by Project Grant DK 29961 from the National
Institutes of Health, Bethesda, MD. We would like to thank the Pittsburgh Transplantation Institute for providing the data for this study.
References
Breiman, L. (1992) Stacked Regressions. TR 367, Dept. of Statistics, UC Berkeley.
Breiman, L. (1994) Bagging Predictors. TR 421, Dept. of Statistics, UC Berkeley.
Efron, B., & Tibshirani, R.J. (1993) An Introduction to the Bootstrap. Chapman & Hall.
Hashem, S. (1994). Optimal Linear Combinations of Neural Networks. PhD Thesis, Purdue University.
Geman, S., Bienenstock, E., and Doursat, R. (1992) Neural networks and the bias/variance dilemma. Neural Computation, 4(1), 1-58.
Murphy, P. M., & Aha, D. W. (1994). UCI Repository of machine learning databases [ftp: ics.uci.edu/pub/machine-learning-databases/].
Parmanto, B., Munro, P.W., Doyle, H.R., Doria, C., Aldrighetti, L., Marino, I.R., Mitchel, S., and Fung, J.J. (1994) Neural network classifier for hepatoma detection. Proceedings of the World Congress on Neural Networks 1994, San Diego, June 4-9.
Perrone, M.P. (1993) Improving Regression Estimation: Averaging Methods for Variance Reduction with Extension to General Convex Measure Optimization. PhD Thesis, Department of Physics, Brown University.
Wolpert, D. (1992). Stacked generalization. Neural Networks, 5, 241-259.
86 | 1,076 | Learning Sparse Perceptrons
Jeffrey C. Jackson
Mathematics & Computer Science Dept.
Duquesne University
600 Forbes Ave
Pittsburgh, PA 15282
jackson@mathcs.duq.edu
Mark W. Craven
Computer Sciences Dept.
University of Wisconsin-Madison
1210 West Dayton St.
Madison, WI 53706
craven@cs.wisc.edu
Abstract
We introduce a new algorithm designed to learn sparse perceptrons over input representations which include high-order features.
Our algorithm, which is based on a hypothesis-boosting method,
is able to PAC-learn a relatively natural class of target concepts.
Moreover, the algorithm appears to work well in practice: on a set
of three problem domains, the algorithm produces classifiers that
utilize small numbers of features yet exhibit good generalization
performance. Perhaps most importantly, our algorithm generates
concept descriptions that are easy for humans to understand.
1
Introduction
Multi-layer perceptron (MLP) learning is a powerful method for tasks such as concept classification. However, in many applications, such as those that may involve
scientific discovery, it is crucial to be able to explain predictions. Multi-layer perceptrons are limited in this regard, since their representations are notoriously difficult
for humans to understand. We present an approach to learning understandable,
yet accurate, classifiers. Specifically, our algorithm constructs sparse perceptrons,
i.e., single-layer perceptrons that have relatively few non-zero weights. Our algorithm for learning sparse perceptrons is based on a new hypothesis boosting algorithm (Freund & Schapire, 1995). Although our algorithm was initially developed
from a learning-theoretic point of view and retains certain theoretical guarantees (it
PAC-learns the class of sparse perceptrons), it also works well in practice. Our experiments in a number of real-world domains indicate that our algorithm produces
perceptrons that are relatively comprehensible, and that exhibit generalization performance comparable to that of backprop-trained MLP's (Rumelhart et al., 1986)
and better than decision trees learned using C4.5 (Quinlan, 1993).
We contend that sparse perceptrons, unlike MLP's, are comprehensible because they
have relatively few parameters, and each parameter describes a simple (i.e., linear)
relationship. As evidence that sparse perceptrons are comprehensible, consider that
such linear functions are commonly used to express domain knowledge in fields such
as medicine (Spackman, 1988) and molecular biology (Stormo, 1987).
2
Sparse Perceptrons
A perceptron is a weighted threshold over the set of input features and over higher-order
features consisting of functions operating on only a limited number of the input features.
Informally, a sparse perceptron is any perceptron that has relatively few non-zero weights.
For our later theoretical results we will need a more precise definition of sparseness, which
we develop now. Consider a Boolean function f : {0,1}^n → {−1,+1}. Let C_k be the set of all
conjunctions of at most k of the inputs to f. C_k includes the "conjunction" of 0 inputs, which
we take as the identically 1 function. All of the functions in C_k map to {−1,+1}, and every
conjunction in C_k occurs in both a positive sense (+1 represents true) and a negated sense
(−1 represents true). Then the function f is a k-perceptron if there is some integer s such
that f(x) = sign(Σ_{i=1}^{s} h_i(x)), where for all i, h_i ∈ C_k, and sign(y) is undefined
if y = 0 and is y/|y| otherwise. Note that while we have not explicitly shown any weights in
our definition of a k-perceptron f, integer weights are implicitly present in that we allow a
particular h_i ∈ C_k to appear more than once in the sum defining f. In fact, it is often
convenient to think of a k-perceptron as a simple linear discriminant function with integer
weights defined over a feature space with O(n^k) features, one feature for each element of C_k.
We call a given collection of s conjunctions h_i ∈ C_k a k-perceptron representation of
the corresponding function f, and we call s the size of the representation. We define
the size of a given k-perceptron function f as the minimal size of any k-perceptron
representation of f. An s-sparse k-perceptron is a k-perceptron f such that the size
of f is at most s. We denote by P_k^n the set of Boolean functions over {0,1}^n which
can be represented as k-perceptrons, and we define P_k = ∪_n P_k^n. The subclass of
s-sparse k-perceptrons is denoted by P_{k,s}. We are also interested in the class P_k^r
of k-perceptrons with real-valued weights, at most r of which are non-zero.
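To make the definition concrete, here is a small sketch (hypothetical Python, not from the
paper) that evaluates a k-perceptron given as a list of signed conjunctions over 0/1 inputs;
repeating a term in the list plays the role of an integer weight, and a zero sum is reported
as 0 to mark the undefined case.

    def conj_value(x, idxs):
        # Value of the conjunction of the inputs indexed by idxs:
        # +1 if all of them are 1 (the empty conjunction is identically +1), else -1.
        return 1 if all(x[i] == 1 for i in idxs) else -1

    def k_perceptron(x, terms):
        # terms: list of (idxs, polarity) pairs with len(idxs) <= k and polarity in {+1, -1};
        # duplicates act as integer weights, as in the text.
        total = sum(pol * conj_value(x, idxs) for idxs, pol in terms)
        if total == 0:
            return 0          # sign is undefined at 0
        return 1 if total > 0 else -1

    # example: a 2-perceptron over {0,1}^3 with the term x0∧x1 counted twice and x2 negated
    # k_perceptron((1, 1, 0), [((0, 1), +1), ((0, 1), +1), ((2,), -1)])  -> +1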
3
The Learning Algorithm
In this section we develop our learning algorithm and prove certain performance
guarantees. Our algorithm is based on a recent "hypothesis boosting" algorithm
that we describe after reviewing some basic learning-theory terminology.
3.1
PAC Learning and Hypothesis Boosting
Following Valiant (1984), we say that a function class F (such as P_k for fixed k)
is (strongly) PAC-learnable if there is an algorithm A and a polynomial function
p_1 such that for any positive ε and δ, any f ∈ F (the target function), and any
probability distribution D over the domain of f, with probability at least 1 − δ,
algorithm A(EX(f, D), ε, δ) produces a function h (the hypothesis) such that
Pr[Pr_D[f(x) ≠ h(x)] > ε] < δ. The outermost probability is over the random choices
made by the EX oracle and any random choices made by A. Here EX(f, D) denotes
an oracle that, when queried, chooses a vector of input values x with probability
D and returns the pair (x, f(x)) to A. The learning algorithm A must run in time
p_1(n, s, ε^-1, δ^-1), where n is the length of the input vector to f and s is the size of
AdaBoost
Input: training set S of m examples of function f, weak learning algorithm WL that
       is (1/2 − γ)-approximate, γ
Algorithm:
  1. T ← ln(m) / (2γ^2)
  2. for all x ∈ S, w(x) ← 1/m
  3. for i = 1 to T do
  4.    for all x ∈ S, D_i(x) ← w(x) / Σ_{x'∈S} w(x')
  5.    invoke WL on S and distribution D_i, producing weak hypothesis h_i
  6.    ε_i ← Σ_{x : h_i(x) ≠ f(x)} D_i(x)
  7.    β_i ← ε_i / (1 − ε_i)
  8.    for all x ∈ S, if h_i(x) = f(x) then w(x) ← w(x) · β_i
  9. enddo
Output: h(x) ≡ sign( Σ_{i=1}^{T} −ln(β_i) · h_i(x) )
Figure 1: The AdaBoost algorithm.
f; the algorithm is charged one unit of time for each call to EX. We sometimes
call the function h output by A an ε-approximator (or strong approximator) to f
with respect to D. If F is PAC-learnable by an algorithm A that outputs only
hypotheses in class H, then we say that F is PAC-learnable by H. If F is PAC-learnable
for ε = 1/2 − 1/p_2(n, s), where p_2 is a polynomial function, then F is
weakly PAC-learnable, and the output hypothesis h in this case is called a weak
approximator.
Our algorithm for finding sparse perceptrons is, as indicated earlier, based on the
notion of hypothesis boosting. The specific boosting algorithm we use (Figure 1)
is a version of the recent AdaBoost algorithm (Freund & Schapire, 1995). In the
next section we apply AdaBoost to "boost" a weak learning algorithm for P_{k,s} into
a strong learner for P_{k,s}. AdaBoost is given a set S of m examples of a function
f : {0,1}^n → {−1, +1} and a weak learning algorithm WL which takes ε = 1/2 − γ
for a given γ (γ must be bounded by an inverse polynomial in n and s). AdaBoost
runs for T = ln(m)/(2γ^2) stages. At each stage it creates a probability distribution
D_i over the training set and invokes WL to find a weak hypothesis h_i with respect
to D_i (note that an example oracle EX(f, D_i) can be simulated given D_i and S).
At the end of the T stages a final hypothesis h is output; this is just a weighted
threshold over the weak hypotheses {h_i | 1 ≤ i ≤ T}. If the weak learner succeeds
in producing a (1/2 − γ)-approximator at each stage then AdaBoost's final hypothesis
is guaranteed to be consistent with the training set (Freund & Schapire, 1995).
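The boosting loop in Figure 1 is short enough to transcribe directly. The sketch below
(hypothetical Python, not the authors' code) assumes labels in {−1,+1}, a training list of
(x, y) pairs, and a weak_learner(examples, dist) callable that returns a ±1-valued hypothesis;
it returns the weighted-threshold combination from the last line of the figure.

    import math

    def adaboost(examples, weak_learner, gamma):
        # examples: list of (x, y) with y in {-1, +1}
        m = len(examples)
        T = max(1, int(math.log(m) / (2 * gamma ** 2)))   # step 1
        w = [1.0 / m] * m                                  # step 2
        hypotheses, alphas = [], []
        for _ in range(T):                                 # steps 3-9
            total = sum(w)
            dist = [wi / total for wi in w]                # step 4
            h = weak_learner(examples, dist)               # step 5
            eps = sum(d for d, (x, y) in zip(dist, examples) if h(x) != y)  # step 6
            eps = min(max(eps, 1e-12), 1 - 1e-12)          # guard against 0/1 error
            beta = eps / (1 - eps)                         # step 7
            w = [wi * beta if h(x) == y else wi            # step 8
                 for wi, (x, y) in zip(w, examples)]
            hypotheses.append(h)
            alphas.append(-math.log(beta))
        def final(x):                                      # output: weighted threshold
            s = sum(a * h(x) for a, h in zip(alphas, hypotheses))
            return 1 if s > 0 else -1                      # ties broken arbitrarily
        return final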
3.2
PAC-Learning Sparse k-Perceptrons
We now show that sparse k-perceptrons are PAC-learnable by real-weighted k-perceptrons
having relatively few nonzero weights. Specifically, ignoring log factors,
P_{k,s} is learnable by P_k^{O(s^2)} for any constant k. We first show that, given a training
set for any f ∈ P_{k,s}, we can efficiently find a consistent h ∈ P_k^{O(s^2)}. This
consistency algorithm is the basis of the algorithm we later apply to empirical learning
problems. We then show how to turn the consistency algorithm into a PAC learning
algorithm. Our proof is implicit in somewhat more general work by Freund (1993),
although he did not actually present a learning algorithm for this class or analyze
the sample size needed to ensure ε-approximation, as we do. Following Freund, we
begin our development with the following lemma (Goldmann et al., 1992):
Lemma 1 (Goldmann, Håstad, Razborov) For f : {0,1}^n → {−1,+1} and H,
any set of functions with the same domain and range, if f can be represented as
f(x) = sign(Σ_{i=1}^{s} h_i(x)), where h_i ∈ H, then for any probability distribution D
over {0,1}^n there is some h_i such that Pr_D[f(x) ≠ h_i(x)] ≤ 1/2 − 1/(2s).
If we specialize this lemma by taking H = C_k (recall that C_k is the set of conjunctions
of at most k input features of f) then it implies that for any f ∈ P_{k,s} and
any probability distribution D over the input features of f there is some h_i ∈ C_k
that weakly approximates f with respect to D. Therefore, given a training set S
and a distribution D that has nonzero weight only on instances in S, the following
simple algorithm is a weak learning algorithm for P_k: exhaustively test each of the
O(n^k) possible conjunctions of at most k features until we find a conjunction that
(1/2 − 1/(2s))-approximates f with respect to D (we can efficiently compute the
approximation of a conjunction h_i by summing the values of D over those inputs where h_i
and f agree). Any such conjunction can be returned as the weak hypothesis. The
above lemma proves that if f is a k-perceptron then this exhaustive search must
succeed at finding such a hypothesis. Therefore, given a training set of m examples
of any s-sparse k-perceptron f, AdaBoost run with the above weak learner will, after
2s^2 ln(m) stages, produce a hypothesis consistent with the training set. Because
each stage adds one weak hypothesis to the output hypothesis, the final hypothesis
will be a real-weighted k-perceptron with at most 2s^2 ln(m) nonzero weights.
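As a concrete illustration of the exhaustive weak learner, the sketch below (hypothetical
Python, written for k = 2; not the authors' implementation) scores every signed conjunction
of at most two inputs by its distribution-weighted agreement with the labels and returns
the best one. By Lemma 1, when f is an s-sparse 2-perceptron the best agreement is at
least 1/2 + 1/(2s). It composes directly with the AdaBoost sketch above.

    from itertools import combinations

    def make_weak_learner(n):
        # n: number of Boolean input features; candidates are all signed
        # conjunctions of at most 2 inputs (C_2 in the text).
        candidates = [()] + [(i,) for i in range(n)] + list(combinations(range(n), 2))

        def conj_value(x, idxs):
            return 1 if all(x[i] == 1 for i in idxs) else -1

        def weak_learner(examples, dist):
            best, best_agree = None, -1.0
            for idxs in candidates:
                for pol in (+1, -1):
                    agree = sum(d for d, (x, y) in zip(dist, examples)
                                if pol * conj_value(x, idxs) == y)
                    if agree > best_agree:
                        best_agree, best = agree, (idxs, pol)
            idxs, pol = best
            return lambda x, idxs=idxs, pol=pol: pol * conj_value(x, idxs)

        return weak_learner

The practical variant described in Section 3.3 differs only in the scoring step: it uses the
distribution-weighted correlation with f and discounts conjunctive features so that
single-input features are preferred.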
We can convert this consistency algorithm to a PAC learning algorithm as follows.
First, given a finite set of functions F, it is straightforward to show the following
(see, e.g., Haussler, 1988):
Lemma 2 Let F be a finite set of functions over a domain X. For any function
f over X, any probability distribution D over X, and any positive ε and δ, given a
set S of m examples drawn consecutively from EX(f, D), where m ≥ ε^-1(ln δ^-1 +
ln |F|), then Pr[∃h ∈ F | ∀x ∈ S f(x) = h(x) & Pr_D[f(x) ≠ h(x)] > ε] < δ, where
the outer probability is over the random choices made by EX(f, D).
The consistency algorithm above finds a consistent hypothesis in P_k^r, where
r = 2s^2 ln(m). Also, based on a result of Bruck (1990), it can be shown that ln |P_k^r| =
O(r^2 + kr log n). Therefore, ignoring log factors, a randomly-generated training set
of size O(ks^4/ε) is sufficient to guarantee that, with high probability, our algorithm
will produce an ε-approximator for any s-sparse k-perceptron target. In other words,
the following is a PAC algorithm for P_{k,s}: compute a sufficiently large (but polynomial
in the PAC parameters) m, draw m examples from EX(f, D) to create a training
set, and run the consistency algorithm on this training set.
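Ignoring constants, the O(ks^4/ε) sample size follows by substituting the bound on ln |P_k^r|
into Lemma 2; a sketch of the arithmetic (our rendering, with \tilde{O} hiding the logarithmic
factors in n, m, and δ^-1):

    \[
      r = 2 s^{2} \ln m, \qquad
      \ln \lvert P_k^{\,r} \rvert = O\!\left( r^{2} + k\, r \log n \right),
    \]
    \[
      m \;\ge\; \epsilon^{-1}\!\left( \ln \delta^{-1} + \ln \lvert P_k^{\,r} \rvert \right)
        \;=\; \tilde{O}\!\left( \frac{k\, s^{4}}{\epsilon} \right).
    \]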
So far we have shown that sparse k-perceptrons are learnable by sparse perceptron
hypotheses (with potentially polynomially-many more weights). In practice, of
course, we expect that many real-world classification tasks cannot be performed
exactly by sparse perceptrons. In fact, it can be shown that for certain (reasonable)
definitions of "noisy" sparse perceptrons (loosely, functions that are approximated
reasonably well by sparse perceptrons), the class of noisy sparse k-perceptrons is
still PAC-learnable. This claim is based on results of Aslam and Decatur (1993),
who present a noise-tolerant boosting algorithm. In fact, several different boosting
algorithms could be used to learn Pk,s (e.g., Freund, 1993). We have chosen to use
AdaBoost because it seems to offer significant practical advantages, particularly in
terms of efficiency. Also, our empirical results to date indicate that our algorithm
works very well on difficult (presumably "noisy") real-world problems. However,
one potential advantage of basing the algorithm on one of these earlier boosters
instead of AdaBoost is that the algorithm would then produce a perceptron with
integer weights while still maintaining the sparseness guarantee of the AdaBoostbased algorithm.
3.3
Practical Considerations
We turn now to the practical details of our algorithm, which is based on the consistency algorithm above. First, it should be noted that the theory developed above
works over discrete input domains (Boolean or nominal-valued features). Thus, in
this paper, we consider only tasks with discrete input features. Also, because the
algorithm uses exhaustive search over all conjunctions of size k, learning time depends exponentially on the choice of k. In this study we use k = 2 throughout,
since this choice results in reasonable learning times.
Another implementation concern involves deciding when the learning algorithm
should terminate. The consistency algorithm uses the size of the target function
in calculating the number of boosting stages. Of course, such size information is
not available in real-world applications, and in fact, the target function may not be
exactly representable as a sparse perceptron. In practice, we use cross validation
to determine an appropriate termination point. To facilitate comprehensibility, we
also limit the number of boosting stages to at most the number of weights that
would occur in an ordinary perceptron for the task. For similar reasons, we also
modify the criteria used to select the weak hypothesis at each stage so that simple
features are preferred over conjunctive features. In particular, given distribution
D at some stage j, for each h_i ∈ C_k we compute a correlation E_D[f · h_i]. We
then multiply each high-order feature's correlation by i. The h_i with the largest
resulting correlation serves as the weak hypothesis for stage j.
4
Empirical Evaluation
In our experiments, we are interested in assessing both the generalization ability
and the complexity of the hypotheses produced by our algorithm. We compare our
algorithm to ordinary perceptrons trained using backpropagation (Rumelhart et al.,
1986), multi-layer perceptrons trained using backpropagation, and decision trees
induced using the C4.5 system (Quinlan, 1993). We use C4.5 in our experiments as
a representative of "symbolic" learning algorithms. Symbolic algorithms are widely
believed to learn hypotheses that are more comprehensible than neural networks.
Additionally, to test the hypothesis that the performance of our algorithm can be
explained solely by its use of second-order features, we train ordinary perceptrons
using feature sets that include all pairwise conjunctions, as well as the ordinary
features. To test the hypothesis that the performance of our algorithm can be
explained by its use of relatively few weights, we consider ordinary perceptrons
which have been pruned using a variant of the Optimal Brain Damage (OBD)
algorithm (Le Cun et al., 1989). In our version of OBD, we train a perceptron until
the stopping criteria are met, prune the weight with the smallest salience, and then
iterate the process. We use a validation set to decide when to stop pruning weights.
For each training set, we use cross-validation to select the number of hidden units
(5, 10, 20, 40 or 80) for the MLP's, and the pruning confidence level for the C4.5
trees. We use a validation set to decide when to stop training for the MLP's.
We evaluate our algorithm using three real-world domains: the voting data set from
the UC-Irvine database; a promoter data set which is a more complex superset of
Table 1: Test-set accuracy.

                                      perceptrons
domain      boosting   C4.5     multi-layer   ordinary   2nd-order   pruned
voting      91.5%      90.8%    89.2%*        89.2%*     92.2%       87.6%*
promoter    92.7       90.6     90.0*         88.7*      84.4*       88.2*
coding      72.9       71.6     69.8*         70.7*      62.6*       70.3*
Table 2: Hypothesis complexity (# weights).

                                          perceptrons
domain           boosting   multi-layer   ordinary   2nd-order   pruned
voting           12         651           30         450         12
promoters        41         2267          228        25764       59
protein coding   52         4270          60         1740        37
UC-Irvine one; and a data set in which the task is to recognize protein-coding
regions in DNA (Craven & Shavlik, 1993). We remove the physician-fee-freeze
feature from the voting data set to make the problem more difficult. We conduct
our experiments using a lO-fold cross validation methodology, except for in the
protein-coding domain. Because of certain domain-specific characteristics of this
data set, we use 4-fold cross-validation for our experiments with it.
Table 1 reports test-set accuracy for each method on all three domains. We measure the statistical significance of accuracy differences using a paired, two-tailed
t-test. The symbol '*' marks results in cases where another algorithm is less accurate than our boosting algorithm at the p ≤ 0.05 level of significance. No other
algorithm is significantly better than our boosting method in any of the domains.
From these results we conclude that (1) our algorithm exhibits good generalization
performance on number of interesting real-world problems, and (2) the generalization performance of our algorithm is not explained solely by its use of second-order
features, nor is it solely explained by the sparseness of the perceptrons it produces.
An interesting open question is whether perceptrons trained with both pruning and
second-order features are able to match the accuracy of our algorithm; we plan to
investigate this question in future work.
Table 2 reports the average number of weights for all of the perceptrons. For all
three problems, our algorithm produces perceptrons with fewer weights than the
MLP's, the ordinary perceptrons, and the perceptrons with second-order features.
The sizes of the OBD-pruned perceptrons and those produced by our algorithm
are comparable for all three domains. Recall, however, that for all three tasks,
the perceptrons learned by our algorithm had significantly better generalization
performance than their similar-sized OBD-pruned counterparts. We contend that
the sizes of the perceptrons produced by our algorithm are within the bounds of
what humans can readily understand. In the biological literature, for example, linear
discriminant functions are frequently used to communicate domain knowledge about
sequences of interest. These functions frequently involve more weights than the
perceptrons produced by our algorithm. We conclude, therefore, that our algorithm
produces hypotheses that are not only accurate, but also comprehensible.
We believe that the results on the protein-coding domain are especially interesting.
The input representation for this problem consists of 15 nominal features representing 15 consecutive bases in a DNA sequence. In the regions of DNA that encode
proteins (the positive examples in our task), non-overlapping triplets of consecu-
tive bases represent meaningful "words" called codons. In previous work (Craven
& Shavlik, 1993), it has been found that a feature set that explicitly represents
codons results in better generalization than a representation of just bases. However, we used the bases representation in our experiments in order to investigate the
ability of our algorithm to select the "right" second-order features. Interestingly,
nearly all of the second-order features included in our sparse perceptrons represent
conjunctions of bases that are in the same codon. This result suggests that our
algorithm is especially good at selecting relevant features from large feature sets.
5
Future Work
Our present algorithm has a number of limitations which we plan to address. Two
areas of current research are generalizing the algorithm for application to problems
with real-valued features and developing methods for automatically suggesting highorder features to be included in our algorithm's feature set.
Acknowledgements
Mark Craven was partially supported by ONR grant N00014-93-1-0998. Jeff Jackson
was partially supported by NSF grant CCR-9119319.
References
Aslam, J. A. & Decatur, S. E. (1993). General bounds on statistical query learning and
PAC learning with noise via hypothesis boosting. In Proc. of the 34th Annual
Symposium on Foundations of Computer Science, (pp. 282-291).
Bruck, J . (1990). Harmonic analysis of polynomial threshold functions. SIAM Journal
of Discrete Mathematics, 3(2):168-177.
Craven, M . W. & Shavlik, J. W. (1993) . Learning to represent codons: A challenge
problem for constructive induction. In Proc. of the 13th International Joint Conf. on
Artificial Intelligence, (pp. 1319-1324), Chambery, France.
Freund, Y. (1993). Data Filtering and Distribution Modeling Algorithms for Machine
Learning. PhD thesis, University of California at Santa Cruz.
Freund, Y. & Schapire, R. E. (1995). A decision-theoretic generalization of on-line learning and an application to boosting. In Proc. of the 2nd Annual European Conf. on
Computational Learning Theory.
Goldmann, M., Hastad, J., & Razborov, A. (1992). Majority gates vs. general weighted
threshold gates. In Proc. of the 7th IEEE Conf. on Structure in Complexity Theory.
Haussler, D. (1988). Quantifying inductive bias: AI learning algorithms and Valiant's
learning framework. Artificial Intelligence, (pp. 177-221).
Le Cun, Y., Denker, J. S., & Solla, S. A. (1989). Optimal brain damage. In Touretzky,
D., editor, Advances in Neural Information Processing Systems (volume 2).
Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann.
Rumelhart, D., Hinton, G., & Williams, R. (1986). Learning internal representations
by error propagation. In Rumelhart, D. & McClelland, J., editors, Parallel Distributed
Processing: Explorations in the microstructure of cognition. Volume 1. MIT Press.
Spackman, K. A. (1988). Learning categorical decision criteria. In Proc. of the 5th
International Conf. on Machine Learning, (pp. 36-46), Ann Arbor, MI.
Stormo, G. (1987). Identifying coding sequences. In Bishop, M. J. & Rawlings, C. J.,
editors, Nucleic Acid and Protein Sequence Analysis: A Practical Approach. IRL Press.
Valiant, L. G. (1984). A theory of the learnable. Comm. of the ACM, 27(11):1134-1142.
| 1076 |@word version:2 polynomial:5 seems:1 nd:3 open:1 termination:1 selecting:1 interestingly:1 current:1 yet:2 conjunctive:1 must:3 readily:1 cruz:1 remove:1 designed:1 v:1 intelligence:2 fewer:1 boosting:16 lor:1 symposium:1 prove:1 specialize:1 consists:1 introduce:1 pairwise:1 nor:1 frequently:2 multi:5 brain:2 codon:4 rawlings:1 automatically:1 paclearnable:1 begin:1 moreover:1 bounded:1 what:1 developed:2 finding:2 guarantee:4 every:1 subclass:1 voting:4 exactly:2 classifier:2 unit:2 grant:2 appear:1 producing:2 positive:4 modify:1 limit:1 era:1 solely:3 suggests:1 limited:2 range:1 practical:4 practice:4 backpropagation:2 dayton:1 area:1 empirical:3 significantly:2 convenient:1 word:2 confidence:1 protein:6 symbolic:2 cannot:1 map:1 charged:1 straightforward:1 williams:1 identifying:1 haussler:2 importantly:1 jackson:6 notion:1 razborov:1 target:5 nominal:2 us:2 hypothesis:30 pa:2 element:1 rumelhart:4 approximated:1 particularly:1 database:1 region:2 solla:1 comm:1 complexity:3 exhaustively:1 highorder:1 trained:4 weakly:2 reviewing:1 creates:1 efficiency:1 learner:3 basis:1 joint:1 represented:2 train:2 describe:1 query:1 artificial:2 exhaustive:2 widely:1 valued:3 say:2 otherwise:1 ability:2 think:1 noisy:3 final:3 ip:1 advantage:2 sequence:4 relevant:1 date:1 description:1 assessing:1 produce:9 develop:2 p2:2 strong:2 c:1 involves:1 indicate:2 implies:1 met:1 consecutively:1 exploration:1 human:3 backprop:1 microstructure:1 generalization:8 biological:1 sufficiently:1 deciding:1 presumably:1 cognition:1 stormo:2 claim:1 consecutive:1 smallest:1 proc:5 largest:1 wl:4 basing:1 create:1 weighted:5 mit:1 ck:8 conjunction:12 encode:1 ave:1 sense:2 stopping:1 nand:1 initially:1 hidden:1 france:1 interested:2 classification:2 denoted:1 development:1 plan:2 uc:2 field:1 construct:1 once:1 having:1 biology:1 represents:3 ven:1 nearly:1 future:2 report:2 few:5 randomly:1 recognize:1 consisting:1 jeffrey:1 mlp:6 interest:1 investigate:2 multiply:1 evaluation:1 undefined:1 accurate:3 tree:3 conduct:1 loosely:1 theoretical:2 minimal:1 instance:1 earlier:2 boolean:3 modeling:1 hastad:2 retains:1 ordinary:8 chooses:1 st:1 international:2 siam:1 invoke:1 physician:1 thesis:1 conf:4 booster:1 return:1 suggesting:1 potential:1 coding:6 includes:1 explicitly:2 depends:1 later:2 view:1 performed:1 analyze:1 aslam:2 parallel:1 forbes:1 oi:1 il:1 accuracy:4 kaufmann:1 who:1 efficiently:2 characteristic:1 acid:1 weak:14 produced:4 notoriously:1 explain:1 touretzky:1 definition:3 pp:4 proof:1 di:7 mi:1 stop:2 irvine:2 recall:2 knowledge:2 actually:1 appears:1 ta:1 adaboost:9 methodology:1 strongly:1 just:2 stage:11 implicit:1 until:2 correlation:3 irl:1 overlapping:1 propagation:1 indicated:1 perhaps:1 scientific:1 believe:1 facilitate:1 concept:3 true:2 y2:1 counterpart:1 inductive:1 nonzero:3 noted:1 criterion:3 theoretic:2 harmonic:1 consideration:1 exponentially:1 volume:2 he:1 approximates:2 significant:1 freeze:1 queried:1 ai:1 consistency:7 mathematics:2 had:1 operating:1 add:1 base:5 recent:2 certain:4 n00014:1 onr:1 morgan:1 somewhat:1 prune:1 determine:1 match:1 offer:1 cross:4 believed:1 dept:2 molecular:1 paired:1 prediction:1 variant:1 basic:1 s2in:2 sometimes:1 represent:3 crucial:1 unlike:1 comprehensibility:1 induced:1 lover:1 integer:4 call:4 easy:1 identically:1 superset:1 iterate:1 ifi:1 whether:1 introd:1 returned:1 santa:1 involve:2 informally:1 mcclelland:1 dna:3 schapire:4 nsf:1 sign:4 ccr:1 discrete:3 prd:2 express:1 terminology:1 threshold:4 drawn:1 wisc:1 decatur:2 
utilize:1 sum:1 convert:1 run:4 inverse:1 powerful:1 communicate:1 throughout:1 reasonable:2 decide:2 draw:1 ofm:1 decision:4 ble:1 fee:1 comparable:2 layer:6 hi:18 bound:2 guaranteed:1 fold:2 oracle:3 annual:3 occur:1 prv:1 generates:1 pruned:5 relatively:7 developing:1 craven:8 representable:1 describes:1 wi:1 cun:2 explained:4 pr:2 ln:1 agree:1 turn:2 needed:1 end:1 serf:1 junction:1 available:1 goldmann:3 apply:2 denker:1 appropriate:1 gate:2 comprehensible:5 denotes:1 include:2 ensure:1 quinlan:3 madison:2 maintaining:1 medicine:1 calculating:1 invokes:1 prof:1 especially:2 question:2 occurs:1 damage:2 exhibit:3 higherorder:1 simulated:1 majority:1 outer:1 discriminant:2 reason:1 induction:1 length:1 relationship:1 difficult:3 potentially:1 implementation:1 understandable:1 contend:2 negated:1 nucleic:1 finite:2 defining:1 hinton:1 precise:1 tive:1 pair:1 c4:6 california:1 learned:2 boost:1 address:1 able:3 ev:1 challenge:1 program:1 natural:1 bruck:2 representing:1 categorical:1 literature:1 discovery:1 acknowledgement:1 wisconsin:1 freund:8 expect:1 interesting:3 limitation:1 filtering:1 approximator:5 validation:6 foundation:1 sufficient:1 consistent:4 editor:3 pi:4 lo:1 course:2 obd:4 supported:2 hex:1 salience:1 bias:1 allow:1 understand:3 perceptron:21 shavlik:3 taking:1 sparse:27 distributed:1 regard:1 outermost:1 world:6 commonly:1 collection:1 made:3 far:1 polynomially:1 approximate:1 pruning:3 implicitly:1 preferred:1 tolerant:1 summing:1 pittsburgh:1 conclude:2 un:1 search:2 triplet:1 tailed:1 table:3 additionally:1 learn:4 reasonably:1 terminate:1 ignoring:2 complex:1 european:1 domain:17 did:1 pk:11 significance:2 promoter:3 s2:1 noise:2 west:1 representative:1 learns:1 ix:1 specific:2 bishop:1 pac:14 learnable:9 r2:1 x:3 symbol:1 evidence:1 concern:1 uction:1 valiant:3 kr:1 phd:1 sparseness:3 nk:2 generalizing:1 partially:2 acm:1 succeed:1 sized:1 quantifying:1 ann:1 jeff:1 included:2 specifically:2 except:1 lemma:5 called:2 arbor:1 succeeds:1 est:1 meaningful:1 perceptrons:42 select:3 internal:1 mark:3 constructive:1 evaluate:1 ex:8 |
87 | 1,077 | A Neural Network Autoassociator for
Induction Motor Failure Prediction
Thomas Petsche, Angelo Marcantonio, Christian Darken,
Stephen J. Hanson, Gary M. Kuhn and Iwan Santoso
[PETSCHE, ANGELO, DARKEN, JOSE, GMK, NIS]@SCR.SIEMENS.COM
Siemens Corporate Research, Inc.
755 College Road East
Princeton, NJ 08853
Abstract
We present results on the use of neural network based autoassociators
which act as novelty or anomaly detectors to detect imminent motor
failures. The autoassociator is trained to reconstruct spectra obtained
from the healthy motor. In laboratory tests, we have demonstrated that the
trained autoassociator has a small reconstruction error on measurements
recorded from healthy motors but a larger error on those recorded from a
motor with a fault. We have designed and built a motor monitoring system
using an autoassociator for anomaly detection and are in the process of
testing the system at three industrial and commercial sites.
1 Introduction
An unexpected breakdown of an electric induction motor can cause financial loss significantly in excess of the cost of the motor. For example, the breakdown of a motor in a
production line during a production run can cause the loss of work in progress as well as
loss of production time.
When a motor does fail, it is not uncommon to replace it with an oversized motor based on
the assumption that if a motor is not running at its design limit then it will survive longer.
While this is frequently effective, this leads to significantly lower operating efficiencies and
higher initial and operating costs.
The primary motivation behind this project is the observation that if a motor breakdown and
be predicted before the actual breakdown occurs, then the motor can be replaced in a more
orderly way, with minimal interruption of the process in which it is involved. The goal is
to produce a system that is conceptually similar to a fuel gauge on an automobile. When
the system detects conditions that indicate that the motor is approaching its end-of-life, the
operators are notified that a replacement is necessary in the near future.
2 Background
At present, motors in critical operations that are subject to mechanical failures - for example,
fire pump motors on US Navy vessels - are typically monitored by a human expert who
periodically listens to the vibrations of the motor and, based on experience, determines
whether the motor sounds healthy or sounds like a problem is developing. Since mechanical
probiems in motors typically lead to increased or changed vibrations, this technique can
werk well. Unfortunately, it depends on a competent and expensive expert.
In an attempt to automate motor monitoring, several vendors have "automated motor monitoring" equipment available. For mechanical failure monitoring, such systems typically rely
on several accelerometers to measure the vibration of the motor at various points and along
various axes. The systems then display information, primarily about the vibration spectrum,
to an operator who determines whether the motor is functioning properly. These systems
are expensive since they rely on several accelerometers, each of which is itself expensive,
as well as data collection hardware and a computer. Further, the systems require an expert
operator and frequently require that the motor be tested only when it is driving a known load.
Neither the human motor expert nor the existing motor monitoring systems provide an
affordable solution for continuous on-line mechanical failure monitoring. However, the
success of the human expert and existing vibration monitors does demonstrate that in fact,
there is sufficient information in the vibration of an electric induction motor to detect
imminent mechanical failures.
Siemens Energy and Automation has proposed a new product, the Siemens Advanced Motor
Master System II (SAMMS II), that will continuously monitor and protect an electric induction motor while it is operating on-line. Like the presently available SAMMS, the SAMMS
II is designed to provide protection against thermal and electrical overload and, in addition,
it will provide detection of insulation deterioration and mechanical fault monitoring.
In contrast to existing systems and techniques, the SAMMS II is designed to (1) require
no human expert to determine if a motor is developing problems; (2) be inexpensive; and
(3) provide continuous, on-line monitoring of the motor in normal operation.
The requirements for the SAMMS II, in particular the cost constraint, require that several
issues be resolved. First, in order to produce a low cost system, it is necessary to eliminate
the need for expensive accelerometers. Second, wiring should be limited to the motor control
center, i.e., it should not be necessary to run new signal wires from the motor control center
to the motor. Third, the SAMMS II is to provide continuous on-line monitoring, so the
system must adapt to or factor out the effect of changing loads on the motor. Finally since
the SAMMS II would not necessarily be bundled with a motor and so might be used to
control and monitor an arbitrary motor from an arbitrary manufacturer, the design can not
assume that a full description of the motor construction is available.
3 Approach
The first task was to determine how to eliminate the accelerometers. Based on work done
elsewhere (Schoen, Habetler & Bartheld, 1994), SE&A determined that it might be possible
to use measurements of the current on a single phase of the power supply to estimate the
vibration of the motor. This depends on the assumption that any vibration of the motor will
cause the rotor to move radially relative to the stator which will cause changes in the airgap
which, in tum, will induce changes in the current.
Experiments were done at the Georgia Institute of Technology to determine the feasibility
of this idea using the same sort of data collection system described later. Early experiments
indicated that, for a single motor driving a variety of loads, it is possible to distinguish
Table 1: Loads for motors #1 and #2.

Load type                                                 Load Magnitude
constant                                                  half and full rated
sinusoidal oscillation at rotating frequency              half and full rated
sinusoidal oscillation at twice the rotating frequency    full rated
switching load (50% duty cycle) at rotating frequency     full rated
sinusoidal oscillation at 28 Hz                           half and full rated
sinusoidal oscillation at 30 Hz                           full rated
switching load (50% duty cycle) at 30 Hz                  full rated
Table 2: Neural network classifier experiment.

Features (N)                48      63      64      110     320
Performance on motor #1     100%    100%    92%     100%    100%
Performance on motor #2     -       30%     25%     55%     37%
between a current spectrum obtained from the motor while it is healthy and another obtained
when the motor contains a fault. Moreover, it is also possible to automatically generate a
classifiers that correctly determine the presence or absence of a fault in the motor.
The first, obvious approach to this monitoring task would seem to be to build a classifier
that would be used to distinguish between a healthy motor and one that has developed a
fault that is likely to lead to a breakdown. Unfortunately, this approach does not work.
As described above, we have successfully built classifiers of various sorts using manual and
automatic techniques to distinguish between current spectra obtained from a motor when it
is healthy and those obtained when it contains a fault.
However, since the SAMMS II will be connected to a motor before it fails and will be asked
to identify a failure without ever seeing a labeled example of a failure from that motor, a
classifier can only be used if it can be trained on data collected from one or more motors
and then used to monitor the motor of interest. Unfortunately, experiments indicate that
this will not work.
One of these experiments is illustrated in table 2. Several feedforward neural network classifiers were trained using examples from a single motor under four conditions: (1) healthy,
(2) unbalanced, (3) containing a broken rotor bar and (4) containing a hole in the outer
bearing race. The ten different loads listed in table 1 were applied to the motor for each of
these conditions.
The networks contained N inputs (where N is given in table 2); 9 hidden units and 4
outputs. There were 40 training examples where each example is the average of 50 distinct
magnitude-scaled FFTs obtained from motor #1 from a single load/fault combination. The
test data for which the results are reported in the table consisted of 40 averaged FFTs from
motor #1 and 20 averaged FFTs (balanced and unbalanced only) from motor #2. The test
set for motor #1 is completely distinct from the training set.
In the case where N = 110, the FFT components were selected to include the frequencies
identified by the theory of motor physics as interesting for the three fault conditions and
exclude all other components. This led to an improvement over the other cases where a
single contiguous set of components was chosen, but the performance still degrades to about
random chance instead of 100%.
This experiment clearly illustrates that is is possible to distinguish between healthy and
faulty spectra obtained from the same motor. However, it also clearly illustrates that a
[Figure 1 block diagram: measurements, novelty detection, novelty decision, diagnosis, and adaptation algorithm modules.]
Figure 1: The basic form of an anomaly detection system.
classifier trained on one motor does not perform well on another motor since the error rates
increase immensely. Based on results such as these, we have concluded that it is not feasible
to build a single classifier that would be trained once and then placed in the field to monitor
a motor. Instead we are pursuing an alternative based on anomaly detection which adapts
a monitor to the particular motor for which it is responsible.
4
Anomaly detection
The basic notion of anomaly detection for monitoring is illustrated in figure 1. Statistical
anomaly detection centers around a model of the data that was seen while the motor was
operating normally. This model is produced by collecting spectra from the motor while
it is operating normally. Once trained, the system compares each new spectrum to the
model to determine how similar to or different from the training set it is. This similarity
is described by an "anomaly metric" which, in the simplest case, can be thresholded to
determine whether the motor is still normal or has developed a fault. Once the "anomaly
metric" has been generated, various statistical techniques can be used to determine if there
has been a change in the distribution of values.
5 A Neural Network-based Anomaly Detector
The core of the most successful monitoring system we have built to date is a neural network
designed to function as an autoassociator (Rumelhart, Hinton & Williams, 1986, called it
an "encoder"). We use a simple three layer feedforward network with N inputs, N outputs
and K < N hidden units. The input layer is fully connected to the hidden layer which is
fully connected to the output layer. Each unit in the hidden and output layers computes
x_i = σ(Σ_{j=0}^{M_i} w_{i,j} x_j), where x_i is the output of neuron i, which receives inputs
from M_i other neurons, and w_{i,j} is the weight on the connection from neuron j to neuron i.
The network is trained using the backpropagation algorithm to reconstruct the input vector on
the output units. Specifically, if x_i is one of n input vectors and x̂_i is the corresponding
output vector, the network is trained to minimize the sum of squared errors
E = Σ_{i=1}^{n} ||x_i − x̂_i||^2. Once training is complete, the anomaly metric is
m_i = ||x_i − x̂_i||^2.
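As a concrete illustration, the following sketch (hypothetical Python/NumPy; the layer sizes,
learning rate, and epoch count are placeholders, and bias weights are omitted) trains such a
three-layer autoassociator by gradient descent on the summed squared reconstruction error and
computes the anomaly metric for a new spectrum.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_autoassociator(X, n_hidden=20, lr=0.1, epochs=100, seed=0):
        # X: (n_examples, N) array of scaled spectra in [eps, 1-eps]
        rng = np.random.default_rng(seed)
        N = X.shape[1]
        W1 = rng.normal(scale=0.1, size=(N, n_hidden))   # input -> hidden
        W2 = rng.normal(scale=0.1, size=(n_hidden, N))   # hidden -> output
        for _ in range(epochs):
            H = sigmoid(X @ W1)            # hidden activations
            Y = sigmoid(H @ W2)            # reconstruction
            dY = (Y - X) * Y * (1 - Y)     # backprop through output sigmoid
            dH = (dY @ W2.T) * H * (1 - H)
            W2 -= lr * H.T @ dY / len(X)
            W1 -= lr * X.T @ dH / len(X)
        return W1, W2

    def anomaly_metric(x, W1, W2):
        # squared reconstruction error m_i = ||x - x_hat||^2
        x_hat = sigmoid(sigmoid(x @ W1) @ W2)
        return float(np.sum((x - x_hat) ** 2))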
6
Anomaly Detection Test
We have tested the effectiveness of the neural network autoassociator as an anomaly detector
on several motors. For all these tests, the autoassociator had 20 hidden units. The hidden
layer size was chosen after some experimentation and data analysis on motor #1, but no
attempt was made to tune the hidden layer size for motor #2 or motor #3.
Motor #1 was tested using the ten different loads listed in table 1 and four different
Figure 2: Probability of error as a function of threshold using individual FFTs on (a) motor #1 with 319 inputs and (b) motor #2 with 320 inputs.
health/fault conditions: healthy (balanced); unbalanced; broken rotor bar; and a hole in
the outer bearing race. Motor #2 was tested while driving the same ten loads, but for one
healthy and one faulty condition: healthy (balanced) and unbalanced.
For both motors #1 and #2, recordings of a single current phase were made as follows. For
each fault condition, a load was selected and applied and the motor was run and the current
signal recorded for five minutes. Then a new load was introduced and the motor was run
again. The load was constant during any five minute recording session.
Motor #3 was tested using thirteen different loads, but only two fault conditions: healthy
(balanced) and unbalanced. In this case, however, load changes occurred at random times.
We preprocessed this data to to identify where the load changes occurred to generate the
training set and the healthy motor test sets.
6.1
Preprocessing
Recordings were made on a digital audio tape (DAT). The current on a single phase was
measured with a current transformer, amplified, notch filtered to reduce the magnitude of
the 60Hz component, amplified again and then applied as input to the DAT. The notch filter
was a switched capacitor filter which reduced the magnitude at 60Hz by about 30dB.
The time series obtained from the DAT was processed to reduce the sampling rate; the data
were then divided into non-overlapping blocks and the FFT of each block was computed. A
subset of the FFT magnitude coefficients was selected and, for each FFT, independent of
any other FFT, the components were linearly scaled and translated to the interval [ε, 1 − ε]
(typically ε = 0.02). That is, for each FFT consisting of coefficients f_0, ..., f_{n−1},
we selected a subset, F, of the components (the same for all FFTs) and computed
a = (1 − 2ε)(max_{i∈F} f_i − min_{i∈F} f_i)^-1 and b = min_{i∈F} f_i. Then the input vector, x,
to the network is x_j = a(f_{i_j} − b) + ε where, for all j < k: i_j, i_k ∈ F and i_j < i_k.
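A sketch of this preprocessing step (hypothetical Python/NumPy; the block length, the index
subset F, and the assumption that averaging of consecutive blocks happens before scaling are
all placeholders, not details taken from the paper):

    import numpy as np

    def spectrum_features(block, F, eps=0.02):
        # block: 1-D array of current samples for one non-overlapping window
        # F: sorted array of FFT bin indices retained as network inputs
        mags = np.abs(np.fft.rfft(block))            # FFT magnitude coefficients
        f = mags[F]
        a = (1.0 - 2.0 * eps) / (f.max() - f.min())  # per-FFT scale
        b = f.min()
        return a * (f - b) + eps                     # values lie in [eps, 1-eps]

    def averaged_features(blocks, F, eps=0.02):
        # average the magnitude spectra of several consecutive blocks, then scale;
        # the order of averaging and scaling is our assumption
        mags = np.mean([np.abs(np.fft.rfft(b))[F] for b in blocks], axis=0)
        a = (1.0 - 2.0 * eps) / (mags.max() - mags.min())
        return a * (mags - mags.min()) + eps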
6.2 Experimental Results
In figure 2a, we illustrate the results of a typical anomaly detection experiment on motor #1
using an autoassociator with 319 inputs and 20 hidden units. This graph illustrates the
performance (false alarm and miss rates) of a very simple anomaly detection system which
thresholds the anomaly metric to determine if the motor is good or bad. The decreasing
curve that starts at threshold = 0, P(error) = 1 is the false alarm rate as a function of the
threshold. Each increasing curve is the miss rate for a particular fault type.
In figure 2b we illustrate the performance of an autoassociator on motor #2 using an
Figure 3: Probability of error for motor #3 using individual FFTs and 319 inputs.
Figure 4: Probability of error using averaged FFTs for (a) motor #1 and 319 inputs
(b) motor #2 and 320 inputs.
autoassociator with 320 inputs and 20 hidden units. Figure 3 shows our results on motor #3
using an autoassociator with 319 inputs.
We have found significant performance improvements by averaging several consecutive
FFTs. In figure 4 we show the results for motors #1 and #2 when we averaged 11 FFTs to
produce the input features. Compare these curves to those in figure 2. In particular, notice
that the probability of error is much lower for the averaged FFTs when the good motor
curve crosses any one of the faulty motor curves.
7
Candor System Design
Based on our experiments with autoassociators, we designed a prototype mechanical motor
condition monitoring system. The functional system architecture is shown in figure 5. In
order to control costs, the system is implemented on a PC. The system is designed so that
each PC can monitor up to 128 motors using one 16-bit analog to digital converter. The
signals are collected, filtered and multiplexed on custom external signal processing cards.
Each card supports up to eight motors (with up to 16 cards per PC).
The system records current measurements from one motor at a time. For each motor,
measurements are collected, four FFTs are computed on non-overlapping time series, and
the four FFTs are averaged to produce a vector that is input to the neural network. The system
reports that a motor is bad only if more than five of the last ten averaged FFTs produced an
anomaly metric more than five standard deviations greater than the mean metric computed
on the training set. Otherwise the motor is reported to be normal. In addition to monitoring
the motors, the prototype systems are designed to record all measurements on tape to support
Figure 5: Functional architecture of Candor.
future experiments with alternative algorithms and tuning to improve performance.
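The bad/normal reporting rule described above reduces to a few lines. In this sketch
(hypothetical Python, not the deployed implementation), mu and sigma are the mean and
standard deviation of the anomaly metric computed on the training set:

    from collections import deque

    def make_motor_monitor(mu, sigma, window=10, needed=6, n_sigmas=5.0):
        # "more than five of the last ten" exceed mean + 5*sigma  ->  needed = 6
        recent = deque(maxlen=window)

        def update(metric):
            recent.append(metric > mu + n_sigmas * sigma)
            return "BAD" if sum(recent) >= needed else "GOOD"

        return update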
To date, three monitoring systems have been installed: in an oil refinery, in a testing
laboratory and on an office building ventilation system. The system has correctly detected
the only failure it has seen so far: when a filter on the inlet to a water circulation pump
became clogged the spectrum changed so much that the average daily novelty metric jumped
from less than one standard deviation above the training set average to more than twenty
standard deviations. We hope to have further test results in a year or so.
8
Related work
Gluck and Myers (1993) proposed a model of learning in the hippocampus based in part on
an autoassociator which is used to detect novel stimuli and to compress the representation
of the stimuli. This model has accurately predicted many of the classical conditioning
behaviors that have been observed in normal and hippocampal-damaged animals. Based on
this work, Japkowicz, Myers and Gluck (1995) independently derived an autoassociatorbased novelty detector for machine learning tasks similar to that used in our system.
Together with Gluck, we have tested an autoassociator based anomaly detector on helicopter
gearbox failures for the US Navy. In this case, the autoassociator is given 512 inputs
consisting of 64 vibration based features from each of 8 accelerometers mounted at different
locations on the gearbox. In a blind test, the autoassociator was able to correctly distinguish
between feature vectors taken from a damaged gearbox and other feature vectors taken
from normal gearboxes, all recorded in flight. Our anomaly detector will be included in
test flights of a gearbox monitoring system later this year.
References
Gluck, M. A. & Myers, C. E. (1993). Hippocampal mediation of stimulus representation:
A compuational theory. Hippocampus, 3(4), 491-561.
Japkowicz, N., Myers, c., & Gluck, M. A. (1995). A novelty detection approach to
classification. In Proceedings of the Fourteenth International Joint Conference on
Artificial Intelligence.
Rumelhart, D ., Hinton, G., & Williams, R. (1986). Learning internal representations by
error propagation. In D . Rumelhart & J. McClelland (Eds.), Parallel Distributed
Processing (pp. 318-362). MIT Press.
Schoen, R., Habetler, T., & Bartheld, R. (1994) . Motor bearing damage detection using
stator current monitoring. In Proceedings of the IEEE IAS Annual Meeting.
| 1077 |@word autoassociator:17 schoen:2 hippocampus:2 initial:1 contains:2 series:2 existing:3 current:10 com:1 protection:1 must:1 periodically:1 christian:1 motor:104 designed:7 half:3 selected:4 intelligence:1 core:1 record:2 filtered:2 location:1 five:4 along:1 supply:1 ik:2 behavior:1 frequently:2 nor:1 detects:1 decreasing:1 automatically:1 actual:1 increasing:1 project:1 moreover:1 fuel:1 developed:2 nj:1 collecting:1 act:1 classifier:8 scaled:2 control:4 unit:7 normally:2 before:2 limit:1 installed:1 switching:2 might:2 twice:1 limited:1 averaged:7 ventilation:1 responsible:1 testing:2 block:2 insulation:1 backpropagation:1 significantly:2 imminent:2 road:1 induce:1 seeing:1 operator:3 faulty:3 transformer:1 demonstrated:1 center:3 williams:2 independently:1 financial:1 notion:1 construction:1 commercial:1 damaged:2 anomaly:19 rumelhart:3 expensive:4 breakdown:5 labeled:1 observed:1 electrical:1 cycle:2 connected:3 rotor:3 balanced:6 broken:2 asked:1 trained:9 efficiency:1 completely:1 translated:1 resolved:1 joint:1 various:4 distinct:2 effective:1 detected:1 artificial:1 navy:2 larger:1 reconstruct:2 otherwise:1 encoder:1 itself:1 myers:4 oversized:1 reconstruction:1 product:1 helicopter:1 adaptation:1 date:2 adapts:1 amplified:2 description:1 requirement:1 produce:4 illustrate:2 measured:1 ij:2 progress:1 dividing:1 implemented:1 predicted:2 indicate:2 kuhn:4 fij:1 filter:3 human:4 fff:3 require:4 marcantonio:4 immensely:1 around:1 normal:5 automate:1 driving:3 jumped:1 early:1 consecutive:1 angelo:2 healthy:13 vibration:9 gauge:1 successfully:1 hope:1 mit:1 clearly:2 r_:1 office:1 ax:1 derived:1 bundled:1 properly:1 improvement:2 industrial:1 contrast:1 equipment:1 detect:3 typically:4 eliminate:2 hidden:9 japkowicz:2 issue:1 classification:1 animal:1 field:1 once:4 sampling:1 survive:1 future:2 report:1 stimulus:3 primarily:1 individual:2 replaced:1 phase:3 consisting:2 replacement:1 fire:1 attempt:2 detection:13 interest:1 custom:1 uncommon:1 pc:3 behind:1 necessary:3 experience:1 daily:1 rotating:3 minimal:1 increased:1 contiguous:1 cost:5 oflearning:1 deviation:3 subset:2 pump:2 successful:1 autoassociators:2 reported:2 international:1 physic:1 together:1 continuously:1 squared:1 again:2 recorded:4 containing:2 external:1 expert:6 exclude:1 sinusoidal:4 accelerometer:5 automation:1 coefficient:2 inc:1 race:2 depends:2 blind:1 later:2 start:1 sort:2 parallel:1 minimize:1 il:1 ni:1 circulation:1 became:1 who:2 identify:2 conceptually:1 accurately:1 produced:2 monitoring:17 detector:6 manual:1 ed:1 inexpensive:1 failure:13 energy:1 against:1 frequency:4 involved:1 pp:1 obvious:1 mi:2 monitored:1 jxj:1 radially:1 tum:1 higher:1 santoso:4 done:2 inlet:1 flight:2 receives:1 overlapping:2 propagation:1 indicated:1 building:1 effect:1 oil:1 consisted:1 functioning:1 laboratory:2 illustrated:2 wiring:1 during:2 hippocampal:2 complete:1 demonstrate:1 tn:1 scr:1 novel:1 ilxi:1 functional:2 conditioning:1 analog:1 occurred:2 measurement:6 significant:1 automatic:1 tuning:1 session:1 had:1 longer:1 operating:5 similarity:1 success:1 fault:13 life:1 meeting:1 seen:2 greater:1 novelty:6 determine:8 signal:4 stephen:1 ii:9 full:8 corporate:1 sound:2 adapt:1 cross:1 feasibility:1 prediction:4 basic:2 fffs:3 metric:7 affordable:1 deterioration:1 background:1 addition:2 gmk:1 xdl:1 interval:1 concluded:1 subject:1 hz:5 recording:3 db:1 capacitor:1 seem:1 effectiveness:1 near:1 presence:1 feedforward:2 automated:1 variety:1 fft:3 xj:1 architecture:2 approaching:1 identified:1 converter:1 
reduce:2 idea:1 prototype:2 whether:3 duty:2 notch:2 cause:4 tape:2 se:1 listed:2 tune:1 ten:4 iixi:1 hardware:1 processed:1 mcclelland:1 simplest:1 reduced:1 generate:2 notice:1 correctly:3 per:1 diagnosis:1 four:4 threshold:9 monitor:7 changing:1 preprocessed:1 neither:1 thresholded:1 graph:1 sum:1 year:2 run:4 jose:1 fourteenth:1 master:1 pursuing:1 oscillation:4 decision:1 bit:1 layer:7 distinguish:5 display:1 annual:1 constraint:1 anyone:1 developing:2 combination:1 wi:2 presently:1 taken:2 vendor:1 fail:1 end:1 available:3 operation:2 experimentation:1 manufacturer:1 eight:1 petsche:5 alternative:2 thomas:1 compress:1 running:1 include:1 build:2 classical:1 move:1 occurs:1 degrades:1 primary:1 damage:1 interruption:1 card:3 outer:2 collected:3 water:1 induction:7 unfortunately:3 thirteen:1 design:3 twenty:1 perform:1 observation:1 darken:5 wire:1 neuron:4 thermal:1 hinton:2 ever:1 arbitrary:2 princeton:1 introduced:1 mechanical:7 connection:1 hanson:4 protect:1 able:1 bar:2 built:3 power:1 critical:1 rely:2 advanced:1 improve:1 technology:1 rated:7 health:1 relative:1 mediation:1 loss:3 fully:2 interesting:1 mounted:1 digital:2 switched:1 sufficient:1 production:3 elsewhere:1 changed:2 placed:1 last:1 institute:1 distributed:1 curve:5 computes:1 collection:2 made:3 preprocessing:1 far:1 excess:1 orderly:1 xi:4 spectrum:8 continuous:3 table:7 vessel:1 automobile:1 listens:1 necessarily:1 electric:3 bearing:3 linearly:1 motivation:1 oat:3 alarm:2 competent:1 site:1 georgia:1 fails:1 xl:5 third:1 ffts:9 minute:2 load:18 bad:3 false:2 ci:6 magnitude:5 illustrates:3 hole:2 gluck:5 led:1 likely:1 unexpected:1 contained:1 gary:1 determines:2 chance:1 goal:1 replace:1 absence:1 feasible:1 change:5 included:1 determined:1 specifically:1 xi11:1 typical:1 averaging:1 miss:2 called:1 experimental:1 la:1 siemens:4 east:1 college:1 internal:1 support:2 unbalanced:7 overload:1 multiplexed:1 audio:1 tested:6 |
88 | 1,078 | Modeling Interactions of the Rat's Place and
Head Direction Systems
A. David Redish and David S. Touretzky
Computer Science Department & Center for the Neural Basis of Cognition
Carnegie Mellon University, Pittsburgh PA 15213-3891
Internet: {dredish, dst}@cs.cmu.edu
Abstract
We have developed a computational theory of rodent navigation that
includes analogs of the place cell system, the head direction system, and
path integration. In this paper we present simulation results showing how
interactions between the place and head direction systems can account for
recent observations about hippocampal place cell responses to doubling
and/or rotation of cue cards in a cylindrical arena (Sharp et al., 1990).
Rodents have multiple internal representations of their relationship to their environment.
They have, for example, a representation of their location (place cells in the hippocampal
formation, see Muller et al., 1991), and a location-independent representation of their
heading (head direction cells in the postsubiculum and the anterior thalamic nuclei, see
Taube et al., 1990; Taube, 1995).
If these representations are to be used for navigation, they must be aligned consistently
whenever the animal reenters a familiar environment. This process was examined in a set
of experiments by Sharp et al. (1990).
1
The Sharp et al., 1990 experiment
Rats spent multiple sessions finding food scattered randomly on the floor of a black cylindrical arena with a white cue card along the wall subtending 90° of arc. The animals were
not disoriented before entering the arena, and they always entered at the same location: the
northwest corner. See Figure 3a. Hippocampal place fields were mapped by single-cell
recording. A variety of probe trials were then introduced. When an identical second cue
[Figure 1 block diagram: head direction, path integrator (x_p, y_p), local view, place code, and goal memory modules.]
Figure 1: Organization of the rodent navigation model.
card was added opposite the first (Figure 3c), most place fields did not double.¹ Instead,
the cells continued to fire at their original locations. However, if the rat was introduced into
the double-card environment at the southeast corner (Figure 3d), the place fields rotated
by 1800 ? But rotation did not occur in single-card probe trials with a southeast entry point
(Figure 3b). When tested with cue cards rotated by ?30?, Sharp et al. observed that place
field locations were controlled by an interaction of the choice of entry point with the cue
card positions (Figure 3f.)
2 The CRAWL model
In earlier work (Wan et al., 1994a; Wan et al., 1994b; Redish and Touretzky, 1996) we
described a model of rodent navigation that includes analogs of both place cells and the head
direction system. This model also includes a local view module representing egocentric
spatial information about landmarks, and a separate metric representation of location which
serves as a substrate for path integration. The existence of a path integration faculty in
rodents is strongly supported by behavioral data; see Maurer and Seguinot (1995) for
a discussion. Hypotheses about the underlying neural mechanisms are presently being
explored by several researchers, including us.
The structure of our model is shown in Figure 1. Visual inputs are represented as triples of
form (T_i, r_i, θ_i), each denoting the type, distance, and egocentric bearing of a landmark. The
experiments reported here used two point-type landmarks representing the left and right
edges of the cue card, and one surface-type landmark representing the arena wall. For the
latter, r_i and θ_i define the normal vector between the rat and the surface. In the local view
module, egocentric bearings θ_i are converted to allocentric form φ_i by adding the current
value represented in the head direction system, denoted ψ_h. The visual angle α_{ij} between
pairs of landmarks is also part of the local view, and can be used to help localize the animal
when its head direction is unknown. See Figure 2.
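To make the local-view bookkeeping concrete, the short sketch below converts an egocentric landmark bearing to allocentric form using the current head-direction estimate, and computes the visual angle between two landmarks. This is only a minimal illustration of the quantities described above; the function names and the wrap-to-[0, 2π) convention are our own, not the authors' code.

```python
import math

def allocentric_bearing(theta_i, psi_h):
    """Convert an egocentric landmark bearing theta_i (radians) to an allocentric
    bearing phi_i by adding the head-direction estimate psi_h, wrapped to [0, 2*pi)."""
    return (theta_i + psi_h) % (2.0 * math.pi)

def visual_angle(phi_i, phi_j):
    """Smallest angle between two landmark bearings."""
    d = abs(phi_i - phi_j) % (2.0 * math.pi)
    return min(d, 2.0 * math.pi - d)
```

Because the visual angle is a difference of bearings, any common error in ψ_h cancels, which is why it can help localize the animal when its head direction is unknown.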
¹Five of the 18 cells recorded by Sharp et al. changed their place fields over the various recording
sessions. Our model does not reproduce these effects, since it does not address changes in place cell
tuning. Such changes could occur due to variations in the animal's mental state from one trial to the
next, or as a result of learning across trials.
Figure 2: Spatial variables used in tuning a place cell to two landmarks i and j when the
animal is at path integrator coordinates (x_p, y_p).
Our simulated place units are radial basis functions tuned to combinations of individual
landmark bearings and distances, visual angles between landmark pairs, and path integrator
coordinates. Place units can be driven by visual input alone when the animal is trying
to localize itself upon initial entry at a random spot in the environment, or by the path
integrator alone when navigating in the dark. But normally they are driven by both
sources simultaneously. A key role of the place system is to maintain associations between
the two representations, so that either can be reconstructed from the other. The place
system also maintains a record of allocentric bearings of landmarks when viewed from the
current position; this enables the local view module to compare perceived with remembered
landmark bearings, so that drift in the head direction system can be detected and corrected.
In computer simulations using a single parameter set, the model reproduces a variety
of behavioral and neurophysiological results including control of place fields by visual
landmarks, persistence of place fields in the dark, and place fields drifting in synchrony
with drift in the head direction system. Its predictions for open-field landmark-based
navigation behavior match many of the experimental results of Collett et al. (1986) for
gerbils.
2.1 Entering a familiar environment
Upon entering a familiar environment, the model's four spatial representations (local view,
head direction, place code, and path integrator coordinates) must be aligned with the
current sensory input and with each other. Note that local view information is completely
determined given the visual input and head direction, and place cell activity is completely
determined given the local view and path integrator representations. Thus, the alignment
process manipulates just two variables: head direction and path integrator coordinates.
When the animal enters the environment with initial estimates for them, the alignment
process can produce four possible outcomes: (1) Retain the initial values of both variables,
(2) Reset the head direction, (3) Reset the path integrator, or (4) Reset both head direction
and the path integrator.
2.2 Prioritizing the outcomes
When the animal was placed at the northwest entry point and there were two cue cards
(Figure 3c), we note that the orientation of the wall segment adjacent to the place field
is identical with that in the training case. This suggests that the animal's head direction
did not change. The spatial relationship between the entry point and place field was also
unchanged: notice that the distance from the entry point to the center of the field is the
same as in Figure 3a. Therefore, we conclude that the initially estimated path integrator
coordinates were retained. Alternatively, the animal could have changed both its head
direction (by 180°) and its path integrator coordinates (to those of the southeast corner) and
produced consistent results, but to the experimenter the place field would appear to have
flipped to the other card. Because no flip was observed, the first outcome must have priority
over the fourth.
In panel d, where the place field has flipped to the northwest corner, the orientation of the
segment of wall adjacent to the field has changed, but the spatial relationship between the
entry point and field center has not. Resetting the path integrator and not the head direction
would also give a solution consistent with this local view, but with the place field unflipped
(as in panel b). We conclude that the second outcome (reset head direction) must have
priority over the third (reset the path integrator).
The third and fourth outcomes are demonstrated in Figures 3b and 3f. In panel b, the
orientation of the wall adjacent to the place field is unchanged from panel a, but the spatial
relationship between the entry point and the place field center is different, as evidenced by
the fact that the distance between them is much reduced. This is outcome 3. In panel f,
both variables have changed (outcome 4).
Finally, the fact that place fields are stable over an entire session, even when there are
multiple cue cards (and therefore multiple consistent pairings of head directions and path
integrator coordinates) implies that animals do not reset their head direction or path integrator in visually ambiguous environments as long as the current values are reasonably
consistent with the local view. We therefore assume that outcome 1 is preferred over the
others.
This analysis establishes a partial ordering over the four outcomes: 1 is preferred over 4 by
Figure 3c, and over the others by the stability of place fields, and outcome 2 is preferred
over 3 by Figure 3d. This leaves open the question of whether outcome 3 or 4 has priority
over the other. In this experiment, after resetting the path integrator it's always safe for the
animal to attempt to reset its head direction. If the head direction does not change by more
than a few degrees, as in panel b, we observe outcome 3; if it does change substantially, as
in panel f, we observe outcome 4.
2.3 Consistency
The viability of an outcome is a function of the consistency between the local view and
path integrator representations. The place system maintains the association between the
two representations and mediates the comparison between them.
The activity A(u) of a place unit is the product of a local view term LV(u) and a path
integrator term C(u). LV(u) is in turn a product of five Gaussians: two tuned to bearings
and two to distances (for the same pair of landmarks), and one tuned to the retinal angle
between a pair of landmarks. C(u) is a Gaussian tuned to the path integrator coordinates of
the center of the place field.
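A minimal sketch of this activation rule, assuming the five local-view quantities and the path-integrator coordinates are already available as numbers, is given below. The tuning widths and the dictionary layout for a unit's preferred values are placeholders of our own; only the product form A(u) = LV(u)·C(u) comes from the text.

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def place_unit_activity(lv_obs, pi_obs, unit):
    """A(u) = LV(u) * C(u) for one place unit.

    lv_obs: observed (bearing_i, bearing_j, dist_i, dist_j, visual_angle_ij)
    pi_obs: observed path-integrator coordinates (x, y)
    unit:   preferred values ("lv_mu", "center") and widths ("lv_sigma", "sigma_pi");
            all tuning widths here are illustrative placeholders.
    """
    lv = 1.0
    for observed, preferred, sigma in zip(lv_obs, unit["lv_mu"], unit["lv_sigma"]):
        lv *= gaussian(observed, preferred, sigma)       # five local-view Gaussians
    c = (gaussian(pi_obs[0], unit["center"][0], unit["sigma_pi"]) *
         gaussian(pi_obs[1], unit["center"][1], unit["sigma_pi"]))  # path-integrator tuning
    return lv * c
```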
If the two representations agree, then the place units activated by path integrator input will
be the same as those activated by the local view module, so the product A(u) computed
by those units will be significantly greater than zero. The consistency κ of the association
between path integrator and local view representations is given by: κ = Σ_u A(u) / Σ_u C(u).
Because A(u) < C(u) for all place units, κ ranges between 0 and 1. When the current local
view is compatible with that predicted by the current path integrator coordinates, κ will be
high; when the two are not compatible, κ will be low.
Earlier we showed that the navigation system should choose the highest priority viable
outcome. If the consistency of an outcome is more than κ* better than all higher-priority
outcomes, that outcome is a viable choice and higher-priority ones are not. κ* is an
empirically derived constant that we have set equal to 0.04.
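The consistency measure and the priority-override margin can be written in a few lines. The sketch below assumes the per-unit activities A(u) and path-integrator terms C(u) have already been computed (e.g., with a routine like the one above); the value 0.04 for κ* is the constant quoted in the text, and the pairwise comparison is one simple reading of the rule.

```python
KAPPA_STAR = 0.04  # empirically derived constant quoted in the text

def consistency(A, C):
    """kappa = sum_u A(u) / sum_u C(u); lies in [0, 1] since A(u) <= C(u)."""
    return sum(A) / sum(C)

def prefer_lower_priority(kappa_high, kappa_low, kappa_star=KAPPA_STAR):
    """A lower-priority outcome is viable only when it is more than kappa_star
    more consistent than the higher-priority alternative."""
    return kappa_low > kappa_high + kappa_star
```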
3 Discussion
Our results match all of the cases already discussed. (See Figure 3, panels a through d
as well as f and h.) Sharp et al. (1990) did not actually test the rotated cue cards with a
northwest entry point, so our result in panel e is a prediction.
When the animals entered from the northwest, but only one cue card was available at 180°,
Sharp et al. report that the place field did not rotate. In our model the place field does
rotate, as a result of outcome 4. This discrepancy can be explained by the fact that this
particular manipulation was the last one in the sequence done by Sharp et al. McNaughton
et al. (1994) and Knierim et al. (1995) have shown that if rats experience the cue card
moving over a number of sessions, they eventually come to ignore it and it loses control
over place fields . When we tested our model without a cue card (equivalent to a card being
present but ignored), the resulting place field was more diffuse than normal but showed no
rotation; see Figure 3g. We thus predict that if this experiment had been done before the
other manipulations rather than after, the place field would have followed the cue card.
In the Sharp et al. experiment, the animals were always placed in the environment at the
same location during training. Therefore, they could reliably estimate their initial path
integrator coordinates. They also had a reliable head direction estimate because they were
not disoriented. We predict that were the rats trained with a variety of entry points instead
of just one, using an environment with a single cue card at 0° (the training environment
used by Sharp et al.), and then tested with two cue cards at 0° and 180°, the place field
would not rotate no matter what entry point was used. This is because when trained with a
variable entry point, the animal would not learn to anticipate its path integrator coordinates
upon entry; a path integrator reset would have to be done every time in order to establish the
animal's coordinates. The reset mechanism uses allocentric bearing information derived
from the head direction estimate, and in this task the resulting path integrator coordinates
will be consistent with the initial head direction estimate. Hence, outcome 3 will always
prevail.
If the animal is disoriented, however, then both the path integrator and the head direction
system must be reset upon entry (because consistency will be low with a faulty head
direction), and the animal must choose one cue card or the other to match against its
memory. So with disorientation and a variable entry point, the place field will be controlled
by one or the other cue card with a 50/50 probability. This was found to be true in a related
behavioral experiment by Cheng (1986).
Our model shows how interactions between the place and head direction systems handle the
various combinations of entry point, number of cue cards, and amount of cue card rotation .
It predicts that head direction reset will be observed in certain tasks and not in others. In
experiments such as the single cue card task with an entry in the southeast, it predicts the
place code will shift from an initial value corresponding to the northwest entry point to the
value for the southeast entry point, but the head direction will not change. This could be
tested by recording simultaneously from place cells and head direction cells.
References
Cheng, K. (1986). A purely geometric module in the rat's spatial representation. Cognition, 23: 149-178.
Collett, T., Cartwright, B. A., and Smith, B. A. (1986). Landmark learning and visuospatial memories in gerbils. Journal of Comparative Physiology A, 158:835-851.
Knierim, J. J., Kudrimoti, H. S., and McNaughton, B. L. (1995). Place cells, head
direction cells, and the learning of landmark stability. Journal of Neuroscience, 15:1648-1659.
Maurer, R. and Seguinot, V. (1995). What is modelling for? A critical review of the
models of path integration. Journal of Theoretical Biology, 175:457-475.
McNaughton, B. L., Mizumori, S. J. Y., Barnes, C. A., Leonard, B. J., Marquis, M.,
and Green, E. J. (1994). Cortical representation of motion during unrestrained spatial
navigation in the rat. Cerebral Cortex, 4(1):27-39.
Muller, R. U., Kubie, J. L., Bostock, E. M., Taube, J. S., and Quirk, G. J. (1991).
Spatial firing correlates of neurons in the hippocampal formation of freely moving rats.
In Paillard, J., editor, Brain and Space, chapter 17, pages 296-333. Oxford University
Press, New York.
Redish, A. D. and Touretzky, D. S. (1996). Navigating with landmarks: Computing
goal locations from place codes. In Ikeuchi, K. and Veloso, M., editors, Symbolic Visual
Learning. Oxford University Press. In press.
Sharp, P. E., Kubie, J. L., and Muller, R. U. (1990). Firing properties of hippocampal
neurons in a visually symmetrical environment: Contributions of multiple sensory cues
and mnemonic processes. Journal of Neuroscience, 10(9):3093-3105.
Taube, J. S. (1995). Head direction cells recorded in the anterior thalamic nuclei of freely
moving rats. Journal of Neuroscience, 15(1): 1953-1971.
Taube, J. S., Muller, R. U., and Ranck, Jr., J. B. (1990). Head direction cells recorded
from the postsubiculum in freely moving rats. I. Description and quantitative analysis.
Journal of Neuroscience, 10:420-435.
Wan, H. S., Touretzky, D. S., and Redish, A. D. (1994a). Computing goal locations
from place codes. In Proceedings of the 16th annual conference of the Cognitive Science
Society, pages 922-927. Lawrence Erlbaum Associates, Hillsdale, NJ.
Wan, H. S., Touretzky, D. S., and Redish, A. D. (1994b). Towards a computational
theory of rat navigation. In Mozer, M., Smolensky, P., Touretzky, D., Elman, J., and
Weigend, A., editors, Proceedings of the 1993 Connectionist Models Summer School,
pages 11-19. Lawrence Erlbaum Associates, Hillsdale, NJ.
(a) 1 cue card at 0° (East); entry in Northwest corner; angle of rotation (Sharp et al.) = 2.7°; precession of HD system = 0°
(b) 1 cue card at 0°; entry in Southeast corner; angle of rotation (Sharp et al.) = -6.0°; precession of HD system = 2°
(c) 2 cue cards at 0° (East) & 180° (West); entry in Northwest corner; angle of rotation (Sharp et al.) = -2.3°; precession of HD system = 0°
(d) 2 cue cards at 0° & 180°; entry in Southeast corner; angle of rotation (Sharp et al.) = 182.5°; precession of HD system = 178°
(e) 2 cue cards at 330° & 150°; entry in Northwest corner; not done by Sharp et al.; precession of HD system = 331°
(f) 2 cue cards at 330° & 150°; entry in Southeast corner; angle of rotation (Sharp et al.) = 158.3°; precession of HD system = 151°
(g) 1 cue card at 180° (West); entry in Northwest corner; angle of rotation (Sharp et al.) = -5.5°; precession of HD system = 0°
(h) 1 cue card at 180°; entry in Southeast corner; angle of rotation (Sharp et al.) = 182.2°; precession of HD system = 179°
Figure 3: Computer simulations of the Sharp et al. (1990) experiment showing that place
fields are controlled by both cue cards (thick arcs) and entry point (arrowhead). "Angle of
rotation" is the angle at which the correlation between the probe and training case place
fields is maximal. Because head direction and place code are tightly coupled in our model,
precession of HD is an equivalent measure in our model.
| 1078 |@word cylindrical:2 trial:4 faculty:1 open:2 simulation:3 smolen:1 initial:6 denoting:1 tuned:4 ranck:1 current:6 anterior:2 must:6 enables:1 alone:2 cue:30 leaf:1 smith:1 record:1 mental:1 location:9 disoriented:3 five:2 along:1 viable:2 pairing:1 behavioral:3 behavior:1 elman:1 integrator:25 brain:1 food:1 panel:9 what:2 substantially:1 developed:1 finding:1 nj:1 quantitative:1 every:1 sky:1 ti:1 ofa:1 control:2 unit:6 normally:1 appear:1 before:2 local:15 oxford:2 marquis:1 path:31 firing:2 black:1 examined:1 suggests:1 range:1 kubie:2 spot:1 significantly:1 physiology:1 persistence:1 radial:1 symbolic:1 faulty:1 equivalent:2 demonstrated:1 center:5 manipulates:1 continued:1 hd:9 stability:2 handle:1 variation:1 coordinate:13 mcnaughton:3 substrate:1 us:1 hypothesis:1 pa:1 associate:2 predicts:2 observed:3 role:1 module:5 enters:1 ordering:1 highest:1 mozer:1 environment:12 trained:2 segment:2 purely:1 upon:4 basis:2 completely:2 comer:10 represented:2 various:2 chapter:1 detected:1 mizumori:1 formation:2 outcome:20 itself:1 sequence:1 interaction:7 product:3 reset:10 maximal:1 aligned:2 entered:2 description:1 double:2 produce:1 comparative:1 rotated:3 spent:1 help:1 quirk:1 school:1 predicted:1 implies:1 come:1 direction:39 safe:1 thick:1 hillsdale:2 wall:5 anticipate:1 normal:2 visually:2 lawrence:2 cognition:2 predict:2 perceived:1 southeast:9 establishes:1 kudrimoti:1 always:4 gaussian:1 rather:1 derived:2 consistently:1 modelling:1 unrestrained:1 entire:1 initially:1 reproduce:1 orientation:3 denoted:1 animal:18 spatial:9 integration:4 field:31 equal:1 identical:2 flipped:2 biology:1 discrepancy:1 others:3 report:1 connectionist:1 few:1 randomly:1 simultaneously:2 tightly:1 individual:1 familiar:3 fire:1 maintain:1 n1:1 attempt:1 organization:1 arena:4 alignment:2 navigation:8 sh:1 activated:2 edge:1 partial:1 experience:1 arrowhead:1 maurer:2 theoretical:1 modeling:4 earlier:2 tp:1 subtending:1 entry:28 reported:1 retain:1 yl:1 recorded:3 wan:4 choose:2 priority:6 corner:2 cognitive:1 account:1 converted:1 retinal:1 redish:8 includes:3 matter:1 view:15 thalamic:2 maintains:2 synchrony:1 contribution:1 resetting:2 produced:1 lu:2 researcher:1 touretzky:9 whenever:1 against:1 experimenter:1 actually:1 higher:2 response:1 done:4 strongly:1 just:2 correlation:1 d:1 effect:1 true:1 hence:1 entering:3 white:1 adjacent:3 during:2 ambiguous:1 rat:15 hippocampal:5 trying:1 motion:1 rotation:12 ji:3 empirically:1 cerebral:1 analog:2 association:3 discussed:1 mellon:1 tuning:2 postsubiculum:2 consistency:5 session:4 had:2 gerbil:2 moving:4 stable:1 cortex:1 surface:2 recent:1 showed:2 driven:2 manipulation:2 certain:1 remembered:1 muller:4 greater:1 floor:1 freely:3 taube:5 multiple:5 earlbaum:2 match:3 veloso:1 long:1 mnemonic:1 controlled:3 prediction:2 metric:1 cell:17 source:1 recording:3 ikeuchi:1 collett:2 viability:1 variety:3 opposite:1 shift:1 whether:1 york:1 ignored:1 amount:1 dark:2 reduced:1 visuospatial:1 notice:1 estimated:1 neuroscience:3 carnegie:1 key:1 four:3 localize:2 egocentric:3 weigend:1 angle:12 fourth:2 place:56 internet:1 summer:1 cheng:2 barnes:1 activity:2 annual:1 occur:2 diffuse:1 department:1 combination:2 jr:1 across:1 presently:1 explained:1 agree:1 turn:1 eventually:1 mechanism:2 flip:1 serf:1 available:1 gaussians:1 probe:3 observe:2 allocentric:3 drifting:1 existence:1 original:1 northwest:10 establish:1 society:1 unchanged:2 added:1 question:1 cartwright:1 already:1 navigating:2 distance:5 separate:1 card:33 mapped:1 simulated:1 landmark:17 
code:6 retained:1 relationship:4 reliably:1 unknown:1 observation:1 neuron:2 arc:2 head:39 sharp:20 bostock:1 knierim:2 drift:2 prioritizing:1 david:2 introduced:2 pair:4 evidenced:1 emu:1 mediates:1 address:1 including:2 memory:3 reliable:1 green:1 critical:1 representing:3 coupled:1 review:1 geometric:1 lv:2 triple:1 nucleus:2 degree:1 xp:1 consistent:5 editor:3 pi:1 compatible:2 changed:4 supported:1 placed:2 last:1 heading:1 cortical:1 crawl:1 sensory:2 correlate:1 reconstructed:1 ignore:1 preferred:3 reproduces:1 pittsburgh:1 conclude:2 symmetrical:1 alternatively:1 learn:1 reasonably:1 bearing:7 did:5 west:2 scattered:1 position:2 xl:1 third:2 showing:2 explored:1 adding:1 prevail:1 rodent:5 neurophysiological:1 visual:7 doubling:1 loses:1 goal:3 viewed:1 leonard:1 towards:1 change:6 determined:2 corrected:1 e:1 experimental:1 east:2 internal:1 latter:1 rotate:3 tested:4 |
89 | 1,079 | Active Gesture Recognition using
Learned Visual Attention
Trevor Darrell and Alex Pentland
Perceptual Computing Group
MIT Media Lab
20 Ames Street, Cambridge MA, 02138
trevor,sandy@media.mit.edu
Abstract
We have developed a foveated gesture recognition system that runs
in an unconstrained office environment with an active camera. Using vision routines previously implemented for an interactive environment, we determine the spatial location of salient body parts
of a user and guide an active camera to obtain images of gestures
or expressions. A hidden-state reinforcement learning paradigm is
used to implement visual attention. The attention module selects
targets to foveate based on the goal of successful recognition, and
uses a new multiple-model Q-learning formulation. Given a set
of target and distractor gestures, our system can learn where to
foveate to maximally discriminate a particular gesture.
1 INTRODUCTION
Vision has numerous uses in the natural world. It is used by many organisms in
navigation and object recognition tasks, for finding resources or avoiding predators.
Often overlooked in computational models of vision, however, and particularly relevant for humans, is the use of vision for communication and interaction. In these
domains visual perception is an important communication modality, either in addition to language or when language cannot be used. In general, people place
considerable weight on visual signals from another individual, such as facial expression, hand gestures, and body language. We have been developing neurally-inspired
methods which combine low-level vision and learning to model these visual abilities.
Previously, we presented a method for view-based recognition of spatia-temporal
hand gestures [2] and a similar mechanism for the analysis/real-time tracking of
facial expressions [4]. These methods offered real-time performance and a relatively
high level of accuracy, but required foveated images of the object performing the
gesture. There are many domains/tasks for which these are not unreasonable assumptions, such as interaction with a single user workstation or an automobile with
a single driver. However the method had limited usefulness in unconstrained domains, such as "intelligent rooms" or interactive virtual environments, when the
identity and location of the user are unknown.
In this paper, we expand our gesture recognition method to include an active component, utilizing a foveated image sensor that can selectively track a person's hand
or face as they walk through a room. The camera tracking and model selection
routines are guided by an action-selection system that implements visual attention
based on reinforcement learning. Using on a simple reward schedule, this attention
system learns the appropriate object (hand, head) to foveate in order to maximize
recognition performance.
2 FOVEATED GESTURE ANALYSIS
Our system for foveated gesture recognition combines person tracking routines,
an active, high-resolution camera, and view-based normalized correlation analysis.
First we will briefly describe the person tracking module and view-based analysis,
then discuss their use with an active camera.
We have implemented vision routines to track a user in an office setting as part
of our ALIVE system, an Artificial Life Interactive Video Environment[3]. This
system can track people and identify head/hand locations as they walk about a
room, and provides the contextual environment within which view-based gesture
analysis methods can be successfully applied. The ALIVE system assumed little
prior knowledge of the user, and operated on coarse-scale images.¹ ALIVE allows
a user to interact with virtual artificial life creatures, through the use of a "magic-mirror" metaphor in which the user sees him/herself presented in a video display along
with virtual creatures. A wide field-of-view video camera acquires an image of the
user, which is then combined with computer graphics imagery and projected on a
large screen in front of the user. Vision routines in ALIVE compute figure/ground
segmentation and analyze the user's silhouette to determine the location of head,
hands, and other salient body features. We use only a single, calibrated, wide field-of-view camera to determine the 3-D position of these features.² For details of our
person tracking method see [14].
In our approach to real-time expression matching/tracking, a set of view-based
correlation models is used to represent spatio-temporal gesture patterns. We take
a sequence of images representing the gesture to be trained, and build a set of
view models that are sufficient to track the object as it performs the gesture. Our
view models are normalized correlation templates, and can either be intensity-based
or based on band-pass or wavelet-based signal representations.³ We applied our
model to the problem of hand gesture recognition [2] as well as for tracking facial
expressions [4]. For facial tracking, we implemented an interpolation paradigm to
map view-based correlation scores to facial motor controls. We used the Radial Basis
Function (RBF) method[7]; interpolation was performed using a set of exemplars
consisting of pairs of real faces and model faces in different expressions, which were
¹A simple mechanism for recognition of hand gestures was implemented in the original
ALIVE system but made no use of high-resolution view models, and could only recognize
pointing and waving motions defined by the motion of the centroid of the hand.
²By assuming the user is sitting or standing on the ground plane, we use the imaging
and ground plane geometry to compute the location of the user in 3-D.
³The latter have the advantage of being less dependent on illumination direction.
Figure 1: Overview of system for person tracking and active gesture recognition.
Static, wide-field-of-view, camera tracks user's head and hands, which drives gaze
control of active narrow-field-of-view camera. Foveated images are used for viewbased gesture analysis and recognition. Graphical objects are rendered on video
wall and can react to user's position, pose, and gestures.
obtained by generating a 3-D model face and asking the user to match it. With this
simple formalism, we were able to track expressions of a real user and interpolate
equivalent 3-D model faces in real-time .
This view-based analysis requires detailed imagery, which cannot be obtained from
a single, fixed camera as the user walks about a room. To provide high resolution
images for gesture recognition, we augment the wide field-of-view camera in our
interactive environment with an active, narrow-field-of-view camera, as shown in
Figure 1. Information about head/hand location from the existing ALIVE routines
is used to drive the motor control parameters of the narrow field camera. Currently
the camera can be directed to autonomously track head or hands . Using a highly
simplified, two-expression model of facial expression (neutral and surprised), we have
been able to track facial expressions as users move about the room and the narrow
angle camera followed the face. For details on this foveated gesture recognition, see [5].
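The view models used here are normalized correlation templates. The generic sketch below (using NumPy) shows one standard form of zero-mean normalized correlation and an exhaustive template sweep; it is only an illustration of the matching score, not the authors' real-time implementation.

```python
import numpy as np

def normalized_correlation(patch, template):
    """Zero-mean normalized cross-correlation between an image patch and a
    view template of the same shape; returns a score in [-1, 1]."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def best_match_score(image, template):
    """Slide the template over the image and keep the best correlation score."""
    H, W = image.shape
    h, w = template.shape
    best = -1.0
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            best = max(best, normalized_correlation(image[y:y + h, x:x + w], template))
    return best
```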
3 VISUAL ATTENTION FOR RECOGNITION
The visual routines in the ALIVE system can be used to track the head and hands
of a user, and the active camera can provide foveated images for gesture recognition.
If we know a priori which body part will produce the gesture of interest, or if we
have a sufficient number of active cameras to track all body parts, then we have
solved the problem. Of course, in practice there are more possible loci of gesture
performance than there are active cameras, and we have to address the problem of
action selection for visual routines, i.e. , attention. In our active gesture recognition
system, we have adopted an action selection model based on reinforcement learning.
3.1 THE ACTIVE GESTURE RECOGNITION PROBLEM
We define an Active Gesture Recognition (AGR) task as follows. First, we assume
primitive routines exist to provide the continuous valued control and tracking of the
different body parts that perform gestures. Second, we assume that body pose and
hand/face state is represented as a feature set, based on the representation produced
by our body tracker and view-based recognition system, and we define a gesture
to be a configuration of the user's body pose and hand/face expression. Third, we
assume that, in addition to there being actions for foveating all the relevant body
parts, there is also a special action labeled accept, and that the execution of this
action by the AGR system signifies detection of the gesture. Finally, the goal of
the AGR task is to execute the accept action whenever the user is in the target
gesture state, and not to perform that action when the user is in any other (e .g.
distract or) state. The AGR system should use the foveation actions to optimally
discriminate the target pattern frqm distractor patterns, even when no single view
of the user is sufficient to decide what gesture the user is performing.
An important problem in applying reinforcement learning to this task is that our
perceptual observations may not provide a complete description of the user's state.
Indeed, because we have a foveated image sensor we know that the user's true
gestural state will be hidden whenever the user is performing a gesture and the
camera is not foveated on the appropriate body part. By definition, a system for
perceptual action selection must not assume a full observation of state is available,
otherwise there would be no meaningful perception taking place.
The AG R task can be considered as a Partially Observable Markov Decision Process
(POMDP), which is essentially a Markov Decision Process without direct access to
state [11, 9]. Rather than attempt to solve them explicitly, we look to techniques
for hidden state reinforcement learning to find a solution [10, 8, 6, 1]. A POMDP
consists of a set of states in the world S, a set of observations O, a set of actions
A, and a reward function R. After executing an action a, the likelihood of transitioning
between two states s, s' is given by T(s, a, s'), and an observation o is generated with
probability O(s, a, o). In practice, T and O are not easily obtainable, and we use
reinforcement learning methods which do not require them a priori.
Our state is defined by the users pose, facial expression, and hand configurations, expressed in nine variables. Three are boolean and are provided directly by the person
tracker: person-present, left-arm-extended, and right-arm-extended. Three
more are provided by the foveated gesture recognition system, (face, left-hand,
right-hand), and take on an integer number of values according to the number
of view-based expressions/hand-poses: in our first experiments face can be one of
neutral, smile, or surprise, and the hands can each be one of neutral, point, or
grab. In addition, three boolean features represent the internal state of the vision
system: head-foveated, left-hand-foveated, right-hand-foveated. At each
time step, the world is defined by a state s ∈ S, which is defined by these features.
An observation, o ∈ O, consists of the same feature variables, except that those
provided by the foveated gesture system (e.g., head and hands) are only observable
when foveated. Thus the face variable is hidden unless the head-foveated variable
is set, the left-hand variable hidden unless the left-hand-foveated variable set,
and similarly with the right hand. Hidden variables are set to an undefined value.
The set of actions, A, available to the AGR system are 4 foveation commands:
look-body, look-head, look-left-hand, and look-right-hand plus the special
accept action. Each foveation command causes the active camera to follow the
respective body part, and sets the internal foveation feature bits accordingly.
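The feature layout described above can be written down directly. The sketch below is one possible encoding of the observation vector and of the masking of hidden features by the foveation flags; the class name, the dictionary keys, and the use of None for hidden values are our own choices, not the paper's data structures.

```python
from dataclasses import dataclass

FACE_VALUES = ("neutral", "smile", "surprise")
HAND_VALUES = ("neutral", "point", "grab")
ACTIONS = ("look-body", "look-head", "look-left-hand", "look-right-hand", "accept")
UNKNOWN = None  # value of a gesture feature when the body part is not foveated

@dataclass
class Observation:
    person_present: bool
    left_arm_extended: bool
    right_arm_extended: bool
    face: str          # one of FACE_VALUES, or UNKNOWN unless the head is foveated
    left_hand: str     # one of HAND_VALUES, or UNKNOWN unless the left hand is foveated
    right_hand: str    # likewise for the right hand
    head_foveated: bool
    left_hand_foveated: bool
    right_hand_foveated: bool

def observe(state, foveation):
    """Mask the hidden gesture features of the true state according to the current
    foveation flags. `state` is assumed to carry the same fields, unmasked."""
    return Observation(
        person_present=state.person_present,
        left_arm_extended=state.left_arm_extended,
        right_arm_extended=state.right_arm_extended,
        face=state.face if foveation["head"] else UNKNOWN,
        left_hand=state.left_hand if foveation["left"] else UNKNOWN,
        right_hand=state.right_hand if foveation["right"] else UNKNOWN,
        head_foveated=foveation["head"],
        left_hand_foveated=foveation["left"],
        right_hand_foveated=foveation["right"],
    )
```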
The reward function provides a unit positive reward whenever the accept action
is performed and the user is in the target state (as defined by an oracle, external
to the AGR system), and a fixed negative reward of magnitude a when performed
and the user is in a distractor (non-target) state. Zero reward is given whenever a
foveation action is performed.
3.2 HIDDEN-STATE REINFORCEMENT LEARNING
We have implemented a instance-based method for hidden state reinforcement learning, based on earlier work by McCallum [10]. The instance-based approach to reinforcement learning replaces the absolute state with a distributed memory-based
state representation. Given a history of action, reward, and observation tuples,
(a[t], r[t], o[t]), 0 ≤ t ≤ T, a Q-value is also stored with each time step, q[t], and
Q-learning [12, 13] is performed by evaluating the similarity of recently observed tuples with sequences farther back in the history chain. Q-values are computed, and
the Q-learning update rule applied, maintaining this distributed, memory-based
representation of Q-values.
As in traditional Q-learning, at each time step the utility of each action in the
current state is evaluated. If full access to the state was available and a table
used to represent Q values, this would simply be a table look-up operation, but in a
POMDP we do not have full access to state. Using a variation on the instance based
approach employed by McCallum's Nearest Sequence Memory (NSM) algorithm, we
instead find the K nearest neighbors in the history list relative to the current time
point, and compute their average Q value. For each element on the history list, we
compute the sequence match criteria with the current time point, M(i, T), where
M(i, j) = S(i, j) + M(i-1, j-1)   if S(i, j) > 0 and i > 0 and j > 0,
M(i, j) = 0                        otherwise.
We define S(i, j) to be 1 if o[i] = o[j] or a[i] = a[j], 2 if both are equal, and
0 otherwise. Using a superscript in parentheses to denote the action index of a
Q-value, we then compute
Q^(a)[T] = (1/K) Σ_{i=0}^{T} v^(a)[i] q[i],    (1)
where v^(a*)[i] indicates whether the history tuple at time step i votes when computing the Q-value of a new action a*: v^(a*)[i] is set to 1 when a[i] = a* and M(i-1, T)
is among the K largest match values over all k with a[k] = a*; otherwise it is
set to 0. Given Q values for each action, the optimal policy is simply
lI"[T] = arg maxQ(a)[T] .
aEA
(2)
The new action a[T + 1] is chosen either according to this policy or based on an
exploration strategy. In either case, the action is executed yielding an observation
and reward, and a new tuple added to the history. The new Q-value is set to be
the Q value of the chosen action, q[T+1] = Q^(a[T+1])[T]. The update step of Q
learning is then computed, evaluating
U[T+1] = max_{a∈A} Q^(a)[T+1],    (3)
q[i] ← (1-β) q[i] + β ( r[i] + γ U[T+1] )   for each i such that v^(a[T+1])[i] = 1.    (4)
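The sketch below collects Eqs. (1)-(4) into plain Python over an explicit history list of {a, o, r, q} records. It is a minimal re-implementation of the nearest-sequence-memory idea as stated above; the data layout, memoization, and tie-breaking are our own, and the exploration policy is omitted.

```python
def step_similarity(hist, i, j):
    """S(i, j): 1 if the observation or the action matches, 2 if both do, else 0."""
    s = 0
    if hist[i]["o"] == hist[j]["o"]:
        s += 1
    if hist[i]["a"] == hist[j]["a"]:
        s += 1
    return s

def match_length(hist, i, j, memo):
    """M(i, j) = S(i, j) + M(i-1, j-1) while S(i, j) > 0 and both indices are positive."""
    if i <= 0 or j <= 0:
        return 0
    if (i, j) not in memo:
        s = step_similarity(hist, i, j)
        memo[(i, j)] = 0 if s == 0 else s + match_length(hist, i - 1, j - 1, memo)
    return memo[(i, j)]

def q_values(hist, T, actions, K):
    """Eq. (1): average the stored q over the K history steps with action a whose
    preceding context best matches the context at the current time point T."""
    memo, Q, voters = {}, {}, {}
    for a in actions:
        idxs = [i for i in range(T) if hist[i]["a"] == a]
        idxs.sort(key=lambda i: match_length(hist, i - 1, T, memo), reverse=True)
        voters[a] = idxs[:K]
        Q[a] = sum(hist[i]["q"] for i in voters[a]) / max(len(voters[a]), 1)
    return Q, voters

def backup(hist, Q_next, voters_of_chosen, beta, gamma):
    """Eqs. (3)-(4): after acting and recomputing Q at T+1, push the new utility
    back into every history entry that voted for the chosen action."""
    U = max(Q_next.values())
    for i in voters_of_chosen:
        hist[i]["q"] = (1 - beta) * hist[i]["q"] + beta * (hist[i]["r"] + gamma * U)
```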
[Figure 2(b) plot: classification error (%) after convergence vs. K = 2, 4, 16; labeled error values of 0.84%, 0.44%, and 0.48%; β = 0.5, γ = 0.5, α = 10, 2500 trials.]
Figure 2: (a) Multiple model Q-learning: one Q-learning agent for each target
gesture to be recognized, with coupled observation and action but separate reward
and Q-value. (b) Results on recognition task with 8 gesture targets; graph shows
error rate after convergence plotted as a function of number of nearest neighbors
used in learning algorithm.
4 MULTIPLE MODEL Q-LEARNING
In general, we have found the simple, instance-based hidden state reinforcement
learning described above to be an effective way to perform action selection for
foveation when the task is recognition of a single object from a set of distractors.
However, we did not find that this type of system performed well when the AGR
task was extended to include more than one target gesture. When multiple accept
actions were added to enumerate the different targets, we were not able to find
exploration strategies that would converge in reasonable time.
This is not unexpected, since the addition of multiple causes of positive reward
makes the Q-value space considerably more complex. To remedy this problem, we
propose a multiple model Q-learning system. In a multiple model approach to the
AG R problem, separate learning agents model the task from each targets perspective. Conceptually, a separate Q-learning agent exists for each target, maintains it's
own Q-value and history structure, and is coupled to the other agents via shared
observations. Since we can interpret the Q-value of an individual AGR agent as a
confidence value that its target is present, we can mediate among the actions predicted by the different agents by selecting the action from the agent with highest
Q-value (Figure 2).
Formally, in our multiple model Q-learning system all agents share the same observation and selected action , but have different reward and Q-values. Thus they
can be considered a single Q-learning system, but with vector reward and Q-values.
Our multiple model learning system is thus obtained by rewriting Eqs. (1)-(4) with
vector q[t] and r[t]. Using a subscript j to indicate the target index, we have
Q_j^(a)[T] = (1/K) Σ_{i=0}^{T} v^(a)[i] q_j[i],    π[T] = argmax_{a∈A} ( max_j Q_j^(a)[T] ).    (5)
Rewards are computed with: if a[T] = accept then r_j[T] = R(j, T), else r_j[T] = 0;
R(j, T) = 1 if gesture j was present at time T, else R(j, T) = -α. Further,
U_j[T+1] = max_{a∈A} Q_j^(a)[T+1],    (6)
q_j[i] ← (1-β) q_j[i] + β ( r_j[i] + γ U_j[T+1] )   ∀i s.t. v^(a[T+1])[i] = 1.    (7)
Note that our sequence match criteria, unlike that in [10], does not depend on r[t];
this allows considerable computational savings in the multiple model system since
v^(a) need not depend on j.
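A compact way to realize Eqs. (5)-(7) is to keep the single-agent machinery but store one q-value and one reward per target gesture, sharing the voter sets across targets (since v^(a) does not depend on j). The sketch below builds on the match_length helper from the single-model sketch above; the data layout is again our own.

```python
def vector_q_values(hist, T, actions, K, n_targets):
    """Eq. (5): shared voters per action, one averaged Q-value per (action, target)."""
    memo, Q, voters = {}, {}, {}
    for a in actions:
        idxs = [i for i in range(T) if hist[i]["a"] == a]
        idxs.sort(key=lambda i: match_length(hist, i - 1, T, memo), reverse=True)
        voters[a] = idxs[:K]
        Q[a] = [sum(hist[i]["q"][j] for i in voters[a]) / max(len(voters[a]), 1)
                for j in range(n_targets)]
    return Q, voters

def policy(Q):
    """pi[T] = argmax_a max_j Q_j^(a)[T]."""
    return max(Q, key=lambda a: max(Q[a]))

def vector_backup(hist, Q_next, voters_of_chosen, beta, gamma):
    """Eqs. (6)-(7): one backup per target, sharing the same voter set."""
    n_targets = len(next(iter(Q_next.values())))
    U = [max(Q_next[a][j] for a in Q_next) for j in range(n_targets)]
    for i in voters_of_chosen:
        for j in range(n_targets):
            hist[i]["q"][j] = ((1 - beta) * hist[i]["q"][j]
                               + beta * (hist[i]["r"][j] + gamma * U[j]))
```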
We ran the multiple model learning system on the AGR task using 8 targets, with
β = 0.5, γ = 0.5, α = 10. Results summed over 2500 trials are shown in Figure 2(b),
with classification error plotted against the number of nearest neighbors used in the
NSM algorithm. The error rate shown is after convergence; we ran the algorithm
with a period of deterministic exploration before following the optimal policy. (The
system deterministically explored each action/accept pair.) As can be seen from
the graph, for any non-degenerate value of K reasonable performance was obtained;
for K > 2, the system performed almost perfectly.
References
[1] A. Cassandra, L. P. Kaelbling, and M. Littman. Acting optimally in partially
observable stochastic domains. In Proc. AAAI-94, pages 1023-1028. Morgan
Kaufmann, 1994.
[2] T. Darrell and A. P. Pentland. Classification of Hand Gestures using a ViewBased Distributed Representation In Advances in Neural Information Processing Systems 6, Morgan Kauffman, 1994.
[3] T. Darrell, P. Maes, B. Blumberg, and A. P. Pentland, A Novel Environment
for Situated Vision and Behavior, Proc. IEEE Workshop for Visual Behaviors,
IEEE Comp. Soc. Press, Los Alamitos, CA, 1994
[4] T. Darrell, I. Essa, and A. P. Pentland, Correlation and Interpolation Networks
for Real-time Expression Analysis/Synthesis, In Advances in Neural Information Processing Systems 7, MIT Press, 1995.
[5] T. Darrell and A. Pentland, Attention-driven Expression and Gesture Analysis in an Interactive Environment, in Proc. Intl. Workshop on Automatic Face
and Gesture Recognition (IWAFGR '95), Zurich, Switzerland, 1995.
[6] T. Jaakkola, S. Singh, and M. Jordan. Reinforcement Learning Algorithm
for Partially Observable Markov Decision Problems. In Advances In Neural
Information Processing Systems 7, MIT Press, 1995.
[7] T. Poggio and F. Girosi, A Theory of Networks for Approximation and Learning. MIT AI Lab TR-1140, 1989.
[8] L. Lin and T. Mitchell. Reinforcement learning with hidden states. In Proc.
AAAI-92. Morgan Kaufmann, 1992.
[9] W. Lovejoy. A survey of algorithmic methods of partially observed Markov
decision processes. Annals of Operations Research, 28:47-66, 1991.
[10] R. A. McCallum. Instance-based State Identification for Reinforcement Learning. In Advances In Neural Information Processing Systems 7, MIT Press, 1995.
[11] Edward J. Sondik. The optimal control of partially observable Markov processes
over the infinite horizon: Discounted costs. Operations Research, 26(2):282-304, 1978.
[12] R. S. Sutton. Learning to predict by the method of temporal differences.
Machine Learning, 3:9-44, 1988.
[13] C. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279-292, 1992.
[14] C. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, Pfinder: Real-Time
Tracking of the Human Body, Media Lab Perceptual Computing TR-353, 1994
| 1079 |@word trial:2 briefly:1 maes:1 tr:2 configuration:2 score:1 selecting:1 existing:1 current:3 contextual:1 must:1 girosi:1 motor:2 update:2 selected:1 accordingly:1 plane:2 mccallum:2 farther:1 compo:2 provides:2 coarse:1 ames:1 location:6 along:1 direct:1 driver:1 surprised:1 consists:2 combine:2 indeed:1 behavior:2 distractor:3 inspired:1 discounted:1 little:1 metaphor:1 provided:3 medium:3 what:1 developed:1 finding:1 ag:4 temporal:3 interactive:5 iearning:4 control:5 unit:1 positive:2 before:1 sutton:1 subscript:1 interpolation:3 plus:1 limited:1 directed:1 camera:20 practice:2 implement:2 matching:1 confidence:1 radial:1 gestural:1 cannot:2 selection:6 applying:1 equivalent:1 map:1 deterministic:1 primitive:1 attention:11 pomdp:3 resolution:3 survey:1 react:1 rule:1 utilizing:1 variation:1 annals:1 target:15 user:30 us:2 element:1 recognition:27 particularly:1 labeled:1 observed:2 module:2 solved:1 autonomously:1 highest:1 ran:2 environment:8 reward:13 littman:1 trained:1 depend:2 singh:1 basis:1 easily:1 herself:1 represented:1 describe:1 effective:1 artificial:2 valued:1 solve:1 otherwise:4 ability:1 superscript:1 agr:7 sequence:5 advantage:1 essa:1 propose:1 interaction:2 relevant:2 degenerate:1 description:1 los:1 convergence:2 darrell:9 produce:1 generating:1 executing:1 object:6 pose:5 exemplar:1 nearest:4 eq:1 edward:1 soc:1 implemented:5 predicted:1 indicate:1 direction:1 guided:1 switzerland:1 stochastic:1 exploration:3 human:2 virtual:3 require:1 wall:1 tracker:2 considered:2 ground:3 algorithmic:1 predict:1 pointing:1 sandy:1 proc:4 wren:1 currently:1 him:1 largest:1 successfully:1 mit:6 sensor:2 rather:1 command:2 jaakkola:1 office:2 foveating:1 likelihood:1 indicates:1 centroid:1 dependent:1 lovejoy:1 dayan:1 accept:7 hidden:10 expand:1 selects:1 arg:2 among:2 classification:2 augment:1 priori:2 spatial:1 special:2 summed:1 field:6 equal:1 saving:1 look:6 intelligent:1 nsm:2 recognize:1 interpolate:1 individual:2 geometry:1 consisting:1 attempt:1 detection:1 interest:1 highly:1 blumberg:1 navigation:1 operated:1 undefined:1 yielding:1 chain:1 tuple:2 poggio:1 respective:1 facial:7 unless:2 walk:3 plotted:2 instance:5 formalism:1 earlier:1 boolean:2 asking:1 signifies:1 kaelbling:1 cost:1 neutral:3 usefulness:1 successful:1 graphic:1 front:1 optimally:2 stored:1 considerably:1 combined:1 calibrated:1 person:7 standing:1 gaze:1 synthesis:1 imagery:2 aaai:2 external:1 li:1 explicitly:1 vi:1 performed:7 view:20 sondik:1 lab:3 analyze:1 maintains:1 predator:1 waving:1 accuracy:1 kaufmann:2 sitting:1 identify:1 conceptually:1 identification:1 utomatic:1 produced:1 drive:2 viewbased:2 history:7 whenever:4 trevor:2 definition:1 against:1 workstation:1 static:1 knowledge:1 distractors:1 segmentation:1 schedule:1 obtainable:1 routine:9 back:1 follow:1 maximally:1 formulation:1 execute:1 evaluated:1 correlation:5 hand:29 normalized:2 true:1 remedy:1 ll:1 acquires:1 criterion:2 complete:1 performs:1 motion:2 fj:2 image:10 novel:1 recently:1 overview:1 organism:1 interpret:1 cambridge:1 ai:1 unconstrained:2 similarly:1 language:3 had:1 access:3 similarity:1 spatia:1 own:1 perspective:1 driven:1 life:2 seen:1 morgan:3 employed:1 recognized:1 converge:1 determine:3 paradigm:2 maximize:1 period:1 signal:2 multiple:11 neurally:1 full:3 rj:3 match:4 gesture:47 lin:1 parenthesis:1 vision:9 essentially:1 represent:3 addition:4 else:2 modality:1 unlike:1 smile:1 jordan:1 integer:1 rendering:1 perfectly:1 qj:3 whether:1 expression:15 utility:1 aea:4 nine:1 cause:2 action:31 enumerate:1 
detailed:1 offacial:1 band:1 situated:1 exist:1 track:10 per:1 group:1 salient:2 rewriting:1 imaging:1 grab:1 graph:2 run:1 angle:1 place:2 almost:1 reasonable:2 decide:1 decision:4 bit:1 followed:1 display:1 replaces:1 oracle:1 alive:7 alex:1 performing:3 rendered:1 relatively:1 developing:1 according:2 inti:1 resource:1 zurich:1 previously:2 discus:1 mechanism:2 know:2 locus:1 adopted:1 available:3 operation:3 unreasonable:1 appropriate:2 original:1 include:2 graphical:1 maintaining:1 build:1 uj:2 move:1 added:2 alamitos:1 strategy:2 traditional:1 separate:3 sci:1 street:1 assuming:1 index:2 executed:1 negative:1 policy:3 unknown:1 perform:3 observation:10 markov:5 pentland:9 extended:3 communication:2 head:11 intensity:1 overlooked:1 pair:2 required:1 learned:4 narrow:4 maxq:4 address:1 able:3 perception:2 pattern:3 kauffman:1 max:1 memory:3 video:4 natural:1 arm:2 representing:1 numerous:1 coupled:2 prior:1 relative:1 agent:8 offered:1 sufficient:3 share:1 course:1 guide:1 wide:4 template:1 face:12 taking:1 neighbor:3 absolute:1 distributed:3 world:3 evaluating:2 made:1 reinforcement:13 projected:1 simplified:1 observable:5 silhouette:1 active:19 assumed:1 reserach:2 spatio:1 tuples:2 continuous:1 table:2 learn:1 ca:1 interact:1 distract:1 automobile:1 complex:1 domain:4 did:1 animation:1 mediate:1 body:14 creature:2 screen:1 position:2 deterministically:1 perceptual:3 watkins:1 third:1 learns:1 wavelet:1 transitioning:1 list:2 explored:1 exists:1 workshop:2 magnitude:1 execution:1 illumination:1 foveated:18 horizon:1 cassandra:1 surprise:1 azarbayejani:1 simply:2 visual:13 expressed:1 unexpected:1 tracking:11 partially:5 foveate:3 ma:1 goal:2 identity:1 rbf:1 room:5 shared:1 considerable:2 infinite:1 foveation:6 except:1 acting:1 discriminate:2 pas:1 vote:1 meaningful:1 selectively:1 formally:1 internal:2 people:2 latter:1 avoiding:1 |
90 | 108 | 618
NEURAL NETWORKS FOR MODEL
MATCHING AND PERCEPTUAL
ORGANIZATION
Gene Gindi
EE Department
Yale University
New Haven, CT 06520
Eric Mjolsness
CS Department
Yale University
New Haven, CT 06520
P. Anandan
CS Department
Yale University
New Haven, CT 06520
ABSTRACT
We introduce an optimization approach for solving problems in computer vision that involve multiple levels of abstraction. Our objective
functions include compositional and specialization hierarchies. We cast
vision problems as inexact graph matching problems, formulate graph
matching in terms of constrained optimization, and use analog neural
networks to perform the optimization. The method is applicable to perceptual grouping and model matching. Preliminary experimental results
are shown.
1 Introduction
The minimization of objective functions is an attractive way to formulate and solve
visual recognition problems. Such formulations are parsimonious, being expressible
in several lines of algebra, and may be converted into artificial neural networks
which perform the optimization. Advantages of such networks including speed,
parallelism, cheap analog computing, and biological plausibility have been noted
[Hopfield and Tank, 1985].
According to a common view of computational vision, recognition involves the construction of abstract descriptions of data governed by a data base of models. Abstractions serve as reduced descriptions of complex data useful for reasoning about
the objects and events in the scene. The models indicate what objects and properties
may be expected in the scene. The complexity of visual recognition demands that
the models be organized into compositional hierarchies which express object-part
relationships and specialization hierarchies which express object-class relationships.
In this paper, we describe a methodology for expressing model-based visual recognition as the constrained minimization of an objective function. Model-specific
objective functions are used to govern the dynamic grouping of image elements into
recognizable wholes. Neural networks are used to carry out the minimization.
*This work was supported in part by AFOSR grant F49620-88-C-0025, and by DARPA grant
DAAA15-87-K-0001, by ONR grant N00014-86-0310.
Previous work on optimization in vision has typically been restricted to computations occuring at a single of level of abstraction and/or involving a single model
[Barrow and Popplestone, 1971,Hummel and Zucker, 1983,Terzopoulos, 1986]. For
example, surface interpolation schemes, even when they include discontinuities
[Terzopoulos, 1986] do not include explicit models for physical objects whose surface
characteristics determine the expected degree of smoothness. By contrast, heterogeneous and hierarchical model-bases often occur in non-optimization approaches
to visual recognition [Hanson and Riseman, 1986] including some which use neural
networks [Ballard, 1986]. We attempt to obtain greater express ability and efficiency
by incorporating hierarchies of abstraction into the optimization paradigm.
2
Casting Model Matching as Optimization
We consider a type of objective function which, when minimized by a neural
network, is capable of expressing many of the ideas found in Frame systems
in Artificial Intelligence [Minsky, 1975]. These "Frameville" objective functions
[Mjolsness et al., 1988,Mjolsness et al., 1989] are particularly well suited to applications in model-based vision, with frames acting as few-parameter abstractions of
visual objects or perceptual groupings thereof. Each frame contains real-valued parameters, pointers to other frames, and pointers to predefined models (e.g. models
of objects in the world) which determine what portion of the objective function acts
upon a given frame.
2.1
Model Matching as Graph Matching
Model matching involves finding a match between a set of frames, ultimately derived
from visual data, and the predefined static models. A set of pointers represent
object-part relationships between frames, and are encoded as a graph or sparse
matrix called ina. That is, inaij = 0 unless frame j is "in" frame i as one of its
parts, in which case ina_{ij} = 1 is a "pointer" from j to i. The expected object-part relationships between the corresponding models are encoded as a fixed graph
or sparse matrix INA. A form of inexact graph-matching is required: ina should
follow INA as much as is consistent with the data.
A sparse match matrix M (0 < M_{αi} < 1) of dynamic variables represents the
correspondence between model a and frame i. To find the best match between the
two graphs one can minimize a simple objective function for this match matrix, due
to Hopfield [Hopfield, 1984] (see also [Feldman et al., 1988,Malsburg, 1986]), which
just counts the number of consistent rectangles (see Figure 1a):
E(M) = - Σ_{αβ} Σ_{ij} INA_{αβ} ina_{ij} M_{αi} M_{βj}.    (1)
This expression may be understood as follows: For model α and frame i, the match
value M_{αi} is to be increased if the neighbors of α (in the INA graph) match to the
neighbors of i (in the ina graph).
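Equation (1) can be transcribed directly when INA and ina are given as 0/1 adjacency matrices and M as a model-by-frame match matrix. The NumPy sketch below is only an illustration of the rectangle-counting objective; the array shapes are our own choice.

```python
import numpy as np

def frame_match_energy(INA, ina, M):
    """E(M) = - sum_{alpha,beta} sum_{i,j} INA[alpha,beta] ina[i,j] M[alpha,i] M[beta,j].
    (M.T @ INA @ M)[i, j] carries the sums over alpha and beta, so each consistent
    'rectangle' lowers the energy by one."""
    return -float(np.sum(ina * (M.T @ INA @ M)))
```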
Figure 1: (a) Examples of Frameville rectangle rule. Shows the rectangle relationship between frames (triangles) representing a wing of a plane, and the plane
itself. Circles denote dynamic variables, ovals denote models, and triangles denote
frames. For the plane and wing models, the first few parameters of a frame are
interpreted as position, length, and orientation. (b) Frameville sibling competition among parts. The match variables along the shaded lines (M_{3,9} and M_{2,7})
are suppressed in favor of those along the solid lines (M_{2,9} and M_{3,7}).
Note that E(M) as defined above can be trivially minimized by setting all the elements of the match matrix to unity. However, to do so will violate additional
syntactic constraints of the form h(M) = 0 which are imposed on the optimization,
either exactly [Platt and Barr, 1988] or as penalty terms [Hopfield and Tank, 1985]
½h²(M) added to the objective function. Originally the syntactic constraints
simply meant that each frame should match one model and vice versa, as in
[Hopfield and Tank, 1985]. But in Frameville, a frame can match both a model
and one of its specializations (described later), and a single model can match any
number of instances or frames. In addition one can usually formulate constraints
stating that if a model matches a frame then two distinct parts of the same model
must match two distinct part frames and vice-versa. We have found the following
formulation to be useful:

Σ_α INA_{αβ} M_{αi} - Σ_j ina_{ij} M_{βj} = 0,   ∀β, i    (2)

Σ_i ina_{ij} M_{αi} - Σ_β INA_{αβ} M_{βj} = 0,   ∀α, j    (3)
where the first sum in each equation is necessary when several high-level models
(or frames) share a part. (It turns out that the first sums can be forced to zero
or one by other constraints.) The resulting competition is illustrated in Figure lb.
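Constraints (2) and (3) can be checked as matrix residuals; the sketch below (same array conventions as the energy sketch above) returns how far a given assignment of M and ina is from satisfying them. It is only a consistency check, not part of the minimization dynamics.

```python
import numpy as np

def syntactic_residuals(INA, ina, M):
    """Residuals of constraints (2) and (3); both are all-zero when the
    part-structure of the match is consistent."""
    r2 = INA.T @ M - M @ ina.T   # constraint (2), indexed by (beta, i)
    r3 = M @ ina - INA @ M       # constraint (3), indexed by (alpha, j)
    return r2, r3
```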
Another constraint is that M should be binary-valued, i.e.,

M_{αi}(1 - M_{αi}) = 0,    (4)
but this constraint can also be handled by a special "analog gain" term in
the objective function [Hopfield and Tank, 1985] together with a penalty term
c Σ_{αi} M_{αi}(1 - M_{αi}).
In Frameville, the ina graph actually becomes variable, and is determined by a dynamic grouping or "perceptual organization" process. These new variables require
new constraints, starting with ina_{ij}(1 - ina_{ij}) = 0, and including many high-level
constraints which we now formulate.
2.2 Frames and Objective Functions
Frames can be considered as bundles F_i of real-valued parameters F_{ip}, where p
indexes the different parameters of a frame. For efficiency in computing complex
arithmetic relationships, such as those involved in coordinate transformations, an
analog representation of these parameters is used. A frame contains no information
concerning its match criteria or control flow; instead, the match criteria are expressed as objective functions and the control flow is determined by the partiCUlar
choice of a minimization technique.
In Figure 1a, in order for the rectangle (1,4,9,2) to be consistent, the parameters
F_{4p} and F_{9p} should satisfy a criterion dictated by models 1 and 2, such as a restriction on the difference in angles appropriate for a mildly swept back wing. Such a
constraint results in the addition of the following term to the objective function:
Σ_{i,j,α,β} INA_{αβ} ina_{ij} M_{αi} M_{βj} H_{αβ}(F_i, F_j)    (5)
where H_{αβ}(F_i, F_j) measures the deviation of the parameters of the data frames from
that demanded by the models. The term H can express coordinate transformation
arithmetic (e.g. H_{αβ}(x_i, x_j) = (1/2)[x_i - x_j - Δx_{αβ}]²), and its action on a frame F_i
is selectively controlled or "gated" by M and ina variables. This is a fundamental
extension of the distance metric paradigm in pattern recognition; because of the
complexity of the visual world, we use an entire database of distance metrics H_{αβ}.
Figure 2: Frameville specialization hierarchy. The plane model specializes
along ISA links to a propeller plane or a jet plane and correspondingly the wing
model specializes to prop-wing or jet-wing. Sibling match variables M_{6,4} and M_{4,4}
compete, as do M_{7,9} and M_{5,9}. The winner in these competitions is determined by
the consistency of the appropriate rectangles, e.g. if the 4-4-9-5 rectangle is more
consistent than the 6-4-9-7 rectangle, then the jet model is favored over the prop
model.
We index the models (and, indirectly, the data base of H metrics) by introducing
a static graph of pointers ISA_αβ to act as both a specialization hierarchy and a
discrimination network for visual recognition. A frame may simultaneously match
to a model and just one of its specializations:

    M_αi − Σ_β ISA_αβ M_βi = 0.        (6)

As a result, ISA siblings compete for matches to a given frame (see Figure 2); this
competition allows the network to act as a discrimination tree.
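A hedged sketch of how constraint (6) could be scored as a quadratic penalty; restricting the penalty to models that actually have ISA children is our assumption, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def isa_penalty(M, ISA):
    """Quadratic penalty for constraint (6): M_ai - sum_b ISA_ab * M_bi = 0.

    M   : (models x frames) match matrix
    ISA : (models x models) 0/1 specialization graph; ISA[a, b] = 1
          if model b is a direct specialization of model a.
    """
    residual = M - ISA @ M            # rows of ISA select the children of a
    # Assumption: only models that have ISA children are constrained;
    # leaf models (all-zero ISA rows) are left unconstrained.
    has_children = ISA.sum(axis=1) > 0
    return float(np.sum(residual[has_children] ** 2))
```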
Frameville networks have great expressive power, but have a potentially serious
problem with cost: for n data frames and m models there may be O(nm + n²)
neurons widely interconnected but sparsely activated. The number of connections
is at most the number of monomials in the polynomial objective function, namely
n²mf, where f is the fan-out of the INA graph. One solution to the cost problem, used in the line grouping experiments reported in [Mjolsness et al., 1989], is to
restrict the flexibility of the frame system by setting most M and ina neurons to
zero permanently. The few remaining variables can form an efficient data structure
such as a pyramid in vision. A more flexible solution might enforce the sparseness
constraints on the M and ina neurons during minimization, as well as at the fixed
point. Then large savings could result from using "virtual" neurons (and connections) which are created and destroyed dynamically. This and other cost-cutting
methods are a subject of continuing research.
3
Experimental Results
We describe here experiments involving the recognition of simple stick figures.
(Other experiments involving the perceptual grouping of lines are reported in
[Mjolsness et al., 1989].) The input data (Figure 3(a)) are line segments parameterized by location x, y and orientation θ, corresponding to frame parameters F_jp
(p = 1, 2, 3). As seen in Figure 3(b), there are two high-level models, "T" and
"L" junctions, each composed of three low-level segments. The task is to recognize
instances of "T", "L", and their parts, in a translation-invariant manner.
The parameter check term H_αβ of Equation 5 achieves translation invariance by
checking the location and orientation of a given part relative to a designated main
part and is given by:

    H_αβ(F_i, F_j) = Σ_p (F_ip − F_jp − Δ^p_αβ)²        (7)
Here F_jp and F_ip are the slots of a low-level segment frame and a high-level main
part, respectively, and the quantity Δ^p_αβ is model information that stores coordinate
differences. (Rotation invariance can also be formulated if a different parameterization is used.) It should be noted that absence of the main part does not preclude
recognition of the high-level model.
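The parameter check of Equation 7 is straightforward to express in code. The sketch below is ours: it assumes each frame is just a vector of (x, y, θ) parameters and shows that the score is unchanged under a common translation of main part and segment.

```python
import numpy as np

def H_check(F_i, F_j, delta):
    """Equation 7: sum_p (F_ip - F_jp - Delta^p_ab)^2.

    F_i   : parameters of the high-level main part (x, y, theta)
    F_j   : parameters of a low-level segment frame
    delta : model-stored coordinate differences Delta^p_ab
    """
    diff = np.asarray(F_i) - np.asarray(F_j) - np.asarray(delta)
    return float(np.sum(diff ** 2))

# A part lying exactly where the model expects it (relative to the main
# part) scores zero, no matter how the whole junction is translated.
main_part = np.array([5.0, 3.0, 0.0])
offset = np.array([1.0, 0.0, np.pi / 2])        # model information
segment = main_part - offset
print(H_check(main_part, segment, offset))      # 0.0
```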
We used the unconstrained optimization technique in [Hopfield and Tank, 1985] and
achieved improved results by including terms demanding that at most one model
match a given frame, and that at most one high-level frame include a given low-level
frame as its part [Mjolsness et al., 1989].
Figure 3(c) shows results of attempts to recognize the junctions in Figure 3(a).
When initialized to random values, the network becomes trapped in unfavorable
local minima of the fifth-order objective function. (But with only a single high-level
model in the database, the system recognizes a shape amid noise.) If, however, the
network is given a "hint" in the form of an initial state with mainparts and high-level
matches set correctly, the network converges to the correct state.
There is a great deal of unexploited freedom in the design of the model base and
its objective functions; there may be good design disciplines which avoid introducing spurious local minima. For example, it may be possible to use ISA and INA
hierarchies to guide a network to the desired local minimum.
Figure 3: (a) Input data consists of unit-length segments oriented horizontally or
vertically. The task is translation-invariant recognition of three segments forming a
"T" junction (e.g. sticks 1,2,3) or an "L" (e.g. sticks 5,6,7) amid extraneous noise
sticks. (b) Structure of network. Models occur at two levels. INA links are
shown for a "T". Each frame has three parameters: position x, y and orientation
θ. Also shown are some match and ina links. The bold lines highlight a possible
consistency rectangle. (c) Experimental result. The value of each dynamical
variable is displayed as the relative area of the shaded portion of a circle. Matrix
M_βj indicates low-level matches and M_αi indicates high-level matches. Grouping
of low-level to high-level frames is indicated by the ina matrix. The parameters of
the high-level frames are displayed in the matrix F_ip of linear analog neurons. (The
parameters of the low-level frames, held fixed, are not displayed.) The few neurons
circumscribed by a square, corresponding to correct matches for the main parts of
each model, are clamped to a value near unity. Shaded circles indicate the final
correct state.
4
Conclusion
Frameville provides opportunities for integrating all levels of vision in a uniform notation which yields analog neural networks. Low-level models such as fixed convolution filters just require analog arithmetic for frame parameters, which is provided.
High-level vision typically requires structural matching, also provided. Qualitatively
different models may be integrated by specifying their interactions, H_αβ.
Acknowledgements
We thank J. Utans, J. Ockerbloom and C. Garrett for the Frameville simulations.
References
[1] Dana Ballard. Cortical connections and parallel processing: structure and function.
Behavioral and Brain Sciences, vol 9:67-120, 1986.
[2] Harry G. Barrow and R. J. Popplestone. Relational descriptions in picture processing.
In D. Michie, editor, Machine Intelligence 6, Edinburgh University Press, 1971.
[3] Jerome A. Feldman, Mark A. Fanty, and Nigel H. Goddard. Computing with structured neural networks. IEEE Computer, 91, March 1988.
[4] Allen R. Hanson and E. M. Riseman. A methodology for the development of general
knowledge-based vision systems. In M. A. Arbib and A. R. Hanson, editors, Vision,
Brain, and Cooperative Computation, MIT Press, 1986.
[5] J. J. Hopfield. Personal communication. October 1984.
[6] J. J. Hopfield and D. W. Tank. 'Neural' computation of decisions in optimization
problems. Biological Cybernetics, vol. 52:141-152, 1985.
[7] Robert A. Hummel and S. W. Zucker. On the foundations of relaxation labeling
processes. IEEE Transactions on PAMI, vol. PAMI-5:267-287, May 1983.
[8] Marvin L. Minsky. A framework for representing knowledge. In P. H. Winston, editor,
The Psychology of Computer Vision, McGraw-Hill, 1975.
[9] Eric Mjolsness, Gene Gindi, and P. Anandan. Optimization in Model Matching and
Perceptual Organization: A First Look. Technical Report YALEU /DCS/RR-634,
Yale University, June 1988.
[10] Eric Mjolsness, Gene Gindi, and P. Anandan. Optimization in Model Matching and
Perceptual Organization. Neural Computation, to appear.
[11] John C. Platt and Alan H. Barr. Constraint methods for flexible models. Computer
Graphics, 22(4), August 1988. Proceedings of SIGGRAPH '88.
[12] Demetri Terzopoulos. Regularization of inverse problems involving discontinuities.
IEEE Transactions on PAMI, vol. PAMI-8:413-424, 1986.
[13] Christoph von der Malsburg and Elie Bienenstock. Statistical coding and short-term
synaptic plasticity: a scheme for knowledge representation in the brain. In Disordered
Systems and Biological Organization, pages 247-252, Springer-Verlag, 1986.
| 108 |@word polynomial:1 simulation:1 solid:1 carry:1 yaleu:1 initial:1 contains:2 must:1 john:1 plasticity:1 shape:1 cheap:1 discrimination:2 intelligence:2 parameterization:1 plane:6 short:1 pointer:5 provides:1 location:2 along:3 m7:1 consists:1 behavioral:1 recognizable:1 manner:1 introduce:1 expected:3 brain:3 preclude:1 becomes:2 mpj:1 provided:2 notation:1 what:2 interpreted:1 finding:1 transformation:2 act:3 exactly:1 platt:2 control:2 stick:4 grant:3 unit:1 appear:1 understood:1 local:3 vertically:1 interpolation:1 pami:4 might:1 dynamically:1 specifying:1 shaded:3 christoph:1 popplestone:2 elie:1 area:1 matching:16 integrating:1 restriction:1 imposed:1 starting:1 formulate:4 m2:1 rule:1 coordinate:3 hierarchy:7 construction:1 element:2 circumscribed:1 recognition:10 particularly:1 sparsely:1 database:2 cooperative:1 mjolsness:11 govern:1 complexity:2 dynamic:4 personal:1 ultimately:1 solving:1 segment:5 algebra:1 serve:1 upon:1 eric:3 efficiency:2 triangle:2 darpa:1 hopfield:8 siggraph:1 distinct:2 forced:1 describe:2 artificial:2 labeling:1 whose:1 encoded:2 widely:1 solve:1 valued:3 ability:1 favor:1 syntactic:2 itself:1 final:1 advantage:1 rr:1 interconnected:1 interaction:1 fanty:1 flexibility:1 description:3 competition:4 converges:1 object:8 stating:1 ij:1 lna:1 c:2 involves:2 indicate:2 correct:3 filter:1 disordered:1 unexploited:1 virtual:1 barr:2 require:2 preliminary:1 biological:3 extension:1 considered:1 great:2 achieves:1 applicable:1 vice:2 minimization:5 eti:1 mit:1 avoid:1 casting:1 derived:1 june:1 check:1 indicates:2 contrast:1 abstraction:5 typically:2 entire:1 integrated:1 spurious:1 oool:1 bienenstock:1 expressible:1 tank:6 among:1 orientation:4 flexible:2 favored:1 extraneous:1 development:1 constrained:2 special:1 field:1 f3:1 saving:1 hop:1 represents:1 lit:2 look:1 minimized:2 report:1 haven:3 few:4 meti:1 serious:1 hint:1 oriented:1 composed:1 simultaneously:1 ve:1 recognize:2 minsky:2 hummel:2 attempt:2 freedom:1 organization:9 activated:1 held:1 bundle:1 predefined:2 capable:1 necessary:1 unless:1 tree:1 continuing:1 initialized:1 circle:3 desired:1 increased:1 instance:2 cost:3 introducing:2 deviation:1 monomials:1 uniform:1 graphic:1 reported:2 nigel:1 fundamental:1 discipline:1 together:1 von:1 nm:1 amid:2 wing:6 converted:1 harry:1 bold:1 coding:1 satisfy:1 later:1 view:1 portion:2 parallel:1 minimize:1 oi:1 square:1 characteristic:1 yield:1 cybernetics:1 synaptic:1 inexact:2 involved:1 thereof:1 static:2 gain:1 knowledge:3 organized:1 garrett:1 actually:1 back:1 originally:1 follow:1 methodology:2 improved:1 formulation:2 just:3 jerome:1 expressive:1 indicated:1 regularization:1 illustrated:1 deal:1 attractive:1 during:1 noted:2 criterion:3 hill:1 occuring:1 allen:1 fj:1 reasoning:1 image:1 common:1 rotation:1 physical:1 winner:1 analog:7 expressing:2 versa:2 feldman:2 smoothness:1 unconstrained:1 trivially:1 consistency:2 zucker:2 surface:2 base:4 dictated:1 store:1 n00014:1 verlag:1 onr:1 binary:1 der:1 swept:1 seen:1 minimum:3 greater:1 anandan:6 additional:1 determine:2 paradigm:2 arithmetic:3 violate:1 multiple:1 isa:1 alan:1 technical:1 match:24 jet:3 plausibility:1 concerning:1 e1:1 va:1 controlled:1 involving:4 heterogeneous:1 vision:11 metric:3 represent:1 pyramid:1 achieved:1 addition:2 subject:1 flow:2 ee:1 near:1 structural:1 destroyed:1 xj:2 psychology:1 arbib:1 restrict:1 idea:1 sibling:3 specialization:6 expression:1 handled:1 penalty:2 e3:1 compositional:2 action:1 jj:1 useful:2 eai:1 involve:1 reduced:1 mai:3 utans:1 trapped:1 
correctly:1 vol:4 express:4 pj:1 rectangle:8 graph:12 relaxation:1 sum:2 compete:2 angle:1 parameterized:1 inverse:1 parsimonious:1 decision:1 ct:3 yale:4 correspondence:1 fan:1 winston:1 marvin:1 occur:2 constraint:11 scene:2 speed:1 department:3 designated:1 according:1 structured:1 march:1 suppressed:1 unity:2 restricted:1 invariant:2 fjp:3 equation:2 turn:1 count:1 junction:3 hierarchical:1 appropriate:2 indirectly:1 enforce:1 permanently:1 remaining:1 include:4 recognizes:1 opportunity:1 malsburg:2 goddard:1 objective:16 added:1 quantity:1 gindi:6 distance:2 link:3 thank:1 riseman:2 length:2 index:2 relationship:6 october:1 robert:1 potentially:1 design:2 perform:2 gated:1 neuron:6 convolution:1 barrow:2 displayed:3 relational:1 communication:1 frame:39 dc:1 lb:1 august:1 frameville:9 cast:1 required:1 namely:1 connection:3 hanson:3 discontinuity:2 parallelism:1 usually:1 inaij:6 pattern:1 dynamical:1 including:4 power:1 event:1 demanding:1 representing:2 scheme:2 picture:1 created:1 specializes:2 acknowledgement:1 checking:1 relative:2 afosr:1 ina:16 highlight:1 dana:1 h2:1 foundation:1 degree:1 consistent:4 editor:3 share:1 translation:3 supported:1 side:2 guide:1 terzopoulos:3 neighbor:2 correspondingly:1 fifth:1 sparse:3 f49620:1 cortical:1 world:2 qualitatively:1 transaction:2 fip:4 cutting:1 mcgraw:1 gene:3 xi:2 demanded:1 ballard:2 complex:2 main:4 whole:1 noise:2 n2:1 ff:1 position:2 explicit:1 governed:1 perceptual:11 clamped:1 specific:1 grouping:7 incorporating:1 sparseness:1 demand:1 mildly:1 suited:1 cx:3 simply:1 forming:1 visual:8 horizontally:1 expressed:1 springer:1 prop:2 slot:1 formulated:1 absence:1 determined:3 acting:1 called:1 oval:1 invariance:2 experimental:2 la:1 m3:1 unfavorable:1 selectively:1 mark:1 meant:1 |
91 | 1,080 | Symplectic Nonlinear Component
Analysis
Lucas C. Parra
Siemens Corporate Research
755 College Road East, Princeton, NJ 08540
lucas@scr.siemens.com
Abstract
Statistically independent features can be extracted by finding a factorial representation of a signal distribution. Principal Component
Analysis (PCA) accomplishes this for linearly correlated and Gaussian distributed signals. Independent Component Analysis (ICA),
formalized by Comon (1994), extracts features in the case of linearly statistically dependent but not necessarily Gaussian distributed
signals. Nonlinear Component Analysis finally should find a factorial representation for nonlinearly statistically dependent
signals. This paper proposes for this task a novel feed-forward,
information conserving, nonlinear map - the explicit symplectic
transformations. It also solves the problem of non-Gaussian output
distributions by considering single coordinate higher order statistics.
1
Introduction
In previous papers Deco and Brauer (1994) and Parra, Deco, and Miesbach (1995)
suggest volume conserving transformations and factorization as the key elements
for a nonlinear version of Independent Component Analysis . As a general class
of volume conserving transformations Parra et al. (1995) propose the symplectic
transformation . It was defined by an implicit nonlinear equation, which leads to a
complex relaxation procedure for the function recall. In this paper an explicit form
of the symplectic map is proposed, overcoming thus the computational problems.
In order to correctly measure the factorization criterion for non-Gaussian output
distributions, higher order statistics has to be considered. Comon (1994) includes
in the linear case higher order cumulants of the output distribution. Deco and
Brauer (1994) consider multi-variate, higher order moments and use them in the
case of nonlinear volume conserving transformations. But the calculation of multicoordinate higher moments is computationally expensive.
The factorization criterion for statistical independence can be expressed in terms of
minimal mutual information. Considering only volume conserving transformations
allows one to concentrate on single coordinate statistics, which leads to an important
reduction of computational complexity. So far, this approach (Deco & Schurman,
1994; Parra et al., 1995) has been restricted to second order statistics. The present
paper discusses the use of higher order cumulants for the estimation of the single
coordinate output distributions. The single coordinate entropies measured by the
proposed technique match the entropies of the sampled data more accurately. This
leads in turn to better factorization results.
2
Statistical Independence
More general than the decorrelation used in PCA, the goal is to extract statistically
independent features from a signal distribution p(x). We look for a deterministic transformation on ℝ^n, y = f(x), which generates a factorial representation
p(y) = Π_i p(y_i), or at least a representation where the individual coordinates P(y_i)
of the output variable y are "as factorial as possible". This can be accomplished
by minimizing the mutual information MI[P(y)].
    0 ≤ MI[P(y)] = Σ_{i=1}^{n} H[P(y_i)] − H[P(y)],        (1)

since MI[P(y)] = 0 holds if p(y) is factorial. The mutual information can be used
as a measure of "independence". The entropies H in the definition (1) are defined
as usual by H[P(y)] = −∫_{−∞}^{∞} p(y) ln p(y) dy.
As in linear PCA we select volume conserving transformations, but now without
restricting ourselves to linearity. In the noise-free case of reversible transformations
volume conservation implies conservation of entropy from the input x to the output
y, i.e. H[P(y)] = H[P(x)] = const (see Papoulis, 1991). The minimization of mutual
information (1) reduces then to the minimization of the single coordinate output
entropies H[P(Yi)]. This substantially simplifies the complexity of the problem,
since no multi-coordinate statistics is required.
2.1
Measuring the Entropy with Cumulants
With an upper bound minimization criterion the task of measuring entropies can
be avoided (Parra et al., 1995):

    H[P(y_i)] ≤ ½ ln(2πe σ_i²)        (2)
Figure 1: LEFT: Dotted line: exponential distribution with additive Gaussian noise
sampled with 1000 data points. (noise-variance/decay-constant = 0.2). Dashed
line: Gaussian approximation equivalent to the Edgeworth approximation to second
order. Solid line: Edgeworth approximation including terms up to fourth order.
RIGHT: Structure of the volume conserving explicit symplectic map.
The minimization of the individual output coordinate entropies H[P(y_i)] simplifies
to the minimization of the output variances σ_i. For the validity of that approach it is
crucial that the map y = f(x) transforms the arbitrary input distribution p(x) into
a Gaussian output distribution. But volume conserving and continuous maps can
not transform arbitrary distributions into Gaussians. To overcome this problem one
includes statistics higher than second order in the optimization criterion.
Comon (1994) suggests using the Edgeworth expansion of a probability distribution. This leads to an analytic expression of the entropy in terms of measurable
higher order cumulants. Edgeworth expands the multiplicative correction to the
best Gaussian approximation of the distribution in the orthonormal basis of Hermite polynomials h_r(y). The expansion coefficients are basically given by the cumulants c_r of the distribution p(y). The Edgeworth expansion reads, for a zero-mean
distribution with variance σ² (see Kendall & Stuart, 1969),

    p(y) = (1/(√(2π) σ)) e^{−y²/(2σ²)} f(y)        (3)
Note that by truncating this expansion at a certain order, we obtain an approximation p_app(y), which is not strictly positive. Figure 1, left shows a sampled
exponential distribution with additive Gaussian noise.
By cutting expansion (3) at fourth order, and further expanding the logarithm in
definition of entropy up to sixth order , Comon (1994) approximates the entropy by,
    H[P(y)]_app ≈ ½ ln(2πe) + ln σ − (1/12) c₃²/σ⁶ − (1/48) c₄²/σ⁸ − (7/48) c₃⁴/σ¹² + (1/8) c₃² c₄/σ¹⁰        (4)
We suggest using this expression to minimize the single coordinate entropies in the
definition of the mutual information (1).
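For illustration, the following numpy sketch evaluates this cumulant-based approximation on a sample, using simple moment estimates of the third and fourth cumulants; the helper and its estimators are ours, and the coefficients mirror Equation (4) as reconstructed above.

```python
import numpy as np

def entropy_comon(y):
    """Fourth-order cumulant approximation of the entropy, following (4)."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    sigma2 = y.var()
    sigma = np.sqrt(sigma2)
    c3 = np.mean(y ** 3)                      # third cumulant
    c4 = np.mean(y ** 4) - 3.0 * sigma2 ** 2  # fourth cumulant
    return (0.5 * np.log(2 * np.pi * np.e) + np.log(sigma)
            - c3**2 / (12 * sigma**6)
            - c4**2 / (48 * sigma**8)
            - 7 * c3**4 / (48 * sigma**12)
            + c3**2 * c4 / (8 * sigma**10))

rng = np.random.default_rng(0)
# For a unit-variance Gaussian sample the result is close to 0.5*ln(2*pi*e) ~ 1.419.
print(entropy_comon(rng.normal(size=100_000)))
```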
2.2
Measuring the Entropy by Estimating an Approximation
Note that (4) could only be obtained by truncating the expansion (3). It is therefore limited to fourth order statistics, which might not be enough for a satisfactory
approximation. Besides, the additional approximation of the logarithm is accurate
only for small corrections to the best Gaussian approximation, i.e. for f(y) ≈ 1.
For distributions with non-Gaussian tails the correction terms might be rather large
and even negative as noted above. We therefore suggest, alternatively, to measure
the entropy by estimating the logarithm of the approximated distribution ln p_app(y)
with the given data points y_v and using Edgeworth approximation (3) for p_app(y),
    H[P(y)] ≈ −(1/N) Σ_{v=1}^{N} ln p_app(y_v) = const + ln σ − (1/N) Σ_{v=1}^{N} ln f(y_v)        (5)
Furthermore, we suggest correcting the truncated expansion p_app by setting
f_app(y) → 0 for all f_app(y) < 0. For the entropy measurement (5) there is in
principle no limitation to any specific order.
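A sketch of estimate (5) on sampled data is given below. The explicit fourth-order form of the correction factor f(y) (Hermite polynomials weighted by the sample cumulants) is our assumption based on the standard Gram-Charlier/Edgeworth series; negative values of f are clipped, as suggested in the text.

```python
import numpy as np

def entropy_estimate(y, eps=1e-12):
    """Estimate (5): H ~ -(1/N) sum_v ln p_app(y_v), with a truncated
    Edgeworth-type correction f(y) clipped below at eps (f_app -> 0 for f < 0,
    kept slightly positive here so the log stays finite)."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    sigma = y.std()
    u = y / sigma
    k3 = np.mean(y ** 3) / sigma**3           # skewness
    k4 = np.mean(y ** 4) / sigma**4 - 3.0     # excess kurtosis
    He3 = u**3 - 3 * u
    He4 = u**4 - 6 * u**2 + 3
    He6 = u**6 - 15 * u**4 + 45 * u**2 - 15
    f = 1 + k3 / 6 * He3 + k4 / 24 * He4 + k3**2 / 72 * He6
    f = np.clip(f, eps, None)
    log_gauss = -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * u**2
    return float(-np.mean(log_gauss + np.log(f)))

rng = np.random.default_rng(1)
print(entropy_estimate(rng.normal(size=100_000)))            # ~ 1.419 for a Gaussian
print(entropy_estimate(rng.uniform(-0.5, 0.5, size=100_000)))
```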
In table 1 the different measures of entropy are compared. The values in the row
labeled 'partition' are measured by counting the numbers n(i) of data points falling
in equidistant intervals i of width Δy and summing −p(i)Δy ln p(i) over all intervals,
with p(i)Δy = n(i)/N. This gives good results compared to the theoretical values
only because of the relatively large sampling size. These values are presented here
in order to have a reliable estimate for the case of the exponential distribution,
where cumulant methods tend to fail.
The results for the exponential distribution show the difficulty of the measurement
proposed by Comon, whereas the estimation measurement given by equation (5) is
stable even when considering (for this case) unreliable 5th and 6th order cumulants.
The results for the symmetric-triangular and uniform distribution demonstrate the
insensitivity of the Gaussian upper bound for the example of figure 2. A uniform
square distribution is rotated by an angle α. On the abscissa and ordinate a
triangular or uniform distribution is observed for the angles α = π/4
or α = 0, respectively. The approximation of the single coordinate entropies with
a Gaussian measure is the same in both cases, whereas measurements including
higher order statistics correctly detect the minimal entropy (at fixed total information)
for the uniform distribution at α = 0.
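This insensitivity of the second-order measure, and the sensitivity of a simple fourth-order statistic, can be checked directly on synthetic data; the little experiment below is ours and is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
square = rng.uniform(-0.5, 0.5, size=(100_000, 2))   # axis-aligned uniform square

def rotate(x, alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return x @ np.array([[c, -s], [s, c]]).T

for alpha in (0.0, np.pi / 4):
    y = rotate(square, alpha)
    var = y.var(axis=0)                              # 2nd order: identical for both angles
    kurt = (y**4).mean(axis=0) / var**2 - 3.0        # 4th order: distinguishes the angles
    print(f"alpha={alpha:.3f}  variances={var.round(3)}  excess kurtosis={kurt.round(2)}")
```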
3
Explicit Symplectic Transformation
Different ways of realizing a volume conserving transformation that guarantees
H[P(y)] = H[P(x)] have been proposed (Deco & Schurman, 1994; Parra et al.,
Measured entropy of          Gauss         uniform        triangular     exponential
sampled distributions                                     symmetric      + Gauss noise
partition                    1.35 ± .02    .024 ± .006    .14 ± .02      1.31 ± .03
Gaussian upper bound (2)     1.415 ± .02   .18 ± .016     .18 ± .02      1.53 ± .04
Comon, eq. (4)               1.414 ± .02   .14 ± .015     .17 ± .02      3.0 ± 2.5
Estimate (5) - 4th order     1.414 ± .02   .13 ± .015     .17 ± .02      1.39 ± .05
Estimate (5) - 6th order     1.414 ± .02   .092 ± .001    .16 ± .02      1.3 ± .5
theoretical value            1.419         .0             .153           —
Table 1: Entropy values for different distributions sampled with N = 1000 data
points and the different estimation methods explained in the text . The standard
deviations are obtained by multiple repetition of the experiment.
1995). A general class of volume conserving transformations are the symplectic
maps (Abraham & Marsden, 1978). An interesting and for our purpose important
fact is that any symplectic transformation can be expressed in terms of a scalar
function. And in turn any scalar function defines a symplectic map. In (Parra
et al., 1995) a non-reflecting symplectic transformation has been presented. But
its implicit definition results in the need of solving a nonlinear equation for each
data point. This leads to time consuming computations which limit the applications
in practice to low dimensional problems (n ≈ 10). In this work reflecting symplectic transformations with an explicit definition are used to define a "feed-forward"
volume conserving map. The input and output space is divided into two partitions
x = (x₁, x₂) and y = (y₁, y₂), with x₁, x₂, y₁, y₂ ∈ ℝ^{n/2}.
(6)
The structure of this symplectic map is represented in figure 1, right. Two scalar
functions P : ℝ^{n/2} → ℝ and Q : ℝ^{n/2} → ℝ can be chosen arbitrarily. Note that
for quadratic functions equation (6) represents a linear transformation. In order
to have a general transformation we introduce for each of these scalar functions a
3-layer perceptron with nonlinear hidden units and a single linear output unit:
(7)
The scalar functions P and Q are parameterized by the network parameters
w₁, w₂ ∈ ℝ^m and W₁, W₂ ∈ ℝ^{m×n/2}. The hidden-unit nonlinear activation
function g applies to each component of the vectors W₁y₁ and W₂x₂ respectively.
Because of the structure of equation (6) the output coordinates y₁ depend only additively on the input coordinates x₁. To obtain a more general nonlinear dependence
a second symplectic layer has to be added.
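The paper's equation (6) did not survive extraction here, so the sketch below shows a generic volume-conserving two-partition ("feed-forward" coupling) map with the properties discussed in the text: y₁ depends only additively on x₁, each half-step has a unit-determinant triangular Jacobian, and the couplings are gradients of scalar one-hidden-layer perceptrons. It is our illustration of the idea, not necessarily the authors' exact map.

```python
import numpy as np

def scalar_net(w, W):
    """Scalar function u -> w^T tanh(W u), a 3-layer perceptron with one
    linear output unit, and its gradient with respect to u."""
    def grad(u):
        h = np.tanh(W @ u)
        return W.T @ (w * (1.0 - h**2))   # d/du of w^T tanh(W u)
    return grad

def coupling_map(x1, x2, gradP, gradQ):
    """Generic volume-conserving two-partition map:
       y1 = x1 + gradP(x2),  y2 = x2 + gradQ(y1).
    Each half-step has a triangular Jacobian with unit determinant,
    so the composition conserves volume (and hence entropy)."""
    y1 = x1 + gradP(x2)
    y2 = x2 + gradQ(y1)
    return y1, y2

rng = np.random.default_rng(3)
n, m = 2, 6                               # n/2 = 1 coordinate per partition
gradP = scalar_net(rng.normal(size=m), rng.normal(size=(m, n // 2)))
gradQ = scalar_net(rng.normal(size=m), rng.normal(size=(m, n // 2)))
y1, y2 = coupling_map(np.array([0.3]), np.array([-0.7]), gradP, gradQ)
print(y1, y2)
```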
To obtain factorial distributions the parameters of the map have to be trained.
The approximations of the single coordinate entropies (4) or (5) are inserted in the
mutual information optimization criterion (1). These approximations are expressed
through moments in terms of the measured output data points. Therefore , the
Figure 2: Sampled 2-dimensional square uniform distribution rotated by π/4. Solid
lines represent the directions found by any of the higher order techniques explained
in the text. Dashed lines represent directions calculated by linear PCA. (This result
is arbitrary and varies with noise.)
gradient of these expressions with respect to the parameters of the map can be computed
in principle. For that, different kinds of averages need to be computed.
Even though the computational complexity is not substantially increased compared
with the efficient minimum variance criterion (2), the complexity of the algorithm
increases considerably. Therefore, we applied an optimization algorithm that does
not require any gradient information. The simple stochastic and parallel update
algorithm ALOPEX (Unnikrishnan & Venugopal, 1994) was used.
4
Experiments
As explained above, finding the correct statistically independent directions of a rotated two-dimensional uniform distribution causes problems for techniques which
include only second order statistics. The statistically independent coordinates are simply the axes parallel to the edges of the distribution (see figure 2). A rotation, i.e.
a linear transformation, suffices for this task. The covariance matrix of the data is
diagonal for any rotation of the square distribution and, hence, does not provide
any information about the correct orientation of the square. It is well known that,
in the case of non-Gaussian distributions, PCA fails to find the statistically independent coordinates. Similarly, the Gaussian upper bound technique (2) is not capable
of minimizing the mutual information in this case. Instead, with any one of the higher
order criteria explained in the previous section one finds the appropriate coordinates
for any linearly transformed multi-dimensional uniform distribution. This has been
observed empirically for a series of setups. The symplectic map was restricted in
these experiments to linearity by using quadratic scalar functions.
The second example shows that the proposed technique in fact finds nonlinear
relations between the input coordinates. A one-dimensional signal distributed
according to the distribution of figure 1 was nonlinearly transformed into a two-dimensional
signal and corrupted with additive noise, leading to the distribution
shown in figure 3, left. The task of finding statistically independent coordinates has
been tackled by an explicit symplectic transformation with n = 2 and m = 6.
In figure 3 the different results for the optimization according to the Gaussian
upper bound criterion (2) and the approximated entropy criterion (5) are shown.
Considering higher order statistics clearly improves the result by finding
a better representation of the nonlinear dependency.

Figure 3: Symplectic map trained with 4th and 2nd order statistics corresponding
to the equations (5) and (2) respectively. Left: input distribution. The line at
the center of the distribution gives the nonlinearly transformed noiseless signal
distributed according to the distribution shown in figure 1. Center and Right: Output
distribution of the symplectic map corresponding to the 4th order (right) and 2nd
order (center) criterion.
References
Abraham, R., & Marsden, J. (1978). Foundations of Mechanics. The Benjamin/Cummings Publishing Company, Inc., London.
Comon, P. (1994). Independent component analysis, a new concept. Signal Processing, 36, 287-314.
Deco, G., & Brauer, W. (1994). Higher Order Statistical Decorrelation by Volume
Conserving Nonlinear Maps. Neural Networks, submitted.
Deco, G., & Schurman, B. (1994). Learning Time Series Evolution by Unsupervised
Extraction of Correlations. Physical Review E, submitted.
Kendall, M. G., & Stuart, A. (1969). The Advanced Theory of Statistics (3rd edition),
Vol. 1. Charles Griffin and Company Limited, London.
Papoulis, A. (1991). Probability, Random Variables, and Stochastic Processes. Third
Edition, McGraw-Hill, New York.
Parra, L., Deco, G., & Miesbach, S. (1995).
Redundancy reduction with
information-preserving nonlinear maps. Network, 6(1), 61-72.
Unnikrishnan, K., P., & Venugopal, K., P. (1994). Alopex: A Correlation-Based
Learning Algorithm for Feedforward and Recurrent Neural Networks. Neural
Computation, 6(3), 469- 490.
| 1080 |@word version:1 polynomial:1 nd:2 additively:1 covariance:1 solid:2 papoulis:2 reduction:2 moment:3 series:2 com:1 activation:1 additive:3 partition:3 analytic:1 update:1 realizing:1 hermite:1 introduce:1 ica:1 abscissa:1 mechanic:1 multi:3 company:2 considering:4 estimating:2 linearity:1 kind:1 substantially:2 finding:4 transformation:19 nj:1 guarantee:1 ti:1 expands:1 rm:2 unit:3 positive:1 limit:1 yd:1 might:2 suggests:1 factorization:4 limited:2 statistically:1 practice:1 edgeworth:7 procedure:1 road:1 suggest:4 measurable:1 equivalent:1 map:16 deterministic:1 center:3 truncating:2 formalized:1 orthonormal:1 ity:1 coordinate:18 element:1 expensive:1 approximated:2 labeled:1 observed:2 inserted:1 complexity:4 brauer:3 trained:2 depend:1 solving:1 basis:1 represented:1 london:2 triangular:3 statistic:12 transform:1 obviously:1 propose:1 j2:1 conserving:11 rotated:3 recurrent:1 measured:3 eq:1 solves:1 implies:1 concentrate:1 direction:3 correct:3 stochastic:2 require:1 suffices:1 symplectic:20 parra:11 strictly:1 correction:3 hold:1 considered:1 purpose:1 estimation:3 wl:2 repetition:1 minimization:5 gaussian:17 rather:1 canst:2 ax:1 unnikrishnan:2 hcr:1 detect:1 dependent:2 hidden:2 relation:1 transformed:3 orientation:1 lucas:2 proposes:1 mutual:7 extraction:1 sampling:1 represents:1 stuart:2 look:1 unsupervised:1 individual:2 ourselves:1 pc:1 accurate:1 edge:1 capable:1 logarithm:3 theoretical:2 minimal:2 increased:1 cumulants:6 measuring:3 deviation:1 uniform:8 dependency:1 varies:1 corrupted:1 considerably:1 yl:3 squared:3 deco:8 leading:1 includes:2 coefficient:1 matter:1 inc:1 multiplicative:1 kendall:2 yv:3 parallel:2 minimize:2 square:2 variance:4 ofthe:1 accurately:1 basically:1 app:1 submitted:2 definition:5 sixth:1 sampled:6 recall:1 improves:1 reflecting:2 feed:2 higher:13 though:1 furthermore:1 implicit:2 correlation:2 nonlinear:19 reversible:1 defines:1 validity:1 concept:1 y2:2 evolution:1 hence:1 read:1 symmetric:2 satisfactory:1 width:1 noted:1 criterion:10 hill:1 demonstrate:1 scr:1 novel:1 dy1:1 charles:1 rotation:2 empirically:1 physical:1 volume:12 tail:1 approximates:1 measurement:4 ai:3 similarly:1 stable:1 certain:1 arbitrarily:1 yi:4 accomplished:1 lnp:2 preserving:1 minimum:1 additional:1 accomplishes:1 signal:9 dashed:2 ii:1 multiple:1 corporate:1 reduces:1 match:1 calculation:1 lin:1 divided:1 marsden:2 noiseless:1 represent:2 whereas:2 interval:2 crucial:1 w2:1 tend:1 counting:1 feedforward:1 enough:1 independence:3 variate:1 equidistant:1 simplifies:2 expression:3 pca:5 york:1 cause:1 factorial:6 transforms:1 correctly:2 ccr:1 vol:1 oop:1 key:1 redundancy:1 coman:1 falling:1 relaxation:1 angle:2 parameterized:1 fourth:3 griffin:1 dy:1 bound:5 layer:2 tackled:1 quadratic:1 x2:2 generates:1 anyone:1 relatively:1 according:3 fey:1 comon:6 explained:4 restricted:2 ln:1 equation:6 discus:1 turn:2 fail:1 gaussians:1 yare:1 appropriate:1 include:1 publishing:1 added:1 dependence:1 usual:1 diagonal:1 gradient:2 besides:1 minimizing:1 setup:1 negative:1 upper:5 truncated:1 y1:1 rn:1 arbitrary:3 overcoming:1 ordinate:1 nonlinearly:1 required:1 c4:1 including:2 reliable:1 decorrelation:2 difficulty:1 advanced:1 extract:2 text:2 review:1 interesting:1 limitation:1 foundation:1 dq:1 principle:2 row:1 free:1 fapp:2 perceptron:1 distributed:5 overcome:1 calculated:1 forward:2 avoided:1 far:1 mcgraw:1 cutting:1 unreliable:1 summing:1 conservation:2 consuming:1 alternatively:1 continuous:1 table:2 expanding:1 expansion:7 necessarily:1 complex:1 venugopal:2 
linearly:1 abraham:2 noise:7 edition:2 fails:1 explicit:6 exponential:5 xl:3 third:1 specific:1 decay:1 restricting:1 entropy:22 simply:1 expressed:3 scalar:6 applies:1 extracted:1 goal:1 principal:1 total:1 gauss:2 siemens:2 east:1 select:1 college:1 cumulant:1 miesbach:2 princeton:1 correlated:1 |
92 | 1,081 | Using the Future to "Sort Out" the
Present: Rankprop and Multitask
Learning for Medical Risk Evaluation
Rich Caruana, Shumeet Baluja, and Tom Mitchell
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
(caruana, baluja, mitchell)@cs.cmu.edu
Abstract
A patient visits the doctor; the doctor reviews the patient's history,
asks questions, makes basic measurements (blood pressure, .. .), and
prescribes tests or treatment . The prescribed course of action is
based on an assessment of patient risk-patients at higher risk are
given more and faster attention. It is also sequential- it is too
expensive to immediately order all tests which might later be of
value . This paper presents two methods that together improve
the accuracy of backprop nets on a pneumonia risk assessment
problem by 10-50%. Rankprop improves on backpropagation with
sum of squares error in ranking patients by risk. Multitask learning
takes advantage of future lab tests available in the training set, but
not available in practice when predictions must be made. Both
methods are broadly applicable.
1
Background
There are 3,000,000 cases of pneumonia each year in the U.S., 900,000 of which
are admitted to the hospital for treatment and testing. Most pneumonia patients
recover given appropriate treatment, and many can be treated effectively without
hospitalization. Nonetheless, pneumonia is serious: 100,000 of those hospitalized
for pneumonia die from it, and many more are at elevated risk if not hospitalized.
1.1
The Problem
A primary goal of medical decision making is to accurately, swiftly, and economically identify patients at high risk from diseases like pneumonia so they may be
hospitalized to receive aggressive testing and treatment; patients at low risk may be
more comfortably, safely, and economically treated at home. Note that the diagnosis
of pneumonia has already been made; the goal is not to determine the illness, but
how much risk the illness poses to the patient. Some of the most useful tests for doing this require hospitalization and will be available only if preliminary assessment
indicates it is warranted. Low risk patients can safely be treated as outpatients and
can often be identified using measurements made prior to admission .
The problem considered in this paper is to learn to rank pneumonia patients according to their probability of mortality. We present two learning methods that
combined outperform standard backpropagation by 10-50% in identifying groups
of patients with least mortality risk . These methods are applicable to domains
where the goal is to rank instances according to a probability function and where
useful attributes do not become available until after the prediction must be made.
In addition to medical decision making, this class includes problems as diverse as
investment analysis in financial markets and autonomous vehicle navigation .
1.2
The Pneumonia Database
The Medis Pneumonia Database [6] contains 14,199 pneumonia cases collected from
78 hospitals in 1989. Each patient in the database was diagnosed with pneumonia
and hospitalized. 65 measurements are available for most patients. These include
30 basic measurements typically acquired prior to hospitalization such as age, sex,
and pulse, and 35 lab results such as blood counts or gases not available until after
hospitalization. The database indicates how long each patient was hospitalized and
whether the patient lived or died. 1,542 (10.9%) of the patients died.
1.3
The Performance Criterion
The Medis database indicates which patients lived or died. The most useful decision
aid for this problem would predict which patients will live or die. But this is too
difficult. In practice, the best that can be achieved is to estimate a probability
of death (POD) from the observed symptoms. In fact, it is sufficient to learn to
rank patients by POD so lower risk patients can be discriminated from higher risk
patients. The patients at least risk may then be considered for outpatient care.
The performance criterion used by others working with the Medis database [4] is the
accuracy with which one can select a prespecified fraction of the patient population
that do not die. For example, given a population of 10,000 patients, find the 20%
of this population at least risk. To do this we learn a risk model and a threshold
for this model that allows 20% of the population (2000 patients) to fall below it. If
30 of the 2000 patients below this threshold died, the error rate is 30/2000 = 0.015.
We say that the error rate for FOP 0.20 is 0.015 for this model ("FOP" stands for
fraction of population). In this paper we consider FOPs 0.1, 0.2, 0.3, 0.4, and 0.5 .
Our goal is to learn models and model thresholds, such that the error rate at each
FOP is minimized. Models with acceptably low error rates might then be employed
to help determine which patients do not require hospitalization.
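A minimal sketch (our own code and variable names) of how such an FOP error rate can be computed from model risk scores:

```python
import numpy as np

def fop_error_rate(halt_scores, test_scores, test_died, fop):
    """Pick the threshold that lets `fop` of the halt set fall below it,
    then report the death rate among test patients below that threshold."""
    threshold = np.quantile(halt_scores, fop)
    below = test_scores <= threshold
    return test_died[below].mean(), below.sum()

# Toy example with random scores and outcomes at FOP 0.2.
rng = np.random.default_rng(0)
halt_scores = rng.random(1000)
test_scores = rng.random(12000)
test_died = rng.random(12000) < 0.109          # ~10.9% overall mortality
err, n_below = fop_error_rate(halt_scores, test_scores, test_died, fop=0.2)
print(f"FOP 0.2: {n_below} patients below threshold, error rate {err:.4f}")
```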
2
Methodology
The Medis database is unusually large, with over 14K training patterns. Because we
are interested in developing methods that will be effective in other domains where
databases of this size are not available, we perform our experiments using small
training sets randomly drawn from the 14K patterns and use the remaining patterns
as test sets. For each method we run ten trials. For each trial we randomly sample
2K patterns from the 14K pool for training. The 2K training sample is further split
into a 1K backprop set used to train the net and a 1K halting set used to determine
when to halt training.¹ Once the network is trained, we run the 1K halt set through
the model again to find the threshold that passes 10%,20%,30%,40%, and 50% of
the halt set. The performance ofthe model is evaluated on the 12K unused patterns
by determining how many of the cases that fall below threshold in this test set die.
This is the error rate for that model at that FOP.
3
The Traditional Approach: SSE on 0/1 Targets
Sections 3-5 present three neural net approaches to pneumonia risk prediction. This
section presents the standard approach: using backpropagation on sum of squares
errors (SSE) with 0=lives/1=dies to predict mortality. This works well if early
stopping is used to prevent overfitting. Section 4 presents rankprop (SSE on ranks
instead of 0/1 targets). Rankprop, which learns to rank patients by risk instead
of directly predicting mortality, works better. Section 5 uses multitask learning
(MTL) to benefit from tests in the database that in practice will not be available
until after deciding to admit the patient. Rankprop with MTL works even better.
The straightforward approach to this problem is to use backprop to train a net to
learn to predict which patients live or die, and then use the real-valued predictions of
this net to sort patients by risk. This net has 30 inputs, 1 for each of the observed
patient measurements, a hidden layer with 8 units 2 , and a single output trained
with O=lived, 1=died. 3 Given an infinite training set, a net trained this way should
learn to predict the probability of death for each patient, not which patients live or
die. In the real world, however, where we rarely have an infinite number of training
cases, a net will overtrain and begin to learn a very nonlinear function that outputs
values near 0/1 for cases in the training set, but which does not generalize well. In
this domain it is critical to use early stopping to halt training before this happens .
Table 1 shows the error rates of nets trained with SSE on 0/1 targets for the five
FOPs. Each entry is the mean of ten trials. The first entry in the table indicates
that on average, in the 10% of the test population predicted by the nets to be at
least risk, 1.4% died. We do not know the best achievable error rates for this data.
Table 1: Error Rates of SSE on 0/1 Targets

FOP          0.1      0.2      0.3      0.4      0.5
Error Rate   .0140    .0190    .0252    .0340    .0421
4
Using Rankprop to Rank Cases by Risk
Because the goal is to find the fraction of the population least likely to die, it is
sufficient just to learn to rank patients by risk. Rankprop learns to rank patients
without learning to predict mortality. "Rankprop" is short for "backpropagation
using sum of squares errors on estimated ranks". The basic idea is to sort the
training set using the target values, scale the ranks from this sort (we scale uniformly
to [0.25,0.75] with sigmoid output units), and use the scaled ranks as target values
for standard backprop with SSE instead of the 0/1 values in the database.
¹Performance at different FOPs sometimes peaks at different epochs. We halt training
separately for each FOP in all the experiments to insure this does not confound results.
²To make comparisons between methods fair, we first found hidden layer sizes and
learning parameters that performed well for each method.
³Different representations such as 0.15/0.85 and different error metrics such as cross
entropy did not perform better than SSE on 0/1 targets.
Ideally, we'd rank the training set by the true probabilities of death. Unfortunately,
all we know is which patients lived or died. In the Medis database, 89% of the target
values are 0's and 11% are 1's. There are many possible sorts consistent with these
values. Which sort should backprop try to fit? It is the large number of possible
sorts of the training set that makes backpropagating ranks challenging. Rankprop
solves this problem by using the net model as it is being learned to order the training
set when target values are tied. In this database, where there are many ties because
there are only two target values, finding a proper ranking of the training set is a
serious problem. Rankprop learns to adjust the target ranks of the training set at
the same time it is learning to predict ranks from that training set.
How does rankprop do this? Rankprop alternates between rank passes and backprop
passes. On the rank pass it records the output of the net for each training pattern.
It then sorts the training patterns using the target values (0 or 1 in the Medis
database), but using the network's predictions for each pattern as a secondary
sort key to break ties. The basic idea is to find the legal rank of the target values (0
or 1) maximally consistent with the ranks the current model predicts. This closest
match ranking of the target values is then used to define the target ranks used on
the next backprop pass through the training set. Rankprop's pseudo code is:
foreach epoch do {
foreach pattern do {
network_output[pattern] = forward_pass(pattern)}
target_rank = sort_and_scale_patterns(target_value, network_output)
foreach pattern do {
backprop(target_rank[pattern] - network_output[pattern])}}
where "sorkand..scale_patterns" sorts and ranks the training patterns using the sort
keys specified in its arguments , the second being used to break ties in the first.
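For concreteness, the sketch below renders the alternation in numpy with a single sigmoid output on a linear model; the [0.25, 0.75] scaling and the tie-breaking by current network outputs follow the description above, while the model class, learning rate, and toy data are our own choices.

```python
import numpy as np

def sort_and_scale_patterns(targets, outputs, lo=0.25, hi=0.75):
    """Rank patterns by (target, current network output) and scale the
    ranks uniformly into [lo, hi] for use as backprop targets."""
    order = np.lexsort((outputs, targets))   # outputs break ties in targets
    ranks = np.empty(len(targets))
    ranks[order] = np.arange(len(targets))
    return lo + (hi - lo) * ranks / (len(targets) - 1)

def rankprop_train(X, y, epochs=200, lr=0.1):
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # forward pass
        target_rank = sort_and_scale_patterns(y, out)    # rank pass
        err = out - target_rank                          # SSE gradient
        grad = err * out * (1.0 - out)
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy data: mortality risk grows with the first input feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (rng.random(500) < 1 / (1 + np.exp(-2 * X[:, 0]))).astype(float)
w, b = rankprop_train(X, y)
print("weight on the informative feature:", w[0])
```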
Table 2 shows the mean rankprop performance using nets with 8 hidden units.
The bottom row shows improvements over SSE on 0/1 targets. All differences are
statistically significant. See Section 7.1 for discussion of why rankprop works better.
Table 2: Error Rates of Rankprop and Improvement Over Standard Backprop
FOP
Error Rate
% Change
5
Learning From the Future with Multitask Learning
The Medis database contains results from 36 lab tests that will be available only
after patients are hospitalized. Unfortunately, these results will not be available
when the model is used because the patients will not yet have been admitted . Multitask learning (MTL) improves generalization by having a learner simultaneously
learn sets of related tasks with a shared representation; what is learned for each
task might benefit other tasks. In this application , we use MTL to benefit from the
future lab results. The extra lab values are used as extra backprop outputs as shown
in Figure 1. The extra outputs bias the shared hidden layer towards representations
that better capture important features of the domain. See [2][3][9] for details about
MTL and [1] for other ways of using extra outputs to bias learning.
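A minimal numpy sketch of the idea, with one shared tanh hidden layer, one main output, and a block of extra linear outputs trained on the future lab values; sizes and names are illustrative, and the linear outputs and plain SSE weighting are our simplifications.

```python
import numpy as np

def mtl_step(X, y_main, Y_labs, params, lr=0.05, lam=1.0):
    """One gradient step on SSE(main output) + lam * SSE(extra lab outputs)
    for a net with a shared tanh hidden layer and linear outputs."""
    W1, b1, w_main, b_main, W_labs, b_labs = params
    H = np.tanh(X @ W1 + b1)                         # shared hidden layer
    out_main = H @ w_main + b_main
    out_labs = H @ W_labs + b_labs
    d_main = (out_main - y_main)[:, None]            # N x 1
    d_labs = lam * (out_labs - Y_labs)               # N x L
    # Backprop into the shared hidden layer from all outputs.
    dH = (d_main * w_main[None, :] + d_labs @ W_labs.T) * (1 - H**2)
    n = len(X)
    params[0] -= lr * X.T @ dH / n                   # W1
    params[1] -= lr * dH.mean(axis=0)                # b1
    params[2] -= lr * (H * d_main).sum(axis=0) / n   # w_main
    params[3] -= lr * d_main.mean()                  # b_main
    params[4] -= lr * H.T @ d_labs / n               # W_labs
    params[5] -= lr * d_labs.mean(axis=0)            # b_labs
    return params

rng = np.random.default_rng(0)
n_in, n_hid, n_labs = 30, 64, 35
params = [rng.normal(scale=0.1, size=(n_in, n_hid)), np.zeros(n_hid),
          rng.normal(scale=0.1, size=n_hid), 0.0,
          rng.normal(scale=0.1, size=(n_hid, n_labs)), np.zeros(n_labs)]
X = rng.normal(size=(200, n_in))
y_main = rng.random(200)                     # e.g. scaled rank targets
Y_labs = rng.normal(size=(200, n_labs))      # future lab values
for _ in range(10):
    params = mtl_step(X, y_main, Y_labs, params)
```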
The MTL net has 64 hidden units . Table 3 shows the mean performance of ten runs
of MTL with rankprop. The bottom row shows the improvement over rankprop
alone. Although MTL lowers error at each FOP, only the differences at FOP = 0.3,
0.4, and 0.5 are statistically significant with ten trials. Feature nets [7], a competing
approach that trains nets to predict the missing future labs and uses the predictions
as extra net inputs, does not yield benefits comparable to MTL on this problem.

Figure 1: Using Future Lab Results as Extra Outputs To Bias Learning. (Network
graphics omitted: the inputs feed a shared hidden layer whose output layer carries
the rankprop mortality-rank output together with extra outputs for future lab values
such as hematocrit, white blood cell count, and potassium.)
Table 3: Error Rates of Rankprop+MTL and Improvement Over Rankprop Alone
FOP
Error Rate
% Change
6
Comparison of Results
Table 4 compares the performance of backprop using SSE on 0/1 targets with the
combination of rankprop and multitask learning. On average, Rankprop+MTL reduces error more than 25%. This improvement is not easy to achieve-experiments
with other learning methods such as Bayes Nets, Hierarchical Mixtures of Experts,
and K-Nearest Neighbor (run not by us , but by experts in their use) indicate SSE
on 0/1 targets is an excellent performer on this domain[4].
Table 4: Comparison Between SSE on 0/1 Targets and Rankprop+MTL

FOP              0.1       0.2       0.3       0.4       0.5
SSE on 0/1       .0140     .0190     .0252     .0340     .0421
Rankprop+MTL     .0074     .0127     .0197     .0269     .0364
% Change         -47.1%    -33.2%    -21.8%    -20.9%    -13.5%

7
Discussion
Why Does Rankprop Work?
We are given data from a target function f (x). Suppose the goal is not to learn a
model of f(x), but to learn to sort patterns by f(x). Must we learn a model of f(x)
and use its predictions for sorting? No . It suffices to learn a function g( x) such that
for all Xl , X2, [g(xd::; g(X2)]- [J(xd::; f(X2)]. There can be many such functions
g(x) for a given f(x), and some of these may be easier to learn than f(x).
R. CARUANA, S. BALUJA, T. MITCHELL
964
Consider the probability function in Figure 2.1 that assigns to each x the probability
p = f(x) that the outcome is 1; with probability 1 - p the outcome is O. Figure
2.2 shows a training set sampled from this distribution. Where the probability is
low , there are many O's. Where the probability is high , there are many l 's. Where
the probability is near 0.5, there are O's and 1 'so This region causes problems for
backprop using SSE on 0/1 targets: similar inputs are mapped to dissimilar targets .
Figure 2: SSE on 0/1 Targets and on Ranks for a Simple Probability Function
Backprop learns a very nonlinear function if trained on Figure 2.2. This is unfortunate: Figure 2.1 is smooth and maps similar inputs to similar outputs. If the
goal is to learn to rank the data, we can learn a simpler , less nonlinear function
instead. There exists a ranking of the training data such that if the ranks are used
as backprop target values, the resulting function is less nonlinear than the original
target function. Figure 2.3 shows these target rank values. Similar input patterns
have more similar rank target values than the original target values .
Rankprop tries to learn simple functions that directly support ranking. One difficulty with this is that rankprop must learn a ranking of the training data while also
training the model to predict ranks . We do not yet know under what conditions this
parallel search will converge. We conjecture that when rankprop does converge, it
will often be to simpler models than it would have learned from the original target
values (0/1 in Medis), and that these simpler models will often generalize better.
7.2
Other Applications of Rankprop and Learning From the Future
Rankprop is applicable wherever a relative assessment is more useful or more learnable than an absolute one. One application is domains where quantitative measurements are not available, but relative ones are[8]. For example, a game player
might not be able to evaluate moves quantitatively , but might excel at relative
move evaluation[10]. Another application is where the goal is to learn to order data
drawn from a probability distribution, as in medical risk prediction . But it can also
be applied wherever the goal is to order data. For example, in information filtering
it is usually important to present more useful information to the user first, not to
predict how important each is[5].
MTL is a general method for using related tasks. Here the extra MTL tasks are
future measurements. Future measurements are available in many offline learning
problems where there is opportunity to collect the measurements for the training
set. For example, a robot or autonomous vehicle can more accurately measure the
size, location, and identity of objects when it passes near them: road stripes can be
detected reliably as a vehicle passes alongside them, but detecting them far ahead of
a vehicle is hard. Since driving brings future road into the car's present, stripes can
be measured accurately when passed and used as extra features in the training set .
They can't be used as inputs for learning to drive because they will not be available
until too late when driving. As MTL outputs , though, they provide information
that improves learning without requiring they be available at run time[2] .
8
Summary
This paper presents two methods that can improve generalization on a broad class
of problems. This class includes identifying low risk pneumonia patients. The
first method, rankprop , tries to learn simple models that support ranking future
cases while simultaneously learning to rank the training set. The second, multitask
learning, uses lab tests available only during training, as additional target values to
bias learning towards a more predictive hidden layer. Experiments using a database
of pneumonia patients indicate that together these methods outperform standard
backpropagation by 10-50%. Rankprop and MTL are applicable to a large class of
problems in which the goal is to learn a relative ranking over the instance space,
and where the training data includes features that will not be available at run
time. Such problems include identifying higher-risk medical patients as early as
possible, identifying lower-risk financial investments, and visual analysis of scenes
that become easier to analyze as they are approached in the future.
Acknowledgements
We thank Greg Cooper, Michael Fine, and other members of the Pitt/CMU Cost-Effective
Health Care group for help with the Medis Database. This work was supported by ARPA
grant F33615-93-1-1330, NSF grant BES-9315428, Agency for Health Care Policy and
Research grant HS06468, and an NSF Graduate Student Fellowship (Baluja) .
References
[1] Y .S. Abu-Mostafa, "Learning From Hints in Neural Networks," Journal of Complexity
6:2, pp. 192-198, 1989.
[2] R. Caruana, "Learning Many Related Tasks at the Same Time With Backpropagation," Advances in Neural Information Processing Systems 7, pp. 656-664, 1995.
[3] R. Caruana, "Multitask Learning: A Knowledge-Based Source of Inductive Bias,"
Proceedings of the 10th International Conference on Machine Learning, pp. 41-48,
1993.
[4] G. Cooper, et al., "An Evaluation of Machine Learning Methods for Predicting Pneumonia Mortality," submitted to AI in Medicine, 1995.
[5] K. Lang, "NewsWeeder: Learning to Filter News," Proceedings of the 12th International Conference on Machine Learning, pp. 331-339, 1995.
[6] M. Fine, D. Singer, B. Hanusa, J . Lave, and W. Kapoor, "Validation of a Pneumonia
Prognostic Index Using the MedisGroups Comparative Hospital Database," American
Journal of Medicine, 94 1993.
[7] I. Davis and A . Stentz, "Sensor Fusion For Autonomous Outdoor Navigation Using
Neural Networks," Proceedings of IEEE 's Intelligent Robots and Systems Conference,
1995.
[8] G.T. Hsu, and R. Simmons, "Learning Footfall Evaluation for a Walking Robot, "
Proceedings of the 8th International Conference on Machine Learning, pp. 303-307,
1991.
[9] S.C. Suddarth and A.D.C. Holden, "Symbolic-neural Systems and the Use of Hints for
Developing Complex Systems," International Journal of Man-Machine Studies 35:3,
pp. 291-311, 1991.
[10] P. Utgoff and S. Saxena, "Learning a Preference Predicate," Proceedings of the 4th
International Conference on Machine Learning, pp. 115-121, 1987.
93 | 1,082 | Prediction of Beta Sheets in Proteins
Anders Krogh
The Sanger Centre
Hinxton, Carobs CBIO IRQ, UK.
Email: krogh@sanger.ac. uk
S~ren Kamaric Riis
Electronics Institute, Building 349
Technical University of Denmark
2800 Lyngby, Denmark
Email: riis@ei.dtu.dk
Abstract
Most current methods for prediction of protein secondary structure
use a small window of the protein sequence to predict the structure
of the central amino acid. We describe a new method for prediction
of the non-local structure called β-sheet, which consists of two or
more β-strands that are connected by hydrogen bonds. Since β-strands are often widely separated in the protein chain, a network
with two windows is introduced. After training on a set of proteins
the network predicts the sheets well, but there are many false positives. By using a global energy function the β-sheet prediction is
combined with a local prediction of the three secondary structures
α-helix, β-strand and coil. The energy function is minimized using
simulated annealing to give a final prediction.
1
INTRODUCTION
Proteins are long sequences of amino acids. There are 20 different amino acids with
varying chemical properties, e. g. , some are hydrophobic (dislikes water) and some
are hydrophilic [1]. It is convenient to represent each amino acid by a letter and
the sequence of amino acids in a protein (the primary structure) can be written as
a string with a typical length of 100 to 500 letters. A protein chain folds back on
itself, and the resulting 3D structure (the tertiary structure) is highly correlated to
the function of the protein. The prediction of the 3D structure from the primary
structure is one of the long-standing unsolved problems in molecular biology. As
an important step on the way a lot of work has been devoted to predicting the
local conformation of the protein chain, which is called the secondary structure.
Neural network methods are currently the most successful for predicting secondary
structure. The approach was pioneered by Qian and Sejnowski [2] and Bohr et al.
[3], but later extended in various ways, see e.g. [4] for an overview. In most of this
work, only the two regular secondary structure elements a-helix and ,8-strand are
being distinguished, and everything else is labeled coil. Thus, the methods based
[Figure 1 schematic: anti-parallel (left) and parallel (right) β-sheet hydrogen-bonding patterns; see caption below.]
Figure 1: Left: Anti-parallel β-sheet. The vertical lines correspond to the backbone
of the protein. An amino acid consists of N-Cα-C and a side chain on the Cα that
is not shown (the 20 amino acids are distinguished by different side chains). In the
anti-parallel sheet the directions of the strands alternate, which is here indicated
quite explicitly by showing the middle strand up-side down. The H-bonds between
the strands are indicated by hatching. A sheet has two or more strands; here the
anti-parallel sheet is shown with three strands. Right: Parallel β-sheet consisting of
two strands.
on a local window of amino acids give a three-state prediction of the secondary
structure of the central amino acid in the window.
Current predictions of secondary structure based on single sequences as input have
accuracies of about 65-66%. It is widely believed that this accuracy is close to
the limit of what can be done from a local window (using only single sequences as
input) [5], because interactions between amino acids far apart in the protein chain
are important to the structure. A good example of such non-local interactions
are the β-sheets consisting of two or more β-strands interconnected by H-bonds,
see fig. 1. Often the β-strands in a sheet are widely separated in the sequence,
implying that only part of the available sequence information about a β-sheet can
be contained in a window of, say, 13 amino acids. This is one of the reasons why the
accuracy of β-strand predictions is generally lower than the accuracy of α-helix
predictions. The aim of this work is to improve prediction of secondary structures
by combining local predictions of α-helix, β-strand and coil with a non-local method
predicting β-sheets.
Other work along the same directions include [6], in which β-sheet predictions are
done by linear methods and [7] where a so-called density network is applied to the
problem.
2
A NEURAL NETWORK WITH TWO WINDOWS
We aim at capturing correlations in the β-sheets by using a neural network with
two windows, see fig. 2. While window 1 is centered around amino acid number i
(a_i), window 2 slides along the rest of the chain. When the amino acids centered in
each of the two windows sit opposite each other in a β-sheet the target output is 1,
and otherwise 0. After the whole protein has been traversed by window 2, window 1
is moved to the next position (i + 1) and the procedure is repeated. If the protein is
L amino acids long this procedure yields an output value for each of the L(L -1)/2
Figure 2: Neural network for predicting β-sheets. The network
employs weight sharing to improve the encoding of the amino
acids and to reduce the number
of adjustable parameters.
pairs of amino acids. We display the output in a L x L gray-scale image as shown in
fig. 3. We assume symmetry of sheets, i.e., if the two windows are interchanged, the
output does not change. This symmetry is ensured (approximately) during training
by presenting all inputs in both directions.
Each window of the network sees K amino acids. An amino acid is represented by a
vector of 20 binary numbers, all being zero except one, which is 1. That is, the amino
acid A is represented by the vector 1,0,0,...,0 and so on. This coding ensures that
the input representations are uncorrelated, but it is a very inefficient coding, since
20 amino acids could in principle be represented by only 5 bit. Therefore, we use
weight sharing [8] to learn a better encoding [4]. The 20 input units corresponding
to one window position are fully connected to three hidden units. The 3 x (20 + 1)
weights to these units are shared by all window positions, i.e., the activation of the
3 hidden units is a new learned encoding of the amino acids, so instead of being
represented by 20 binary values they are represented by 3 real values. Of course the
number of units for this encoding can be varied, but initial experiments showed that
3 was optimal [4]. The two windows of the network are made the same way with
the same number of inputs etc .. The first layer of hidden units in the two windows
are fully connected to a hidden layer which is fully connected to the output unit, see
fig. 2. Furthermore, two structurally identical networks are used: one for parallel
and one for anti-parallel β-sheets.
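To make the two-window architecture concrete, the sketch below implements a forward pass with the structure just described: two K = 9 windows of one-hot encoded amino acids, a shared 20-to-3 encoding layer applied at every window position, a second hidden layer of about 10 units, and a single output scoring whether the two central amino acids pair in a sheet. This is a minimal illustration rather than the authors' implementation; the NumPy formulation, the random weight initialisation and all function names are assumptions, and symmetry is imposed here by averaging the two window orderings instead of by training on both orderings.

```python
import numpy as np

K, AA, ENC, HID = 9, 20, 3, 10          # window size, alphabet size, shared encoding units, 2nd hidden layer
rng = np.random.default_rng(0)

# Shared encoding weights (reused at every window position of both windows)
W_enc = rng.normal(0, 0.1, (AA + 1, ENC))          # +1 for a bias input
W_hid = rng.normal(0, 0.1, (2 * K * ENC + 1, HID)) # both encoded windows -> 2nd hidden layer
w_out = rng.normal(0, 0.1, (HID + 1,))             # hidden -> single output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def one_hot(indices):
    """indices: length-K array of amino-acid codes (0..19) -> (K, 20) one-hot block."""
    return np.eye(AA)[indices]

def encode_window(window_indices):
    """Apply the shared 20->3 encoding to every position of one window."""
    x = one_hot(window_indices)                 # (K, 20)
    x = np.hstack([x, np.ones((K, 1))])         # append bias input
    return sigmoid(x @ W_enc).ravel()           # (K * ENC,)

def pair_score(window1, window2):
    """Score that the two central residues pair in a beta-sheet (0..1)."""
    h1 = np.concatenate([encode_window(window1), encode_window(window2), [1.0]])
    h2 = sigmoid(h1 @ W_hid)
    return sigmoid(np.concatenate([h2, [1.0]]) @ w_out)

def symmetric_score(w1, w2):
    """Symmetry with respect to interchanging the windows, imposed by averaging."""
    return 0.5 * (pair_score(w1, w2) + pair_score(w2, w1))
```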
The basis for the training set in this study is the set of 126 non-homologous protein
chains used in [9], but chains forming β-sheets with other chains are excluded. This
leaves us with 85 proteins in our data set. For a protein of length L only a very small
fraction of the L(L - 1)/2 pairs are positive examples of β-sheet pairs. Therefore
it is very important to balance the positive and negative examples to avoid the
situation where the network always predicts no β-sheet. Furthermore, there are
several types of negative examples with quite different occurrences: 1) two amino
acids of which none belong to a β-sheet; 2) one in a β-sheet and one which is not in
a β-sheet; 3) two sitting in β-sheets, but not opposite to each other. The balancing
was done in the following way. For each positive example selected at random, a
negative example from each of the three categories was selected at random.
If the network does not have a second layer of hidden units, it turns out that the
result is no better than a network with only one input window, i.e., the network
cannot capture correlations between the two windows. Initial experiments indicated
that about 10 units in the second hidden layer and two identical input windows of
size K = 9 gave the best results. In fig. 3(left) the prediction of anti-parallel sheets
is shown for the protein identified as 1acx in the Brookhaven Protein Data Bank
Figure 3: Left: The prediction of anti-parallel β-sheets in the protein 1acx. In the
upper triangle the correct structure is shown by a black square for each β-sheet
pair. The lower triangle shows the prediction by the two-window network. For
any pair of amino acids the network output is a number between zero (white) and
one (black), and it is displayed by a linear gray-scale. The diagonal shows the
prediction of α-helices. Right: The same display for parallel β-sheets in the protein
4fxn. Notice that the correct structures are lines parallel to the diagonal, whereas
they are perpendicular for anti-parallel sheets. For both cases the network was
trained on a training set that did not contain the protein for which the result is
shown.
[10]. First of all, one notices the checkerboard structure of the prediction of β-sheets. This is related to the structure of β-sheets. Many sheets are hydrophobic
on one side and hydrophilic on the other. The side chains of the amino acids in
a strand alternate between the two sides of the sheet, and this gives rise to the
periodicity responsible for the pattern.
Another network was trained on parallel β-sheets. These are rare compared to
the anti-parallel ones, so the amount of training data is limited. In fig. 3(right)
the result is shown for protein 4fxn. This prediction seems better than the one
obtained for anti-parallel sheets, although false positive predictions still occur at
some positions with strands that do not pair. Strands that bind in parallel β-sheets
are generally more widely separated in the sequence than strands in anti-parallel
sheets. Therefore, one can imagine that the strands in parallel sheets have to be
more correlated to find each other in the folding process, which would explain the
better prediction accuracy.
The results shown in fig. 3 are fairly representative. The network misses some of the
sheets, but false positives present a more severe problem. By calculating correlation
coefficients we can show that the network does capture some correlations, but they
seem to be weak. Based on these results, we hypothesize that the formation of β-sheets
is only weakly dependent on correlations between corresponding β-strands.
This is quite surprising. However weak these correlations are, we believe they can
still improve the accuracy of the three-state secondary structure prediction. In
order to combine local methods with the non-local β-sheet prediction, we introduce
a global energy function as described below.
3
A GLOBAL ENERGY FUNCTION
We use a newly developed local neural network method based on one input window
[4] to give an initial prediction of the three possible structures. The output from
this network is constrained by soft max [11], and can thus be interpreted as the
probabilities for each of the three structures. That is, for amino acid a_i, it yields
three numbers p_{i,n}, n = 1, 2 or 3, indicating the probability of α-helix (p_{i,1}),
β-sheet (p_{i,2}), or coil (p_{i,3}). Define s_{i,n} = 1 if amino acid i is assigned structure n
and s_{i,n} = 0 otherwise. Also define h_{i,n} = log p_{i,n}. We now construct the 'energy
function'

    H_1(s) = - Σ_i Σ_n u_n h_{i,n} s_{i,n}        (1)
where weights u_n are introduced for later use. Assuming the probabilities p_{i,n} are
independent for any two amino acids in a sequence, this is the negative log likelihood
of the assigned secondary structure represented by s, provided that u_n = 1. As it
stands alone, it is a fairly trivial energy function, because the minimum is the
assignment which corresponds to the prediction with the maximum p_{i,n} at each
position i, i.e. the assignment of secondary structure that one would probably use
anyway.
For amino acids a_i and a_j the logarithm of the output of the β-sheet network
described previously is called q^p_{ij} for parallel β-sheets and q^a_{ij} for anti-parallel sheets.
We interpret these numbers as the gain in energy if a β-sheet pair is formed. (As
more terms are added to the energy, the interpretation as a log-likelihood function
is gradually fading.) If the two amino acids form a pair in a parallel β-sheet, we
set the variable T^p_{ij} equal to 1, and otherwise to 0, and similarly with T^a_{ij} for
anti-parallel sheets. Thus the T^a_{ij} and T^p_{ij} are sparse binary matrices. Now the total
energy of the β-sheets can be expressed as

    H_β(s, T^a, T^p) = - Σ_{ij} [ C_a q^a_{ij} T^a_{ij} + C_p q^p_{ij} T^p_{ij} ]        (2)
where C_a and C_p determine the weights of the two terms in the function. Since
an amino acid can only be in one structure, the dynamic T and s variables are
constrained: only T^a_{ij} or T^p_{ij} can be 1 for the same (i, j), and if either of them is 1 the
amino acids involved must be in a β-sheet, so s_{i,2} = s_{j,2} = 1. Also, s_{i,2} can only be
1 if there exists a j with either T^a_{ij} or T^p_{ij} equal to 1. Because of these constraints
we have indicated an s dependence of H_β.
The last term in our energy function introduces correlations between neighboring
amino acids. The above assumption that the secondary structures of the amino acids
are independent is of course a bad assumption, and we try to repair it with a term

    H_n(s) = Σ_i Σ_{nm} J_{nm} s_{i,n} s_{i+1,m}        (3)
that introduces nearest neighbor interactions in the chain. A negative J_{11}, for
instance, means that an α following an α is favored, and, e.g., a positive J_{12} discourages
a β following an α.
Now the total energy is

    H_total(s, T^a, T^p) = H_1(s) + H_β(s, T^a, T^p) + H_n(s)        (4)

Since β-sheets are introduced in two ways, through h_{i,2} and the q_{ij}, we need the weights
u_n in (1) to be different from 1.
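To make equations (1)-(4) concrete, the sketch below evaluates the total energy of one candidate assignment, assuming the quantities defined above are already stored as arrays: h_{i,n} from the local network, q^a_{ij} and q^p_{ij} from the two-window networks, the binary assignment s, and the pairing matrices T^a, T^p. The array layout, NumPy formulation and function name are mine, not the paper's.

```python
import numpy as np

def total_energy(s, h, qa, qp, Ta, Tp, u, J, Ca, Cp):
    """H_total = H_1 + H_beta + H_n for one protein.

    s      : (L, 3) binary assignment of helix / strand / coil
    h      : (L, 3) log-probabilities from the local network
    qa, qp : (L, L) log-outputs of the anti-parallel / parallel sheet networks
    Ta, Tp : (L, L) binary pairing matrices
    u      : (3,)  weights u_n;  J : (3, 3) nearest-neighbour couplings
    """
    H1 = -np.sum(u[None, :] * h * s)                                   # eq. (1)
    Hbeta = -np.sum(Ca * qa * Ta + Cp * qp * Tp)                       # eq. (2)
    Hn = np.sum(s[:-1, :, None] * J[None, :, :] * s[1:, None, :])      # eq. (3)
    return H1 + Hbeta + Hn                                             # eq. (4)
```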
The total energy function (4) has some resemblance with a so-called Potts glass
in an external field [12]. The crucial difference is that the couplings between the
'spins' Si are dependent on the dynamic variables T. Another analogy of the energy
function is to image analysis, where couplings like the T's are sometimes used as
edge elements.
3.1
PARAMETER ESTIMATION
The energy function contains a number of parameters, u_n, C_a, C_p and J_{nm}. These
parameters were estimated by a method inspired by Boltzmann learning [13]. In
the Boltzmann machine the estimation of the weights can be formulated as a minimization of the difference between the free energy of the 'clamped' system and
that of the 'free-running' system [14]. If we think of our energy function as a free
energy (at zero temperature), it corresponds to minimizing the difference between
the energy of the correct protein structure and the minimum energy,
where p is the total number of proteins in the training set. Here the correct structure
of protein J-l is called S(J-l) , Ta(J-l), TP(p), whereas s(J-l), Ta(J-l) , TP(J-l) represents the
structure that minimizes the energy Htotal. By definition the second term of C is
less than the first, so C is bounded from below by zero.
The cost function C is minimized by gradient descent in the parameters. This is
in principle straightforward, because all the parameters appear linearly in Htotal.
However, a problem with this approach is that C is minimal when all the parameters
are set to zero, because then the energy is zero. It is cured by constraining some of
the parameters in H_total. We chose the constraint Σ_n u_n = 1. This may not be the
perfect solution from a theoretical point of view, but it works well. Another problem
with this approach is that one has to find the minimum of the energy Htotal in the
dynamic variables in each iteration of the gradient descent procedure. To globally
minimize the function by simulated annealing each time would be very costly in
terms of computer time. Instead of using the (global) minimum of the energy for
each protein, we use the energy obtained by minimizing the energy from the correct
structure. This minimization is done by a greedy algorithm in the following way.
In each iteration the change in s, T^a, T^p which results in the largest decrease in
H_total is carried out. This is repeated until any change would increase H_total. This
algorithm works towards a local stability of the protein structures in the training
set. We believe it is not only an efficient way of doing it, but also a very sensible
way. In fact, the method may well be applicable in other models, such as Boltzmann
machines.
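The greedy local minimisation can be written as a generic descent loop, as sketched below: from a starting state, repeatedly apply the single change that lowers the energy most and stop when every change would increase it. The move generator is left abstract because the admissible changes to s, T^a, T^p and their constraints are specific to the model; the sketch only illustrates the control flow, not the authors' implementation.

```python
def greedy_minimise(energy_fn, s_init, moves):
    """Greedy descent: `moves(state)` yields candidate states reachable by one change."""
    state = s_init
    current = energy_fn(state)
    while True:
        best_state, best_e = None, current
        for candidate in moves(state):
            e = energy_fn(candidate)
            if e < best_e:
                best_state, best_e = candidate, e
        if best_state is None:          # every single change would increase the energy
            return state, current
        state, current = best_state, best_e
```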
3.2
STRUCTURE PREDICTION BY SIMULATED ANNEALING
After estimation of the parameters on which the energy function Htotal depends, we
can proceed to predict the structure of new proteins. This was done using simulated
annealing and the EBSA package [15]. The total procedure for prediction is,
1. A neural net predicts α-helix, β-strand or coil. The logarithm of these
predictions gives all the h_{i,n} for that protein.
2. The two-window neural networks predict the β-sheets. The result is the q^a_{ij}
from one network and the q^p_{ij} from the other.
3. A random configuration of the s, T^a, T^p variables is generated, from which the
simulated annealing minimization of H_total is started. During annealing,
all constraints on the s, T^a, T^p variables are strictly enforced.
4. The final minimum configuration s is the prediction of the secondary structure. The β-sheets are predicted by T^a and T^p.
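Step 3 above relies on the EBSA simulated-annealing package, whose interface is not reproduced in the paper; the sketch below therefore shows only a generic Metropolis annealing loop with a geometric cooling schedule. The proposal function is assumed to return a neighbouring configuration that already respects the constraints on s, T^a and T^p, and the schedule parameters are placeholders.

```python
import math
import random

def anneal(energy_fn, propose, state, t_start=1.0, t_end=1e-3, cooling=0.95, sweeps=100):
    """Generic Metropolis annealing; not the EBSA library used in the paper."""
    current = energy_fn(state)
    t = t_start
    while t > t_end:
        for _ in range(sweeps):
            candidate = propose(state)
            delta = energy_fn(candidate) - current
            if delta < 0 or random.random() < math.exp(-delta / t):
                state, current = candidate, current + delta
        t *= cooling                     # geometric cooling schedule
    return state, current
```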
Using the above scheme, an average secondary structure accuracy of 66.5% is obtained by seven-fold cross validation. This should be compared to 66.3% obtained
by the local neural network based method [4] on the same data set. Although these
preliminary results do not represent a significant improvement, we consider them
very encouraging for future work. Because the method not only predicts the secondary structure, but also which strands actually binds to form ,B-sheets, even a
modest result may be an important step on the way to full 3D predictions.
4
CONCLUSION
In this paper we introduced several novel ideas which may be applicable in other
contexts than prediction of protein structure. Firstly, we described a neural network
with two input windows that was used for predicting the non-local structure called
β-sheets. Secondly, we combined local predictions of α-helix, β-strand and coil
with the β-sheet prediction by minimization of a global energy function. Thirdly,
we showed how the adjustable parameters in the energy function could be estimated
by a method similar to Boltzmann learning.
We found that correlations between β-strands in β-sheets are surprisingly weak.
Using the energy function to combine predictions improves performance a little.
Although we have not solved the protein folding problem, we consider the results
very encouraging for future work. This will include attempts to improve the performance of the two-window network as well as experimenting with the energy function,
and maybe add more terms to incorporate new constraints.
Acknowledgments: We would like to thank Tim Hubbard, Richard Durbin and
Benny Lautrup for interesting comments on this work and Peter Salamon and
Richard Frost for assisting with simulated annealing. This work was supported
by a grant from the Novo Nordisk Foundation.
References
[1] C. Branden and J. Tooze, Introduction to Protein Structure (Garland Publishing, Inc., New York, 1991).
[2] N. Qian and T. Sejnowski, Journal of Molecular Biology 202, 865 (1988).
[3] H. Bohr et al., FEBS Letters 241, 223 (1988).
[4] S. Riis and A. Krogh, Nordita Preprint 95/34 S, submitted to J. Comp. Biol.
[5] B. Rost, C. Sander, and R. Schneider, J. Mol. Biol. 235, 13 (1994).
[6] T. Hubbard, in Proc. of the 27th HICSS, edited by R. Lathrop (IEEE Computer Soc. Press, 1994), pp. 336-354.
[7] D. J. C. MacKay, in Maximum Entropy and Bayesian Methods, Cambridge 1994, edited by J. Skilling and S. Sibisi (Kluwer, Dordrecht, 1995).
[8] Y. Le Cun et al., Neural Computation 1, 541 (1989).
[9] B. Rost and C. Sander, Proteins 19, 55 (1994).
[10] F. Bernstein et al., J. Mol. Biol. 112, 535 (1977).
[11] J. Bridle, in Neural Information Processing Systems 2, edited by D. Touretzky (Morgan Kaufmann, San Mateo, CA, 1990), pp. 211-217.
[12] K. Fisher and J. Hertz, Spin Glasses (Cambridge University Press, 1991).
[13] D. Ackley, G. Hinton, and T. Sejnowski, Cognitive Science 9, 147 (1985).
[14] J. Hertz, A. Krogh, and R. Palmer, Introduction to the Theory of Neural Computation (Addison-Wesley, Redwood City, 1991).
[15] R. Frost, SDSC EBSA, C Library Documentation, version 2.1, SDSC Techreport.
94 | 1,083 | Unsupervised Pixel-prediction
William R. Softky
Math Resp.arch Branch
NIDDK, NIH
9190 Wisconsin Ave #350
Bethesda, MD 20814
bill@homer.niddk.nih.gov
Abstract
When a sensory system constructs a model of the environment
from its input, it might need to verify the model's accuracy. One
method of verification is multivariate time-series prediction: a good
model could predict the near-future activity of its inputs, much
as a good scientific theory predicts future data. Such a predicting model would require copious top-down connections to compare
the predictions with the input. That feedback could improve the
model's performance in two ways : by biasing internal activity toward expected patterns, and by generating specific error signals if
the predictions fail. A proof-of-concept model-an event-driven,
computationally efficient layered network, incorporating "cortical"
features like all-excitatory synapses and local inhibition- was constructed to make near-future predictions of a simple, moving stimulus. After unsupervised learning, the network contained units not
only tuned to obvious features of the stimulus like contour orientation and motion, but also to contour discontinuity ("end-stopping")
and illusory contours.
1
Introduction
Somehow, brains make very accurate models of the outside world from their raw
sensory input. How might brains check and improve those models? What signal is
there to verify a model of the world?
The scientific method faces a similar problem: how to verify theories. In science,
theories are verified by predicting future data, using the implicit assumption that
good predictions can only result from good models. By analogy, it is possible that
brains predict their afferent input (e.g. at the thalamus), and that making such
predictions and using them as feedback is a unifying design principle of cortex.
The proof-of-concept model presented here uses unsupervised Hebbian learning to
predict, pixel-wise, the location of a moving pattern slightly in the future.
Why try prediction?
? Predicting future data usually requires a good generative model. For instance: to
predict the brightness of individual TV pixels even a fraction of a second in advance,
one would need models of contours, objects, motion, occlusion, shadow, etc.
? A successful prediction can help filter out input noise, like a Kalman filter.
? A failed prediction provides a specific, high-dimensional error signal.
? Prediction is not only possible in cortex-which has massive feedback
connections-but necessary as well, because those feedback fibers, their target dendrites, and synaptic integration impose inevitable delays. So for a feedback signal
to arrive at the cell body "on time," it would need to have been generated tens of
milliseconds earlier, as a prediction of imminent activity.
? In this model, "prediction" means producing spikes in advance which will correlate
with subsequent input spikes. Specifically, the network's goal is to produce at each
grid point a train of spikes at times Pj which predicts the input train Ik, in the
sense of maximizing their normalized cross-correlation. The objective function L
("likeness") can be expressed in terms of a smoothing "bump" function B(t:J;, ty)
(of spikes at times t:J; and ty) and a correlation function C(trainl, train2, ~t):
exp ( -It:J;T- t y l )
C(P,I,
~T)
L: L: B(P + ~t, Ik)
j
j
L(P,I,~T)
k
C(P, I,
~T)
JC(P, P, O)C(I, 1,0)
- In order to avoid a trivial but useless prediction ("the weather tomorrow will be
just like today"), one must ensure that a unit cannot usually predict its own firing
(for example, pick Δt ≈ τ greater than the autocorrelation time of a spike train).
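The likeness measure defined above can be computed directly from two lists of spike times, as in the sketch below; τ is the bump width and Δt the prediction lead. The quadratic pairwise evaluation is acceptable for the short trains involved, and the function and variable names are mine rather than the paper's.

```python
import numpy as np

def bump(tx, ty, tau):
    """Smoothing 'bump' B(t_x, t_y) = exp(-|t_x - t_y| / tau)."""
    return np.exp(-np.abs(tx - ty) / tau)

def correlate(train1, train2, dt, tau):
    """C(train1, train2, dt) = sum_j sum_k B(train1_j + dt, train2_k)."""
    return np.sum(bump(train1[:, None] + dt, train2[None, :], tau))

def likeness(pred, inp, dt, tau):
    """Normalised cross-correlation L(P, I, dt) of predicted and input spike trains."""
    num = correlate(pred, inp, dt, tau)
    den = np.sqrt(correlate(pred, pred, 0.0, tau) * correlate(inp, inp, 0.0, tau))
    return num / den if den > 0 else 0.0
```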
2
Model
The input to the network is a 16 x 16 array of spike trains, with toroidal array
boundary conditions. The spikes are driven by a "stimulus" bar of excitation one
unit wide and seven units long, which moves smoothly perpendicular to its orientation behind the array (in a broad circle, so that all orientations and directions
are represented; Fig. 1A). The stimulus point transiently generates spikes at each
grid point there according to a Poisson process: the whole array of spikes can be
visualized as a twinkling, moving contour.
[Figure 1 diagram, panels A and B: inputs, forward trigger and helper synapses, delays, and tuned, precise, predictive feedback; see caption below.]
Figure 1: A network predicts dynamic patterns. A A moving pattern on
a grid of spiking pixels describes a slow circle, and drives activity in a network
above. B The three-layer network learns to predict that activity just before it
occurs. Forward connections, evolving by Hebbian rules, produce top-level units
with coarse receptive fields and fine stimulus-tuning (e.g. contour orientation and
motion). Each spike from a top unit is "bound" (by coincidence detection) with
the particular spike which triggered it, to produce feedback which is both stimulustuned and spatially specific. A Hebb rule determines how the delayed, predictive
feedback will drive middle-layer units and be compared to input-layer units. Because
all connections are excitatory, winner-take-all inhibition within local groups of units
prevents runaway excitation.
2.1
Network Structure
The network has three layers. The bottom layer contains the spiking pixels, and
the "surprise" units described below. The middle layer, having the same spatial
resolution as the input, has four coarsely-tuned units per input pixel. And the
top layer contains the most finely-tuned units, spaced at half the spatial resolution
(at every fourth gridpoint, i.e. with coarser spatial resolution and larger receptive
fields). The signal flow is bi-directional [10, 7], with both forward and feedback
synaptic connections. All connections between units are excitatory, and excitation
is kept in check by local winner-take-all inhibition (WTA). For example, a given
input spike can only trigger one spike out of the 16 units directly above it in the
top layer (Fig. 1B).
Unsupervised learning occurs through two local Hebb-like rules. Forward connections evolve to make nearby (competing) units strongly anticorrelated-for instance,
units typically become tuned to different contour orientations and directions of
motion-while feedback connections evolve to maximally correlate delayed feedback
signals with their targets.
2.2
Binary multiplication in single units
While some neural models implement multiplication as a nonlinear function of the
sum of the inputs, the spiking model used here implements multiplication as a
binary operation on two distinct classes of synapses.
[Figure 2 diagram, panels A and B: helper, trigger and inhibitory inputs, coincidence detector with delay, prediction of X, and output; see caption below.]
Figure 2: Multiplicative synapses and surprise detection. A A spiking unit
multiplies two types of synaptic inputs: the "helper" type increments an internal
bias without triggering a spike, and the "trigger" type can trigger a spike (*),
without incrementing, but only if the bias is above a threshold. Spike propagation
may be discretely delayed, and coincidences of two units fired by the same input
spike can be detected. B Once the network has generated a (delayed) prediction of
a given pixel's activity, the match of prediction and reality can be tested by special-purpose units: one type which detects unpredicted input, the other which detects
unfulfilled predictions. The firing of either type can drive the network's learning
rules, so units above can become tuned to consistent patterns of failed predictions,
as occur at discontinuities and illusory contours.
A helper synapse, when activated by a presynaptic spike, will increment or decrement the postsynaptic voltage without ever initiating a spike. A trigger synapse, on
the other hand, can initiate a spike (if the voltage is above the threshold determined
by its WTA neighbors), but cannot adjust the voltage (Fig. 2A; the helper type is
loosely based on the weak, slow NMDA synapses on cortical apical dendrites, while
triggers are based on strong, brief AMPA synapses on basal dendrites.) Thus, a
unit can only fire when both synaptic types are active, so the output firing rate
approximates the product of the rates of helpers and triggers. Each unit has two
characteristic timescales: a slower voltage decay time, and the essentially instantaneous time necessary to trigger and propagate a spike.
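The two synapse classes can be expressed as a small event-driven unit, sketched below: helper events only move a slowly decaying internal bias, while trigger events may emit an output spike, but only when the bias is above threshold and the local winner-take-all competition allows it. The exponential decay law and the way the WTA decision is passed in are simplifications of mine, not details taken from the paper.

```python
import math

class MultiplicativeUnit:
    """Event-driven unit: 'helper' events set the context, 'trigger' events fire it."""

    def __init__(self, decay_tau=20.0, threshold=1.0):
        self.bias = 0.0          # slow internal voltage-like variable
        self.last_t = 0.0
        self.decay_tau = decay_tau
        self.threshold = threshold

    def _decay_to(self, t):
        self.bias *= math.exp(-(t - self.last_t) / self.decay_tau)
        self.last_t = t

    def helper_spike(self, t, weight):
        """Increment (or decrement) the slow internal bias; never fires."""
        self._decay_to(t)
        self.bias += weight

    def trigger_spike(self, t, wta_allows=True):
        """May emit an output spike, but never changes the bias."""
        self._decay_to(t)
        return wta_allows and self.bias >= self.threshold
```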
This scheme has two advantages. One is that a single cell can implement a relatively
"pure" multiplication of distinct inputs, as required for computations like motiondetection. The other advantage is that feedback signals, restricted to only helper
synapses, cannot by themselves drive a cell, so closed positive-feedback loops cannot
"latch" the network into a fixed state, independent of the input. Therefore, all
trigger synapses in this network are forward, while all delayed, lateral, and feedback
connections are of the helper type .
2.3
Feedback
There are two issues in feedback: How to construct tuned, specific feedback, and
what to do with the feedback where it arrives.
An accurate prediction requires information about the input: both about its exact
present state, and about its history over nearby space and recent time. In this model,
those signals are distinct: spatial and temporal specificity is given by each input
spike, and the spatia-temporal history is given by the stimulus-tuned responses of
the slow, coarse-grained units in the top layer. Spatially-precise feedback requires
recombining those signals. (Feedback from V1 cortical Layer VI to thalamus has
recently been shown to fit these criteria, being both spatially refined and direction-selective; [3] Grieve & Sillito, 1995).
In this network, each feedback signal results from the AND of spikes from a inputlayer spike (spatially specific) and the resulting top-layer spike it produces (stimulustuned). This "binding" across levels of specificity requires single-spike temporal
precision, and may even be one of the perceptual uses for spike timing in cortex
[1, 9].
2.4
Surprise detection
Once predictive feedback is learned, it can be used in two ways: biasing units toward
expected activity, and comparing predictions against actual input. Feedback to the
middle layer is used as a bias signal through helper synapses, by adding the feedback
to the bias signal. But feedback to the bottom , input-layer is compared with actual
input by means of special "surprise" units which subtract prediction from input
(and vice versa).
Because both prediction and input are noisy signals, their difference is even noisier,
and must be both temporally smoothed and thresholded to generate a mismatchspike. In this model , these prediction/input differences are accomplished pixel-bypixel using ad-hoc units designed for the purpose (Fig. 2B). There is no indication
that cortex operates so simplistically, but there are indications that cortical cells
are in general sensitive to mismatches between expectation and reality, such as
discontinuities in space (edges) , in time (on- and off-responses), and in context
(saliency) .
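One way to realise the pixel-wise surprise units is sketched below: two leaky accumulators per pixel integrate input-minus-prediction and prediction-minus-input respectively, and a surprise event is emitted when either crosses a threshold. The paper only states that the difference is smoothed and thresholded, so the leak rate and threshold values here are placeholders of mine.

```python
class SurprisePixel:
    """Detects unpredicted input and unfulfilled predictions for one pixel."""

    def __init__(self, leak=0.9, threshold=3.0):
        self.unpredicted = 0.0     # input arrived with no matching prediction
        self.unfulfilled = 0.0     # prediction made with no matching input
        self.leak = leak
        self.threshold = threshold

    def step(self, input_spike, predicted_spike):
        diff = float(input_spike) - float(predicted_spike)
        self.unpredicted = self.leak * self.unpredicted + max(diff, 0.0)
        self.unfulfilled = self.leak * self.unfulfilled + max(-diff, 0.0)
        events = []
        if self.unpredicted > self.threshold:
            events.append("unpredicted-input")
            self.unpredicted = 0.0
        if self.unfulfilled > self.threshold:
            events.append("unfulfilled-prediction")
            self.unfulfilled = 0.0
        return events
```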
The resulting error vector can drive upper-layer units just as the input does, so that
the network can learn patterns of failed predictions, which typically correspond to
discontinuities in the stimulus. Learning consistent patterns of bad predictions is
a completely generic recipe for discovering such discontinuitites, which often correspond closely to visually important features like contour ends, corners, illusory
contours, and occlusion .
3
Results and Discussion
After prolonged exposure to the stimulus, the network produces a blurred cloud of
spikes which anticipates the actual input spikes, but which also consistently predicts
input beyond the bar's ends (leading to small clouds of surprise-unit activity tracking the ends). The top-level units, driven both by input signals and by feedback ,
become tuned either to different motions of the bar itself (due to Hebbian learning
of the input), or to different motions of its ends (due to Hebbian learning of the
surprise-units); see Fig. 3. Cells tuned to contour ends ( "end-stopped") have been
found in visual cortex [11], although the principles of their genesis are not known.
Using the same parameters but a different stimulus, the network can also evolve units
Figure 3: Single units are highly stimulus-specific. Spikes from all units at
one location are shown (with time) as a stimulus bar (insets) passes them with six
different relative positions and motions . Out of the many units available, only one
or two are active in each layer for a given stimulus configuration. The inactive
units are tuned to stimulus orientations not shown here. Some units are driven by
"surprise" units (Figure 2 and text), and respond only to the bar's ends (. and x),
but not to its center (+). Such responses lag behind those of ordinary units, because
they must temporally integrate to determine whether a significant mismatch exists
between the noisy prediction and the noisy input. Spikes from five passes have been
summed to show the units' reliability.
which detect the illusory contours present in certain moving gratings.
Several researchers propose that cortex (or similar networks) might use feedback
pathways to recreate or regenerate their (static) input [7,4, 10]. The approach here
requires instead that the network forecast future (dynamic) input [8] . In a general
sense, predicting the future is a better test of a model than predicting the present,
in the same sense that scientific theories which predict future experimental data are
more persuasive than theories which predict existing data. Prediction of the raw
input has advantages over prediction of some higher-level signal [5, 6, 2]: the raw
input is the only unprocessed "reality" available to the network, and comparing the
prediction with that raw input yields the highest-dimensional error vector possible.
Spiking networks are likewise useful. As in cortex, spikes both truncate small inputs
and contaminate them with quantization-noise, crucial practical problems which
real-valued networks avoid. Spike-driven units can implement purely correlative
computations like motion-detection, and can avoid parasitic positive-feedback loops.
Spike timing can identify which of many possible inputs fired a given unit, thereby
making possible a more specific feedback signal. The most practical benefit is that
interactions among rare events (like spikes) are much faster to compute than real-
valued ones; this particular network of 8000 units and 200,000 synapses runs faster
than the workstation can display it.
This model is an ad-hoc network to illustrate some of the issues a brain might face
in trying to predict its retinal inputs; it is not a model of cortex. Unfortunately, the
hypothesis that cortex predicts its own inputs does not suggest any specific circuit
or model to test. But two experimental tests may be sufficiently model-independent.
One is that cortical "non-classical" receptive fields should have a temporal structure
which reflects the temporal sequences of natural stimuli, so a given cell's activity will
be either enhanced or suppressed when its input matches contextual expectations.
Another is that feedback to a single cell in thalamus, or to an individual cortical
apical dendrite, should arrive on average earlier than afferent input to the same
cell.
References
[1] A. Engel , P. Koenig, A. Kreiter, T. Schillen, and W. Singer. Temporal coding
in the visual cortex: New vistas on integration in the nervous system. TINS,
15:218-226, 1992.
[2] K. Fielding and D. Ruck. Recognition of moving light displays using hidden
markov models. Pattern Recognition, 28:1415-1421,1995.
[3] K. 1. Grieve and A. M. Sillito. Differential properties of cells in the feline
primary visual cortex providing the cortifugal feedback to the lateral geniculate
nucleus and visual claustrum. J. Neurosci., 15:4868-4874,1995.
[4] G. Hinton, P. Dayan, B. Frey, and R. Neal. The wake-sleep algorithm for
unsupervised neural networks. Science, 268:1158-1161,1995.
[5] P. R. Montague and T. Sejnowski. The predictive brain: Temporal coincidence
and? temporal order in synaptic learning mechanisms. Learning and Memory,
1:1-33, 1994.
[6] P. Read Montague, Peter Dayan, Christophe Person, and T. Sejnowski. Bee
foraging in uncertain environments using predictive hebbian learning. Nature,
377:725-728, 1995.
[7] D. Mumford . Neuronal architectures for pattern-theoretic problems. In C. Koch
and J. Davis, editors, Large-scale theories of the cortex, pages 125-152. MIT
Press, 1994.
[8] W. Softky. Could time-series prediction assist visual processing? Soc. Neurosci.
Abstracts, 21:1499, 1995.
[9] W. Softky. Simple codes vs. efficient codes. Current Opinion in Neurbiology,
5:239-247, 1995.
[10] S. Ullman. Sequence-seeking and counterstreams: a model for bidirectional information flow in cortex. In C. Koch and J . Davis, editors, Large-scale theories
of the cortex, pages 257-270. MIT Press, 1994.
[11] S. Zucker, A. Dobbins, and L. Iverson. Two stages of curve detection suggest
two styles of visual computation. Neural Computation, 1:68-81, 1989.
95 | 1,084 | High-Speed Airborne Particle Monitoring
Using Artificial Neural Networks
Alistair Ferguson
ERDC, Univ. of Hertfordshire
A.Ferguson@herts.ac.uk
Theo Sabisch
Dept. Electrical and Electronic Eng.
Univ. of Hertfordshire
Paul Kaye
ERDC, Univ. of Hertfordshire
Laurence C. Dixon
NOC, Univ. of Hertfordshire
Hamid Bolouri
ERDC, Univ. of Hertfordshire, Herts, AL10 9AB, UK
Abstract
Current environmental monitoring systems assume particles to be
spherical, and do not attempt to classify them. A laser-based system developed at the University of Hertfordshire aims at classifying airborne particles through the generation of two-dimensional
scattering profiles. The performances of template matching and
two types of neural network (HyperNet and semi-linear units) are
compared for image classification. The neural network approach is
shown to be capable of comparable recognition performance, while
offering a number of advantages over template matching.
1
Introduction
Reliable identification of low concentrations of airborne particles requires high speed
monitoring of large volumes of air, and incurs heavy computational overheads. An
instrument to detect particle shape and size from spatial light scattering profiles has
previously been described [6]. The system constrains individual particles to traverse
a laser beam. Thus, spatial distributions of the light scattered by individual particles
may be recorded as two dimensional grey-scale images.
Due to their highly distributed nature, Artificial Neural Networks (ANNs) offer the
possibility of high-speed non-linear pattern classification. Their use in particulate
classification has already been investigated. The work by Kohlus [7] used contour
data extracted from microscopic images of particles, and so was not real-time. While
using laser scattering data to allow real-time analysis, Bevan [2] used only three
photomultipliers, from which very little shape information can be collected.
This paper demonstrates the plausibility of particle classification based on shape
recognition using an ANN. While capable of similar recognition rates, the neural
networks are shown to offer a number of advantages over template matching.
2
The HyperNet Architecture
HyperNet is the term used to denote the hardware model of a RAM-based sigma-pi
neural architecture developed by Gurney [5]. The architecture is similar in nature
to the pRAM of Gorse and Taylor (references in [4]). The amenability of these
nodes to hardware realisation has been extensively investigated, leading to custom
VLSI implementations of both nodes [3, 4]. Each HyperNet node is termed a multi-cube unit (MCU), and consists of a number of subunits, each with an arbitrary
number of inputs. j references the nodes, with i = 1, ..., I_j indexing the subunits.
μ denotes the site addresses, and is the set of bit strings μ_1, ..., μ_n, where n denotes
the number of inputs to the subunit. z_c refers to the cth real-valued input, with
z_c ∈ [0,1] and z̄_c ≡ (1 - z_c). For each of the 2^n site store locations, two sets are
defined: c ∈ M_μ^0 if μ_c = 0; c ∈ M_μ^1 if μ_c = 1. The access probability P(μ^{ij}) for
location μ in subunit i of hidden layer node j is therefore

    P(μ^{ij}) = Π_{c ∈ M_μ^1} z_c · Π_{c ∈ M_μ^0} z̄_c        (1)
The activation (a^j) is formed by accumulating the proportional site values (S_μ^{ij} P(μ^{ij}))
from every subunit. The activation is then passed through a sigmoidal transfer
function to yield the node output (y^j):

    a^j = Σ_i Σ_μ S_μ^{ij} P(μ^{ij})        (2)

    y^j = σ(a^j) = 1 / (1 + e^{-a^j/ρ})        (3)

where ρ is a positive parameter determining the steepness of the sigmoidal curve.
By combining equations (1) and (2), it becomes apparent that the node is a higher-order or sigma-pi node [9]. A wide variety of learning algorithms have been tailored
for these nodes, notably reward-penalty and back-propagation [5].
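Equations (1)-(3) can be checked with the brute-force sketch below, which enumerates every site of every subunit of one multi-cube unit, forms the access probability from the real-valued inputs, accumulates the stored site values and applies the sigmoid. Enumerating all 2^n sites is only practical for small n, and the data layout (a dictionary of site values per subunit) is an assumption for illustration rather than the hardware formulation.

```python
import itertools
import math

def access_probability(site, z):
    """P(mu): product of z_c over bits set to 1, times (1 - z_c) over bits set to 0 (eq. 1)."""
    p = 1.0
    for bit, zc in zip(site, z):
        p *= zc if bit == 1 else (1.0 - zc)
    return p

def mcu_output(subunit_inputs, site_values, rho=1.0):
    """subunit_inputs[i]      : list of real inputs in [0, 1] feeding subunit i
       site_values[i][site]   : stored value S for each site address (bit tuple) of subunit i"""
    activation = 0.0
    for z, sites in zip(subunit_inputs, site_values):
        for site in itertools.product((0, 1), repeat=len(z)):            # eq. (2): sum over sites
            activation += sites[site] * access_probability(site, z)
    return 1.0 / (1.0 + math.exp(-activation / rho))                     # eq. (3)
```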
3
Description of the Particle Monitoring System
The instrument draws air through the laser scattering chamber at approximately
1.5 min- 1 , and is constrained to a column of approximately 0.8mm diameter at the
intersection with the laser beam. Light scattered into angles between 30° and 141°
to the beam direction is reflected through the optics and onto the photocathode of
an intensified CCD (charge-coupled device), thus giving rise to the scattering profile.
The imaging device used has a pixel resolution of 385 x 288, which is quantised into
256² 8-bit pixels by the frame grabbing processor card of the host computer.
Data was collected on eight particle types, namely: long and short caffeine fibres;
3μm and 12μm micro-machined silicon dioxide fibres; copper flakes (2-5μm in length
and 0.1μm thick); 3μm and 4.3μm polystyrene spheres; and salt crystals. An exemplar profile for each class is given in figure 1. Almost all the image types are highly
variable. In particular, the scattering profile obtained for a fibrous particle is affected by its orientation as it passes through the laser beam. The scattering profiles
are intrinsically centred, with the scaling giving important information regarding
the size of the particle. The experiments reported here use 100 example scattering
profiles for each of the eight particle classes. For each class, 50 randomly selected
images were used to construct the templates or train the neural network (training
set), and the remainder used to test the performance of the pattern classifiers.
4
Experimental Results
The performance of template matching is compared to both HyperNet and networks
of semi-linear units. In all experiments, high-speed classification is emphasised by
?
~.
~
" l,: ~t..
,
?'.,
..
.
. ,
~..,
.
t ',
,
.
'.
'l.J;
Figure 1: Exemplar Image Profile For Each Of The Eight Benchmark Classes
avoiding image preprocessing operations such as transformation to the frequency
domain, histogram equalisation, and other filtering operations. Furthermore, all
experiments use the scatter profile image as input, and include no other information.
The current monitoring system produces a 256² 8-bit pixel image. The sensitivity of
the camera is such that a single pixel can represent the registration of a single photon
of light. Two possible methods of reducing computation, implementable through
the use of a cheaper, less sensitive camera were investigated. The first grouped
neighbouring pixels to form a single average intensity value. The neighbourhood
size was restricted to powers of two, producing images ranging in size from 256² to
4² pixels. The second banded grey levels into groups, again in powers of two. Each
pixel could therefore range from eight bits down to one.
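Both reduction schemes are straightforward to express, as sketched below: block-averaging by a power-of-two factor and banding grey levels by discarding low-order bits. The helper names, and the order in which the two reductions are applied in the usage comment, are mine.

```python
import numpy as np

def block_average(img, factor):
    """Reduce a (256, 256) image by averaging non-overlapping factor x factor blocks."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def quantise_grey(img, bits):
    """Band 8-bit grey levels into 2**bits groups by discarding low-order bits."""
    levels = 2 ** bits
    step = 256 // levels
    return (img.astype(np.uint8) // step) * step

# e.g. a 16x16, 3-bit version of a full-resolution scattering profile:
# small = quantise_grey(block_average(raw_image, 16), 3)
```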
4.1
Template Matching Results
The construction of reference templates is crucial to successful classification. Two
approaches to template construction were investigated:
1. Single reference image for each class. Various techniques were applied, ranging from individual images to mode, median, and mean averaged templates.
Mean averaged templates were found to lead to the highest classification
rates. In this approach, each pixel location in the template takes on the
averaged value of that location across the 50 training images.
? Multiple templates per class. A K-means clustering algorithm [1] was used
to identify clusters of highly correlated images within each class. The initial
cluster centres were hand selected. The maximum number of clusters within
each class was limited to six. For each cluster, the reference template was
constructed using the mean averaging approach above.
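A minimal sketch of the two construction schemes and of the matching rule (assign the class of the nearest template, using the lowest difference error) is given below. It is an illustrative reconstruction, not the authors' code: the initial cluster centres are chosen at random here rather than by hand, and the distance is the sum of squared pixel differences.

```python
import numpy as np

def mean_template(images):
    """Single reference template: pixel-wise mean over a class's training images."""
    return np.mean(images, axis=0)

def kmeans_templates(images, n_clusters=6, n_iter=20, seed=0):
    """Multiple templates per class: K-means over the training images, one
    mean-averaged template per cluster (at most six clusters, as above)."""
    rng = np.random.default_rng(seed)
    flat = images.reshape(len(images), -1).astype(float)
    centres = flat[rng.choice(len(flat), n_clusters, replace=False)].copy()
    for _ in range(n_iter):
        dists = ((flat[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centres[c] = flat[labels == c].mean(axis=0)
    return centres.reshape((n_clusters,) + images.shape[1:])

def classify(image, templates_per_class):
    """Assign the class whose best-matching template gives the lowest squared difference."""
    image = np.asarray(image, dtype=float)
    scores = [min(((image - t) ** 2).sum() for t in templates)
              for templates in templates_per_class]
    return int(np.argmin(scores))
```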
Tables 1 and 2 summarise the recognition rates achieved using single and multiple mean averaged templates for each particle class. In both cases, the best average recognition rate using this approach was gained with 128² 3-bit pixel images. With a single template this led to a recognition rate of 78.2%, increasing to 85.2% for multiple templates. However, the results for both 16² and 8² pixel images are reasonable approximations of the best performance, and represent an acceptable trade-off between computational cost and performance. With few exceptions, multiple templates per class led to higher recognition rates than the corresponding single template results. This is attributable to the variability of the particles within a class. As expected, the effect of grey level quantisation is inversely proportional to that of local averaging.
In order to evaluate the efficiency of the template construction methods, every image in the training set was used as a reference template. 256² 8-bit, 128² 3-bit, and 64² 2-bit pixel images were used for these experiments. However, the recognition rate did not exceed 85%, demonstrating the success of the template generation schemes previously employed.
Table 1: Single Template Per Class % Recognition Rates

                                  image size
grey levels    256²    128²    64²     32²     16²     8²      4²
256            73.5    75.0    74.7    74.7    74.7    75.0    67.2
128            73.5    75.0    74.7    74.7    74.5    75.0    68.5
64             73.0    75.0    74.5    74.5    74.2    74.7    66.2
32             73.0    74.7    75.2    75.5    74.7    74.2    66.5
16             74.0    76.0    76.7    76.0    75.0    75.5    56.0
8              75.5    78.2    77.5    77.5    76.0    73.7    38.7
4              68.4    69.7    71.0    70.7    69.7    58.5    18.7
2              69.7    68.7    65.5    66.2    46.2    23.0    16.6
Table 2: Multiple Templates Per Class % Recognition Rates

                                  image size
grey levels    256²    128²    64²     32²     16²     8²      4²
256            78.0    80.0    80.2    80.5    79.0    76.7    70.2
128            78.5    80.2    80.5    80.5    79.0    77.0    69.7
64             78.7    80.2    80.2    80.5    79.2    76.0    69.2
32             78.2    81.2    81.7    80.0    78.7    76.7    67.7
16             80.2    83.5    83.0    81.2    79.5    78.5    56.0
8              82.2    85.2    84.5    84.1    81.0    80.0    43.5
4              72.7    74.5    72.2    72.2    69.5    61.2    39.2
2              69.7    70.2    70.7    62.7    51.7    51.7    0.03
4.2 Neural Network Results
A fully connected three layer feed-forward network was used in all experiments. The number of hidden layer neurons was equal to the square root of the number of pixels. The target patterns were chosen to minimise the number of output layer nodes, while ensuring an equitable distribution of zeros and ones. Six output layer neurons were used to give a minimum Hamming distance of two between target patterns. The classification of a pattern was judged to be the particle class whose target pattern was closest (lowest difference error). The HyperNet architecture was trained using steepest descent, though the line search was hardware based and inexact. The semi-linear network was trained using a variety of back-propagation type algorithms, with the best results obtained reported. Both networks were randomly initialised. Due to the enormous training overhead, only 16² and 8² pixel images were tried. The recognition rates achieved are given in table 3.
Both neural networks are significantly better than the single, and some of the multiple, template matching results. With optimisation of the network structures, it is likely that the ANNs could exceed the performance of multiple templates.
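The output coding and decision rule described above can be sketched as follows. The eight 6-bit codewords below are an illustrative choice satisfying the stated constraints (an even mix of zeros and ones and a pairwise Hamming distance of at least two); they are not necessarily the codes used in the experiments.

```python
import itertools
import numpy as np

# Eight 6-bit target patterns, each with three ones, so any two distinct
# patterns differ in at least two positions.
TARGETS = np.array([
    [0, 0, 0, 1, 1, 1], [0, 0, 1, 0, 1, 1], [0, 1, 0, 1, 0, 1], [0, 1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1, 0], [1, 0, 1, 0, 1, 0], [1, 1, 0, 1, 0, 0], [1, 1, 1, 0, 0, 0],
])
assert min(int(np.sum(a != b)) for a, b in itertools.combinations(TARGETS, 2)) >= 2

def decode(outputs):
    """Map the six network outputs to the class whose target pattern is closest
    (lowest squared difference error)."""
    errors = ((TARGETS - np.asarray(outputs, dtype=float)) ** 2).sum(axis=1)
    return int(np.argmin(errors))
```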
Table 3: Neural Network % Recognition Rates

                       Quantisation Levels
Classifier      16² 4-bit    16² 3-bit    8² 4-bit    8² 3-bit
HyperNet        82.3         83.0         76.8        83.8
Semi-linear     84.5         76.0         86.3        77.8
Figure 2: Hardware classification speeds for a single pattern against image size
5 Speed Considerations
Single processor, pipelined hardware implementations of the three classification techniques have been considered. A fast (45ns) multiply-accumulate chip (Logic Devices Ltd, LMA2010) was utilised for semi-linear units. Both template matching and HyperNet were implemented using the Logic Devices LGC381 ALU (26ns per accumulate). The cost of these devices is approximately the same (£10-20). The HyperNet implementation uses a bit-stream approach to eliminate the probability multiplications [8], with a stream length of 256 bits. Figure 2 plots single pattern processing time for each classifier against image size.
For small image resolutions, the semi-linear network offers the best performance, being almost three times faster than template matching. However, template matching and HyperNet yield faster performance at higher image resolutions. At the optimum (indicated by the template matching results of §4.1; 128² pixels), HyperNet is almost seven times faster than the comparable implementation of semi-linear units.
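A back-of-the-envelope operation count reproduces the flavour of this cross-over, under the simplifying assumptions that template matching costs one 26 ns accumulate per pixel per reference template (with the full complement of six templates for each of the eight classes) and that the semi-linear network costs one 45 ns multiply-accumulate per weight. This is only a rough sketch of a cost model, not of the pipelined designs themselves.

```python
def template_matching_ns(pixels, n_templates=6 * 8, ns_per_accumulate=26):
    # one accumulate per pixel for every reference template
    return pixels * n_templates * ns_per_accumulate

def semi_linear_ns(pixels, n_outputs=6, ns_per_mac=45):
    # three-layer network with sqrt(pixels) hidden units: one MAC per weight
    hidden = int(round(pixels ** 0.5))
    return (pixels * hidden + hidden * n_outputs) * ns_per_mac

for side in (8, 16, 32, 64, 128, 256):
    p = side * side
    print(f"{side}^2 pixels: template {template_matching_ns(p):>12} ns, "
          f"semi-linear {semi_linear_ns(p):>12} ns")
```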
While the hardware performance of template matching is similar to HyperNet, it suffers from a number of disadvantages to which the neural approaches are immune:
1. Recognition rate is dependent on the choice of reference images.
2. Multiple reference images must be used to achieve good recognition rates, which drastically increases the amount of computation required.
3. New reference images must be found whenever a new class is introduced.
4. It is difficult to make behaviour adaptive, i.e. respond to changing conditions.

6 Conclusions
The feasibility of constructing an airborne particle monitoring system capable of
reliable particle identification at high speeds has been demonstrated. Template
matching requires multiple reference images and is cumbersome to develop. The
neural networks offer easier training procedures and equivalent recognition rates. In
addition, HyperNet has the advantage of high speed operation at large image sizes.
Acknowledgements
The authors would like to thank Dr. Eric Dykes and Dr. Edwin Hirst at the University of Hertfordshire, Dr. Kevin Gurney at Brunel University, and the EPSRC
and the Royal Society for financial support.
References
[1] Stephen Banks. Signal Processing, Image Processing, and Pattern Recognition. Prentice Hall, 1990.
[2] A. V. Bevan et al. The application of neural networks to particle shape classification. Journal of Aerosol Science, 23(Suppl. 1):329-332, 1992.
[3] Hamid Bolouri et al. Design, manufacture, and evaluation of a scalable high-performance neural system. Electronics Letters, 30(5):426-427, 3 March 1994.
[4] T. G. Clarkson et al. The pRAM: An adaptive VLSI chip. IEEE Transactions on Neural Networks, 4(3):408-412, May 1993.
[5] Kevin N. Gurney. Learning in networks of structured hypercubes. PhD thesis, Department of Electrical Engineering, UK, 1995.
[6] Paul H. Kaye et al. Airborne particle shape and size classification from spatial light scattering profiles. Journal of Aerosol Science, 23(6):597-611, 1992.
[7] R. Kohlus et al. Particle shape analysis as an example of knowledge extraction by neural nets. Part. Part. Syst. Charact., 10:275-278, 1993.
[8] Paul Morgan et al. Hardware implementation of a real-valued sigma-pi network. In Artificial Neural Networks 5, volume 2, pages 351-356, North-Holland, 1995.
[9] David E. Rumelhart et al. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1. MIT Press, 1986.
PART IX
CONTROL
97 | 1,086 | Examples of learning curves from a modified
VC-formalism.
A. Kowalczyk & J. Szymanski
Telstra Research Laboratories
770 Blackburn Road,
Clayton, Vic. 3168, Australia
{akowalczyk,j.szymanski}@trl.oz.au
P.L. Bartlett & R.C. Williamson
Department of Systems Engineering
Australian National University
Canberra, ACT 0200, Australia
{bartlett,williams}@syseng.anu.edu.au
Abstract
We examine the issue of evaluation of model specific parameters in a
modified VC-formalism. Two examples are analyzed: the 2-dimensional
homogeneous perceptron and the 1-dimensional higher order neuron.
Both models are solved theoretically, and their learning curves are compared against true learning curves. It is shown that the formalism has
the potential to generate a variety of learning curves, including ones
displaying "phase transitions."
1 Introduction
One of the main criticisms of the Vapnik-Chervonenkis theory of learning [15] is that the
results of the theory appear very loose when compared with empirical data. In contrast,
theory based on statistical physics ideas [1] provides tighter numerical results as well as
qualitatively distinct predictions (such as "phase transitions" to perfect generalization).
(See [5, 14] for a fuller discussion.) A question arises as to whether the VC-theory can
be modified to give these improvements. The general direction of such a modification is
obvious: one needs to sacrifice the universality of the VC-bounds and introduce model (e.g.
distribution) dependent parameters. This obviously can be done in a variety of ways. Some
specific examples are VC-entropy [15], empirical VC-dimensions [16], efficient complexity
[17] or (p, C)-uniformity [8, 9] in a VC-formalism with error shells. An extension of the
last formalism is of central interest to this paper. It is based on a refinement of the
"fundamental theorem of computational learning" [2] and its main innovation is to split the
set of partitions of a training sample into separate "error shells", each composed of error
vectors corresponding to the different error values.
Such a split introduces a whole range of new parameters (the average number of patterns
in each of a series of error shells) in addition to the VC dimension. The difficulty of
determining these parameters then arises. There are some crude, "obvious" upper bounds
on them which lead to both the VC-based estimates [2, 3, 15] and the statistical physics
based formalism (with phase transitions) [5] as specific cases of this novel theory. Thus
there is an obvious potential for improvement of the theory with tighter bounds. In particular
we find that the introduction of a single parameter (order of uniformity), which in a sense
determines shifts in relative sizes of error shells, leads to a full family of shapes of learning
curves continuously ranging in behavior from decay proportional to the inverse of the
training sample size to "phase transitions" (sudden drops) to perfect generalization in small
training sample sizes. We present an initial comparison of the learning curves from this new formalism with "true" learning curves for two simple neural networks.
2 Overview of the formalism
The presentation is set in the typical PAC-style; the notation follows [2]. We consider a space X of samples with a probability measure μ, a subspace H of binary functions X → {0, 1} (dichotomies) (called the hypothesis space) and a target hypothesis t ∈ H. For each h ∈ H and each m-sample x = (x_1, ..., x_m) ∈ X^m (m ∈ {1, 2, ...}), we denote by ε_{h,x} := (1/m) Σ_{i=1}^m |t − h|(x_i) the empirical error of h on x, and by ε_h := ∫_X |t − h|(x) μ(dx) the expected error of h ∈ H.
For each m ∈ {1, 2, ...} let us consider the random variable

    ε_H^max(x) := max{ ε_h ; h ∈ H, ε_{h,x} = 0 }    (1)

defined as the maximal expected error of an hypothesis h ∈ H consistent with t on x. The learning curve of H, defined as the expected value of ε_H^max,

    ε_H(m) := E_{X^m}[ε_H^max] = ∫_{X^m} ε_H^max(x) μ^m(dx)    (x ∈ X^m)    (2)

is of central interest to us. Upper bounds on it can be derived from basic PAC-estimates as follows. For ε ≥ 0 we denote by H_ε = {h ∈ H ; ε_h ≥ ε} the subset of ε-bad hypotheses and by

    Q_ε^m := {x ∈ X^m ; ∃ h ∈ H_ε, ε_{h,x} = 0} = {x ∈ X^m ; ∃ h ∈ H, ε_{h,x} = 0 & ε_h ≥ ε}    (3)

the subset of m-samples for which there exists an ε-bad hypothesis consistent with the target t.
Lemma 1 If μ^m(Q_ε^m) ≤ Ψ(ε, m), then ε_H(m) ≤ ∫_0^1 min(1, Ψ(ε, m)) dε, and equality in the assumption implies equality in the conclusion. □

Proof outline. If the assumption holds, then π(ε, m) := 1 − min(1, Ψ(ε, m)) is a lower bound on the cumulative distribution of the random variable (1). Thus E_{X^m}[ε_H^max] ≤ ∫_0^1 ε (∂/∂ε) π(ε, m) dε and integration by parts yields the conclusion. □
Given x = (x_1, ..., x_m) ∈ X^m, let us introduce the transformation (projection) π_{t,x}: H → {0, 1}^m allocating to each h ∈ H the vector

    π_{t,x}(h) := (|h(x_1) − t(x_1)|, ..., |h(x_m) − t(x_m)|)

called the error pattern of h on x. For a subset G ⊂ H, let π_{t,x}(G) = {π_{t,x}(h) : h ∈ G}.

The space {0, 1}^m is the disjoint union of error shells E_i := {(e_1, ..., e_m) ∈ {0, 1}^m ; e_1 + ... + e_m = i} for i = 0, 1, ..., m, and |π_{t,x}(H_ε) ∩ E_i| is the number of different error patterns with i errors which can be obtained for h ∈ H_ε. We shall employ the following notation for its average:

    |H_ε|_i := E_{X^m}[|π_{t,x}(H_ε) ∩ E_i|] = ∫_{X^m} |π_{t,x}(H_ε) ∩ E_i| μ^m(dx).    (4)
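As a concrete illustration of definition (4), the following sketch estimates the shell averages |H|_i (i.e. with target t ≡ 0) by Monte Carlo for the 2-dimensional homogeneous perceptron h_w(x) = 1[w·x ≥ 0]. The sampling distribution (uniform on the unit circle) and the enumeration of dichotomies by sweeping the angle of w are assumptions of this sketch, not the analytical treatment of the model considered in the paper.

```python
import numpy as np

def shell_averages_2d_perceptron(m, n_trials=200, seed=0):
    """Monte Carlo estimate of |H|_i, i = 0..m, for H = {x -> 1[w.x >= 0]} in the plane,
    target t == 0, and samples drawn uniformly from the unit circle."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(m + 1)
    for _ in range(n_trials):
        theta = rng.uniform(0.0, 2.0 * np.pi, size=m)
        pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        # the label of the point at angle theta flips when the angle of w crosses
        # theta +/- pi/2, so probing one w inside each arc between consecutive
        # critical angles enumerates every error pattern the class realises
        crit = np.sort(np.concatenate([(theta + np.pi / 2) % (2 * np.pi),
                                       (theta - np.pi / 2) % (2 * np.pi)]))
        gaps = np.diff(np.append(crit, crit[0] + 2.0 * np.pi))
        patterns = set()
        for phi in crit + gaps / 2.0:
            w = np.array([np.cos(phi), np.sin(phi)])
            patterns.add(tuple((pts @ w >= 0).astype(int)))
        for p in patterns:
            counts[sum(p)] += 1
    return counts / n_trials
```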
The central result of this paper, which gives a bound on the probability of the set Q_ε^m as in Lemma 1 in terms of |H_ε|_i, will be given now. It is obtained by modification of the proof of [8, Theorem 1] which is a refinement of the proof of the "fundamental theorem of computational learning" in [2]. It is a simplified version (to the consistent learning case) of the basic estimate discussed in [9, 7].
Theorem 2. For any integer $k \ge 0$ and $0 \le \gamma \le 1$,
$$\mu^m(Q_\epsilon^m) \le A_{\epsilon,k,\gamma} \sum_{j \ge \gamma k} \binom{m}{j}\binom{m+k}{j}^{-1} |H_\epsilon|_j, \qquad (5)$$
where $A_{\epsilon,k,\gamma} \stackrel{def}{=} \Big(1 - \sum_{j < \gamma k} \binom{k}{j}\epsilon^j(1-\epsilon)^{k-j}\Big)^{-1}$ for $k > 0$ and $A_{\epsilon,0,\gamma} \stackrel{def}{=} 1.0$.
Since error shells are disjoint we have the following relation:
$$P_H(m) \stackrel{def}{=} 2^{-m} \int_{X^m} |\pi_z(H)|\,\mu^m(dz) = 2^{-m} \sum_{i=0}^{m} |H|_i \le \Pi_H(m)/2^m \qquad (6)$$
where $\pi_z(h) \stackrel{def}{=} \pi_{0,z}(h)$, $|H|_i \stackrel{def}{=} |H_0|_i$ and $\Pi_H(m) \stackrel{def}{=} \max_{z \in X^m} |\pi_z(H)|$ is the growth function [2] of $H$. (Note that assuming that the target $t \equiv 0$ does not affect the cardinality of $\pi_{t,z}(H)$.) If the VC-dimension of $H$, $d = d_{VC}(H)$, is finite, we have the well-known estimate [2]
$$\Pi_H(m) \le \Phi(d, m) \stackrel{def}{=} \sum_{j=0}^{d} \binom{m}{j} \le (em/d)^d. \qquad (7)$$
Corollary 3. (i) If the VC-dimension $d$ of $H$ is finite and $m > 8/\epsilon$, then $\mu^m(Q_\epsilon^m) \le 2 \cdot 2^{-m\epsilon/2}(2em/d)^d$.
(ii) If $H$ has finite cardinality, then $\mu^m(Q_\epsilon^m) \le \sum_{h \in H_\epsilon} (1 - \epsilon_h)^m$.
Proof. (i) Use the estimate $A_{\epsilon,k,\epsilon/2} \le 2$ for $k \ge 8/\epsilon$ resulting from the Chernoff bound and set $\gamma = \epsilon/2$ and $k = m$ in (5). (ii) Substitute the following crude estimate
$$|H_\epsilon|_i \le \sum_{i=0}^{m} |H_\epsilon|_i \le \sum_{i=0}^{m} |H|_i \le \Pi_H(m) \le (em/d)^d$$
into the previous estimate. (iii) Set $k = 0$ in (5) and use the estimate $|H_\epsilon|_i \le \sum_{h \in H_\epsilon} \Pr_{X^m}(\hat\epsilon_{h,z} = i/m) = \sum_{h \in H_\epsilon} \binom{m}{i}(1-\epsilon_h)^{m-i}\epsilon_h^{\,i}$. $\Box$
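The following sketch (ours, assuming numpy) shows how Lemma 1 turns the estimate of Corollary 3.i into a learning-curve bound by numerical integration over epsilon, and compares it with the exact curve of the 2-dimensional homogeneous perceptron treated in Section 3.1 below; the grid size and the choice d = 2 are illustrative.

import numpy as np

def vc_curve_bound(m, d, n_grid=20001):
    """Learning-curve bound from Corollary 3.i via Lemma 1: integrate
    min(1, 2 * 2**(-m*eps/2) * (2*e*m/d)**d) over eps in [0, 1].
    For eps <= 8/m the expression exceeds 1, so taking the minimum with 1
    also covers the region where Corollary 3.i does not apply."""
    eps = np.linspace(0.0, 1.0, n_grid)
    psi = 2.0 * 2.0 ** (-m * eps / 2.0) * (2.0 * np.e * m / d) ** d
    return float(np.minimum(1.0, psi).mean())

def true_curve_2d_perceptron(m):
    """Exact learning curve of the 2-d homogeneous perceptron, Eqn. (13)."""
    return 1.5 / (m + 1)

for m in (50, 100, 500, 1000):
    print(m, round(vc_curve_bound(m, d=2), 4), round(true_curve_2d_perceptron(m), 4))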
The inequality in Corollary 3.i (ignoring the factor of 2) is the basic estimate of the VC-formalism (c.f. [2]); the inequality in Corollary 3.ii is the union bound which is the starting point for the statistical physics based formalism developed in [5]. In this sense both of these theories are unified in estimate (5) and all their conclusions (including the prediction of phase transitions to perfect generalization for the Ising perceptron for $\alpha = m/d < 1.448$ in the thermodynamic limit [5]) can be derived from this estimate, and possibly improved with the use of tighter estimates on $|H_\epsilon|_i$.

Figure 1: (a) Examples of upper bounds on the learning curves for the case of finite VC-dimension $d = d_{VC}(H)$ implied by Corollary 4.ii for $C_{w,m} \equiv$ const. They split into five distinct "bands" of four curves each, according to the values of the order of uniformity $w = 2, 3, 4, 5, 10$ (in the top-down order). Each band contains a solid line ($C_{w,m} \equiv 1$, $d = 100$), a dotted line ($C_{w,m} \equiv 100$, $d = 100$), a chain line ($C_{w,m} \equiv 1$, $d = 1000$) and a broken line ($C_{w,m} \equiv 100$, $d = 1000$).
(b) Various learning curves for the 2-dimensional homogeneous perceptron. Solid lines (top to bottom): (i) for the VC-theory bound (Corollary 3.ii) with VC-dimension $d = 2$; (ii) for the bound (Eqn. 5 and Lemma 1) with $\gamma = \epsilon$, $k = m$ and the upper bounds $|H_\epsilon|_i \le |H|_i = 2$ for $i = 1, \ldots, m-1$ and $|H_\epsilon|_i \le |H|_i = 1$ for $i = 0, m$; (iii) as in (ii) but with the exact values for $|H_\epsilon|_i$ as in (11); (iv) true learning curve (Eqn. 13). The w-uniformity bound for $w = 2$ (with the minimal $C_{w,m}$ satisfying (9), which turns out to be $\equiv$ const $= 1$) is shown by the dotted line; for $w = 3$ the chain line gives the result for the minimal $C_{w,m}$ and the broken line for $C_{w,m}$ set to 1.
We now formally introduce a family of estimates on $|H_\epsilon|_i$ in order to discuss the potential of our formalism. For any $m$, $\epsilon$ and $w \ge 1.0$ there exists $C_{w,m} > 0$ such that
$$|H_\epsilon|_i \le |H|_i \le C_{w,m}\binom{m}{i} P_H(m)^{1 - |1 - 2i/m|^w} \qquad (\text{for } 0 \le i \le m). \qquad (8)$$
We shall call such an estimate a w-uniformity bound.
Corollary 4. (i) If a w-uniformity bound (8) holds, then
$$\mu^m(Q_\epsilon^m) \le A_{\epsilon,m,\gamma}\, C_{w,m} \sum_{j \ge \gamma m} \binom{m}{j}\, P_H(2m)^{1 - |1 - j/m|^w}; \qquad (9)$$
(ii) if additionally $d = d_{VC}(H) < \infty$, then
$$\mu^m(Q_\epsilon^m) \le A_{\epsilon,m,\gamma}\, C_{w,m} \sum_{j \ge \gamma m} \binom{m}{j} \Big(2^{-2m}(2em/d)^d\Big)^{1 - |1 - j/m|^w}. \qquad (10)\ \Box$$
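As a rough illustration of how the order of uniformity w redistributes the allowance between error shells, the short sketch below (added here; not the paper's computation) tabulates the shell factor $P_H(m)^{1-|1-2i/m|^w}$ appearing in (8) for a few shells and several values of w, using the $P_H(m)$ of the 2-dimensional perceptron of Section 3.1 below as a stand-in value.

import numpy as np

m = 50
P_H = m / 2.0 ** (m - 1)        # P_H(m) for the 2-d homogeneous perceptron, Eqn. (12)
shells = np.arange(m + 1)

for w in (1, 2, 3, 5, 10):
    weight = P_H ** (1.0 - np.abs(1.0 - 2.0 * shells / m) ** w)
    # Report the empty shell, a quarter-full shell and the middle shell.
    print(f"w={w:2d}  i=0: {weight[0]:.2e}  i=m/4: {weight[m//4]:.2e}  i=m/2: {weight[m//2]:.2e}")

Larger w forces the factor for all non-extreme shells toward the small value $P_H(m)$, which is the mechanism behind the sharpening of the learning curves reported below.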
3 Examples of learning curves
In this section we evaluate the above formalism on two examples of simple neural networks.
Figure 2: (a) Different learning curves for the higher order neuron (analogous to Fig. 1.b). Solid lines (top to bottom): (i) for the VC-theory bound (Corollary 3.ii) with VC-dimension $d + 1 = 21$; (ii) for the bound (5) with $\gamma = \epsilon$ and the upper bounds $|H_\epsilon|_i \le |H|_i$ with $|H|_i$ given by (15); (iii) true learning curve (the upper bound given by (18)). The w-uniformity bound/approximation are plotted as chain and dotted lines for the minimal $C_{w,m}$ satisfying (8), and as broken (long broken) line for $C_{w,m} =$ const $= 1$ with $w = 2$ ($w = 3$).
(b) Plots of the minimal value of $C_{w,m}$ satisfying the condition of the w-uniformity bound (8) for the higher order neuron and selected values of $w$.

3.1 2-dimensional homogeneous perceptron
We consider $X \stackrel{def}{=} R^2$ and $H$ defined as the family of all functions $(x_1, x_2) \mapsto \theta(x_1 w_1 + x_2 w_2)$, where $(w_1, w_2) \in R^2$ and $\theta(r)$ is defined as 1 if $r \ge 0$ and 0 otherwise, and the probability measure $\mu$ on $R^2$ has rotational symmetry with respect to the origin. Fix an arbitrary target $t \in H$. In such a case
$$|H_\epsilon|_i = \begin{cases} 2(1-\epsilon)^m - (1-2\epsilon)^m & (\text{for } i = 0 \text{ and } 0 \le \epsilon \le 1/2),\\ 1 & (\text{for } i = m),\\ 2\sum_{j=0}^{i}\binom{m}{j}\epsilon^j(1-\epsilon)^{m-j} & (\text{otherwise}). \end{cases} \qquad (11)$$
In particular we find that $|H|_i = 1$ for $i = 0, m$ and $|H|_i = 2$ otherwise, and
$$P_H(m) = \sum_{i=0}^{m} |H|_i / 2^m = (1 + 2 + \cdots + 2 + 1)/2^m = m/2^{m-1}, \qquad (12)$$
and the true learning curve is
$$\epsilon_H(m) = 1.5(m+1)^{-1}. \qquad (13)$$
The latter expression results from Lemma 1 and the equality
$$\mu^m(Q_\epsilon^m) = \begin{cases} 2(1-\epsilon)^m - (1-2\epsilon)^m & (\text{for } 0 \le \epsilon \le 1/2),\\ 2(1-\epsilon)^m & (\text{for } 1/2 < \epsilon \le 1). \end{cases} \qquad (14)$$
Different learning curves (bounds and approximations) for homogeneous perceptron are
plotted in Figure 1.b.
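The closed forms (13) and (14) can be checked against each other numerically; the sketch below (an added illustration, assuming numpy) integrates $\min(1, \mu^m(Q_\epsilon^m))$ over epsilon as in Lemma 1 and compares the result with $1.5(m+1)^{-1}$.

import numpy as np

def mu_Q(eps, m):
    """mu^m(Q_eps^m) for the 2-d homogeneous perceptron, Eqn. (14)."""
    return np.where(eps <= 0.5,
                    2.0 * (1.0 - eps) ** m - (1.0 - 2.0 * eps) ** m,
                    2.0 * (1.0 - eps) ** m)

def curve_from_lemma1(m, n_grid=200001):
    eps = np.linspace(0.0, 1.0, n_grid)
    return float(np.minimum(1.0, mu_Q(eps, m)).mean())   # approximates the integral

for m in (5, 10, 50, 200):
    print(m, round(curve_from_lemma1(m), 5), round(1.5 / (m + 1), 5))

Because (14) is an equality, Lemma 1 gives the learning curve exactly here, and the two printed columns agree up to discretization error.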
3.2 1-dimensional higher order neuron
We consider $X \stackrel{def}{=} [0,1] \subset R$ with a continuous probability distribution $\mu$. Define the hypothesis space $H \subset \{0,1\}^X$ as the set of all functions of the form $\theta \circ p(x)$ where $p$ is a polynomial of degree $\le d$ on $R$. Let the target be constant, $t \equiv 1$. It is easy to see that $H$ restricted to a finite subset of $[0,1]$ is exactly the restriction of the family of all functions $\hat H \subset \{0,1\}^{[0,1]}$ with up to $d$ "jumps" from 0 to 1 or 1 to 0, and thus $d_{VC}(H) = d + 1$. With probability 1 an $m$-sample $z = (z_1, \ldots, z_m)$ from $X^m$ is such that $z_i \ne z_j$ for $i \ne j$. For such a generic $z$, $|\pi_{t,z}(H) \cap \mathcal{E}_i| =$ const $= |H|_i$. This observation was used to derive the following relations for the computation of $|H|_i$:
$$|H|_i = \sum_{\delta=0}^{\min(d, m-1)} \Big( |\hat H(\delta)|_i + |\hat H(\delta)|_{m-i} \Big), \qquad (15)$$
for $0 \le i \le m$, where $|\hat H(\delta)|_i$, for $\delta = 0, 1, \ldots, d$, is defined as follows. We initialize $|\hat H(0)|_i = |\hat H(1)|_i \stackrel{def}{=} 1$ for $i = 1, \ldots, m-1$, $|\hat H(1)|_0 = |\hat H(1)|_m \stackrel{def}{=} 0$ and $|\hat H(\delta)|_i \stackrel{def}{=} 0$ for $i = 0, 1, \ldots, m$, $\delta = 2, 3, \ldots, d$, and then, recurrently, for $\delta \ge 2$ we set $|\hat H(\delta)|_i \stackrel{def}{=} \sum_{k=\max(\delta, m-i)}^{m-1} |\hat H(\delta-1)|_{i-m+k}$ if $\delta$ is odd and $|\hat H(\delta)|_i \stackrel{def}{=} \sum_{k=\delta}^{m-1} |\hat H(\delta-1)|_{k}$ if $\delta$ is even.
(Here $|\hat H(\delta)|_i$ is defined by the relation (4) with the target $t \equiv 1$ for the hypothesis space $\hat H(\delta) \subset \hat H$ composed of functions having the value 1 near 0 and exactly $\delta$ jumps in $(0,1)$, exactly at entries of $z$; similarly as for $H$, $|\hat H(\delta)|_i = |\pi_{1,z}(\hat H(\delta)) \cap \mathcal{E}_i|$ for a generic $m$-sample $z \in (0,1)^m$.)
Analyzing an embedding of $R$ into $R^d$, and using an argument based on the Vandermonde determinant as in [6, 13], it can be proved that the partition function $\Pi_H$ is given by Cover's counting function [4]. (16)
For the uniform distribution on $[0,1]$ and a generic $z \in [0,1]^m$ let $A_k(z)$ denote the sum of the $k$ largest segments of the partition of $[0,1]$ into $m+1$ segments by the entries of $z$. Then
$$A_{\lfloor d/2 \rfloor}(z) \le \epsilon_H^{\max}(z) \le A_{\lfloor d/2 \rfloor + 1}(z). \qquad (17)$$
An explicit expression for the expected value of $A_k$ is known [11], thus a very tight bound on the true learning curve $\epsilon_H(m)$ defined by (2) can be obtained:
$$\frac{\lfloor d/2 \rfloor}{m+1}\Big(1 + \sum_{i=\lfloor d/2 \rfloor + 1}^{m+1} \frac{1}{i}\Big) \le \epsilon_H(m) \le \frac{\lfloor d/2 \rfloor + 1}{m+1}\Big(1 + \sum_{i=\lfloor d/2 \rfloor + 2}^{m+1} \frac{1}{i}\Big). \qquad (18)$$
Numerical results are shown in Figure 2.
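The harmonic-sum bounds in (18) are easy to evaluate; the following sketch (ours, not from the paper, and assuming the reconstruction of (18) above) computes the lower and upper bounds for the degree-20 neuron used in Figure 2.

def harmonic_tail(a, b):
    """Sum of 1/i for i = a..b (empty if a > b)."""
    return sum(1.0 / i for i in range(a, b + 1))

def neuron_curve_bounds(m, d):
    """Lower/upper bounds of Eqn. (18) on the learning curve of the
    1-dimensional higher order neuron (degree-d polynomial, uniform measure)."""
    k = d // 2
    lower = k / (m + 1) * (1.0 + harmonic_tail(k + 1, m + 1))
    upper = (k + 1) / (m + 1) * (1.0 + harmonic_tail(k + 2, m + 1))
    return lower, upper

for m in (50, 100, 200, 400):
    lo, hi = neuron_curve_bounds(m, d=20)
    print(m, round(lo, 4), round(hi, 4))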
4 Discussion and conclusions
The basic estimate (5) of Theorem 2 has been used to produce upper bounds on the learning curve (via Lemma 1) in three different ways: (i) using the exact values of the coefficients $|H_\epsilon|_i$ (Fig. 1a), (ii) using the estimate $|H_\epsilon|_i \le |H|_i$ and the values of $|H|_i$, and (iii) using the w-uniformity bound (8) with the minimal value of $C_{w,m}$ and as an "approximation" with $C_{w,m} =$ const $= 1$. Both examples of simple learning tasks considered in the paper allowed us to compare these results with the true learning curves (or their tight bounds) which can serve as benchmarks.
Figure 1.a implies that the value of the parameter $w$ in the w-uniformity bound (approximation), governing the distribution of error patterns between different error shells (c.f. [10]), has a significant impact on learning curve shapes, changing from slow decrease to rapid jumps ("phase transitions") in generalization.
Figure 1.b proves that one loses tightness of the bound by using $|H|_i$ rather than $|H_\epsilon|_i$, and even more is lost if w-uniformity bounds (with variable $C_{w,m}$) are employed. Inspecting Figures 1.b and 2.a we find that approximate approaches consisting of replacing $|H_\epsilon|_i$ by a simple estimate (w-uniformity) can produce learning curves very close to the $|H|_i$ learning curves, suggesting that an application of this formalism to learning systems where neither $|H_\epsilon|_i$ nor $|H|_i$ can be calculated might be possible. This could lead to a sensible approximate theory capturing at least certain qualitative properties of learning curves for more complex learning tasks.
Generally, the results of this paper show that by incorporating limited knowledge of the statistical distribution of error patterns in the sample space one can dramatically improve bounds on the learning curve with respect to the classical universal estimates of the VC-theory. This is particularly important for "practical" training sample sizes (m ~ 12 x VC-dimension) where the VC-bounds are void.
Acknowledgement. The permission of Director, Telstra Research Laboratories, to publish
this paper is gratefully acknowledged. A.K. acknowledges the support of the Australian
Research Council.
References
[1] S. Amari, N. Fujita, and S. Shinomoto. Four types of learning curves. Neural Computation, 4(4):605-618, 1992.
[2] M. Anthony and N. Biggs. Computational Learning Theory. Cambridge University Press, 1992.
[3] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36:929-965, Oct. 1989.
[4] T. M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Elec. Comp., EC-14:326-334, 1965.
[5] D. Haussler, M. Kearns, H. S. Seung, and N. Tishby. Rigorous learning curve bounds from statistical mechanics. In Proc. 7th Ann. ACM Conf. on Comput. Learning Theory, pages 76-87, 1994.
[6] A. Kowalczyk. Estimates of storage capacity of multi-layer perceptron with threshold logic hidden units. Neural Networks, to appear.
[7] A. Kowalczyk. VC-formalism with explicit bounds on error shells size distribution. Manuscript, 1994.
[8] A. Kowalczyk and H. Ferra. Generalisation in feedforward networks. Adv. in NIPS 7, The MIT Press, Cambridge, 1995.
[9] A. Kowalczyk, J. Szymanski, and H. Ferra. Combining statistical physics with VC-bounds on generalisation in learning systems. In Proc. ACNN'95, Sydney, 1995. University of Sydney.
[10] A. Kowalczyk, J. Szymanski, and R. C. Williamson. Learning curves from a modified VC-formalism: a case study. In Proceedings of ICNN'95, Perth (CD-ROM), volume VI, pages 2939-2943, Rundle Mall, South Australia, 1995. IEEE / Causal Productions.
[11] J. G. Mauldon. Random division of an interval. Proc. Cambridge Phil. Soc., 47:331-336, 1951.
[12] K. R. Muller, M. Finke, N. Murata, and S. Amari. On large scale simulations for learning curves. In Proc. ACNN'95, pages 45-48, Sydney, 1995. University of Sydney.
[13] A. Sakurai. n-h-1 networks store no less than nh+1 examples but sometimes no more. In Proceedings of the 1992 International Conference on Neural Networks, pages III-936-III-941. IEEE, June 1992.
[14] H. Sompolinsky, H. S. Seung, and N. Tishby. Statistical mechanics of learning curves. Physical Review A, 45:6056-6091, 1992.
[15] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
[16] V. Vapnik, E. Levin, and Y. Le Cun. Measuring the VC-dimension of a learning machine. Neural Computation, 6(5):851-876, 1994.
[17] C. Wang and S. S. Venkatesh. Temporal dynamics of generalization in neural networks. Adv. in NIPS 7, The MIT Press, Cambridge, 1995.
| 1086 |@word determinant:1 version:1 polynomial:1 simulation:1 solid:3 ld:3 initial:1 series:1 contains:1 chervonenkis:1 universality:1 numerical:2 partition:3 j1:5 shape:2 drop:1 plot:1 selected:1 warmuth:1 lr:3 compo:1 sudden:1 provides:1 five:1 lor:1 rc:1 director:1 qualitative:1 introduce:2 theoretically:1 sacrifice:1 expected:4 rapid:1 behavior:1 telstra:2 examine:1 nor:1 mechanic:2 multi:1 cardinality:2 notation:2 developed:1 unified:1 elm:1 temporal:1 act:1 ofa:1 growth:1 exactly:3 qm:2 zl:1 unit:1 appear:2 engineering:1 limit:1 ak:1 analyzing:1 might:1 au:2 limited:1 range:1 ihi:1 practical:1 union:2 lost:1 universal:1 empirical:3 projection:1 road:1 close:1 storage:1 restriction:1 maxz:1 dz:3 phil:1 williams:1 starting:1 haussler:2 embedding:1 fx:1 analogous:1 target:6 exact:2 homogeneous:4 hypothesis:7 origin:1 satisfying:3 forthe:1 jk:2 particularly:1 recognition:1 ising:1 bottom:2 solved:1 wang:1 adv:2 sompolinsky:1 decrease:1 broken:5 complexity:1 seung:2 dynamic:1 uniformity:8 tight:2 segment:2 serve:1 division:1 biggs:1 various:1 elec:1 distinct:2 dichotomy:1 tightness:1 otherwise:3 amari:2 obviously:1 rr:1 maximal:1 zm:1 combining:1 oz:1 qr:3 produce:2 perfect:3 derive:1 vcdimension:1 odd:1 op:1 sydney:4 soc:1 implies:2 australian:2 direction:1 vc:23 dvc:4 australia:3 fix:1 generalization:4 icnn:1 tighter:3 inspecting:1 extension:1 hold:2 considered:1 lm:1 jx:2 estimation:1 proc:4 council:1 largest:1 wl:2 mit:2 modified:7 rather:1 ej:1 corollary:7 derived:2 june:1 improvement:2 vapnikchervonenkis:1 contrast:1 rigorous:1 criticism:1 sense:2 dependent:1 el:8 lj:5 hidden:1 relation:3 keams:1 fujita:1 issue:1 ill:1 integration:1 initialize:1 fuller:1 having:1 chernoff:1 employ:1 composed:2 national:1 phase:6 consisting:1 interest:2 evaluation:1 introduces:1 analyzed:1 chain:5 allocating:1 fu:1 iv:1 re:1 plotted:2 causal:1 minimal:5 ferra:2 formalism:15 cover:2 sakurai:1 measuring:1 finke:1 subset:4 entry:2 uniform:1 levin:1 too:1 learnability:1 tishby:2 fundamental:1 international:1 ie:5 physic:4 continuously:1 ilj:1 central:3 possibly:1 lii:13 style:1 li:6 suggesting:1 potential:3 de:2 ilm:1 ifo:1 coefficient:1 vi:1 a45:1 il:4 ir:1 murata:1 yield:1 comp:1 llt:1 against:1 obvious:3 proof:4 mi:1 con:1 proved:1 knowledge:1 cj:2 manuscript:1 higher:4 improved:1 done:1 governing:1 eqn:2 ei:1 replacing:1 true:7 equality:3 laboratory:2 ehrenfeucht:1 shinomoto:1 ll:3 elr:1 outline:1 geometrical:1 ranging:1 novel:1 physical:1 overview:1 volume:1 discussed:1 he:7 significant:1 cambridge:4 similarly:1 gratefully:1 store:1 verlag:1 certain:1 inequality:3 binary:1 muller:1 employed:1 ii:20 full:1 thermodynamic:1 heh:2 long:1 impact:1 prediction:2 basic:4 publish:1 sometimes:1 addition:1 interval:1 void:1 w2:2 south:1 integer:1 call:1 near:1 counting:1 feedforward:1 split:3 iii:5 easy:1 variety:2 affect:1 fori:2 zi:1 idea:1 shift:1 whether:1 expression:2 bartlett:5 syseng:1 jj:3 dramatically:1 generally:1 mid:1 band:2 ph:4 generate:1 zj:1 dotted:3 disjoint:2 shall:2 four:2 threshold:1 acknowledged:1 changing:1 neither:1 sum:1 inverse:1 family:4 capturing:1 bound:32 layer:1 ri:4 argument:1 min:2 f01:1 department:1 according:1 neral:1 jr:1 em:8 cun:1 modification:2 hl:1 restricted:1 turn:1 loose:1 ihe:2 discus:1 kowalczyk:8 generic:3 permission:1 substitute:1 top:3 perth:1 const:5 prof:1 classical:1 implied:1 question:1 rt:4 dependence:1 subspace:1 cw:12 separate:1 capacity:1 sensible:1 me:1 fonnalism:1 rom:1 assuming:1 rotational:1 innovation:1 trl:1 upper:7 neuron:4 observation:1 
benchmark:1 finite:5 mauldon:1 arbitrary:1 clayton:1 iih:4 nip:2 trans:1 pattern:6 xm:6 including:2 max:2 oj:2 mall:1 difficulty:1 eh:5 improve:1 vic:1 acknowledges:1 review:1 acknowledgement:1 zh:1 determining:1 relative:1 proportional:1 vandermonde:1 degree:1 consistent:3 displaying:1 cd:1 production:1 lo:1 last:1 perceptron:6 curve:33 dimension:9 l7rt:1 transition:6 cumulative:1 calculated:1 qualitatively:1 refinement:2 jump:3 simplified:1 ec:1 approximate:2 logic:1 ml:1 szymanski:7 jiaa:3 continuous:1 mj:1 learn:1 ignoring:1 symmetry:1 williamson:5 complex:1 anthony:1 acnn:2 main:2 whole:1 prx:1 allowed:1 fig:2 canberra:1 slow:1 explicit:2 col:2 crude:2 theorem:5 down:1 bad:2 specific:3 pac:2 recurrently:1 r2:3 decay:1 exists:2 incorporating:1 ih:10 vapnik:3 te:1 anu:1 entropy:1 maa:1 springer:1 loses:1 determines:1 ald:2 acm:2 shell:8 oct:1 presentation:1 blumer:1 ann:1 typical:1 generalisation:3 lemma:4 called:2 formally:1 support:1 latter:1 arises:2 l1i:1 evaluate:1 ex:1 |
98 | 1,087 | Using Feedforward Neural Networks to
Monitor Alertness from Changes in EEG
Correlation and Coherence
Scott Makeig
Naval Health Research Center, P.O. Box 85122
San Diego, CA 92186-5122
Tzyy-Ping Jung
Naval Health Research Center and
Computational Neurobiology Lab
The Salk Institute, P.O. Box 85800
San Diego, CA 92186-5800
Terrence J. Sejnowski
Howard Hughes Medical Institute and
Computational Neurobiology Lab
The Salk Institute, P.O. Box 85800
San Diego, CA 92186-5800
Abstract
We report here that changes in the normalized electroencephalographic (EEG) cross-spectrum can be used in conjunction with
feedforward neural networks to monitor changes in alertness of operators continuously and in near-real time. Previously, we have
shown that EEG spectral amplitudes covary with changes in alertness as indexed by changes in behavioral error rate on an auditory
detection task [6,4]. Here, we report for the first time that increases
in the frequency of detection errors in this task are also accompanied by patterns of increased and decreased spectral coherence in
several frequency bands and EEG channel pairs. Relationships
between EEG coherence and performance vary between subjects,
but within subjects, their topographic and spectral profiles appear
stable from session to session. Changes in alertness also covary
with changes in correlations among EEG waveforms recorded at
different scalp sites, and neural networks can also estimate alertness from correlation changes in spontaneous and unobtrusivelyrecorded EEG signals.
1
Introduction
When humans become drowsy, EEG scalp recordings of potential oscillations change
dramatically in frequency, amplitude, and topographic distribution [3]. These
changes are complex and differ between subjects [10]. Recently, we have shown
that using principal components analysis in conjunction with feedforward neural
networks, minute-scale changes in performance on a sustained auditory detection
task can be estimated in near real-time from changes in the EEG spectrum at one
or more scalp channels [4, 6]. Here, we report, first, that loss of alertness during
auditory detection task performance is also accompanied by changes in spectral coherence of EEG signals recorded at different scalp sites. The extent, topography,
and frequency content of coherence changes linked to changes in alertness differ
between subjects, but within subjects they appear stable from session to session.
Second, since most coherence changes linked to alertness are not associated with
significant phase differences, moving correlation measures applied to wideband or
bandlimited EEG waveforms also covary with changes in alertness. Incorporating coherence and/or correlation information into neural network algorithms for
estimating alertness from the EEG spectrum should enhance their accuracy and robustness and contribute to the design of practical neural human-system interfaces
performing real-time monitoring of changes in operator alertness .
2
Methods
Concurrent EEG and behavioral data were collected for the purpose of developing a
method of objectively monitoring the alertness of operators of complex systems [6] .
Ten adult volunteers participated in three or more half-hour sessions during which
they pushed one button whenever they detected an above-threshold auditory target
stimulus (a brief increase in the level of the continuously-present background noise).
To maximize the chance of observing alertness decrements, sessions were conducted
in a small, warm, and dimly-lit experimental chamber, and subjects were instructed
to keep their eyes closed.
Targets were 350 ms increases in the intensity of a 62 dB white noise background ,
6 dB above their threshold of detectability, presented at random time intervals at a
mean rate of 10/min. Short, task-irrelevant probe tones of two frequencies (568
and 1098 Hz) were interspersed between the target noise bursts at 2-4 s intervals.
EEG was collected from thirteen electrodes located at sites of the International 10-20
System, referred to the right mastoid, at a sampling rate of 312.5 Hz . A bipolar
diagonal electrooculogram (EOG) channel was also recorded for use in eye movement
artifact correction and rejection. Two sessions each from three of the subjects were
chosen for analysis on the basis of their including more than 50 detection lapses.
A continuous performance measure, local error rate, was computed by convolving
an irregularly-spaced performance index (hit=0/lapse=1) with a 95 s smoothing
window advanced through the performance data in 1.64 s steps. Target hits were
defined as targets responded to within a 100-3000 ms window; other targets were
called lapses. After eye movement artifacts were removed from the data using a
selective regression procedure [5], and data containing other large artifacts were
rejected from analysis, complex EEG spectra were computed by advancing a 512-point (1.64 s) data window through the data in 0.41 s steps, multiplying by a
Hanning window, and converting to frequency domain using an FFT.
Complex coherence was then computed for each channel pair in 1.64 s spectral
epochs. In the coherence studies, error rate was smoothed with a bell-shaped Papoulis window; a 36 s rectangular window was used to smooth the coherence estimates. Finally, complex coherence was converted to coherence amplitude and phase
and results were correlated with local error rate. A moving correlation measure between (1-20 Hz) bandlimited EEG waveforms was computed for each channel pair
in a moving 1.64 s smoothing window, and then smoothed using a causal 95-s exponential window . The same window was used to smooth the error rate time series
for the correlation studies.
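For readers who wish to reproduce this style of measure, the sketch below (our illustration in Python with numpy; the original analysis code is not given in the paper) computes a moving-window coherence estimate and a moving waveform correlation for a single channel pair. The window, step and smoothing lengths follow the values quoted above, while the synthetic test signals and all function names are our own assumptions.

import numpy as np

FS = 312.5                 # sampling rate (Hz)
WIN = 512                  # 1.64 s analysis window
STEP = 128                 # 0.41 s step between windows
SMOOTH = 88                # 88 epochs at the 0.41 s step span roughly 36 s

def windowed_ffts(x):
    """Hanning-windowed FFTs of overlapping WIN-point epochs of x."""
    starts = range(0, len(x) - WIN + 1, STEP)
    taper = np.hanning(WIN)
    return np.array([np.fft.rfft(x[s:s + WIN] * taper) for s in starts])

def moving_coherence(x, y):
    """Coherence amplitude per frequency bin, averaged over SMOOTH epochs."""
    X, Y = windowed_ffts(x), windowed_ffts(y)
    coh = []
    for k in range(len(X) - SMOOTH + 1):
        Sxy = np.mean(X[k:k + SMOOTH] * np.conj(Y[k:k + SMOOTH]), axis=0)
        Sxx = np.mean(np.abs(X[k:k + SMOOTH]) ** 2, axis=0)
        Syy = np.mean(np.abs(Y[k:k + SMOOTH]) ** 2, axis=0)
        coh.append(np.abs(Sxy) / np.sqrt(Sxx * Syy))
    return np.array(coh)               # shape: (n_windows, n_freq_bins)

def moving_correlation(x, y):
    """Zero-lag Pearson correlation of x and y in successive WIN-point epochs."""
    starts = range(0, len(x) - WIN + 1, STEP)
    return np.array([np.corrcoef(x[s:s + WIN], y[s:s + WIN])[0, 1] for s in starts])

# Example with synthetic data: a shared 10 Hz rhythm plus independent noise.
t = np.arange(int(FS * 120)) / FS
shared = np.sin(2 * np.pi * 10 * t)
rng = np.random.default_rng(1)
ch1 = shared + 0.5 * rng.standard_normal(t.size)
ch2 = shared + 0.5 * rng.standard_normal(t.size)
print(moving_coherence(ch1, ch2).shape, moving_correlation(ch1, ch2)[:3])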
3 Results
Figure 1: (a) Changes in coherence amplitude at 9.1 Hz (upper traces) are correlated with simultaneous changes in error rate during a half-hour auditory detection task (lower trace) in nine indicated central-frontal channel pairs. (b) Concurrent changes in coherence phase at 15.25 Hz (upper traces) and local error rate (lower trace) for the same session and channel pairs.
3.1
Relation of Coherence Changes to Detection Performance.
During the first 2-3 minutes of the session shown in Fig. la, the subject detected
all targets presented, and coherence amplitudes remained high (0.9). In minutes
8-10, however, when the subject failed to make a single detection response(lower
trace), coherence amplitude fell to as low as 0.6. Overall correlations for this session
between the coherence and error rate time series in these channel pairs ranged from
-0.590 to -0.776.
In the same session, coherence phase at 15 Hz also covaried with performance
(Fig. 1b). During low-error portions of the session, there was no detectable coherence phase lag at 15 Hz within the same nine channel pairs, whereas while the
subject performed poorly, a 20 degree phase lag appeared during which 15 Hz activity at frontal sites lead activity at frontal sites by 3 ms. Overall correlations for
this session between coherence phase and error rate for these channel pairs ranged
from 0.416 to 0.689. Correlations between coherence amplitude and error rate at
80 EEG frequencies (Fig. 2a, upper traces) included two broad bands of strong negative correlations (3-12 Hz and 15-20 Hz), while appreciable correlations between
coherence phase and performance were confined to much narrower frequency bands
(lower traces).
To estimate the significance ofthese coherence correlations, surrogate moving coherence records were collected 10 times using randomly-selected, asynchronous blocks
of contiguous EEG data for each channeL Correlations between the resulting surrogate moving coherence time series and error rate were computed, and the 99.936th
percentile of the distribution of (absolute) correlations was determined. For the
subject whose data is shown here, this value was 0.485 . Under conservative assumptions of complete independence of adjacent frequencies, this should give the
(p=0.05) significance level for the maximum absolute correlation in each 80-bin
correlation spectrum. (The heuristic estimate of this significance level from the
surrogate data was 0.435). In the two sessions from this subject, however, more
than 20% of all the 78 channel-pair coherence correlations were larger in absolute
value than 0.485, implying that coherence amplitude changes at many scalp sites
and frequencies are significantly related to changes in alertness in this subject.
3.2 Spectral and Topographic Stability
Figure 2: (a) Correlation spectra showing correlations between moving-average coherence and error rate for the same session and channel-pairs. Small letters 'a,b,c'
indicate the frequencies analyzed in Figs. 1 and 3. (b) Cluster analysis of correlations between coherence amplitude and error rate at 41 frequencies (0.6 Hz to 25
Hz). Means of six sets of channel pairs derived from cluster analysis of 78 similar
coherence correlation spectra from all pairs of 13 scalp channels; superimposed on
the same means for a second session from the same subject.
The sign, size, and spectral and topographic structure of correlations between coherence amplitude and error rate at each frequency were stable across two sessions
for most channel pairs and frequency bands. Fig. 2b shows mean spectral correlations in both sessions from the same subject for six clusters of similar channel-pair
correlation spectra identified by cluster analysis on results of the first session. Except near 5 Hz, the size and structure of the correlation spectra for the second
session replicate results of the first session. The spectral stability of monotonic
relationships between EEG coherence and auditory detection performance suggests
that coherence may be used to predict changes in performance level from spontaneous EEG data collected continuously and unobtrusively from two or more scalp
channels.
3.3
EEG Waveform Correlations and Performance
In most cases, coherence phase lags in these data are small, and correlations between changes in phase lag and performance were insignificant. We therefore investigated whether moving-average correlations between band-limited EEG signals
in different scalp channels might also be used to predict changes in alertness, possibly at a lower computational cost, by studying the relationship between error
rate and changes in moving-average correlations of time-domain EEG waveforms
(1-20 Hz bandpass) in the same 6 sessions. Again, we found that the strength
and topographic structure of significant relationships between moving-correlation
and performance measures are stable within, and variable between subjects. For
each subject, we selected 8 EEG channel pairs whose moving-correlation time series
correlated most highly with error rate, and used these to train a multilinear regression network and three feedforward three-layer perceptrons to estimate error rate
from moving-average correlations. The feedforward neural networks had 3, 4, and
5 hidden units, respectively. Weights and biases of the network were adjusted using
the error backpropagation algorithm [9]. Conjugate gradient descent was used to
minimize the mean-squared error between network output and the actual error rate
time series. Cross-validation [7] was used to prevent the network from overfitting
the training data. For each of the 6 training-testing session pairs and each neural
network architecture, the time course of error rate was estimated five times using
different random initial weights between -0.3 and 0.3. We tested the generalization
ability of the models on second sessions from the same subjects. The procedure
simulated potential real-world alertness monitoring applications in which pilot data
for each operator would be used to train a network to estimate his or her alertness
in subsequent sessions from unobtrusively-recorded EEG data.
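A minimal version of such a network can be written in a few lines; the sketch below (an added illustration, not the authors' code) trains a one-hidden-layer regression network with three tanh units on stand-in features, keeping the weights with the lowest validation error as a simple form of cross-validation. Plain gradient descent is used here instead of the conjugate-gradient minimization described above, purely to keep the example short.

import numpy as np

def train_alertness_net(X, y, X_val, y_val, hidden=3, lr=0.05, epochs=5000, seed=0):
    """Tiny 1-hidden-layer regression network (tanh units, linear output)."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-0.3, 0.3, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.uniform(-0.3, 0.3, hidden);               b2 = 0.0
    best, best_val = None, np.inf
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        out = h @ W2 + b2
        err = out - y
        # Backpropagation of the (half) mean-squared error.
        gW2 = h.T @ err / len(y);  gb2 = err.mean()
        dh = np.outer(err, W2) * (1 - h ** 2)
        gW1 = X.T @ dh / len(y);   gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
        val = np.mean((np.tanh(X_val @ W1 + b1) @ W2 + b2 - y_val) ** 2)
        if val < best_val:                       # keep the best validation weights
            best_val, best = val, (W1.copy(), b1.copy(), W2.copy(), b2)
    return best, best_val

# Illustrative call with random stand-ins for the 8 moving-correlation features.
rng = np.random.default_rng(2)
X, y = rng.standard_normal((400, 8)), rng.uniform(0, 1, 400)
(W1, b1, W2, b2), v = train_alertness_net(X[:300], y[:300], X[300:], y[300:])
print("validation MSE:", round(v, 4))

In a real application the rows of X would be the smoothed pairwise correlation values for the selected channel pairs and y the concurrently smoothed local error rate.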
Accuracy of error rate estimation in the test sessions was almost identical for neural
networks with 3, 4, and 5 hidden units. Each was more accurate than multivariate
linear regression. Figure 3 shows the time courses of actual and estimated error rate
in one pair of training (top paneQ and test sessions. Results for two other subjects
were equivalent. Table 1 shows the average correlations and root-mean-squared
estimation error between actual and estimated error rate time series for 6 sessions,
2 each on 3 subjects using a feedforward neural network with 3 hidden units. Results
using 4 or 5 hidden units are equivalent. Diagonal cells show results for training
sessions, off-diagonal cells for test sessions. The nonlinear adaptability of threelayer perceptrons give improved estimation performance over multivariate linear
regression, reducing the RMS estimation error in the test sessions from 0.255 to
0.225 (F(l, 5) = 1234.29;p ~ 0.0001), and increasing the mean correlation between
actual and estimated error rate time series from 0.63 to 0.67 (F(l, 5) = 549.5;p ~
0.0001).
4
Discussion
Spectral coherence of EEG waveforms at different scalp sites has been measured
for nearly 30 years [11]. and is the subject of a steadily increasing number of clinical, behavioral, and developmental EEG studies. Coherence values are known to
be higher in sleep than in waking [8], and wake-sleep transitions have been noted
to be preceded by increased coherence at some frequencies [2]. Our results, from
data on three subjects performing a sustained auditory detection task under soporific conditions, suggest that during drowsiness, coherence may either increase
or decrease, depending on the subject, analysis frequency, and electrode sites analyzed. However, in individual subjects the spectral and topographic structure of
alertness-related coherence changes appears stable from session to session.
EEG correlation and coherence are intimately related: changes in moving-average
correlations of EEG waveforms reflect changes in broad-band, zero-lag coherence
of activity at the same sites. The possibility of using moving-average correlation
measures of electrophysiological activity to monitor state changes in animals was
discussed by Arduini [1], but to our knowledge this approach has not previously
been applied to human EEG.
The origin and function of nonstationarity in EEG synchrony are not yet understood. Decreased EEG coherence during drowsiness might result from inactivation
of subcortical brain systems coordinating activity in separate cortical EEG generators during wakefulness, or from emergence of drowsiness-related EEG activity
projecting preferentially to one part of the scalp surface. Similarly, increases in
coherence in drowsiness might either result from increased synchrony between cortical generators, or from volume conduction of enhanced activity generated at a
single cortical or subcortical site. Measuring changes in EEG coherence and correlation during other cognitive tasks give clues to the possible role of variable EEG
synchrony in brain and cognitive dynamics.
We are now investigating to what extent moving EEG coherence and/or correlation
measures, in combination with spectral amplitude measures [4], will allow practical, robust, continuous, and near-real time estimation of alertness level in auditory
detection and other task environments.
[Figure 3 shows two panels titled "Estimated and Actual Error Rates", plotting error rate against time on task (min). Top panel: training session 3674, test session 3674, RMS 0.1706, Corr 0.8246. Bottom panel: training session 3674, test session 3648, RMS 0.2418, Corr 0.7159.]
Figure 3: Changes in detection rate (95-s exponential window) and their estimate
using a feedforward three-layer perceptron on moving correlations between (1-20
Hz) band passed EEG signals for 8 selected pairs of 7 scalp channels. The top panel
shows the training session, the bottom panel the testing session. Solid lines show
the actual error rate time course; dashed lines, the estimate. Correlation and RMS
error between the two are indicated.
Table 1: The results of alertness monitoring using moving EEG pairwise correlation.

Subject A              Training set 3648       Training set 3674
  Test set 3648        rms 0.17, corr 0.87     rms 0.26, corr 0.68
  Test set 3674        rms 0.21, corr 0.73     rms 0.17, corr 0.83

Subject B              Training set 3654       Training set 3656
  Test set 3654        rms 0.17, corr 0.83     rms 0.22, corr 0.73
  Test set 3656        rms 0.25, corr 0.54     rms 0.14, corr 0.76

Subject C              Training set 3665       Training set 3673
  Test set 3665        rms 0.19, corr 0.76     rms 0.23, corr 0.65
  Test set 3673        rms 0.18, corr 0.67     rms 0.17, corr 0.70
Acknowledgments
This work was supported by a grant (ONR.Reimb.30020.6429) to the Naval Health
Research Center by the Office of Naval Research. The views expressed in this
article are those of the authors and do not reflect the official policy or position of
the Department of the Navy, Department of Defense, or the U.S. Government. We
acknowledge the contributions of Keith Jolley, F.scot Elliott, and Mark Postal in
collecting and processing the data, and thank Tony Bell for suggestions.
References
[1] Arduini A. 1979. In-phase brain activity and sleep. Electroencephalogr clin Neurophysiol 47, 441-9
[2] Borodkin SM, Grindel OM, Boldyreva GN, Zaitsev VA & Luk'ianov VI. 1987. Dynamics of the spectral-coherent characteristics of the human EEG in healthy subjects and brain pathology. Zh Vyssh Nerv Deiat 37, 22-30
[3] Davis H., Davis P.A., Loomis A.L., Harvey E.N., & Hobart G. 1938. Human brain potentials during the onset of sleep. J Neurophysiol 1, 24-38
[4] Jung T-P, Makeig S., Stensmo M., & Sejnowski T. Estimating alertness from the EEG power spectrum, submitted for publication.
[5] Kenemans J.L., Molenaar P.C.M., Verbaten M.N. & Slangen J.L. 1991. Removal of the ocular artifact from the EEG: a comparison of time and frequency domain methods with simulated and real data. Psychophysiology 28, 114-121
[6] Makeig S. & Inlow M. 1993. Lapses in alertness: Coherence of fluctuations in performance and EEG spectrum. Electroencephalogr clin Neurophysiol 86, 23-35
[7] Morgan N. & Bourlard H. 1990. Generalization and parameter estimation in feedforward nets: some experiments. Neural Information Processing Systems, 2, 630-637
[8] Nielsen T, Abel A, Lorrain D, & Montplaisir J. 1990. Interhemispheric EEG coherence during sleep and wakefulness in left- and right-handed subjects. Brain and Cognition 14, 113-25
[9] Rumelhart D, Hinton G, & Williams R. 1986. Learning internal representations by error propagation. Parallel Distributed Processing, Chap. 8
[10] Santamaria J. & Chiappa K.H. 1987. The EEG of drowsiness in normal adults. J Clin Neurophysiol 4, 327-82
[11] Walter D.O. 1968. Coherence as a measure of relationship between EEG records. Electroencephalogr clin Neurophysiol 24, 282
| 1087 |@word luk:1 replicate:1 solid:1 papoulis:1 initial:1 series:7 molenaar:1 yet:1 subsequent:1 implying:1 half:2 selected:3 tone:1 short:1 record:2 postal:1 contribute:1 five:1 burst:1 become:1 sustained:2 behavioral:3 pairwise:1 brain:6 chap:1 actual:6 window:10 increasing:2 estimating:2 panel:2 what:1 electroencephalog:3 collecting:1 bipolar:1 makeig:6 hit:2 unit:4 medical:1 grant:1 appear:2 drowsy:1 understood:1 local:3 fluctuation:1 might:3 suggests:1 wideband:1 limited:1 practical:2 acknowledgment:1 testing:2 hughes:1 block:1 backpropagation:1 procedure:2 bell:2 significantly:1 suggest:1 operator:4 equivalent:2 center:3 williams:1 rectangular:1 his:1 stability:2 diego:3 spontaneous:2 target:7 enhanced:1 origin:1 rumelhart:1 located:1 bottom:1 role:1 alertness:25 decrease:1 movement:2 removed:1 developmental:1 environment:1 abel:1 dynamic:2 interhemispheric:1 threelayer:1 basis:1 train:2 walter:1 sejnowski:3 detected:2 navy:1 whose:2 lag:5 heuristic:1 larger:1 objectively:1 ability:1 topographic:6 emergence:1 net:1 wakefulness:2 poorly:1 slangen:1 electrode:2 cluster:4 depending:1 chiappa:1 measured:1 keith:1 strong:1 indicate:1 differ:2 waveform:7 human:5 bin:1 government:1 generalization:2 sejnowskl:2 multilinear:1 adjusted:1 correction:1 normal:1 cognition:1 predict:2 vary:1 purpose:1 estimation:6 healthy:1 concurrent:2 inactivation:1 office:1 conjunction:2 publication:1 derived:1 naval:4 electroencephalographic:1 superimposed:1 hidden:4 relation:1 her:1 selective:1 overall:2 among:1 animal:1 smoothing:2 shaped:1 sampling:1 identical:1 lit:1 broad:2 nearly:1 report:3 stimulus:1 randomly:1 individual:1 phase:11 detection:12 highly:1 possibility:1 analyzed:2 accurate:1 indexed:1 unobtrusively:2 causal:1 santamaria:1 increased:3 handed:1 gn:1 contiguous:1 measuring:1 cost:1 conducted:1 conduction:1 terrence:1 off:1 enhance:1 continuously:3 again:1 central:1 recorded:4 squared:2 containing:1 reflect:2 possibly:1 cognitive:2 convolving:1 potential:3 converted:1 accompanied:2 onset:1 performed:1 root:1 view:1 lab:2 closed:1 linked:2 observing:1 mastoid:1 portion:1 parallel:1 synchrony:3 contribution:1 minimize:1 il:1 ir:1 accuracy:2 responded:1 om:1 characteristic:1 spaced:1 monitoring:4 multiplying:1 submitted:1 ping:1 simultaneous:1 whenever:1 nonstationarity:1 frequency:20 steadily:1 ocular:1 associated:1 auditory:8 pilot:1 knowledge:1 electrophysiological:1 amplitude:11 adaptability:1 nielsen:1 appears:1 higher:1 response:1 improved:1 box:3 rejected:1 correlation:43 nonlinear:1 propagation:1 artifact:4 indicated:2 normalized:1 ranged:2 covaried:1 lapse:4 covary:3 white:1 adjacent:1 during:12 davis:2 noted:1 percentile:1 m:3 complete:1 interface:1 tzyy:1 recently:1 preceded:1 volume:1 interspersed:1 discussed:1 significant:2 session:35 similarly:1 pathology:1 had:1 moving:16 stable:5 surface:1 multivariate:2 chan:1 irrelevant:1 harvey:1 onr:1 kenemans:1 morgan:1 converting:1 maximize:1 signal:4 dashed:1 smooth:2 cross:2 clinical:1 va:1 drowsiness:5 inlow:1 regression:4 volunteer:1 confined:1 cell:2 background:2 whereas:1 participated:1 decreased:2 interval:2 wake:1 electrooculogram:1 fell:1 subject:30 recording:1 hz:18 db:2 near:4 feedforward:11 fft:1 independence:1 architecture:1 identified:1 whether:1 six:2 rms:16 defense:1 passed:1 nine:2 dramatically:1 se:1 band:7 ten:1 sign:1 estimated:6 coordinating:1 detectability:1 threshold:2 monitor:6 prevent:1 advancing:1 button:1 year:1 letter:1 europhysiol:1 oftwo:1 almost:1 oscillation:1 coherence:49 pushed:1 layer:2 sleep:5 scalp:11 
activity:8 strength:1 loomis:1 min:5 performing:2 department:2 developing:1 combination:1 conjugate:1 across:1 intimately:1 ofthese:1 projecting:1 previously:2 detectable:1 irregularly:1 studying:1 probe:1 spectral:13 chamber:1 robustness:1 top:2 tony:1 clin:4 diagonal:3 surrogate:3 gradient:1 separate:1 thank:1 simulated:2 extent:2 collected:4 index:1 relationship:5 preferentially:1 verbaten:1 thirteen:1 trace:6 negative:1 design:1 policy:1 upper:3 sm:1 howard:1 acknowledge:1 descent:1 neurobiology:2 hinton:1 smoothed:2 waking:1 intensity:1 pair:19 coherent:1 hour:1 adult:2 pattern:1 scott:1 appeared:1 including:1 oj:1 bandlimited:2 power:1 warm:1 bourlard:1 advanced:1 brief:1 eye:3 health:3 eog:1 epoch:1 removal:1 zh:1 loss:1 topography:1 suggestion:1 subcortical:2 generator:2 validation:1 degree:1 elliott:1 article:1 course:3 jung:5 supported:1 asynchronous:1 bias:1 allow:1 perceptron:1 institute:3 absolute:3 distributed:1 cortical:3 world:1 transition:1 instructed:1 author:1 clue:1 san:3 keep:1 overfitting:1 investigating:1 spectrum:11 continuous:2 table:2 channel:21 dimly:1 robust:1 ca:3 eeg:49 investigated:1 complex:5 domain:3 official:1 significance:3 decrement:1 noise:3 profile:1 site:10 referred:1 fig:5 paneq:1 salk:2 position:1 hanning:1 bandpass:1 exponential:2 scot:1 minute:3 remained:1 showing:1 insignificant:1 incorporating:1 corr:13 rejection:1 failed:1 expressed:1 monotonic:1 chance:1 narrower:1 internation:1 appreciable:1 stensmo:1 content:1 change:34 included:1 determined:1 except:1 reducing:1 principal:1 conservative:1 called:1 experimental:1 la:1 perceptrons:2 internal:1 mark:1 frontal:3 tested:1 correlated:3 |
99 | 1,088 | Softassign versus Softmax: Benchmarks
in Combinatorial Optimization
Steven Gold
Department of Computer Science
Yale University
New Haven, CT 06520-8285
Anand Rangarajan
Dept. of Diagnostic Radiology
Yale University
New Haven, CT 06520-8042
Abstract
A new technique, termed soft assign, is applied for the first time
to two classic combinatorial optimization problems, the traveling salesman problem and graph partitioning. Soft assign , which
has emerged from the recurrent neural network/statistical physics
framework, enforces two-way (assignment) constraints without the
use of penalty terms in the energy functions. The soft assign can
also be generalized from two-way winner-take-all constraints to
multiple membership constraints which are required for graph partitioning. The soft assign technique is compared to the softmax
(Potts glass). Within the statistical physics framework, softmax
and a penalty term has been a widely used method for enforcing the
two-way constraints common within many combinatorial optimization problems. The benchmarks present evidence that soft assign
has clear advantages in accuracy, speed, parallelizability and algorithmic simplicity over softmax and a penalty term in optimization
problems with two-way constraints.
1
Introduction
In a series of papers in the early to mid 1980's, Hopfield and Tank introduced
techniques which allowed one to solve combinatorial optimization problems with
recurrent neural networks [Hopfield and Tank, 1985]. As researchers attempted
to reproduce the original traveling salesman problem results of Hopfield and
Tank, problems emerged, especially in terms of the quality of the solutions obtained. More recently however, a number of techniques from statistical physics
have been adopted to mitigate these problems. These include deterministic annealing which convexifies the energy function in order help avoid some local minima and the Potts glass approximation which results in a hard enforcement of
a one-way (one set of) winner-take-all (WTA) constraint via the softmax. In
the late 80's, armed with these techniques optimization problems like the traveling salesman problem (TSP) [Peterson and Soderberg, 1989] and graph partitioning [Peterson and Soderberg, 1989, Van den Bout and Miller III, 1990] were reexamined and much better results compared to the original Hopfield-Tank dynamics
were obtained.
However, when the problem calls for two-way interlocking WTA constraints, as
do TSP and graph partitioning, the resulting energy function must still include
a penalty term when the softmax is employed in order to enforce the second set
of WTA constraints. Such penalty terms may introduce spurious local minima
in the energy function and involve free parameters which are hard to set. A
new technique, termed soft assign, eliminates the need for all such penalty terms.
The first use of the soft assign was in an algorithm for the assignment problem
[Kosowsky and Yuille, 1994] . It has since been applied to much more difficult
optimization problems, including parametric assignment problems-point matching [Gold et aI., 1994, Gold et aI., 1995, Gold et aI., 1996] and quadratic assignment problems-graph matching [Gold et aI., 1996, Gold and Rangarajan, 1996,
Gold, 1995] .
Here, we for the first time apply the soft assign to two classic combinatorial optimization problems, TSP and graph partitioning. Moreover, we show that the
soft assign can be generalized from two-way winner-take-all constraints to multiple
membership constraints, which are required for graph partitioning (as described below). We then run benchmarks against the older softmax (Potts glass) methods and
demonstrate advantages in terms of accuracy, speed, parallelizability, and simplicity
of implementation.
It must be emphasized there are other conventional techniques, for solving
some combinatorial optimization problems such as TSP, which remain superior to this method in certain ways [Lawler et aI., 1985]. (We think for some
problems-specifically the type of pattern matching problems essential for cognition [Gold, 1995]-this technique is superior to conventional methods.) Even within
neural networks, elastic net methods may still be better in certain cases. However,
the elastic net uses only a one-way constraint in TSP. The main goal of this paper
is to provide evidence, that when minimizing energy functions within the neural
network framework, which have two-way constraints, the soft assign should be the
technique of choice. We therefore compare it to the current dominant technique,
softmax with a penalty term.
2 Optimizing With Softassign
2.1 The Traveling Salesman Problem
The traveling salesman problem may be defined in the following way. Given a set of
intercity distances {hab} which may take values in R+ , find the permutation matrix
M such that the following objective function is minimized.
$$E_1(M) = \frac{1}{2}\sum_{a=1}^{N}\sum_{b=1}^{N}\sum_{i=1}^{N} h_{ab}\, M_{ai} M_{b(i \oplus 1)} \qquad (1)$$
subject to $\forall a\ \sum_{i=1}^{N} M_{ai} = 1$, $\forall i\ \sum_{a=1}^{N} M_{ai} = 1$, $\forall ai\ M_{ai} \in \{0, 1\}$.
In the above objective $h_{ab}$ represents the distance between cities $a$ and $b$. $M$ is a permutation matrix whose rows represent cities, and whose columns represent the day (or order) the city was visited, and $N$ is the number of cities. (The notation $i \oplus 1$ is used to indicate that subscripts are defined modulo $N$, i.e. $M_{a(N+1)} = M_{a1}$.) So if $M_{ai} = 1$ it indicates that city $a$ was visited on day $i$.
Then, following [Peterson and Soderberg, 1989, Yuille and Kosowsky, 1994] we employ Lagrange multipliers and an x log x barrier function to enforce the constraints,
as well as a $\gamma$ term for stability, resulting in the following objective:
$$E_2(M, \mu, \nu) = \frac{1}{2}\sum_{a=1}^{N}\sum_{b=1}^{N}\sum_{i=1}^{N} h_{ab}\, M_{ai} M_{b(i \oplus 1)} - \frac{\gamma}{2}\sum_{a=1}^{N}\sum_{i=1}^{N} M_{ai}^2 + \frac{1}{\beta}\sum_{a=1}^{N}\sum_{i=1}^{N} M_{ai}(\log M_{ai} - 1) + \sum_{a=1}^{N}\mu_a\Big(\sum_{i=1}^{N} M_{ai} - 1\Big) + \sum_{i=1}^{N}\nu_i\Big(\sum_{a=1}^{N} M_{ai} - 1\Big) \qquad (2)$$
In the above we are looking for a saddle point by minimizing with respect to $M$ and maximizing with respect to $\mu$ and $\nu$, the Lagrange multipliers.
2.2 The Softassign
In the above formulation of TSP we have two-way interlocking WTA constraints.
{Mai} must be a permutation matrix to ensure that a valid tour-one in which
each city is visited once and only once-is described. A permutation matrix means
all the rows and columns must add to one (and the elements must be zero or one)
and therefore requires two-way WTA constraints-a set of WTA constraints on the
rows and a set of WTA constraints on the columns. This set of two-way constraints
may also be considered assignment constraints, since each city must be assigned to
one and only one day (the row constraint) and each day must be assigned to one
and only one city (the column constraint).
These assignment constraints can be satisfied using a result from [Sinkhorn, 1964].
In [Sinkhorn, 1964] it is proven that any square matrix whose elements are all
positive will converge to a doubly stochastic matrix just by the iterative process
of alternatively normalizing the rows and columns. (A doubly stochastic matrix is
a matrix whose elements are all positive and whose rows and columns all add up
to one-it may roughly be thought of as the continuous analog of a permutation
matrix).
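A minimal sketch of this iteration in Python (ours, assuming numpy and a strictly positive square matrix) is given below; Sinkhorn's result guarantees convergence of the alternating row and column normalizations to a doubly stochastic matrix.

import numpy as np

def sinkhorn(M, n_iters=100, tol=1e-9):
    """Alternately normalize rows and columns of a positive matrix M."""
    M = np.array(M, dtype=float)
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)   # rows sum to one
        M /= M.sum(axis=0, keepdims=True)   # columns sum to one
        if np.allclose(M.sum(axis=1), 1.0, atol=tol):
            break
    return M

A = np.random.default_rng(3).uniform(0.1, 1.0, (4, 4))
B = sinkhorn(A)
print(B.sum(axis=0), B.sum(axis=1))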
The soft assign simply employs Sinkhorn's technique within a deterministic annealing context. Figure 1 depicts the contrast between the soft assign and the softmax.
In the softmax, a one-way WTA constraint is strictly enforced by normalizing over
a vector.
[Kosowsky and Yuille, 1994] used the softassign to solve the assignment problem, i.e. minimize $-\sum_{a=1}^{N}\sum_{i=1}^{N} M_{ai} Q_{ai}$. For the special case of the quadratic assignment problem being solved here, by setting $Q_{ai} = -\frac{\partial E}{\partial M_{ai}}$ and using the values of $M$ from the previous iteration, we can at each iteration produce a new assignment problem for which the softassign then returns a doubly stochastic matrix. As the temperature is lowered a series of assignment problems are generated, along with the corresponding doubly stochastic matrices returned by each softassign, until a permutation matrix is reached.
The update with the partial derivative in the preceding may be derived using a
Taylor series expansion. See [Gold and Rangarajan, 1996, Gold, 1995] for details.
The algorithm dynamics then become:
$$Q_{ai} = -\frac{\partial \hat{E}_2}{\partial M_{ai}} \qquad (3)$$
$$M_{ai} = \mathrm{Softassign}_{ai}(Q) \qquad (4)$$

[Figure 1 schematic. Softassign: positivity, $M_{ai} = \exp(\beta Q_{ai})$, followed by alternating row normalization $M_{ai} \leftarrow M_{ai}/\sum_i M_{ai}$ and column normalization $M_{ai} \leftarrow M_{ai}/\sum_a M_{ai}$ (two-way constraints). Softmax: positivity, $M_i = \exp(\beta Q_i)$, followed by a single normalization $M_i \leftarrow M_i/\sum_i M_i$ (one-way constraint).]
Figure 1: Softassign and softmax. This paper compares these two techniques.
$\hat{E}_2$ is $E_2$ without the $\beta$, $\mu$ or $\nu$ terms of (2); therefore no penalty terms are now included. The above dynamics are iterated as $\beta$, the inverse temperature, is gradually increased.
These dynamics may be obtained by evaluating the saddle points of the objective
in (2). Sinkhorn's method finds the saddle points for the Lagrange parameters.
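The overall scheme can be summarized in code; the sketch below (our reconstruction of the loop described above, not the authors' implementation, and with illustrative annealing parameters) runs the softassign within deterministic annealing for a small Euclidean TSP instance.

import numpy as np

def softassign_tsp(D, beta0=1.0, beta_rate=1.05, beta_max=50.0, gamma=1.0, inner=30):
    """Deterministic-annealing softassign for TSP; a compact sketch only.
    D is the N x N intercity distance matrix; returns a near-permutation matrix."""
    N = len(D)
    rng = np.random.default_rng(0)
    M = np.full((N, N), 1.0 / N) + 1e-3 * rng.random((N, N))
    beta = beta0
    while beta < beta_max:
        # Q_ai = -dE/dM_ai for the TSP objective: days i-1 and i+1 are the
        # neighbours of day i (modulo N); gamma*M is the self-amplification term.
        neighbours = np.roll(M, 1, axis=1) + np.roll(M, -1, axis=1)
        Q = -0.5 * (D @ neighbours) + gamma * M
        M = np.exp(beta * Q)                    # positivity
        for _ in range(inner):                  # softassign: alternating normalizations
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
        beta *= beta_rate
    return M

pts = np.random.default_rng(4).random((10, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
tour = np.argmax(softassign_tsp(D), axis=0)     # city assigned to each day
print(tour)

The annealing schedule and the number of inner normalizations are free choices here; the point of the sketch is only the structure of the loop, with no penalty terms in the energy.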
2.3 Graph Partitioning
The graph partitioning problem may be defined in the following way. Given an unweighted graph $G$, find the membership matrix $M$ such that the following objective function is minimized.
$$E_3(M) = -\sum_{a=1}^{A}\sum_{i=1}^{I}\sum_{j=1}^{I} G_{ij}\, M_{ai} M_{aj} \qquad (5)$$
subject to $\forall a\ \sum_{i=1}^{I} M_{ai} = I/A$, $\forall i\ \sum_{a=1}^{A} M_{ai} = 1$, $\forall ai\ M_{ai} \in \{0, 1\}$, where graph $G$ has $I$ nodes which should be equally partitioned into $A$ bins.
$\{G_{ij}\}$ is the adjacency matrix of the graph, whose elements must be 0 or 1. $M$ is a membership matrix such that $M_{ai} = 1$ indicates that node $i$ is in bin $a$. The permutation matrix constraint present in TSP is modified to the membership constraint. Node $i$ is a member of only bin $a$ and the number of members in each bin is fixed at $I/A$. When the above objective is at a minimum, graph $G$ will be partitioned into $A$ equal sized bins, such that the cutsize is minimum over all possible partitionings of $G$ into $A$ equal sized bins. We assume $I/A$ is an integer.
Then following the treatment for TSP, we derive the following objective:
$$E_4(M, \mu, \nu) = -\sum_{a=1}^{A}\sum_{i=1}^{I}\sum_{j=1}^{I} G_{ij}\, M_{ai} M_{aj} - \frac{\gamma}{2}\sum_{a=1}^{A}\sum_{i=1}^{I} M_{ai}^2 + \frac{1}{\beta}\sum_{a=1}^{A}\sum_{i=1}^{I} M_{ai}(\log M_{ai} - 1) + \sum_{a=1}^{A}\mu_a\Big(\sum_{i=1}^{I} M_{ai} - I/A\Big) + \sum_{i=1}^{I}\nu_i\Big(\sum_{a=1}^{A} M_{ai} - 1\Big) \qquad (6)$$
which is minimized with a similar algorithm employing the softassign. Note however that now in the softassign the columns are normalized to $I/A$ instead of 1.
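The graph partitioning variant differs only in the Q term and in the target of one of the normalizations; the following sketch (again ours, with assumed parameter values) drives the bin sums to I/A as described above.

import numpy as np

def softassign_partition(G, A, beta0=1.0, beta_rate=1.05, beta_max=30.0, gamma=1.0, inner=30):
    """Softassign graph partitioning; a sketch of the variant described above.
    G is a symmetric 0/1 adjacency matrix over I nodes; M is A x I, with node
    sums driven to 1 and bin sums driven to I/A."""
    I = len(G)
    rng = np.random.default_rng(0)
    M = np.full((A, I), 1.0 / A) + 1e-3 * rng.random((A, I))
    beta = beta0
    while beta < beta_max:
        Q = 2.0 * (M @ G) + gamma * M           # -dE_3/dM_ai plus the self-amplification term
        # Shift each column before exponentiating for numerical stability; the
        # shift is removed by the first normalization below.
        M = np.exp(beta * (Q - Q.max(axis=0, keepdims=True)))
        for _ in range(inner):
            M /= M.sum(axis=0, keepdims=True)             # each node's memberships sum to 1
            M *= (I / A) / M.sum(axis=1, keepdims=True)   # each bin's memberships sum to I/A
        beta *= beta_rate
    return M

rng = np.random.default_rng(5)
G = (rng.random((100, 100)) < 0.1).astype(float)
G = np.triu(G, 1); G = G + G.T                  # undirected, no self-loops
labels = np.argmax(softassign_partition(G, A=4), axis=0)
print(np.bincount(labels, minlength=4))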
3 Experimental Results
Experiments on Euclidean TSP and graph partitioning were conducted. For each
problem three different algorithms were run. One used the soft assign described
above. The second used the Potts glass dynamics employing synchronous update
as described in [Peterson and Soderberg, 1989]. The third used the Potts glass
dynamics employing serial update as described in [Peterson and Soderberg, 1989].
Originally the intention was to employ just the synchronous updating version of
the Potts glass dynamics, since that is the dynamics used in the algorithms employing soft assign and is the method that is massively parallelizable. We believe
massive parallelism to be such a critical feature of the neural network architecture
[Rumelhart and McClelland, 1986] that any algorithm that does not have this feature loses much of the power of the neural network paradigm. Unfortunately the
synchronous updating algorithms just worked so poorly that we also ran the serial
versions in order to get a more extensive comparison. Note that the results reported
in [Peterson and Soderberg, 1989] were all with the serial versions.
3.1
Euclidean TSP Experiments
Figure 2 shows the results of the Euclidean TSP experiments. 500 different 100city tours from points uniformly generated in the 2D unit square were used as
input. The asymptotic expected length of an optimal tour for cities distributed
J( Vn where n is the number of cities and
in the unit square is given by L( n)
0.765 ~ J( ~ 0.765 +.1 [Lawler et al., 1985]. This gives the interval [7.65,8.05] for
the 100 city TSP. 95<70 of the tour lengths fall in the interval [8,11] when using the
soft assign approach. Note the large difference in performance between the soft assign
and the Potts glass algorithms. The serial Potts glass algorithm ran about 5 times
slower than the soft assign version. Also as noted previously the serial version is
not massively parallelizable. The synchronous Potts glass ran about 2 times slower.
Also note the softassign algorithm is much simpler to implement-fewer parameters
to tune.
=
3.2
Graph Partitioning Experiments
Figure 3 shows the results of the graph partitioning experiments. 2000 different
randomly generated 100 node graphs with 10% connectivity were used as input.
These graphs were partitioned into four bins. The soft assign performs better than
the Potts glass algorithms, however here the difference is more modest than in the
TSP experiments. However the serial Potts glass algorithm again ran about 5 times
slower then the soft assign version and as noted previously the serial version is not
massively parallelizable. The synchronous Potts glass ran about 2 times slower.
Softassign versus Softmax: Benchmarks in Combinatorial Optimization
631
r--
?
?
?
?
?
?
"'
"'
,.
r--
,.
?
--
,. "
r-
-
?
I
??
?
r-
? ..--:I
I!'--,,:,:...u:~~-""'=---!;-~,........_---!,..
11
It
11
n
,.
r-
II
11
n
"'
.........
"I
Inn
11
,.1
,.
.r--
'.1
1.~'
"
.
........
II
It
?
.. ..
Figure 2: 100 City Euclidean TSP. 500 experiments. Left: Softassign .. Middle:
Softmax (serial update). Right: Softmax (synchronous update).
Also again note the softassign algorithm was much simpler to implement-fewer
parameters to tune.
"'
Figure 3: 100 node Graph Partitioning, 4 bins. 2000 experiments. Left: Softassign ?. Middle: Softmax (serial update). Right: Softmax (synchronous
update).
A relatively simple version of graph partitioning was run. It is likely that as the
number of bins are increased the results on graph partitioning will come to resemble
more closely the TSP results, since when the number of bins equal the number of
nodes, the TSP can be considered a special case of graph partitioning (there are
some additional restrictions). However even in this simple case the softassign has
clear advantages over the softmax and penalty term.
4
Conclusion
For the first time, two classic combinatorial optimization problems, TSP and graph partitioning, are solved using a new technique for constraint satisfaction, the softassign. The softassign, which has recently emerged from the statistical physics/neural networks framework, enforces a two-way (assignment) constraint without penalty terms in the energy function. We also show that the softassign can be generalized from two-way winner-take-all constraints to multiple membership constraints, which are required for graph partitioning. Benchmarks against the Potts glass methods, using softmax and a penalty term, clearly demonstrate its advantages in terms of accuracy, speed, parallelizability, and simplicity of implementation. Within the neural network/statistical physics framework, the softassign should be considered the technique of choice for enforcing two-way constraints in energy functions.
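For reference, a minimal sketch of the softassign operation as described here: exponentiate the benefit matrix to guarantee positivity, then alternately normalize rows and columns (Sinkhorn balancing) until the matrix is nearly doubly stochastic, so both one-way constraints are satisfied without penalty terms. The parameter names, the fixed iteration count, and the square (TSP-like) case are illustrative assumptions; for graph partitioning the column constraint would instead fix equal bin sizes.

```python
import numpy as np

def softassign(Q, beta, n_iter=50):
    """Softassign sketch: exponentiation followed by Sinkhorn row/column balancing."""
    M = np.exp(beta * (Q - Q.max()))        # positive entries; shift for stability
    for _ in range(n_iter):
        M /= M.sum(axis=1, keepdims=True)   # row constraint: each row sums to 1
        M /= M.sum(axis=0, keepdims=True)   # column constraint: each column sums to 1
    return M

# As beta grows, M approaches a permutation matrix (a hard assignment).
rng = np.random.default_rng(0)
Q = rng.random((5, 5))
print(np.round(softassign(Q, beta=20.0), 2))
```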
References
[Gold, 1995] Gold, S. (1995). Matching and Learning Structural and Spatial Representations with Neural Networks. PhD thesis, Yale University.
[Gold et al., 1995] Gold, S., Lu, C. P., Rangarajan, A., Pappu, S., and Mjolsness, E. (1995). New algorithms for 2-D and 3-D point matching: pose estimation and correspondence. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural Information Processing Systems 7, pages 957-964. MIT Press, Cambridge, MA.
[Gold et al., 1994] Gold, S., Mjolsness, E., and Rangarajan, A. (1994). Clustering with a domain specific distance measure. In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural Information Processing Systems 6, pages 96-103. Morgan Kaufmann, San Francisco, CA.
[Gold and Rangarajan, 1996] Gold, S. and Rangarajan, A. (1996). A graduated assignment algorithm for graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, (in press).
[Gold et al., 1996] Gold, S., Rangarajan, A., and Mjolsness, E. (1996). Learning with preknowledge: clustering with point and graph matching distance measures. Neural Computation, (in press).
[Hopfield and Tank, 1985] Hopfield, J. J. and Tank, D. (1985). 'Neural' computation of decisions in optimization problems. Biological Cybernetics, 52:141-152.
[Kosowsky and Yuille, 1994] Kosowsky, J. J. and Yuille, A. L. (1994). The invisible hand algorithm: Solving the assignment problem with statistical physics. Neural Networks, 7(3):477-490.
[Lawler et al., 1985] Lawler, E. L., Lenstra, J. K., Kan, A. H. G. R., and Shmoys, D. B., editors (1985). The Traveling Salesman Problem. John Wiley and Sons, Chichester.
[Peterson and Soderberg, 1989] Peterson, C. and Soderberg, B. (1989). A new method for mapping optimization problems onto neural networks. Intl. Journal of Neural Systems, 1(1):3-22.
[Rumelhart and McClelland, 1986] Rumelhart, D. and McClelland, J. L. (1986). Parallel Distributed Processing, volume 1. MIT Press, Cambridge, MA.
[Sinkhorn, 1964] Sinkhorn, R. (1964). A relationship between arbitrary positive matrices and doubly stochastic matrices. Ann. Math. Statist., 35:876-879.
[Van den Bout and Miller III, 1990] Van den Bout, D. E. and Miller III, T. K. (1990). Graph partitioning using annealed networks. IEEE Trans. Neural Networks, 1(2):192-203.
[Yuille and Kosowsky, 1994] Yuille, A. L. and Kosowsky, J. J. (1994). Statistical physics algorithms that converge. Neural Computation, 6(3):341-356.