SELF-ORGANIZATION OF ASSOCIATIVE DATABASE
AND ITS APPLICATIONS
Hisashi Suzuki and Suguru Arimoto
Osaka University, Toyonaka, Osaka 560, Japan
ABSTRACT
An efficient method of self-organizing associative databases is proposed together with
applications to robot eyesight systems. The proposed databases can associate any input
with some output. In the first half of the discussion, an algorithm of self-organization is
proposed. From a hardware viewpoint, it yields a new style of neural network. In the
latter half, applications to handwritten letter recognition and to an autonomous
mobile robot system are demonstrated.
INTRODUCTION
Let a mapping f : X → Y be given. Here, X is a finite or infinite set, and Y is another
finite or infinite set. A learning machine observes a set of pairs (x, y) sampled randomly
from X × Y. (X × Y denotes the Cartesian product of X and Y.) It then computes some
estimate f̂ : X → Y of f so as to make the estimation error small in some measure.
Usually we say that the faster the estimation error decreases as the number of samples
increases, the better the learning machine. However, such a statement of performance
is incomplete, since it ignores the set of candidates of f̂ assumed in advance. How, then, should
we find good learning machines? To clarify this question, let us discuss for a while some types
of learning machines, and in doing so advance the understanding of the self-organization of
associative databases.
Parameter Type
An ordinary type of learning machine assumes an equation relating x's and y's with
indefinite parameters, namely, a structure of f. This is equivalent to implicitly defining a
set F of candidates of f. (F is some subset of the mappings from X to Y.) The machine then
computes values of the parameters from the observed samples. We call such a type a
parameter type.
For a well-defined learning machine, if F ∋ f, then f̂ approaches f as the number of samples
increases. Otherwise, however, some estimation error remains forever. Thus, in this sense,
the problem of designing a learning machine reduces to finding a proper structure of f.
On the other hand, the assumed structure of f should be as compact as possible
to achieve fast learning. In other words, the number of parameters should be small, since,
if the parameters are few, some f̂ can be determined uniquely even when the observed
samples are few. However, this demand for properness contradicts that for compactness.
Consequently, in the parameter type, the more compact the assumed structure
that is still proper, the better the learning machine. This is the most elementary consideration
when we design learning machines.
Universality and Ordinary Neural Networks
Now suppose that sufficient knowledge about f is given even though f itself is unknown. In
this case, it is comparatively easy to find proper and compact structures of f. Otherwise,
however, it is sometimes difficult. A possible solution is to give up
compactness and assume an almighty structure that can cover various f's. A combination
of orthogonal bases of infinite dimension is such a structure. Neural networks [1, 2]
are its approximations, obtained by truncating the dimension to a finite value for implementation.
© American Institute of Physics 1988
A main topic in designing neural networks is to establish such desirable structures of f.
This work includes developing practical procedures that compute the values of the coefficients from
the observed samples. Such work has flourished since 1980, and many efficient methods have
been proposed. Recently, even hardware units computing the coefficients in parallel
for speed-up have come on the market, e.g., ANZA, Mark III, Odyssey and E-1.
Nevertheless, in neural networks there is always a danger that some error remains
forever in estimating f. Precisely speaking, suppose that a combination of a finite number of
the bases can essentially define a structure of f; in other words, suppose that F ∋ f, or that
f is located near F. In such a case the estimation error is zero or negligible. However, if f
is distant from F, the estimation error never becomes negligible. Indeed, many researchers
report that the following situation appears when f is too complex: once the estimation
error converges to some value (> 0) as the number of samples increases, it hardly decreases
even when the dimension is raised. This property is sometimes a considerable defect of
neural networks.
Recursive Type
The recursive type is founded on another methodology of learning, which goes as
follows. At the initial stage, with no samples, the set F_0 (instead of the notation F) of candidates
of f equals the set of all mappings from X to Y. After observing the first sample
(x_1, y_1) ∈ X × Y, F_0 is reduced to F_1 so that f(x_1) = y_1 for any f ∈ F_1. After observing
the second sample (x_2, y_2) ∈ X × Y, F_1 is further reduced to F_2 so that f(x_1) = y_1 and
f(x_2) = y_2 for any f ∈ F_2. Thus, the candidate set F_i becomes gradually smaller as the
observation of samples proceeds. The estimate after observing i samples, which we write
f̂_i, is one of the most likely estimates of f selected in F_i. Hence, contrary to the parameter type, the
recursive type guarantees that f̂_i approaches f as the number of samples increases.
When the recursive type observes a sample (x_i, y_i), it rewrites the values f̂_{i-1}(x) to f̂_i(x) for
those x's correlated with the sample. Hence, this type has an architecture composed of a rule
for rewriting and a free memory space. Such an architecture naturally forms a kind of database
that builds up its data management in a self-organizing way. However, this database
differs from ordinary ones in the following sense: it does not only record the samples already
observed, but computes some estimate of f(x) for any x ∈ X. We call such a database an
associative database.
The first subject in constructing associative databases is how to establish the rule for
rewriting. For this purpose, we adopt a measure called the dissimilarity. Here, a dissimilarity
means a mapping d : X × X → {reals ≥ 0} such that for any (x, x̃) ∈ X × X, d(x, x̃) > 0
whenever f(x) ≠ f(x̃). However, it need not be defined by a single formula. It can be
defined by, for example, a collection of rules written in the form "if ... then ...".
The dissimilarity d defines a structure of f locally in X × Y. Hence, even when
the knowledge about f is imperfect, we can reflect it in d in some heuristic way. Hence,
contrary to neural networks, it is possible to accelerate learning by choosing
d well. In particular, we can easily find simple d's for those f's which, like a human, process
information analogically. (See the applications in this paper.) For such f's, the
recursive type shows its effectiveness strongly.
We denote a sequence of observed samples by (x_1, y_1), (x_2, y_2), .... One of the simplest
constructions of an associative database after observing i samples (i = 1, 2, ...) is as follows.

Algorithm 1. At the initial stage, let S_0 be the empty set. For every i =
1, 2, ..., let f̂_{i-1}(x) for any x ∈ X equal some y* such that (x*, y*) ∈ S_{i-1} and

    d(x, x*) = min_{(x̃, ỹ) ∈ S_{i-1}} d(x, x̃).                    (1)

Furthermore, add (x_i, y_i) to S_{i-1} to produce S_i, i.e., S_i = S_{i-1} ∪ {(x_i, y_i)}.
Another version, improved to economize memory, is as follows.

Algorithm 2. At the initial stage, let S_0 be composed of an arbitrary element
of X × Y. For every i = 1, 2, ..., let f̂_{i-1}(x) for any x ∈ X equal some y* such
that (x*, y*) ∈ S_{i-1} and

    d(x, x*) = min_{(x̃, ỹ) ∈ S_{i-1}} d(x, x̃).

Furthermore, if f̂_{i-1}(x_i) = y_i then let S_i = S_{i-1}; otherwise, add (x_i, y_i) to S_{i-1} to
produce S_i, i.e., S_i = S_{i-1} ∪ {(x_i, y_i)}.
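As an illustration of Algorithms 1 and 2, the following Python sketch treats the associative database as a plain nearest-neighbour lookup over the stored sample set S; the class name, the toy dissimilarity and the example data are assumptions made here for illustration only.

```python
# Minimal sketch of Algorithms 1 and 2: recall by nearest neighbour under a
# user-supplied dissimilarity d; Algorithm 2 stores only mispredicted samples.

class AssociativeDatabase:
    def __init__(self, d, economize_memory=False):
        self.d = d                      # dissimilarity d(x, x_tilde) >= 0
        self.S = []                     # stored samples (x, y)
        self.economize = economize_memory

    def recall(self, x):
        """Return the y* of the stored pair (x*, y*) minimizing d(x, x*)."""
        if not self.S:
            return None
        _, y_star = min(self.S, key=lambda pair: self.d(x, pair[0]))
        return y_star

    def observe(self, x, y):
        """Algorithm 1 stores every sample; Algorithm 2 skips correct recalls."""
        if self.economize and self.recall(x) == y:
            return                      # S_i = S_{i-1}
        self.S.append((x, y))           # S_i = S_{i-1} U {(x_i, y_i)}

# Toy usage with an integer dissimilarity:
db = AssociativeDatabase(d=lambda a, b: abs(a - b), economize_memory=True)
db.observe(1, "odd"); db.observe(2, "even")
print(db.recall(0))                     # -> "odd" (nearest stored x is 1)
```

The recall time of this construction grows with the size of S, which is exactly the point addressed by the tree construction proposed below.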
In either construction, f̂_i approaches f as i increases. However, the computation time
grows in proportion to the size of S_i. The second subject in constructing associative
databases is what addressing rule to employ so as to economize the computation time. In
the subsequent chapters, a construction of an associative database for this purpose is proposed.
It manages data in the form of a binary tree.
SELF-ORGANIZATION OF ASSOCIATIVE DATABASE
Given a sample sequence (x_1, y_1), (x_2, y_2), ..., the algorithm for constructing the associative
database is as follows.
Algorithm 3.

Step 1 (Initialization): Let (x[root], y[root]) = (x_1, y_1). Here, x[·] and y[·] are
variables assigned to the respective nodes to memorize data. Furthermore, let t = 1.

Step 2: Increase t by 1, and put x_t in. After resetting a pointer n to the root, repeat
the following until n arrives at some terminal node, i.e., a leaf.
The notations ň and n̂ mean the descendant nodes of n. If d(x_t, x[ň]) ≤
d(x_t, x[n̂]), let n = ň. Otherwise, let n = n̂.

Step 3: Display y[n] as the related information. Next, put y_t in. If y[n] = y_t, go back
to step 2. Otherwise, first establish new descendant nodes ň and n̂. Secondly, let

    (x[ň], y[ň]) = (x[n], y[n]),                                   (2)
    (x[n̂], y[n̂]) = (x_t, y_t).                                    (3)

Finally, go back to step 2. Here, the loop of steps 2-3 can be stopped at any time
and also can be continued.
Now, suppose that gate elements, namely, artificial "synapses" that play the role of branching by d, are prepared. Then this algorithm yields a new style of neural network with randomly connected gate elements.
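A rough Python sketch of the binary-tree construction of Algorithm 3 is given below; the node layout, the tie-breaking at equal dissimilarities and the class names are assumptions made for illustration.

```python
# Sketch of Algorithm 3: a self-organizing binary tree addressed by a
# user-supplied dissimilarity d.

class Node:
    def __init__(self, x, y):
        self.x, self.y = x, y            # memorized data (x[n], y[n])
        self.left = self.right = None    # descendant nodes

class TreeAssociativeDatabase:
    def __init__(self, d, x1, y1):
        self.d = d
        self.root = Node(x1, y1)         # Step 1: (x[root], y[root]) = (x_1, y_1)

    def _leaf_for(self, x):
        n = self.root                    # Step 2: branch toward the nearer child
        while n.left is not None:
            n = n.left if self.d(x, n.left.x) <= self.d(x, n.right.x) else n.right
        return n

    def recall(self, x):
        return self._leaf_for(x).y       # Step 3: display y[n]

    def observe(self, x, y):
        n = self._leaf_for(x)
        if n.y != y:                     # grow two descendants, Eqs. (2)-(3)
            n.left, n.right = Node(n.x, n.y), Node(x, y)
```

Because each mismatch adds only two nodes along the branch actually visited, recall time grows with the depth of the tree rather than with the number of stored samples.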
LETTER RECOGNITION
Recently, the vertical slitting method for recognizing typographic English letters [3], the
elastic matching method for recognizing handwritten discrete English letters [4], the global
training and fuzzy logic search method for recognizing Chinese characters written in the square
style [5], etc., have been published. The self-organization of an associative database realizes the
recognition of handwritten continuous English letters.
Fig. 1. Source document.
Fig. 2. Windowing.
Fig. 3. An experiment result (number of nodes and recognition rate versus number of samples).
An image scanner takes a document image (Fig. 1). The letter recognizer uses a parallelogram window that can at least cover the largest letter (Fig. 2), and processes the
sequence of letters while shifting the window. That is, the recognizer scans a word in a
slanted direction, and places the window so that its left edge lies on the first black
point detected. The window then catches a letter and some part of the succeeding letter.
Once the head letter is recognized, its end position, namely, the boundary line
between the two letters, becomes known. Hence, by starting the scan from this boundary
and repeating the above operations, the recognizer accomplishes the task recursively. Thus
the main problem reduces to identifying the head letter in the window.
With this in mind, we define the following.
• Regard window images as x's, and define X accordingly.
• For (x, x̃) ∈ X × X, denote by B̃ a black point in the area to the left of the boundary on
window image x̃. Project each B̃ onto window image x. Then measure the Euclidean
distance δ between B̃ and the black point B on x closest to B̃. Let d(x, x̃) be
the sum of the δ's over all black points B̃ on x̃, divided by the number of B̃'s (see the sketch after this list).
• Regard couples of the "reading" and the position of the boundary as y's, and define Y
accordingly.
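The window-image dissimilarity defined above can be sketched as follows; the representation of an image as a list of (row, column) black points and a boundary column is an assumption of this illustration.

```python
# Sketch of the letter dissimilarity: for each black point B~ to the left of the
# boundary on window image x_tilde, take the Euclidean distance to the nearest
# black point B on x, and average over all such B~.
import math

def letter_dissimilarity(x_points, xt_points, xt_boundary_col):
    left = [p for p in xt_points if p[1] < xt_boundary_col]
    if not left or not x_points:
        return float("inf")
    total = 0.0
    for (r, c) in left:
        total += min(math.hypot(r - r2, c - c2) for (r2, c2) in x_points)
    return total / len(left)
```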
An operator interactively teaches the recognizer the relation between window image and
reading and boundary with Algorithm 3. Precisely, if the recalled reading is incorrect, the
operator teaches the correct reading via the console. Moreover, if the boundary position is
incorrect, he teaches the correct position via the mouse.
Fig. 1 shows part of a document image used in this experiment. Fig. 3 shows the
change of the number of nodes and of the recognition rate, defined as the relative
frequency of correct answers over the past 1000 trials. The specifications of the window are height
= 20 dots, width = 10 dots, and slant angle = 68 deg. In this example, the levels of the tree
were distributed over 6-19 at time 4000, and the recognition rate converged to about 74%.
Experimentally, the recognition rate converges to about 60-85% in most cases, and to 95% in
rare cases. However, it does not reach 100% since, e.g., "c" and "e" are not distinguishable
because of excessive fluctuation in writing. If the consistency of the x-y relation is not
assured, as here, the number of nodes increases endlessly (cf. Fig. 3). Hence, it is wise to
stop the learning when the recognition rate reaches some upper limit. To improve
the recognition rate further, we must consider the spelling of words. This is one of our future subjects.
OBSTACLE AVOIDING MOVEMENT
Various systems of camera-based autonomous mobile robots have been reported [6-10].
The system built by the authors (Fig. 4) also belongs to this category. In mathematical
methodologies, the problem of obstacle-avoiding movement is usually solved as
a cost minimization problem under some artificially established cost criterion. In contrast,
the self-organization of an associative database reproduces faithfully the cost criterion of an
operator. Therefore, the motion of the robot after learning becomes very natural.
The length, width and height of the robot are all about 0.7 m, and its weight is
about 30 kg. The visual angle of the camera is about 55 deg. The robot has the following three
factors of motion: it turns by less than ±30 deg, advances by less than 1 m, and controls its speed
below 3 km/h. The experiment was done on a passageway of width 2.5 m inside the building
in which the authors' laboratories are located (Fig. 5). With experimental intent, we
arranged boxes, smoking stands, gas cylinders, stools, handcarts, etc. on the passageway at
random. We let the robot take an image through the camera, recall a similar image, and
trace the route previously recorded on it. For this purpose, we define the following.
• Let the camera face 28 deg downward to take an image, and process it through a low-pass
filter. Scanning the filtered image vertically from bottom to top, find
the first point C where the luminance changes excessively. Then substitute white for all points
from the bottom to C, and black for all points from C to the top (Fig. 6).
(If no obstacle exists just in front of the robot, the white area shows the "free" area
where the robot can move around.) Regard the binary 32 × 32-dot images processed in this way
as x's, and define X accordingly.
• For every (x, x̃) ∈ X × X, let d(x, x̃) be the number of black points in the exclusive-or
image of x and x̃ (a sketch follows this list).
• Regard as y's the images obtained by drawing routes on the images x, and define Y
accordingly.
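A sketch of this dissimilarity, assuming the processed images are stored as 32 × 32 boolean arrays:

```python
# Number of black points on the exclusive-or image of two binary images.
import numpy as np

def xor_dissimilarity(x, x_tilde):
    # x, x_tilde: boolean arrays of shape (32, 32); True = black
    return int(np.logical_xor(x, x_tilde).sum())
```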
The robot superimposes, on the current camera image x, the route recalled for x, and
asks the operator for instructions. The operator judges subjectively whether the suggested
route is appropriate or not. If the answer is negative, he draws a desirable route on x with the
mouse to teach a new y to the robot. This operation implicitly defines a sample sequence
of (x, y) reflecting the cost criterion of the operator.
.::l" !
-
IibUBe
_. -
22
11
Roan
12
{-
13
Stationary uni t
Fig. 4. Configuration of
autonomous mobile robot system.
~
I
,
23
24
North
14
rmbi Ie unit (robot)
-
Roan
y
t
Fig. 5. Experimental
environment.
772
Wall
Camera image
Preprocessing
A
::: !fa
?
Preprocessing
0
O
Course
suggest ion
??
..
Search
A
Fig. 6. Processing for
obstacle avoiding movement.
x
Fig. 1. Processing for
position identification.
We define the satisfaction rate as the relative frequency of acceptable route suggestions
over the past 100 trials. In a typical experiment, the satisfaction rate changed with
a tendency similar to Fig. 3, and it reached about 95% around time 800. Note that
the remaining 5% does not directly mean the percentage of collisions. (In practice, we prevent
collisions by adopting a supplementary measure.) At time 800, the number of nodes was
145, and the levels of the tree were distributed over 6-17.
The proposed method reflects subtle characteristics of the operator. For example, a
robot trained by an operator O moves slowly with ample clearance from obstacles, while one
trained by another operator O' brushes quickly past obstacles. This fact gives us a hint
about a method of imprinting "characters" into machines.
POSITION IDENTIFICATION
The robot can identify its position by recalling, for a camera image, a similar landscape stored
with its position data. For this purpose, in principle, it suffices to regard camera images and
position data as x's and y's, respectively. However, the memory capacity of actual computers is
finite. Hence, we cannot but compress the camera images at a slight loss of information.
Such compression is admissible as long as the precision of position identification stays within an
acceptable range. Thus, the main problem reduces to finding a suitable compression
method.
In the experimental environment (Fig. 5), juts occur on the passageway at intervals of
3.6 m, and each section between adjacent juts has at most one door. The robot identifies
roughly, from the surrounding landscape, which section it is in, and it temporarily uses
a triangular surveying technique when an exact measurement is necessary. To realize the former task,
we define the following.
• Turn the camera to take a panoramic image of 360 deg. Scanning the
center line horizontally, substitute black for the points where the luminance changes excessively
and white for the other points (Fig. 7). Regard the binary 360-dot line images processed
in this way as x's, and define X accordingly.
• For every (x, x̃) ∈ X × X, project each black point A on x onto x̃, and measure the
Euclidean distance δ between A and the black point Ã on x̃ closest to A. Let
the sum of the δ's be S. Similarly, calculate S̃ by exchanging the roles of x and x̃.
Denoting the numbers of A's and Ã's respectively by n and ñ, define

    d(x, x̃) = (1/2)(S/n + S̃/ñ).                                   (4)

(A sketch of this computation follows the list.)
• Regard the positive integers labeling the sections as y's (cf. Fig. 5), and define Y accordingly.
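A sketch of the dissimilarity of Eq. (4); representing each 360-dot line image by the list of positions of its black points is an assumption of this illustration (wrap-around at 360 deg is ignored for brevity).

```python
# Symmetric average of nearest-neighbour distances between the black points of
# two panoramic line images, as in Eq. (4): d = (1/2)(S/n + S~/n~).

def line_dissimilarity(a_points, b_points):
    def mean_nearest(src, dst):          # S / n for one direction
        return sum(min(abs(p - q) for q in dst) for p in src) / len(src)
    return 0.5 * (mean_nearest(a_points, b_points) +
                  mean_nearest(b_points, a_points))
```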
In the learning mode, the robot checks its exact position with a counter that is reset periodically
by the operator. The robot runs arbitrarily on the passageways within an 18 m area
and learns the relation between landscapes and position data. (Position identification beyond
the 18 m area is achieved by crossing plural databases with one another.) This task is automatic
except for the periodic reset of the counter; namely, it is a kind of learning without a teacher.
We define the identification rate as the relative frequency of correct recalls of position
data over the past 100 trials. In a typical example, it converged to about 83% around time
400. At time 400, the number of nodes was 202, and the levels of the tree were distributed over
5-22. Since the 17% of identification failures can be rejected by considering the trajectory, no
problem arises in practical use. In order to improve the identification rate, the compression
ratio of the camera images must be loosened. This possibility depends on improvements in
hardware in the future.
Fig. 8 shows an example of the actual motion of the robot based on the database for obstacle-avoiding
movement and that for position identification. This example corresponds to a case
of moving from section 14 to section 23 in Fig. 5. Here, the time interval per frame is about 40 sec.
Fig. 8. Actual motion of the robot.
CONCLUSION
A method of self-organizing associative databases was proposed, with applications to
robot eyesight systems. The machine decomposes an unknown global structure into a set of
known local structures and learns universally any input-output response. This framing
of the problem implies a wide application area beyond the examples shown in this paper.
A defect of Algorithm 3 for self-organization is that the tree is well balanced only
for a subclass of structures of f. A subject imposed on us is to widen this class. A probable
solution is to abolish the addressing rule depending directly on the values of d and, instead, to
establish another rule depending on the distribution function of the values of d. This is now under
investigation.
REFERENCES
1. Hopfield, J. J. and D. W. Tank, "Computing with Neural Circuits: A Model," Science 233 (1986), pp. 625-633.
2. Rumelhart, D. E. et al., "Learning Representations by Back-Propagating Errors," Nature 323 (1986), pp. 533-536.
3. Hull, J. J., "Hypothesis Generation in a Computational Model for Visual Word Recognition," IEEE Expert, Fall (1986), pp. 63-70.
4. Kurtzberg, J. M., "Feature Analysis for Symbol Recognition by Elastic Matching," IBM J. Res. Develop. 31-1 (1987), pp. 91-95.
5. Wang, Q. R. and C. Y. Suen, "Large Tree Classifier with Heuristic Search and Global Training," IEEE Trans. Pattern Anal. & Mach. Intell. PAMI-9-1 (1987), pp. 91-102.
6. Brooks, R. A. et al., "Self Calibration of Motion and Stereo Vision for Mobile Robots," 4th Int. Symp. of Robotics Research (1987), pp. 267-276.
7. Goto, Y. and A. Stentz, "The CMU System for Mobile Robot Navigation," 1987 IEEE Int. Conf. on Robotics & Automation (1987), pp. 99-105.
8. Madarasz, R. et al., "The Design of an Autonomous Vehicle for the Disabled," IEEE Jour. of Robotics & Automation RA-2-3 (1986), pp. 117-125.
9. Triendl, E. and D. J. Kriegman, "Stereo Vision and Navigation within Buildings," 1987 IEEE Int. Conf. on Robotics & Automation (1987), pp. 1725-1730.
10. Turk, M. A. et al., "Video Road-Following for the Autonomous Land Vehicle," 1987 IEEE Int. Conf. on Robotics & Automation (1987), pp. 273-279.
A MEAN FIELD THEORY OF LAYER IV OF VISUAL CORTEX
AND ITS APPLICATION TO ARTIFICIAL NEURAL NETWORKS*
Christopher L. Scofield
Center for Neural Science and Physics Department
Brown University
Providence, Rhode Island 02912
and
Nestor, Inc., 1 Richmond Square, Providence, Rhode Island,
02906.
ABSTRACT
A single cell theory for the development of selectivity and
ocular dominance in visual cortex has been presented previously
by Bienenstock, Cooper and Munro [1]. This has been extended to a
network applicable to layer IV of visual cortex [2]. In this paper
we present a mean field approximation that captures in a fairly
transparent manner the qualitative, and many of the
quantitative, results of the network theory. Finally, we consider
the application of this theory to artificial neural networks and
show that a significant reduction in architectural complexity is
possible.
A SINGLE LAYER NETWORK AND THE MEAN FIELD
APPROXIMATION
We consider a single layer network of ideal neurons which
receive signals from outside of the layer and from cells within
the layer (Figure 1). The activity of the ith cell in the network is
    c_i = m_i · d + Σ_j L_ij c_j.                                   (1)

Here d is a vector of afferent signals to the network. Each cell
receives input from n fibers outside of the cortical network
through the matrix of synapses m_i. Intra-layer input to each cell
is then transmitted through the matrix of cortico-cortical
synapses L.
© American Institute of Physics 1988
Figure 1: The general single layer recurrent network. Light circles are the LGN-cortical synapses. Dark circles are the (non-modifiable) cortico-cortical synapses.
We now expand the response of the ith cell into individual
terms describing the number of cortical synapses traversed by
the signal d before arriving through the synapses L_ij at cell i.
Expanding c_j in (1), the response of cell i becomes

    c_i = m_i · d + Σ_j L_ij m_j · d + Σ_j Σ_k L_ij L_jk m_k · d
          + Σ_j Σ_k Σ_n L_ij L_jk L_kn m_n · d + ...                (2)

Note that each term contains a factor of the form Σ_j L_qj m_j.
This factor describes the first order effect, on cell q, of the
cortical transformation of the signal d. The mean field
approximation consists of estimating this factor to be a constant,
independent of cell location:

    Σ_j L_qj m_j = constant (independent of q).                    (3)
This assumption does not imply that each cell in the network is
selective to the same pattern (and thus that m_i = m_j). Rather,
the assumption is that the vector sum is a constant.
This amounts to assuming that each cell in the network is
surrounded by a population of cells which represent, on average,
all possible pattern preferences. Thus the vector sum of the
afferent synaptic states describing these pattern preferences is a
constant independent of location.
Finally, if we assume that the lateral connection strengths are
a function only of i − j, then L_ij becomes a circular matrix, so that

    Σ_j L_ij = Σ_j L_ji = N L_0 = constant.
Then the response of cell i becomes

    c_i = m_i · d + N L_0 (1 − N L_0)^{-1} c̄,   for |N L_0| < 1,   (4)

where we define the spatial average of cortical cell activity c̄ = m̄ · d, and N is the
average number of intracortical synapses.
Here, in a manner similar to that in the theory of magnetism,
we have replaced the effect of individual cortical cells by their
average effect (as though all other cortical cells can be replaced
by an 'effective' cell, Figure 2). Note that we have retained all
orders of synaptic traversal of the signal d.
Thus, we now focus on the activity of the layer after
'relaxation' to equilibrium. In the mean field approximation we
can therefore write

    c_i = (m_i − a) · d,                                            (5)
where the mean field a = α m̄ with α = −N L_0 (1 − N L_0)^{-1}, and we assume
that L_0 < 0, so that α > 0 (the network is, on average, inhibitory).
Figure 2: The single layer mean field network.
Detailed connectivity between all cells of the
network is replaced with a single (nonmodifiable) synapse from an 'effective' cell.
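A small numerical check of the approximation: when the lateral weights are uniform and inhibitory, the exact equilibrium of Eq. (1) coincides with the mean field expression of Eq. (4). The sizes and values below are arbitrary, and the uniform-weight assumption is the idealization discussed in the text.

```python
# Compare the exact recurrent response of Eq. (1) with the mean field form
# c_i = m_i.d + [N*L0 / (1 - N*L0)] * (mbar.d) for uniform lateral weights.
import numpy as np

rng = np.random.default_rng(0)
N, n = 50, 10                       # cortical cells, afferent fibers
m = rng.normal(size=(N, n))         # LGN-cortical synapses m_i
d = rng.normal(size=n)              # afferent signal
L0 = -0.01                          # average lateral strength (inhibitory)
L = np.full((N, N), L0)             # uniform cortico-cortical matrix

c_exact = np.linalg.solve(np.eye(N) - L, m @ d)   # fixed point of c = m d + L c

mbar = m.mean(axis=0)
c_meanfield = m @ d + (N * L0 / (1 - N * L0)) * (mbar @ d)

print(np.max(np.abs(c_exact - c_meanfield)))      # ~1e-13 for uniform L
```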
LEARNING IN THE CORTICAL NETWORK
We will first consider evolution of the network according to a
synaptic modification rule that has been studied in detail, for
single cells, elsewhere [1, 3]. We consider the LGN-cortical
synapses to be the site of plasticity and assume for maximum
simplicity that there is no modification of cortico-cortical
synapses. Then

    dm_i/dt = φ(c_i, c̄_i) d,        dL_ij/dt = 0.                  (6)
In what follows, c̄ denotes the spatial average over cortical cells,
while c̄_i denotes the time-averaged activity of the ith cortical cell.
The function φ has been discussed extensively elsewhere. Here
we note that φ describes a function of the cell response that has
both hebbian and anti-hebbian regions.
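The single-cell rule of Eq. (6) can be sketched numerically. The quadratic form φ(c, θ) = c(c − θ) with a sliding threshold θ set by the time-averaged response is one standard instantiation of the BCM φ-function and is assumed here purely for illustration; the parameters are arbitrary.

```python
# Sketch of dm/dt = phi(c, c_bar) d with a quadratic BCM-style phi and a
# sliding modification threshold theta = (time-averaged response)^2.
import numpy as np

rng = np.random.default_rng(1)
n, eta, tau = 8, 0.005, 100.0
m = rng.normal(scale=0.1, size=n)      # modifiable LGN-cortical weights
c_avg = 0.1                            # running time average of the response
patterns = rng.normal(size=(4, n))     # a small stationary input environment

for t in range(20000):
    d = patterns[rng.integers(len(patterns))]
    c = max(0.0, m @ d)                # rectified response of the cell
    theta = c_avg ** 2                 # sliding threshold
    m += eta * c * (c - theta) * d     # hebbian above theta, anti-hebbian below
    c_avg += (c - c_avg) / tau

print(np.round(m @ patterns.T, 2))     # inspect the response to each pattern
```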
This leads to a very complex set of non-linear stochastic
equations that have been analyzed partially elsewhere [2]. In
general, the afferent synaptic state has fixed points that are
stable and selective, and unstable fixed points that are non-selective [1, 2]. These
arguments may now be generalized for the network. In the mean field approximation,

    dm_i(a)/dt = φ(c_i, c̄_i) d,   with   c_i = (m_i(a) − a) · d.   (7)
The mean field a has a time-dependent component m̄. This
varies as the average over all of the network's modifiable
synapses and, in most environmental situations, should change
slowly compared to the changes of the modifiable synapses of a
single cell. Then in this approximation we can write

    d/dt (m_i(a) − a) = φ[m_i(a) − a] d.                            (8)
We see that there is a mapping

    m_i' ↔ m_i(a) − a                                               (9)

such that for every m_i(a) there exists a corresponding (mapped)
point m_i' which satisfies the original equation of the zero mean field theory. It can be
shown [2, 4] that for every fixed point of m_i(a = 0) there exists a
corresponding fixed point m_i(a) with the same selectivity and
stability properties. The fixed points are available to the
neurons if there is sufficient inhibition in the network (|L_0| is
sufficiently large).
APPLICATION OF THE MEAN FIELD NETWORK TO
LAYER IV OF VISUAL CORTEX
Neurons in the primary visual cortex of normal adult cats are
sharply tuned for the orientation of an elongated slit of light and
most are activated by stimulation of either eye. Both of these
properties--orientation selectivity and binocularity--depend on
the type of visual environment experienced during a critical
period of early postnatal development. For example, deprivation
of patterned input during this critical period leads to loss of
orientation selectivity while monocular deprivation (MD) results
in a dramatic shift in the ocular dominance of cortical neurons
such that most will be responsive exclusively to the open eye.
The ocular dominance shift after MD is the best known and most
intensively studied type of visual cortical plasticity.
The behavior of visual cortical cells in various rearing
conditions suggests that some cells respond more rapidly to
environmental changes than others.
In monocular deprivation,
for example, some cells remain responsive to the closed eye in
spite of the very large shift of most cells to the open eye. Singer
et al. [5] found, using intracellular recording, that geniculo-cortical
synapses on inhibitory interneurons are more resistant to
monocular deprivation than are synapses on pyramidal cell
dendrites. Recent work suggests that the density of inhibitory
GABAergic synapses in kitten striate cortex is also unaffected by
MD during the critical period [6, 7].
These results suggest that some LGN-cortical synapses modify
rapidly, while others modify relatively slowly, with slow
modification of some cortico-cortical synapses. Excitatory LGN-cortical
synapses onto excitatory cells may be those that modify primarily.
To embody these facts we introduce two types of
LGN-cortical synapses: those (m_i) that modify and those (z_k)
that remain relatively constant. In a simple limit we have

    dm_i/dt = φ(c_i, c̄_i) d   and   dz_k/dt = 0.                   (10)
We assume, for simplicity and consistent with the above
physiological interpretation, that these two types of synapses are
confined to two different classes of cells and that both left and
right eyes make similar synapses (both m_i or both z_k) on a given
cell. Then, for binocular cells, in the mean field approximation
(binocular terms carry left/right superscripts l, r),

    c_i = (m_i^l − a^l) · d^l + (m_i^r − a^r) · d^r,                (11)

where d^{l(r)} are the explicit left (right) eye time-averaged signals
arriving from the LGN. Note that a^{l(r)} contain terms from
modifiable and non-modifiable synapses:

    a^{l(r)} = α (m̄^{l(r)} + z̄^{l(r)}).
Under conditions of monocular deprivation, the animal is reared
with one eye closed. For the sake of analysis, assume that the
right eye is closed and that only noise-like signals arrive at
cortex from the right eye. Then the environment of the cortical
cells is

    d = (d^l, n).                                                   (12)

Further, assume that the left eye synapses have reached their
selective fixed point, selective to the pattern d^l_1. Then (m_i^l, m_i^r) →
(m_i^{l*}, x_i) with |x_i| << |m_i^{l*}|.
Following the methods of BCM, a local
linear analysis of the φ-function is employed to show that for
the closed eye

    x_i = α (1 − λα)^{-1} z̄^r,                                     (13)

where λ = N_m/N is the ratio of the number of modifiable cells to the
total number of cells in the network. That is, the asymptotic
state of the closed-eye synapses is a scaled function of the mean field
due to non-modifiable (inhibitory) cortical cells. The scale
of this state is set not only by the proportion of non-modifiable
cells but, in addition, by the averaged intracortical synaptic
strength L_0.
Thus, in contrast with the zero mean field theory, the deprived-eye
LGN-cortical synapses do not go to zero. Rather, they
approach a constant value dependent on the average inhibition
produced by the non-modifiable cells, in such a way that the
asymptotic output of the cortical cell is zero (it cannot be driven
by the deprived eye). However, lessening the effect of inhibitory
synapses (e.g. by application of an inhibitory blocking agent such
as bicuculline) reduces the magnitude of a, so that one could once
more obtain a response from the deprived eye.
We find, consistent with previous theory and experiment,
that most learning can occur in the LGN-cortical synapses, since
the inhibitory (cortico-cortical) synapses need not modify. Some
non-modifiable LGN-cortical synapses are required.
THE MEAN FIELD APPROXIMATION AND
ARTIFICIAL NEURAL NETWORKS
The mean field approximation may be applied to networks in
which the cortico-cortical feedback is a general function of cell
activity. In particular, the feedback may measure the difference
between the network activity and memories of network activity.
In this way, a network may be used as a content addressable
memory.
We have been discussing the properties of a mean
field network after equilibrium has been reached. We now focus
on the detailed time dependence of the relaxation of the cell
activity to a state of equilibrium.
Hopfield [8] introduced a simple formalism for the analysis of
the time dependence of network activity. In this model,
network activity is mapped onto a physical system in which the
state of neuron activity is considered as a 'particle' on a potential
energy surface. Identification of the pattern occurs when the
activity 'relaxes' to a nearby minimum of the energy. Thus
minima are employed as the sites of memories. For a Hopfield
network of N neurons, the intra-layer connectivity required is of
order N². This connectivity is a significant constraint on the
practical implementation of such systems for large scale
problems. Further, the Hopfield model allows a storage capacity
which is limited to m < N memories [8, 9]. This is a result of the
proliferation of unwanted local minima in the 'energy' surface.
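For reference, a minimal sketch of the Hopfield construction described above: memories are stored as Hebbian outer products, the state relaxes by asynchronous ±1 updates, and each update never increases the energy E = −½ sᵀWs. The network size and patterns below are arbitrary.

```python
# Minimal Hopfield-style memory: outer-product weights (order N^2 connectivity),
# asynchronous updates, quadratic energy E = -0.5 * s.W.s.
import numpy as np

rng = np.random.default_rng(2)
N, M = 64, 5                                   # neurons, stored memories (M < N)
X = rng.choice([-1, 1], size=(M, N))           # memory patterns
W = (X.T @ X) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

s = X[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])   # noisy probe of memory 0
for _ in range(5 * N):
    i = rng.integers(N)
    s[i] = 1 if W[i] @ s >= 0 else -1          # each flip never increases E

print(energy(s), float(np.mean(s == X[0])))    # relaxes to (near) the memory
```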
Recently, Bachmann et al. [10] have proposed a model for the
relaxation of network activity in which memories of activity
patterns are the sites of negative 'charges', and the activity
caused by a test pattern is a positive test 'charge'. In this
model, the energy function is the electrostatic energy of the
(unit) test charge with the collection of charges at the memory
sites,

    E = −(1/L) Σ_j Q_j |μ − x_j|^{−L},                              (14)

where μ(0) is a vector describing the initial network activity
caused by a test pattern, and x_j is the site of the jth memory. L is
a parameter related to the network size.
This model has the advantage that storage density is not
restricted by the network size as it is in the Hopfield model;
in addition, the architecture employs a connectivity of order
m × N.
Note that at each stage in the settling of μ(t) to a memory
(of network activity) x_j, the only feedback from the network to
each cell is the scalar

    Σ_j Q_j |μ − x_j|^{−L}.                                         (15)
This quantity is an integrated measure of the distance of the
current network state from the stored memories. Importantly, this
measure is the same for all cells; it is as if a single virtual cell
were computing the distance in activity space between the
current state and the stored states. The result of the computation is
then broadcast to all of the cells in the network. This is a
generalization of the idea that the detailed activity of each cell in
the network need not be fed back to each cell. Rather, some
global measure, performed by a single 'effective' cell, is all that is
needed in the feedback.
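A sketch of the quantities in Eqs. (14) and (15); the memories, charges and test activity below are arbitrary, and the relaxation dynamics itself is omitted.

```python
# Charge model of memory: energy of Eq. (14) and the single scalar feedback of
# Eq. (15) broadcast identically to every cell.
import numpy as np

rng = np.random.default_rng(3)
L = 4                                          # parameter related to network size
memories = rng.normal(size=(6, 10))            # m memories x_j of network activity
Q = np.ones(len(memories))                     # unit charges at the memory sites

def energy(mu):                                # Eq. (14)
    r = np.linalg.norm(memories - mu, axis=1)
    return -(1.0 / L) * np.sum(Q * r ** (-L))

def scalar_feedback(mu):                       # Eq. (15): same value for all cells
    r = np.linalg.norm(memories - mu, axis=1)
    return float(np.sum(Q * r ** (-L)))

mu = memories[2] + 0.3 * rng.normal(size=10)   # test activity near memory 2
print(energy(mu), scalar_feedback(mu))
r = np.linalg.norm(memories - mu, axis=1)
print(int(np.argmax(Q * r ** (-L))))           # the nearest memory dominates -> 2
```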
DISCUSSION
We have been discussing a formalism for the analysis of
networks of ideal neurons based on a mean field approximation
of the detailed activity of the cells in the network. We find that
a simple assumption concerning the spatial distribution of the
pattern preferences of the cells allows a great simplification of
the analysis. In particular, the detailed activity of the cells of
the network may be replaced with a mean field that, in effect, is
computed by a single 'effective' cell.
Further, the application of this formalism to layer
IV of visual cortex allows the prediction that much of the learning in
cortex may be localized to the LGN-cortical synaptic states, and
that cortico-cortical plasticity is relatively unimportant. We find,
in agreement with experiment, that monocular deprivation
will drive closed-eye responses of cortical cells to zero, but
chemical blockage of the cortical inhibitory pathways would
reveal non-zero closed-eye synaptic states.
Finally, the mean field approximation allows the development
of single layer models of memory storage that are unrestricted
in storage density, but require a connectivity of order m × N. This
is significant for the fabrication of practical content addressable
memories.
ACKNOWLEDGEMENTS
I would like to thank Leon Cooper for many helpful discussions
and the contributions he made to this work.
*This work was supported by the Office of Naval Research and
the Army Research Office under contracts #NOOOI4-86-K-0041
and #DAAG-29-84-K-0202.
REFERENCES
[1] Bienenstock, E. L., Cooper, L. N. & Munro, P. W. (1982) J. Neuroscience 2, 32-48.
[2] Scofield, C. L. (1984) Unpublished Dissertation.
[3] Cooper, L. N., Munro, P. W. & Scofield, C. L. (1985) in Synaptic Modification, Neuron Selectivity and Nervous System Organization, ed. C. Levy, J. A. Anderson & S. Lehmkuhle (Erlbaum Assoc., N.J.).
[4] Cooper, L. N. & Scofield, C. L. (to be published) Proc. Natl. Acad. Sci. USA.
[5] Singer, W. (1977) Brain Res. 134, 508-000.
[6] Bear, M. F., Schmechel, D. M. & Ebner, F. F. (1985) J. Neurosci. 5, 1262-0000.
[7] Mower, G. D., White, W. F. & Rustad, R. (1986) Brain Res. 380, 253-000.
[8] Hopfield, J. J. (1982) Proc. Natl. Acad. Sci. USA 79, 2554-2558.
[9] Hopfield, J. J., Feinstein, D. I. & Palmer, R. G. (1983) Nature 304, 158-159.
[10] Bachmann, C. M., Cooper, L. N., Dembo, A. & Zeitouni, O. (to be published) Proc. Natl. Acad. Sci. USA.
STORING COVARIANCE BY THE ASSOCIATIVE
LONG-TERM POTENTIATION AND DEPRESSION
OF SYNAPTIC STRENGTHS IN THE HIPPOCAMPUS
Patric K. Stanton* and Terrence J. Sejnowski†
Department of Biophysics
Johns Hopkins University
Baltimore, MD 21218
ABSTRACT
In modeling studies or memory based on neural networks, both the selective
enhancement and depression or synaptic strengths are required ror effident storage
or inrormation (Sejnowski, 1977a,b; Kohonen, 1984; Bienenstock et aI, 1982;
Sejnowski and Tesauro, 1989). We have tested this assumption in the hippocampus,
a cortical structure or the brain that is involved in long-term memory. A brier,
high-frequency activation or excitatory synapses in the hippocampus produces an
increase in synaptic strength known as long-term potentiation, or LTP (BUss and
Lomo, 1973), that can last ror many days. LTP is known to be Hebbian since it
requires the simultaneous release or neurotransmitter from presynaptic terminals
coupled with postsynaptic depolarization (Kelso et al, 1986; Malinow and Miller,
1986; Gustatrson et al, 1987). However, a mechanism ror the persistent reduction or
synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic
trees or hippocampal pyramidal cells or field CAl, and round that a low-frequency
input which, by itselr, does not persistently change synaptic strength, can either
increase (associative LTP) or decrease in strength (associative long-term depression
or LTD) depending upon whether it is positively or negatively correlated in time
with a second, high-frequency bursting input. LTP or synaptic strength is Hebbian,
and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity. Thus, associative LTP and associative LTO are capable or storing inrormation contained in the
covariance between separate, converging hippocampal inputs?
*Present address: Departments of Neuroscience and Neurology, Albert Einstein College
of Medicine, 1410 Pelham Parkway South, Bronx, NY 10461 USA.
†Present address: Computational Neurobiology Laboratory, The Salk Institute, P.O. Box
85800, San Diego, CA 92138 USA.
INTRODUCTION
Associative LTP can be produced in some hippocampal neurons when low-frequency (weak)
and high-frequency (strong) inputs to the same cells are simultaneously activated (Levy and Steward, 1979; Levy and Steward, 1983; Barrionuevo and
Brown, 1983). When stimulated alone, a weak input does not have a long-lasting effect
on synaptic strength; however, when paired with stimulation of a separate strong input
sufficient to produce homosynaptic LTP of that pathway, the weak pathway is associatively potentiated. Neural network modeling studies have predicted that, in addition to
this Hebbian form of plasticity, synaptic strength should be weakened when weak and
strong inputs are anti-correlated (Sejnowski, 1977a,b; Kohonen, 1984; Bienenstock et al,
1982; Sejnowski and Tesauro, 1989). Evidence for heterosynaptic depression in the hippocampus has been found for inputs that are inactive (Levy and Steward, 1979; Lynch et
al, 1977) or weakly active (Levy and Steward, 1983) during the stimulation of a strong
input, but this depression did not depend on any pattern of weak input activity and was
not typically as long-lasting as LTP.
Therefore, we searched for conditions under which stimulation of a hippocampal
pathway, rather than its inactivity, could produce either long-term depression or potentiation of synaptic strength, depending on the pattern of stimulation. The stimulus paradigm that we used, illustrated in Fig. 1, is based on the finding that bursts of stimuli at 5
Hz are optimal in eliciting LTP in the hippocampus (Larson and Lynch, 1986). A high-frequency burst (STRONG) stimulus was applied to Schaffer collateral axons and a low-frequency (WEAK) stimulus given to a separate subicular input coming from the opposite side of the recording site, but terminating on dendrites of the same population of CA1
pyramidal neurons. Due to the rhythmic nature of the strong input bursts, each weak
input shock could be either superimposed on the middle of each burst of the strong input
(IN PHASE) or placed symmetrically between bursts (OUT OF PHASE).
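The covariance logic of the two paradigms can be made explicit with a toy calculation; binning the trains at the 200 msec burst cycle and treating each shock or burst as a unit event are simplifications assumed only for this illustration.

```python
# Toy covariance of the weak and strong trains of Fig. 1, binned at 50 msec
# (four bins per 200 msec interburst cycle): the weak shock falls either on the
# burst (in phase) or midway between bursts (out of phase).
import numpy as np

n_cycles = 10
strong   = np.tile([1, 0, 0, 0], n_cycles)   # one burst per cycle
weak_in  = np.tile([1, 0, 0, 0], n_cycles)   # shock superimposed on the burst
weak_out = np.tile([0, 0, 1, 0], n_cycles)   # shock between bursts

print(np.cov(strong, weak_in)[0, 1])    # positive covariance -> associative LTP
print(np.cov(strong, weak_out)[0, 1])   # negative covariance -> associative LTD
```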
RESULTS
Extracellular evoked field potentials were recorded from the apical dendritic and
somatic layers of CA1 pyramidal cells. The weak stimulus train was first applied alone
and did not itself induce long-lasting changes. The strong site was then stimulated alone,
which elicited homosynaptic LTP of the strong pathway but did not significantly alter the
amplitude of responses to the weak input. When weak and strong inputs were activated
IN PHASE, there was an associative LTP of the weak input synapses, as shown in Fig.
2a. Both the synaptic excitatory post-synaptic potential (e.p.s.p.) (Δe.p.s.p. = +49.8 ±
7.8%, n=20) and the population action potential (Δspike = +65.4 ± 16.0%, n=14) were
significantly enhanced for at least 60 min and up to 180 min following stimulation.
In contrast, when weak and strong inputs were applied OUT OF PHASE, they elicited an associative long-term depression (LTD) of the weak input synapses, as shown in
Fig. 2b. There was a marked reduction in the population spike (−46.5 ± 11.4%, n=10)
with smaller decreases in the e.p.s.p. (−13.8 ± 3.5%, n=13). Note that the stimulus patterns applied to each input were identical in these two experiments, and only the relative
phase of the weak and strong stimuli was altered. With these stimulus patterns, synaptic
strength could be repeatedly enhanced and depressed in a single slice, as illustrated in Fig.
2c. As a control experiment, to determine whether information concerning the covariance
between the inputs was actually a determinant of plasticity, we combined the in phase
and out of phase conditions, giving both the weak input shocks superimposed on the
bursts and those between the bursts, for a net frequency of 10 Hz. This pattern, which
resulted in zero covariance between weak and strong inputs, produced no net change in
weak input synaptic strength measured by extracellular evoked potentials.
Figure 1. Hippocampal slice preparation and stimulus paradigms. a: The in vitro hippocampal slice, showing recording sites in the CA1 pyramidal cell somatic (stratum pyramidale) and dendritic (stratum radiatum) layers, and stimulation sites activating Schaffer collateral (STRONG) and commissural (WEAK) afferents. Hippocampal slices (400 μm
thick) were incubated in an interface slice chamber at 34-35 °C. Extracellular (1-5 MΩ
resistance, 2 M NaCl filled) and intracellular (70-120 MΩ, 2 M K-acetate filled) recording electrodes, and bipolar glass-insulated platinum wire stimulating electrodes (50 μm
tip diameter), were prepared by standard methods (Mody et al, 1988). b: Stimulus paradigms used. Strong input stimuli (STRONG INPUT) were four trains of 100 Hz bursts.
Each burst had 5 stimuli and the interburst interval was 200 msec. Each train lasted 2
seconds, for a total of 50 stimuli. Weak input stimuli (WEAK INPUT) were four trains of
shocks at 5 Hz, each train lasting 2 seconds. When these inputs were IN
PHASE, the weak single shocks were superimposed on the middle of each burst of the
strong input. When the weak input was OUT OF PHASE, the single shocks were placed
symmetrically between the bursts.
Thus, the associative LTP and LTD mechanisms appear to be balanced in a manner ideal for the
storage of temporal covariance relations.
The simultaneous depolarization of the postsynaptic membrane and activation of
glutamate receptors of the N-methyl-D-aspartate (NMDA) subtype appear to be necessary for LTP induction (Collingridge et al, 1983; Harris et al, 1984; Wigstrom and Gustafsson, 1984). The spread of current from strong to weak synapses in the dendritic tree,
Figure 2. Illustration of associative long-term potentiation (LTP) and associative long-term depression (LTD) using extracellular recordings. a: Associative LTP of evoked
excitatory postsynaptic potentials (e.p.s.p.'s) and population action potential responses in
the weak input. Test responses are shown before (Pre) and 30 min after (Post) application of weak stimuli in phase with the coactive strong input. b: Associative LTD of
evoked e.p.s.p.'s and population spike responses in the weak input. Test responses are
shown before (Pre) and 30 min after (Post) application of weak stimuli out of phase with
the coactive strong input. c: Time course of the changes in population spike amplitude
observed at each input for a typical experiment. Test responses from the strong input (S,
open circles) show that the high-frequency bursts (5 pulses/100 Hz, 200 msec interburst
interval, as in Fig. 1) elicited synapse-specific LTP independent of other input activity.
Test responses from the weak input (W, filled circles) show that stimulation of the weak
pathway out of phase with the strong one produced associative LTD (Assoc LTD) of this
input. Associative LTP (Assoc LTP) of the same pathway was then elicited following in
phase stimulation. The amplitude and duration of associative LTD or LTP could be increased
by stimulating the input pathways with more trains of shocks.
The spread of current from strong to weak synapses in the dendritic tree, coupled with release of glutamate from the weak inputs, could account for the ability of the strong pathway to associatively potentiate a weak one (Kelso et al, 1986; Malinow and Miller, 1986; Gustafsson et al, 1987). Consistent with this hypothesis, we find that the NMDA receptor antagonist 2-amino-5-phosphonovaleric acid (AP5, 10 µM) blocks induction of associative LTP in CA1 pyramidal neurons (data not shown, n=5). In contrast, the application of AP5 to the bathing solution at this same concentration had no
significant effect on associative LTD (data not shown, n=6). Thus, the induction of LTD
seems to involve cellular mechanisms different from associative LTP.
The conditions necessary for LTD induction were explored in another series of experiments using intracellular recordings from CA1 pyramidal neurons made using standard techniques (Mody et al, 1988). Induction of associative LTP (Fig. 3; WEAK
S+W IN PHASE) produced an increase in amplitude of the single cell evoked e.p.s.p. and
a lowered action potential threshold in the weak pathway, as reported previously (Barrionuevo and Brown, 1983). Conversely, the induction of associative LTD (Fig. 3;
WEAK S+W OUT OF PHASE) was accompanied by a long-lasting reduction of e.p.s.p.
amplitude and reduced ability to elicit action potential firing. As in control extracellular
experiments, the weak input alone produced no long-lasting alterations in intracellular
e.p.s.p.'s or firing properties, while the strong input alone yielded specific increases of
the strong pathway e.p.s.p. without altering e.p.s.p.'s elicited by weak input stimulation.
[Figure 3 traces: PRE, 30 min POST S+W OUT OF PHASE, and 30 min POST S+W IN PHASE; see caption below.]
Figure 3. Demonstration of associative LTP and LTD using intracellular recordings from a CA1 pyramidal neuron. Intracellular e.p.s.p.'s prior to repetitive stimulation (PRE), 30 min after out of phase stimulation (S+W OUT OF PHASE), and 30 min after subsequent in phase stimuli (S+W IN PHASE). The strong input (Schaffer collateral side, lower traces) exhibited LTP of the evoked e.p.s.p. independent of weak input activity. Out of phase stimulation of the weak (subicular side, upper traces) pathway produced a marked, persistent reduction in e.p.s.p. amplitude. In the same cell, subsequent in phase stimuli resulted in associative LTP of the weak input that reversed the LTD and enhanced amplitude of the e.p.s.p. past the original baseline. (RMP = -62 mV, RN = 30 MΩ)
A weak stimulus that is out of phase with a strong one arrives when the postsynaptic neuron is hyperpolarized as a consequence of inhibitory postsynaptic potentials and afterhyperpolarization from mechanisms intrinsic to pyramidal neurons. This suggests that postsynaptic hyperpolarization coupled with presynaptic activation may trigger LTD. To test this hypothesis, we injected current with intracellular microelectrodes to hyperpolarize or depolarize the cell while stimulating a synaptic input. Pairing the injection of depolarizing current with the weak input led to LTP of those synapses
[Figure 4, panels a and b: intracellular e.p.s.p. traces PRE and 30 min POST for STIM + DEPOL with CONTROL (a) and for STIM + HYPERPOL (b); see caption below.]
Figure 4. Pairing of postsynaptic hyperpolarization with stimulation of synapses on CA1 hippocampal pyramidal neurons produces LTD specific to the activated pathway, while pairing of postsynaptic depolarization with synaptic stimulation produces synapse-specific LTP. a: Intracellular evoked e.p.s.p.'s are shown at stimulated (STIM) and unstimulated (CONTROL) pathway synapses before (Pre) and 30 min after (Post) pairing a 20 mV depolarization (constant current +2.0 nA) with 5 Hz synaptic stimulation. The stimulated pathway exhibited associative LTP of the e.p.s.p., while the control, unstimulated input showed no change in synaptic strength. (RMP = -65 mV; RN = 35 MΩ) b: Intracellular e.p.s.p.'s are shown evoked at stimulated and control pathway synapses before (Pre) and 30 min after (Post) pairing a 20 mV hyperpolarization (constant current -1.0 nA) with 5 Hz synaptic stimulation. The input (STIM) activated during the hyperpolarization showed associative LTD of synaptic evoked e.p.s.p.'s, while synaptic strength of the silent input (CONTROL) was unaltered. (RMP = -62 mV; RN = 38 MΩ)
(Fig. 4a; STIM; +64.0 ± 9.7%, n=4), while a control input inactive during the stimulation did not change (CONTROL), as reported previously (Kelso et al, 1986; Malinow and Miller, 1986; Gustafsson et al, 1987). Conversely, prolonged hyperpolarizing current injection paired with the same low-frequency stimuli led to induction of LTD in the stimulated pathway (Fig. 4b; STIM; -40.3 ± 6.3%, n=6), but not in the unstimulated pathway (CONTROL). The application of either depolarizing current, hyperpolarizing current, or the weak 5 Hz synaptic stimulation alone did not induce long-term alterations in synaptic strengths. Thus, hyperpolarization and simultaneous presynaptic activity supply sufficient conditions for the induction of LTD in CA1 pyramidal neurons.
CONCLUSIONS
These experiments identify a novel form of anti-Hebbian synaptic plasticity in the hippocampus and confirm predictions made from modeling studies of information storage in neural networks. Unlike previous reports of synaptic depression in the hippocampus, the plasticity is associative, long-lasting, and is produced when presynaptic activity occurs while the postsynaptic membrane is hyperpolarized. In combination with Hebbian mechanisms also present at hippocampal synapses, associative LTP and associative LTD may allow neurons in the hippocampus to compute and store covariance between inputs (Sejnowski, 1977a,b; Stanton and Sejnowski, 1989). These findings make temporal as well as spatial context an important feature of memory mechanisms in the hippocampus.
Elsewhere in the brain, the receptive field properties of cells in cat visual cortex can be altered by visual experience paired with iontophoretic excitation or depression of cellular activity (Fregnac et al, 1988; Greuel et al, 1988). In particular, the chronic hyperpolarization of neurons in visual cortex coupled with presynaptic transmitter release leads to a long-term depression of the active, but not inactive, inputs from the lateral geniculate nucleus (Reiter and Stryker, 1988). Thus, both Hebbian and anti-Hebbian mechanisms found in the hippocampus seem to also be present in other brain areas, and covariance of firing patterns between converging inputs is a likely key to understanding higher cognitive function.
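The covariance computation invoked here can be written down in a few lines. The sketch below is only a schematic rate-based illustration in the spirit of Sejnowski (1977), not a model of the slice experiments: the weight change is proportional to the covariance of presynaptic and postsynaptic activity, so in-phase (correlated) activity potentiates, out-of-phase (anticorrelated) activity depresses, and uncorrelated activity leaves the weight unchanged on average. The activity patterns and learning rate below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def covariance_weight_change(pre, post, lr=0.01):
    """Delta w proportional to the covariance of pre- and postsynaptic activity."""
    return lr * np.mean((pre - pre.mean()) * (post - post.mean()))

T = 10_000
strong = rng.random(T) < 0.05                 # strong-input activity pattern
post = strong.astype(float)                   # postsynaptic response follows the strong input

weak_inputs = {
    "in phase":     strong.astype(float),                   # coactive with the strong input -> LTP-like
    "out of phase": (~strong).astype(float),                 # active only when the strong input is silent -> LTD-like
    "uncorrelated": (rng.random(T) < 0.05).astype(float),    # zero covariance -> no net change
}
for name, weak in weak_inputs.items():
    print(f"{name:>13}: dw = {covariance_weight_change(weak, post):+.4f}")
```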
This research was supported by grants from the National Science Foundation and
the Office of Naval Research to TJS. We thank Drs. Charles Stevens and Richard Morris
for discussions about related experiments.
References
Bienenstock, E., Cooper, L.N. and Munro, P. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2, 32-48 (1982).
Barrionuevo, G. and Brown, T.H. Associative long-term potentiation in hippocampal slices. Proc. Nat. Acad. Sci. (USA) 80, 7347-7351 (1983).
Bliss, T.V.P. and Lomo, T. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J. Physiol. (Lond.) 232, 331-356 (1973).
Collingridge, G.L., Kehl, S.J. and McLennan, H. Excitatory amino acids in synaptic transmission in the Schaffer collateral-commissural pathway of the rat hippocampus. J. Physiol. (Lond.) 334, 33-46 (1983).
Fregnac, Y., Shulz, D., Thorpe, S. and Bienenstock, E. A cellular analogue of visual cortical plasticity. Nature (Lond.) 333, 367-370 (1988).
Greuel, J.M., Luhmann, H.J. and Singer, W. Pharmacological induction of use-dependent receptive field modifications in visual cortex. Science 242, 74-77 (1988).
Gustafsson, B., Wigstrom, H., Abraham, W.C. and Huang, Y.Y. Long-term potentiation in the hippocampus using depolarizing current pulses as the conditioning stimulus to single volley synaptic potentials. J. Neurosci. 7, 774-780 (1987).
Harris, E.W., Ganong, A.H. and Cotman, C.W. Long-term potentiation in the hippocampus involves activation of N-methyl-D-aspartate receptors. Brain Res. 323, 132-137 (1984).
Kelso, S.R., Ganong, A.H. and Brown, T.H. Hebbian synapses in hippocampus. Proc. Natl. Acad. Sci. USA 83, 5326-5330 (1986).
Kohonen, T. Self-Organization and Associative Memory. (Springer-Verlag, Heidelberg, 1984).
Larson, J. and Lynch, G. Synaptic potentiation in hippocampus by patterned stimulation involves two events. Science 232, 985-988 (1986).
Levy, W.B. and Steward, O. Synapses as associative memory elements in the hippocampal formation. Brain Res. 175, 233-245 (1979).
Levy, W.B. and Steward, O. Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neuroscience 8, 791-797 (1983).
Lynch, G.S., Dunwiddie, T. and Gribkoff, V. Heterosynaptic depression: a postsynaptic correlate of long-term potentiation. Nature (Lond.) 266, 737-739 (1977).
Malinow, R. and Miller, J.P. Postsynaptic hyperpolarization during conditioning reversibly blocks induction of long-term potentiation. Nature (Lond.) 320, 529-530 (1986).
Mody, I., Stanton, P.K. and Heinemann, U. Activation of N-methyl-D-aspartate (NMDA) receptors parallels changes in cellular and synaptic properties of dentate gyrus granule cells after kindling. J. Neurophysiol. 59, 1033-1054 (1988).
Reiter, H.O. and Stryker, M.P. Neural plasticity without postsynaptic action potentials:
Less-active inputs become dominant when kitten visual cortical cells are pharmacologically inhibited. Proc. Natl. Acad. Sci. USA 85, 3623-3627 (1988).
Sejnowski, T.J. and Tesauro, G. Building network learning algorithms from Hebbian synapses, in: Brain Organization and Memory, J.L. McGaugh, N.M. Weinberger, and G. Lynch, Eds. (Oxford Univ. Press, New York, in press).
Sejnowski, T.J. Storing covariance with nonlinearly interacting neurons. J. Math. Biology 4, 303-321 (1977).
Sejnowski, T.J. Statistical constraints on synaptic plasticity. J. Theor. Biology 69, 385-389 (1977).
Stanton, P.K. and Sejnowski, T.J. Associative long-term depression in the hippocampus: Evidence for anti-Hebbian synaptic plasticity. Nature (Lond.), in review.
Wigstrom, H. and Gustafsson, B. A possible correlate of the postsynaptic condition for long-lasting potentiation in the guinea pig hippocampus in vitro. Neurosci. Lett. 44, 327-332 (1984).
3 | 1,000 | Bayesian Query Construction for Neural
Network Models
Gerhard Paass
Jorg Kindermann
German National Research Center for Computer Science (GMD)
D-53757 Sankt Augustin, Germany
paass@gmd.de
kindermann@gmd.de
Abstract
If data collection is costly, there is much to be gained by actively selecting particularly informative data points in a sequential way. In
a Bayesian decision-theoretic framework we develop a query selection criterion which explicitly takes into account the intended use
of the model predictions. By Markov Chain Monte Carlo methods
the necessary quantities can be approximated to a desired precision. As the number of data points grows, the model complexity
is modified by a Bayesian model selection strategy. The properties of two versions of the criterion are demonstrated in numerical
experiments.
1
INTRODUCTION
In this paper we consider the situation where data collection is costly, as when, for example, real measurements or technical experiments have to be performed. In
this situation the approach of query learning ('active data selection', 'sequential
experimental design', etc.) has a potential benefit. Depending on the previously
seen examples, a new input value ('query') is selected in a systematic way and
the corresponding output is obtained. The motivation for query learning is that
random examples often contain redundant information, and the concentration on
non-redundant examples must necessarily improve generalization performance.
We use a Bayesian decision-theoretic framework to derive a criterion for query construction. The criterion reflects the intended use of the predictions by an appropriate
loss function. We limit our analysis to the selection of the next data point, given a
set of data already sampled. The proposed procedure derives the expected loss for
candidate inputs and selects a query with minimal expected loss.
There are several published surveys of query construction methods [Ford et al. 89,
Plutowski White 93, Sollich 94]. Most current approaches, e.g. [Cohn 94], rely
on the information matrix of parameters. Then, however, all parameters receive
equal attention regardless of their influence on the intended use of the model
[Pronzato Walter 92]. In addition, the estimates are valid only asymptotically. Bayesian approaches have been advocated by [Berger 80], and applied to neural networks
[MacKay 92]. In [Sollich Saad 95] their relation to maximum information gain is
discussed. In this paper we show that by using Markov Chain Monte Carlo methods it is possible to determine all quantities necessary for the selection of a query.
This approach is valid in small sample situations, and the procedure's precision can
be increased with additional computational effort. With the square loss function,
the criterion is reduced to a variant of the familiar integrated mean square error
[Plutowski White 93].
In the next section we develop the query selection criterion from a decision-theoretic
point of view. In the third section we show how the criterion can be calculated using
Markov Chain Monte Carlo methods and we discuss a strategy for model selection.
In the last section, the results of two experiments with MLPs are described.
2
A DECISION-THEORETIC FRAMEWORK
Assume we have an input vector x and a scalar output y distributed as y ~ p(y | x, w), where w is a vector of parameters. The conditional expected value is a deterministic function f(x, w) := E(y | x, w), where y = f(x, w) + ε and ε is a zero mean error term. Suppose we have iteratively collected observations D(n) := ((x_1, ỹ_1), ..., (x_n, ỹ_n)). We get the Bayesian posterior p(w | D(n)) = p(D(n) | w) p(w) / ∫ p(D(n) | w) p(w) dw and the predictive distribution p(y | x, D(n)) = ∫ p(y | x, w) p(w | D(n)) dw if p(w) is the prior distribution.
We consider the situation where, based on some data x, we have to perform an
action a whose result depends on the unknown output y. Some decisions may have
more severe effects than others. The loss function L(y, a) ∈ [0, ∞) measures the loss if y is the true value and we have taken the action a ∈ A. In this paper we consider real-valued actions, e.g. setting the temperature a in a chemical process. We have to select an a ∈ A only knowing the input x. According to the Bayes Principle [Berger 80, p.14] we should follow a decision rule d : x → a such that the average risk ∫ R(w, d) p(w | D(n)) dw is minimal, where the risk is defined as R(w, d) := ∫ L(y, d(x)) p(y | x, w) p(x) dy dx. Here p(x) is the distribution of future inputs, which is assumed to be known.
For the square loss function L(y, a) = (y − a)², the conditional expectation d(x) := E(y | x, D(n)) is the optimal decision rule. In a control problem the loss may be larger at specific critical points. This can be addressed with a weighted square loss function L(y, a) := h(y)(y − a)², where h(y) ≥ 0 [Berger 80, p.1U]. The expected loss for an action is ∫ (y − a)² h(y) p(y | x, D(n)) dy. Replacing the predictive density p(y | x, D(n)) with the weighted predictive density
p̃(y | x, D(n)) := h(y) p(y | x, D(n))/G(x), where G(x) := ∫ h(y) p(y | x, D(n)) dy, we get the optimal decision rule d(x) := ∫ y p̃(y | x, D(n)) dy and the average loss G(x) ∫ (y − E(y | x, D(n)))² p̃(y | x, D(n)) dy for a given input x. With these modifications, all later derivations for the square loss function may be applied to the weighted square loss.
The aim of query sampling is the selection of a new observation x̃ in such a way that the average risk will be maximally reduced. Together with its still unknown y-value, x̃ defines a new observation (x̃, ỹ) and new data D(n) ∪ (x̃, ỹ). To determine this risk for some given x̃ we have to perform the following conceptual steps for a candidate query x̃:
1. Future Data: Construct the possible sets of 'future' observations D(n) ∪ (x̃, ỹ), where ỹ ~ p(y | x̃, D(n)).
2. Future posterior: Determine a 'future' posterior distribution of parameters p(w | D(n) ∪ (x̃, ỹ)) that depends on ỹ in the same way as though it had actually been observed.
3. Future Loss: Assuming d*_{ỹ,x̃}(x) is the optimal decision rule for given values of ỹ, x̃, and x, compute the resulting loss as
\[ \hat r_{\tilde y,\tilde x}(x) := \int L\bigl(y, d^*_{\tilde y,\tilde x}(x)\bigr)\, p(y \mid x, w)\, p\bigl(w \mid D_{(n)} \cup (\tilde x,\tilde y)\bigr)\, dy\, dw \tag{1} \]
4. Averaging: Integrate this quantity over the future trial inputs x distributed as p(x) and the different possible future outputs ỹ, yielding
\[ \hat r_{\tilde x} := \int \hat r_{\tilde y,\tilde x}(x)\, p(x)\, p(\tilde y \mid \tilde x, D_{(n)})\, dx\, d\tilde y. \]
This procedure is repeated until an x̃ with minimal average risk is found. Since local
optima are typical, a global optimization method is required. Subsequently we then
try to determine whether the current model is still adequate or whether we have to
increase its complexity (e.g. by adding more hidden units).
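A minimal, runnable caricature of steps 1-4 may help to fix ideas. The sketch below (our own construction, not the paper's MLP setting) uses a conjugate Bayesian linear-regression model so that the 'future posterior' of step 2 is analytic; for the square loss the future loss of step 3 is the predictive variance, which for this particular model does not depend on the unobserved ỹ, so the averaging of step 4 over ỹ is trivial. Section 3 replaces all of this with Markov Chain Monte Carlo and importance reweighting.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(x):                                   # fixed polynomial features (an assumption here)
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

def posterior(Phi, y, alpha=1.0, noise=0.1):
    """Gaussian posterior of a Bayesian linear model (conjugate, hence analytic)."""
    A = alpha * np.eye(Phi.shape[1]) + Phi.T @ Phi / noise**2
    cov = np.linalg.inv(A)
    return cov @ Phi.T @ y / noise**2, cov

def future_loss(x_query, Phi, y, x_trial, noise=0.1):
    """Average predictive variance over trial inputs after adding (x_query, y~)."""
    Phi_new = np.vstack([Phi, phi(np.array([x_query]))])
    y_new = np.append(y, 0.0)                 # any value: the covariance does not depend on it
    _, cov = posterior(Phi_new, y_new, noise=noise)
    Phi_t = phi(x_trial)
    return np.mean(np.sum(Phi_t @ cov * Phi_t, axis=1)) + noise**2

x_obs = rng.uniform(-1, 1, 5)                 # data collected so far
y_obs = np.sin(3 * x_obs) + 0.1 * rng.standard_normal(5)
Phi_obs = phi(x_obs)
trials = rng.uniform(-1, 1, 200)              # trial inputs drawn from p(x)
candidates = np.linspace(-1, 1, 41)           # candidate queries
risks = [future_loss(c, Phi_obs, y_obs, trials) for c in candidates]
print("next query:", candidates[int(np.argmin(risks))])
```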
3
COMPUTATIONAL PROCEDURE
Let us assume that the real data D(n) was generated according to a regression model y = f(x, w) + ε with i.i.d. Gaussian noise ε ~ N(0, σ²(w)). For example f(x, w) may be a multilayer perceptron or a radial basis function network. Since the error terms are independent, the posterior density is p(w | D(n)) ∝ p(w) Π_{i=1}^{n} p(ỹ_i | x_i, w) even in the case of query sampling [Ford et al. 89].
As the analytic derivation of the posterior is infeasible except in trivial cases, we
have to use approximations. One approach is to employ a normal approximation
[MacKay 92], but this is unreliable if the number of observations is small compared to the number of parameters. We use Markov Chain Monte Carlo procedures
[PaaB 91, Neal 93] to generate a sample W(B) := {w_1, ..., w_B} of parameters distributed according to p(w | D(n)). If the number of sampling steps approaches infinity, the distribution of the simulated w_b approximates the posterior arbitrarily well.
To take into account the range of future y-values, we create a set of them by simulation. For each w_b ∈ W(B) a number of ỹ ~ p(y | x̃, w_b) is generated. Let Y(x̃,R) := {ỹ_1, ..., ỹ_R} be the resulting set. Instead of performing a new Markov Chain Monte Carlo run to generate a new sample according to p(w | D(n) ∪ (x̃, ỹ)), we use the old set W(B) of parameters and reweight them (importance sampling).
In this way we may approximate integrals of some function g(w) with respect to p(w | D(n) ∪ (x̃, ỹ)) [Kalos Whitlock 86, p.92]:
\[ \int g(w)\, p\bigl(w \mid D_{(n)} \cup (\tilde x, \tilde y)\bigr)\, dw \;\approx\; \frac{\sum_{b=1}^{B} g(w_b)\, p(\tilde y \mid \tilde x, w_b)}{\sum_{b=1}^{B} p(\tilde y \mid \tilde x, w_b)} \tag{2} \]
The approximation error approaches zero as the size of W(B) increases.
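As a sanity check of (2), the following toy sketch (a one-parameter model of our own choosing, for which the updated posterior is also available in closed form) reweights a sample from the old posterior to estimate an expectation under the updated posterior and compares it with the exact conjugate-normal answer.

```python
import numpy as np

rng = np.random.default_rng(2)

B = 200_000
w = rng.standard_normal(B)            # sample W(B) from the "old" posterior, here N(0, 1)

sigma, y_new = 0.5, 1.2               # toy likelihood: y = w + noise, noise ~ N(0, sigma^2)
def lik(y, w):
    return np.exp(-0.5 * ((y - w) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

weights = lik(y_new, w)               # importance weights p(y~ | x~, w_b)
g = w ** 2                            # any function g(w)
estimate = np.sum(g * weights) / np.sum(weights)      # equation (2)

v = 1.0 / (1.0 + 1.0 / sigma ** 2)    # exact updated posterior N(m, v) for this toy model
m = v * y_new / sigma ** 2
print(estimate, m ** 2 + v)           # the two numbers agree up to Monte Carlo error
```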
3.1
APPROXIMATION OF FUTURE LOSS
Consider the future loss r̂_{ỹ,x̃}(x_t) given new observation (x̃, ỹ) and trial input x_t. In the case of the square loss function, (1) can be transformed to
\[ \hat r_{\tilde y,\tilde x}(x_t) = \int \bigl[f(x_t, w) - E\bigl(y \mid x_t, D_{(n)} \cup (\tilde x, \tilde y)\bigr)\bigr]^2\, p\bigl(w \mid D_{(n)} \cup (\tilde x, \tilde y)\bigr)\, dw + \int \sigma^2(w)\, p\bigl(w \mid D_{(n)} \cup (\tilde x, \tilde y)\bigr)\, dw \tag{3} \]
where σ²(w) := Var(y | x, w) is independent of x. Assume a set X_T = {x_1, ..., x_T} is given, which is representative of trial inputs for the distribution p(x). Define S(x̃, ỹ) := Σ_{b=1}^{B} p(ỹ | x̃, w_b) for ỹ ∈ Y(x̃,R). Then from equations (2) and (3) we get E(y | x_t, D(n) ∪ (x̃, ỹ)) := 1/S(x̃, ỹ) Σ_{b=1}^{B} f(x_t, w_b) p(ỹ | x̃, w_b) and
\[ \hat r_{\tilde y,\tilde x}(x_t) \approx \frac{1}{S(\tilde x,\tilde y)} \sum_{b=1}^{B} \sigma^2(w_b)\, p(\tilde y \mid \tilde x, w_b) + \frac{1}{S(\tilde x,\tilde y)} \sum_{b=1}^{B} \bigl[f(x_t, w_b) - E\bigl(y \mid x_t, D_{(n)} \cup (\tilde x,\tilde y)\bigr)\bigr]^2 p(\tilde y \mid \tilde x, w_b) \tag{4} \]
The final value of r̂_x̃ is obtained by averaging over the different ỹ ∈ Y(x̃,R) and different trial inputs x_t ∈ X_T. To reduce the variance, the trial inputs x_t should be selected by importance sampling (2) to concentrate them on regions with high current loss (see (5) below). To facilitate the search for an x̃ with minimal r̂_x̃ we reduce the extent of random fluctuations of the ỹ values. Let (v_1, ..., v_R) be a vector of random numbers v_r ~ N(0, 1), and let j_r be randomly selected from {1, ..., B}. Then for each x̃ the possible observations ỹ_r ∈ Y(x̃,R) are defined as ỹ_r := f(x̃, w_{j_r}) + v_r σ(w_{j_r}). In this way the difference between neighboring inputs is not affected by noise, and search procedures can exploit gradients.
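The estimator (4), the averaging over Y(x̃,R) and X_T, and the common-random-numbers trick for the ỹ's fit into a few lines. The sketch below is only a toy stand-in: W is just a random parameter sample and f a small fixed nonlinearity of our own choosing, not a Markov-chain sample from a trained MLP posterior, so only the arithmetic is meant to be faithful.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x, w):                                   # toy regression function f(x, w)
    return w[0] + w[1] * np.tanh(w[2] * x)

B, R = 50, 10
W = rng.standard_normal((B, 3))                # stand-in for a sample from p(w | D(n))
sigma2 = np.full(B, 0.05 ** 2)                 # sigma^2(w_b), here constant
x_trial = rng.uniform(-5, 5, 30)               # trial inputs X_T drawn from p(x)
v = rng.standard_normal(R)                     # common random numbers for the y~'s
j = rng.integers(0, B, R)

def future_loss(x_query):
    """Estimate of r(x_query) by (4), averaged over Y(x_query, R) and X_T."""
    y_futures = f(x_query, W[j].T) + v * np.sqrt(sigma2[j])
    total = 0.0
    for y in y_futures:
        p = np.exp(-0.5 * (y - f(x_query, W.T)) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        S = p.sum()
        for xt in x_trial:
            fx = f(xt, W.T)
            Ey = fx @ p / S                     # E(y | x_t, D(n) u (x~, y~))
            total += (sigma2 @ p + (fx - Ey) ** 2 @ p) / S
    return total / (R * len(x_trial))

candidates = np.linspace(-5, 5, 21)
print("query:", candidates[int(np.argmin([future_loss(c) for c in candidates]))])
```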
3.2
CURRENT LOSS
As a proxy for the future loss, we may use the current loss at x,
\[ r_{curr}(x) = p(x) \int L\bigl(y, d^*(x)\bigr)\, p(y \mid x, D_{(n)})\, dy \tag{5} \]
where p(x) weights the inputs according to their relevance. For the square loss function the average loss at x is the conditional variance Var(y | x, D(n)). We get
\[ r_{curr}(x) = p(x) \int \bigl(f(x, w) - E(y \mid x, D_{(n)})\bigr)^2 p(w \mid D_{(n)})\, dw + p(x) \int \sigma^2(w)\, p(w \mid D_{(n)})\, dw \tag{6} \]
If Ê(y | x, D(n)) ≈ 1/B Σ_{b=1}^{B} f(x, w_b) and the sample W(B) := {w_1, ..., w_B} is representative of p(w | D(n)), we can approximate the current loss with
\[ r_{curr}(x) \approx \frac{p(x)}{B} \sum_{b=1}^{B} \bigl(f(x, w_b) - \hat E(y \mid x, D_{(n)})\bigr)^2 + \frac{p(x)}{B} \sum_{b=1}^{B} \sigma^2(w_b) \tag{7} \]
If the input distribution p(x) is uniform, the second term is independent of x.
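The current loss (7) is simpler to evaluate; the sketch below reuses the toy parameter sample from the previous sketch (again only a stand-in for a real posterior sample) and scans a grid for the input with the largest value.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x, w):                          # same toy regression function as above
    return w[0] + w[1] * np.tanh(w[2] * x)

B = 50
W = rng.standard_normal((B, 3))       # stand-in for a sample from p(w | D(n))
sigma2 = np.full(B, 0.05 ** 2)

def current_loss(x, p_x=1.0):
    """Equation (7): variance of the mean prediction plus the average noise term."""
    fx = f(x, W.T)
    return p_x * (np.mean((fx - fx.mean()) ** 2) + np.mean(sigma2))

grid = np.linspace(-5, 5, 201)
print("query with largest current loss:",
      grid[int(np.argmax([current_loss(x) for x in grid]))])
```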
3.3
COMPLEXITY REGULARIZATION
Neural network models can represent arbitrary mappings between finite-dimensional
spaces if the number of hidden units is sufficiently large [Hornik Stinchcombe 89].
As the number of observations grows, more and more hidden units are necessary to catch the details of the mapping. Therefore we use a sequential procedure to increase the capacity of our networks during query learning. White and
Wooldridge call this approach the "method of sieves" and provide some asymptotic results on its consistency [White Wooldridge 91]. Gelfand and Dey compare Bayesian approaches for model selection and prove that, in the case of nested models M1 and M2, model choice by the ratio of popular Bayes factors p(D(n) | M_i) := ∫ p(D(n) | w, M_i) p(w | M_i) dw will always choose the full model regardless of the data as n → ∞ [Gelfand Dey 94]. They show that the pseudo-Bayes factor, a Bayesian variant of cross-validation, is not affected by this paradox
\[ \lambda(M_1, M_2) := \prod_{j=1}^{n} p\bigl(\tilde y_j \mid x_j, D_{(n,j)}, M_1\bigr) \Big/ \prod_{j=1}^{n} p\bigl(\tilde y_j \mid x_j, D_{(n,j)}, M_2\bigr) \tag{8} \]
Here D(n,j) := D(n) \ (x_j, ỹ_j). As the difference between p(w | D(n)) and p(w | D(n,j)) is usually small, we use the full posterior as the importance function (2) and get
\[ p\bigl(\tilde y_j \mid x_j, D_{(n,j)}, M_i\bigr) = \int p(\tilde y_j \mid x_j, w, M_i)\, p\bigl(w \mid D_{(n,j)}, M_i\bigr)\, dw \;\approx\; B \Big/ \sum_{b=1}^{B} \frac{1}{p(\tilde y_j \mid x_j, w_b, M_i)} \tag{9} \]
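The arithmetic of (8) and (9) is easy to script. In the sketch below the 'posterior sample' is just normal noise standing in for a real Markov-chain sample, and the two candidate models differ only in their assumed noise level, so the numbers mean nothing; the point is only where the harmonic-mean estimator (9) and the product over left-out points (8) enter.

```python
import numpy as np

rng = np.random.default_rng(5)

n, B = 20, 5000
y = rng.standard_normal(n)                         # observed outputs y~_j
w = 0.3 * rng.standard_normal(B)                   # stand-in sample from the full posterior

def log_pseudo_marginal(sigma_model):
    """log prod_j p(y_j | x_j, D(n,j), M_i), each factor estimated by equation (9)."""
    total = 0.0
    for yj in y:
        lik = (np.exp(-0.5 * ((yj - w) / sigma_model) ** 2)
               / (sigma_model * np.sqrt(2 * np.pi)))
        total += np.log(B / np.sum(1.0 / lik))      # harmonic-mean form of (9)
    return total

log_lambda = log_pseudo_marginal(1.0) - log_pseudo_marginal(2.0)   # log of (8)
print("log pseudo-Bayes factor M1 vs M2:", log_lambda)
```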
4
NUMERICAL DEMONSTRATION
In a first experiment we tested the approach for a small 1-2-1 MLP target function with Gaussian noise N(0, 0.05²). We assumed the square loss function and a uniform input distribution p(x) over [-5, 5]. Using the "true" architecture for the approximating model we started with a single randomly generated observation.
Figure 1: Future loss exploration: predicted posterior mean, future loss and current
loss for 12 observations (left), and root mean square error of prediction (right).
We estimated the future loss by (4) for 100 different inputs and selected the input with
smallest future loss as the next query. B = 50 parameter vectors were generated requiring 200,000 Metropolis steps. Simultaneously we approximated the current loss
criterion by (7). The left side of figure 1 shows the typical relation of both measures.
In most situations the future loss is low in the same regions where the current loss
(posterior standard deviation of mean prediction) is high. The queries are concentrated in areas of high variation and the estimated posterior mean approximates
the target function quite well.
In the right part of figure 1 the RMSE of prediction averaged over 12 independent
experiments is shown. After a few observations the RMSE drops sharply. In our
example there is no marked difference between the prediction errors resulting from
the future loss and the current loss criterion (also averaged over 12 experiments).
Considering the substantial computing effort this favors the current loss criterion.
The dots indicate the RMSE for randomly generated data (averaged over 8 experiments) using the same Bayesian prediction procedure. Because only few data points
were located in the critical region of high variation the RMSE is much larger.
In the second experiment, a 2-3-1 MLP defined the target function f(x, w_0), to which Gaussian noise of standard deviation 0.05 was added. f(x, w_0) is shown in the left
part of figure 2. We used five MLPs with 2-6 hidden units as candidate models
M_1, ..., M_5 and generated B = 45 samples W(B) of the posterior p(w | D(n), M_i),
where D(n) is the current data. We started with 30,000 Metropolis steps for small
values of n and increased this to 90,000 Metropolis steps for larger values of n.
For a network with 6 hidden units and n = 50 observations, 10,000 Metropolis
steps took about 30 seconds on a Sparc10 workstation. Next, we used equation (9)
to compare the different models, and then used the optimal model to calculate the
current loss (7) on a regular grid of 41 × 41 = 1681 query points x. Here we assumed the square loss function and a uniform input distribution p(x) over [-5, 5] × [-5, 5].
We selected the query point with maximal current loss and determined the final
query point with a hillclimbing algorithm. In this way we were rather sure to get
close to the true global optimum.
The main result of the experiment is summarized in the right part of figure 2.
[Figure 2, right panel: root mean square error versus number of observations, comparing current-loss exploration with randomly generated data.]
Figure 2: Current loss exploration: MLP target function and root mean square error.
It shows - averaged over 3 experiments - the root mean square error between the true mean value and the posterior mean E(y | x) on the grid of 1681 inputs in relation to the sample size. Three phases of the exploration can be distinguished (see figure 3).
In the beginning a search is performed with many queries on the border of the
input area. After about 20 observations the algorithm knows enough detail about
the true function to concentrate on the relevant parts of the input space. This leads
to a marked reduction of the mean square error. After 40 observations the systematic
part of the true function has been captured nearly perfectly. In the last phase of
the experiment the algorithm merely reduces the uncertainty caused by the random
noise. In contrast, the data generated randomly does not have sufficient information on the details of f(x, w), and therefore the error only gradually decreases. Because
of space constraints we cannot report experiments with radial basis functions which
led to similar results.
Acknowledgements
This work is part of the joint project 'REFLEX' of the German Fed. Department
of Science and Technology (BMFT), grant number 01 IN 111Aj4. We would like to
thank Alexander Linden, Mark Ring, and Frank Weber for many fruitful discussions.
References
[Berger 80] Berger, J. (1980): Statistical Decision Theory, Foundations, Concepts, and
Methods. Springer Verlag, New York.
[Cohn 94] Cohn, D. (1994): Neural Network Exploration Using Optimal Experimental
Design. In J. Cowan et al. (eds.): NIPS 5. Morgan Kaufmann, San Mateo.
[Ford et al. 89] Ford, I., Titterington, D.M., Kitsos, C.P. (1989): Recent Advances in Nonlinear Design. Technometrics, 31, p.49-60.
[Gelfand Dey 94] Gelfand, A.E., Dey, D.K. (1994): Bayesian Model Choice: Asymptotics
and Exact Calculations. J. Royal Statistical Society B, 56, pp.501-514.
Figure 3: Square root of current loss (upper row) and absolute deviation from true function (lower row) for 10, 25, and 40 observations (which are indicated by dots).
[Hornik Stinchcombe 89] Hornik, K., Stinchcombe, M. (1989): Multilayer Feedforward
Networks are Universal Approximators. Neural Networks 2, p.359-366.
[Kalos Whitlock 86] Kalos, M.H., Whitlock, P.A. (1986): Monte Carlo Methods, Wiley,
New York.
[MacKay 92] MacKay, D. (1992): Information-Based Objective Functions for Active Data
Selection. Neural Computation 4, p.590-604.
[Neal 93] Neal, R.M. (1993): Probabilistic Inference using Markov Chain Monte Carlo
Methods. Tech. Report CRG-TR-93-1, Dep. of Computer Science, Univ. of Toronto.
[PaaB 91] PaaB, G. (1991): Second Order Probabilities for Uncertain and Conflicting Evidence. In: P.P. Bonissone et al. (eds.) Uncertainty in Artificial Intelligence 6. Elsevier,
Amsterdam, pp. 447-456.
[Plutowski White 93] Plutowski, M., White, H. (1993): Selecting Concise Training Sets
from Clean Data. IEEE Tr. on Neural Networks, 4, p.305-318.
[Pronzato Walter 92] Pronzato, L., Walter, E. (1992): Nonsequential Bayesian Experimental Design for Response Optimization. In V. Fedorov, W.G. Miiller, I.N. Vuchkov
(eds.): Model Oriented Data-Analysis. Physica Verlag, Heidelberg, p. 89-102.
[Sollich 94] Sollich, P. (1994): Query Construction, Entropy and Generalization in Neural
Network Models. To appear in Physical Review E.
[Sollich Saad 95] Sollich, P., Saad, D. (1995): Learning from Queries for Maximum Information Gain in Unlearnable Problems. This volume.
[White Wooldridge 91] White, H., Wooldridge, J. (1991): Some Results for Sieve Estimation with Dependent Observations. In W. Barnett et al. (eds.): Nonparametric and
Semiparametric Methods in Econometrics and Statistics, New York, Cambridge Univ.
Press.
4 | 1,001 | Neural Network Ensembles, Cross
Validation, and Active Learning
Anders Krogh*
Nordita
Blegdamsvej 17
2100 Copenhagen, Denmark
Jesper Vedelsby
Electronics Institute, Building 349
Technical University of Denmark
2800 Lyngby, Denmark
Abstract
Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity
is defined as the variation of the output of ensemble members averaged over unlabeled data, so it quantifies the disagreement among
the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble
generalization error, and how this type of ensemble cross-validation
can sometimes improve performance. It is shown how to estimate
the optimal weights of the ensemble members using unlabeled data.
By a generalization of query by committee, it is finally shown how
the ambiguity can be used to select new training data to be labeled
in an active learning scheme.
1
INTRODUCTION
It is well known that a combination of many different predictors can improve predictions. In the neural networks community "ensembles" of neural networks have been investigated by several authors, see for instance [1, 2, 3]. Most often the networks in the ensemble are trained individually and then their predictions are combined. This combination is usually done by majority (in classification) or by simple averaging (in regression), but one can also use a weighted combination of the networks.
* Author to whom correspondence should be addressed. Email: krogh@nordita.dk
At the workshop after the last NIPS conference (December, 1993) an entire session
was devoted to ensembles of neural networks ("Putting it all together", chaired by Michael Perrone). Many interesting papers were given, and it showed that this area is getting a lot of attention.
A combination of the output of several networks (or other predictors) is only useful
if they disagree on some inputs. Clearly, there is no more information to be gained
from a million identical networks than there is from just one of them (see also
[2]). By quantifying the disagreement in the ensemble it turns out to be possible
to state this insight rigorously for an ensemble used for approximation of real-valued functions (regression). The simple and beautiful expression that relates the
disagreement (called the ensemble ambiguity) and the generalization error is the
basis for this paper, so we will derive it with no further delay.
2
THE BIAS-VARIANCE TRADEOFF
Assume the task is to learn a function f from R^N to R for which you have a sample of p examples, (x^µ, y^µ), where y^µ = f(x^µ) and µ = 1, ..., p. These examples are assumed to be drawn randomly from the distribution p(x). Anything in the
following is easy to generalize to several output variables.
The ensemble consists of N networks and the output of network α on input x is called V^α(x). A weighted ensemble average is denoted by a bar, like
\[ \bar V(x) = \sum_\alpha w_\alpha V^\alpha(x). \tag{1} \]
This is the final output of the ensemble. We think of the weight w_α as our belief in network α and therefore constrain the weights to be positive and sum to one. The constraint on the sum is crucial for some of the following results.
The ambiguity on input x of a single member of the ensemble is defined as a^α(x) = (V^α(x) − V̄(x))². The ensemble ambiguity on input x is
\[ \bar a(x) = \sum_\alpha w_\alpha\, a^\alpha(x) = \sum_\alpha w_\alpha \bigl(V^\alpha(x) - \bar V(x)\bigr)^2. \tag{2} \]
It is simply the variance of the weighted ensemble around the weighted mean, and it measures the disagreement among the networks on input x. The quadratic error of network α and of the ensemble are
\[ \epsilon^\alpha(x) = \bigl(f(x) - V^\alpha(x)\bigr)^2, \tag{3} \]
\[ e(x) = \bigl(f(x) - \bar V(x)\bigr)^2, \tag{4} \]
respectively. Adding and subtracting f(x) in (2) yields
\[ \bar a(x) = \sum_\alpha w_\alpha\, \epsilon^\alpha(x) - e(x) \tag{5} \]
(after a little algebra using that the weights sum to one). Calling the weighted average of the individual errors ε̄(x) = Σ_α w_α ε^α(x), this becomes
\[ e(x) = \bar\epsilon(x) - \bar a(x). \tag{6} \]
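Equation (6) is an exact identity whenever the weights are positive and sum to one, which a short numerical check makes vivid; the numbers in the sketch below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

N = 5
w = rng.random(N); w /= w.sum()            # positive weights summing to one
V = rng.standard_normal(N)                 # ensemble outputs V^alpha(x) at one input x
f_x = 0.7                                  # target value f(x)

V_bar = w @ V
ambiguity = w @ (V - V_bar) ** 2           # a(x), equation (2)
mean_error = w @ (f_x - V) ** 2            # weighted average of the individual errors
ensemble_error = (f_x - V_bar) ** 2        # e(x)

print(ensemble_error, mean_error - ambiguity)   # equal up to round-off, equation (6)
```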
All these formulas can be averaged over the input distribution. Averages over the input distribution will be denoted by capital letters, so
\[ E^\alpha = \int dx\, p(x)\, \epsilon^\alpha(x), \tag{7} \]
\[ A^\alpha = \int dx\, p(x)\, a^\alpha(x), \tag{8} \]
\[ E = \int dx\, p(x)\, e(x). \tag{9} \]
The first two of these are the generalization error and the ambiguity respectively for network α, and E is the generalization error for the ensemble. From (6) we then find for the ensemble generalization error
\[ E = \bar E - \bar A. \tag{10} \]
The first term on the right is the weighted average of the generalization errors of the individual networks (Ē = Σ_α w_α E^α), and the second is the weighted average of the ambiguities (Ā = Σ_α w_α A^α), which we refer to as the ensemble ambiguity.
The beauty of this equation is that it separates the generalization error into a term that depends on the generalization errors of the individual networks and another term that contains all correlations between the networks. Furthermore, the correlation term Ā can be estimated entirely from unlabeled data, i.e., no knowledge is
required of the real function to be approximated. The term "unlabeled example" is
borrowed from classification problems, and in this context it means an input x for
which the value of the target function f( x) is unknown.
Equation (10) expresses the tradeoff between bias and variance in the ensemble, but in a different way than the common bias-variance relation [4] in which the averages are over possible training sets instead of ensemble averages. If the ensemble is strongly biased the ambiguity will be small, because the networks implement very similar functions and thus agree on inputs even outside the training set. Therefore the generalization error will be essentially equal to the weighted average of the generalization errors of the individual networks. If, on the other hand, there is a large variance, the ambiguity is high and in this case the generalization error will be smaller than the average generalization error. See also [5].
From this equation one can immediately see that the generalization error of the ensemble is always smaller than the (weighted) average of the ensemble errors, E ≤ Ē. In particular for uniform weights:
\[ E \le \frac{1}{N} \sum_\alpha E^\alpha, \tag{11} \]
which has been noted by several authors, see e.g. [3].
3
THE CROSS-VALIDATION ENSEMBLE
From (10) it is obvious that increasing the ambiguity (while not increasing individual
generalization errors) will improve the overall generalization. We want the networks
to disagree! How can we increase the ambiguity of the ensemble? One way is to
use different types of approximators like a mixture of neural networks of different
topologies or a mixture of completely different types of approximators. Another
Figure 1: An ensemble of five networks was trained to approximate the square wave target function f(x). The final ensemble output (solid smooth curve) and the outputs of the individual networks (dotted curves) are shown. Also the square root of the ambiguity is shown (dash-dot line). For training 200 random examples were used, but each network had a cross-validation set of size 40, so they were each trained on 160 examples.
obvious way is to train the networks on different training sets. Furthermore, to be
able to estimate the first term in (10) it would be desirable to have some kind of
cross-validation. This suggests the following strategy.
Choose a number K ≤ p. For each network in the ensemble hold out K examples for testing, where the N test sets should have minimal overlap, i.e., the N training sets should be as different as possible. If, for instance, K ≤ p/N it is possible to choose the K test sets with no overlap. This enables us to estimate the generalization error E^α of the individual members of the ensemble, and at the same time make sure that the ambiguity increases. When holding out examples the generalization errors for the individual members of the ensemble, E^α, will increase, but the conjecture is that for a good choice of the size of the ensemble (N) and the test set size (K), the ambiguity will increase more and thus one will get a decrease in overall generalization error.
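A sketch of the data-splitting step is given below (the indexing scheme is our own choice; with p = 200, N = 5 and K = 40 the hold-out sets come out disjoint, matching the experiment described next).

```python
import numpy as np

rng = np.random.default_rng(7)

def cv_ensemble_splits(p, N, K):
    """Give each of the N ensemble members a K-example hold-out set with minimal overlap."""
    idx = rng.permutation(p)
    splits = []
    for a in range(N):
        start = (a * K) % p
        test = idx[np.arange(start, start + K) % p]   # wraps around only if N*K > p
        train = np.setdiff1d(np.arange(p), test)
        splits.append((train, test))
    return splits

for train, test in cv_ensemble_splits(p=200, N=5, K=40):
    print(len(train), "training examples,", len(test), "held out")
```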
This conjecture has been tested experimentally on a simple square wave function
of one variable shown in Figure 1. Five identical feed-forward networks with one
hidden layer of 20 units were trained independently by back-propagation using 200
random examples. For each network a cross-validation set of K examples was held
out for testing as described above. The "true" generalization and the ambiguity were
estimated from a set of 1000 random inputs. The weights were uniform, w(X
1/5
(non-uniform weights are addressed later).
=
In Figure 2 average results over 12 independent runs are shown for some values of
Figure 2: The solid line shows the generalization error for uniform weights as
a function of K, where K is the size
of the cross-validation sets. The dotted
line is the error estimated from equation (10) . The dashed line is for the
optimal weights estimated by the use of
the generalization errors for the individual networks estimated from the cross-validation sets as described in the text.
The bottom solid line is the generalization error one would obtain if the individual generalization errors were known
exactly (the best possible weights).
K (top solid line). First, one should note that the generalization error is the same for a cross-validation set of size 40 as for size 0, although not lower, so it supports the conjecture in a weaker form. However, we have done many experiments, and depending on the experimental setup the curve can take on almost any form; sometimes the error is larger at zero than at 40 or vice versa. In the experiments shown, only ensembles with at least four converging networks out of five were used. If all the ensembles were kept, the error would have been significantly higher at K = 0 than for K > 0 because in about half of the runs none of the networks in the ensemble converged - something that seldom happened when a cross-validation set was used. Thus it is still unclear under which circumstances one can expect a drop
The dotted line in Figure 2 is the error estimated from equation (10) using the
cross-validation sets for each of the networks to estimate Ea, and one notices a
good agreement.
4
OPTIMAL WEIGHTS
The weights w_α can be estimated as described in e.g. [3]. We suggest instead to use unlabeled data and estimate them in such a way that they minimize the generalization error given in (10).
There is no analytical solution for the weights, but something can be said about
the minimum point of the generalization error. Calculating the derivative of E as
given in (10) subject to the constraints on the weights and setting it equal to zero
shows that
\[ E^\alpha - A^\alpha = E \qquad \text{or} \qquad w_\alpha = 0. \tag{12} \]
(The calculation is not shown because of space limitations, but it is easy to do.)
That is, E^α − A^α has to be the same for all the networks. Notice that A^α depends on the weights through the ensemble average of the outputs. It shows that the optimal weights have to be chosen such that each network contributes exactly w_α E to the generalization error. Note, however, that a member of the ensemble can have
such a poor generalization or be so correlated with the rest of the ensemble that it
is optimal to set its weight to zero.
The weights can be "learned" from unlabeled examples, e.g. by gradient descent
minimization of the estimate of the generalization error (10). A more efficient
approach to finding the optimal weights is to turn it into a quadratic optimization
problem. That problem is non-trivial only because of the constraints on the weights
(L:a Wa = 1 and Wa 2:: 0). Define the correlation matrix,
C af3
=
f
dxp(x)V a (x)V f3 (x) .
(13)
Then, using that the weights sum to one, equation (10) can be rewritten as
\[ E = \sum_\alpha w_\alpha E^\alpha + \sum_{\alpha\beta} w_\alpha C^{\alpha\beta} w_\beta - \sum_\alpha w_\alpha C^{\alpha\alpha}. \tag{14} \]
Having estimates of E^α and C^{αβ} the optimal weights can be found by linear programming or other optimization techniques. Just like the ambiguity, the correlation matrix can be estimated from unlabeled data to any accuracy needed (provided that the input distribution p is known).
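One way to carry out that optimization is sketched below; the exponentiated-gradient updates are our own choice (the text above only says that linear programming or other optimization techniques will do), and the error estimates and member outputs are random placeholders. The multiplicative updates keep the weights positive and summing to one while decreasing the objective (14).

```python
import numpy as np

rng = np.random.default_rng(8)

N = 5
E = rng.uniform(0.02, 0.08, N)            # placeholder estimates of E^alpha
V = rng.standard_normal((N, 1000))        # member outputs on 1000 unlabeled inputs
C = V @ V.T / V.shape[1]                  # correlation matrix, equation (13)

def ensemble_error(w):                    # objective (14)
    return w @ E + w @ C @ w - w @ np.diag(C)

w = np.full(N, 1.0 / N)
for _ in range(2000):
    grad = E + 2 * C @ w - np.diag(C)     # gradient of (14)
    w = w * np.exp(-0.2 * grad)           # multiplicative step keeps w > 0 ...
    w /= w.sum()                          # ... and renormalizing keeps sum(w) = 1

print("weights:", np.round(w, 3), " error:", round(ensemble_error(w), 4))
```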
In Figure 2 the results from an experiment with weight optimization are shown.
The dashed curve shows the generalization error when the weights are optimized as described above using the estimates of E^α from the cross-validation (on K examples). The lowest solid curve is for the idealized case, when it is assumed that the errors E^α are known exactly, so it shows the lowest possible error. The performance
It is important to notice that any estimate of the generalization error of the individual networks can be used in equation (14). If one is certain that the individual
networks do not overfit, one might even use the training errors as estimates for
Ea (see [3]). It is also possible to use some kind of regularization in (14), if the
cross-validation sets are small.
5
ACTIVE LEARNING
In some neural network applications it is very time consuming and/or expensive
to acquire training data, e.g., if a complicated measurement is required to find the
value of the target function for a certain input. Therefore it is desirable to only use
examples with maximal information about the function. Methods where the learner
points out good examples are often called active learning.
We propose a query-based active learning scheme that applies to ensembles of networks with continuous-valued output. It is essentially a generalization of query by
committee [6, 7] that was developed for classification problems. Our basic assumption is that those patterns in the input space yielding the largest error are those
points we would benefit the most from including in the training set.
Since the generalization error is always non-negative, we see from (6) that the weighted average of the individual network errors is always larger than or equal to the ensemble ambiguity,
\[ \bar\epsilon(x) \ge \bar a(x), \tag{15} \]
Figure 3: In both plots the full line shows the average generalization for active
learning, and the dashed line for passive learning as a function of the number of
training examples. The dots in the left plot show the results of the individual
experiments contributing to the mean for the active learning. The dots in right plot
show the same for passive learning.
which tells us that the ambiguity is a lower bound for the weighted average of the
squared error. An input pattern that yields a large ambiguity will always have a
large average error. On the other hand, a low ambiguity does not necessarily imply
a low error. If the individual networks are trained to a low training error on the
same set of examples then both the error and the ambiguity are low on the training
points. This ensures that a pattern yielding a large ambiguity cannot be in the close
neighborhood of a training example. The ambiguity will to some extent follow the
fluctuations in the error. Since the ambiguity is calculated from unlabeled examples
the input-space can be scanned for these areas to any detail. These ideas are well
illustrated in Figure 1, where the correlation between error and ambiguity is quite
strong, although not perfect.
The results of an experiment with the active learning scheme are shown in Figure 3.
An ensemble of 5 networks was trained to approximate the square-wave function
shown in Figure 1, but in this experiment the function was restricted to the interval
from - 2 to 2. The curves show the final generalization error of the ensemble in a
passive (dashed line) and an active learning test (solid line). For each training set
size 2x40 independent tests were made, all starting with the same initial training
set of a single example. Examples were generated and added one at a time. In the
passive test examples were generated at random, and in the active one each example
was selected as the input that gave the largest ambiguity out of 800 random ones.
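As an illustration of that selection step, the sketch below scores candidate inputs by the ensemble ambiguity (which needs no labels) and returns the best of 800 random candidates. The callable `members`, the weight vector `w` and the candidate range are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ensemble_ambiguity(x, members, w):
    """Ambiguity a(x) = sum_a w_a (V^a(x) - Vbar(x))^2, computed from the input
    alone (no label needed). `members` are callables returning scalar outputs."""
    V = np.array([m(x) for m in members])
    Vbar = w @ V
    return w @ (V - Vbar) ** 2

def select_query(members, w, n_candidates=800, low=-2.0, high=2.0, rng=None):
    """Return the candidate input on which the ensemble disagrees the most."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = rng.uniform(low, high, size=n_candidates)
    scores = [ensemble_ambiguity(x, members, w) for x in candidates]
    return candidates[int(np.argmax(scores))]
```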
Figure 3 also shows the distribution of the individual results of the active and
passive learning tests. Not only do we obtain significantly better generalization by
active learning, there is also less scatter in the results. It seems to be easier for the
ensemble to learn from the actively generated set.
6 CONCLUSION
The central idea in this paper was to show that there is a lot to be gained from
using unlabeled data when training in ensembles. Although we dealt with neural
networks, all the theory holds for any other type of method used as the individual
members of the ensemble.
It was shown that apart from getting the individual members of the ensemble to
generalize well, it is important for generalization that the individuals disagree as
much as possible, and we discussed one method to make even identical networks
disagree. This was done by training the individuals on different training sets by
holding out some examples for each individual during training. This had the added
advantage that these examples could be used for testing, and thereby one could
obtain good estimates of the generalization error.
It was discussed how to find the optimal weights for the individuals of the ensemble.
For our simple test problem the weights found improved the performance of the
ensemble significantly.
Finally a method for active learning was described, which was based on the method
of query by committee developed for classification problems. The idea is that if the
ensemble disagrees strongly on an input, it would be good to find the label for that
input and include it in the training set for the ensemble. It was shown how active
learning improves the learning curve a lot for a simple test problem.
Acknowledgements
We would like to thank Peter Salamon for numerous discussions and for his implementation of linear programming for optimization of the weights. We also thank
Lars Kai Hansen for many discussions and great insights, and David Wolpert for
valuable comments.
References
[1] L. K. Hansen and P. Salamon. Neural network ensembles. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 12(10):993-1001, Oct. 1990.
[2] D. H. Wolpert. Stacked generalization. Neural Networks, 5(2):241-259, 1992.
[3] Michael P. Perrone and Leon N. Cooper. When networks disagree: Ensemble method
for neural networks. In R. J. Mammone, editor, Neural Networks for Speech and Image
Processing. Chapman-Hall, 1993.
[4] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance
dilemma. Neural Computation, 4(1):1-58, Jan. 1992.
[5] Ronny Meir. Bias, variance and the combination of estimators; the case of linear least
squares. Preprint (in Neuroprose), Technion, Haifa, Israel, 1994.
[6] H.S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In Proceedings of
the Fifth Workshop on Computational Learning Theory, pages 287-294, San Mateo,
CA, 1992. Morgan Kaufmann.
[7] Y. Freund, H.S. Seung, E. Shamir, and N. Tishby. Information, prediction, and query
by committee. In Advances in Neural Information Processing Systems, volume 5, San
Mateo, California, 1993. Morgan Kaufmann.
| 1001 |@word seems:1 thereby:1 solid:6 electronics:1 initial:1 scatter:1 enables:1 drop:1 plot:3 half:1 selected:1 intelligence:1 af3:4 five:3 consists:1 little:1 increasing:2 becomes:1 provided:1 lowest:2 israel:1 kind:2 developed:2 finding:1 ti:1 exactly:3 unit:1 positive:1 fluctuation:1 might:1 chose:1 mateo:2 suggests:1 co:1 averaged:2 testing:3 implement:1 jan:1 area:2 significantly:3 suggest:1 get:1 cannot:1 unlabeled:9 close:1 context:1 ronny:1 weighed:1 attention:1 starting:1 independently:1 immediately:1 insight:2 estimator:1 his:1 variation:1 target:3 shamir:1 programming:2 agreement:1 approximated:1 expensive:1 geman:1 labeled:1 bottom:1 preprint:1 ensures:1 sompolinsky:1 decrease:1 valuable:1 seung:2 rigorously:1 trained:6 algebra:1 dilemma:1 learner:1 completely:1 basis:1 train:1 stacked:1 jesper:5 query:6 tell:1 outside:1 neighborhood:1 mammone:1 quite:2 larger:2 valued:2 kai:1 think:1 final:3 dxp:4 advantage:1 analytical:1 propose:1 subtracting:1 maximal:1 getting:2 crossvalidation:1 perfect:1 derive:1 depending:1 borrowed:1 strong:1 krogh:5 lars:1 generalization:42 yij:2 hold:2 around:1 hall:1 great:1 estimation:1 label:1 hansen:2 individually:1 largest:2 vice:1 weighted:10 minimization:1 clearly:1 always:4 beauty:1 improvement:1 anders:5 entire:1 hidden:1 relation:1 bienenstock:1 overall:2 among:2 classification:4 denoted:2 equal:3 f3:2 having:1 chapman:1 identical:3 randomly:1 individual:22 mixture:2 yielding:2 devoted:1 held:1 minimal:1 instance:2 predictor:2 uniform:4 delay:1 technion:1 tishby:1 combined:1 michael:2 together:1 squared:1 ambiguity:28 central:1 choose:1 derivative:1 actively:1 elk:1 depends:2 idealized:1 later:1 root:1 lot:3 wave:3 complicated:1 minimize:1 square:5 accuracy:2 variance:7 kaufmann:2 ensemble:58 yield:2 wae:1 generalize:2 dealt:1 none:1 converged:1 email:1 obvious:2 vedelsby:5 knowledge:1 improves:1 ea:7 back:1 salamon:2 feed:1 higher:1 follow:1 improved:2 done:3 strongly:2 furthermore:2 just:2 correlation:5 overfit:1 hand:2 propagation:1 building:1 contain:1 true:1 regularization:1 illustrated:1 during:1 noted:1 anything:1 passive:5 image:1 common:1 volume:1 million:1 discussed:3 refer:1 measurement:1 versa:1 cv:1 seldom:1 session:1 had:2 dot:3 something:2 showed:1 apart:1 certain:2 approximators:2 morgan:2 minimum:1 dashed:4 relates:1 full:1 desirable:2 smooth:1 technical:1 calculation:1 cross:18 va:3 prediction:3 converging:1 regression:2 basic:1 essentially:2 circumstance:1 sometimes:2 want:1 addressed:2 interval:1 crucial:1 biased:1 rest:1 doursat:1 sure:1 comment:1 subject:1 member:8 december:1 easy:2 gave:1 topology:1 idea:3 tradeoff:2 x40:1 expression:1 peter:1 speech:1 useful:1 meir:1 xij:2 notice:3 dotted:3 happened:1 estimated:8 nordita:1 express:1 lwa:1 putting:1 four:1 drawn:1 capital:1 kept:1 sum:4 run:2 letter:1 you:1 almost:1 entirely:1 layer:1 bound:1 dash:1 correspondence:1 quadratic:2 scanned:1 constraint:3 constrain:1 calling:1 leon:1 conjecture:3 combination:6 perrone:2 poor:1 smaller:2 restricted:1 neuroprose:1 lyngby:1 equation:7 agree:1 turn:2 pin:1 committee:6 needed:1 rewritten:1 disagreement:4 top:1 include:1 calculating:1 added:2 fa:1 strategy:1 unclear:1 said:1 gradient:1 separate:1 thank:2 blegdamsvej:1 majority:1 whom:1 extent:1 trivial:1 denmark:3 convincing:1 acquire:1 setup:1 holding:2 negative:1 implementation:1 unknown:1 disagree:4 descent:1 rn:1 community:1 david:1 copenhagen:1 required:2 optimized:1 california:1 learned:1 nip:1 able:1 bar:1 usually:1 pattern:4 reliable:2 including:1 belief:1 overlap:2 
beautiful:1 scheme:3 improve:3 imply:1 numerous:1 realvalued:1 text:1 disagrees:2 acknowledgement:1 contributing:1 freund:1 expect:1 interesting:1 limitation:1 validation:18 editor:1 last:1 bias:5 weaker:1 institute:1 fifth:1 benefit:1 curve:7 calculated:1 opper:1 author:3 forward:1 made:1 san:2 transaction:1 approximate:2 active:18 assumed:2 consuming:1 continuous:2 quantifies:1 learn:2 ca:1 contributes:1 investigated:1 necessarily:1 fashion:1 cooper:1 formula:1 workshop:2 adding:1 gained:2 easier:1 wolpert:2 simply:1 applies:1 aa:5 oct:1 quantifying:1 experimentally:1 averaging:1 called:3 experimental:1 la:3 select:1 support:1 tested:1 correlated:1 |
5 | 1,002 | Using a neural net to instantiate a
deformable model
Christopher K. I. Williams*, Michael D. Revow and Geoffrey E. Hinton
Department of Computer Science, University of Toronto
Toronto, Ontario, Canada M5S 1A4
Abstract
Deformable models are an attractive approach to recognizing nonrigid objects which have considerable within class variability. However, there are severe search problems associated with fitting the
models to data. We show that by using neural networks to provide
better starting points, the search time can be significantly reduced.
The method is demonstrated on a character recognition task.
In previous work we have developed an approach to handwritten character recognition based on the use of deformable models (Hinton, Williams and Revow, 1992a;
Revow, Williams and Hinton, 1993). We have obtained good performance with this
method, but a major problem is that the search procedure for fitting each model to
an image is very computationally intensive, because there is no efficient algorithm
(like dynamic programming) for this task. In this paper we demonstrate that it is
possible to "compile down" some of the knowledge gained while fitting models to
data to obtain better starting points that significantly reduce the search time.
1 DEFORMABLE MODELS FOR DIGIT RECOGNITION
The basic idea in using deformable models for digit recognition is that each digit has
a model, and a test image is classified by finding the model which is most likely to
have generated it. The quality of the match between model and test image depends
on the deformation of the model, the amount of ink that is attributed to noise and
the distance of the remaining ink from the deformed model.
*Current address: Department of Computer Science and Applied Mathematics, Aston
University, Birmingham B4 7ET, UK.
More formally, the two important terms in assessing the fit are the prior probability distribution for the instantiation parameters of a model (which penalizes very
distorted models), and the imaging model that characterizes the probability distribution over possible images given the instantiated model¹. Let I be an image, M
be a model and z be its instantiation parameters. Then the evidence for model M
is given by
P(I|M) = ∫ P(z|M) P(I|M, z) dz    (1)
The first term in the integrand is the prior on the instantiation parameters and the
second is the imaging model i.e., the likelihood of the data given the instantiated
model. P(M|I) is directly proportional to P(I|M), as we assume a uniform prior
on each digit.
Equation 1 is formally correct, but if z has more than a few dimensions the evaluation of this integral is very computationally intensive. However, it is often possible
to make an approximation based on the assumption that the integrand is strongly
peaked around a (global) maximum value z*. In this case, the evidence can be approximated by the highest peak of the integrand times a volume factor Δ(z|I, M),
which measures the sharpness of the peak².

P(I|M) ≈ P(z*|M) P(I|z*, M) Δ(z|I, M)    (2)
By Taylor expanding around z* to second order it can be shown that the volume
factor depends on the determinant of the Hessian of log P(z, I|M). Taking logs
of equation 2, defining E_def as the negative log of P(z*|M), and E_fit as the corresponding term for the imaging model, the aim of the search is to find the
minimum of E_tot = E_def + E_fit. Of course the total energy will have many local
minima; for the character recognition task we aim to find the global minimum by
using a continuation method (see section 1.2).
1.1 SPLINES, AFFINE TRANSFORMS AND IMAGING MODELS
This section presents a brief overview of our work on using deformable models for
digit recognition. For a fuller treatment, see Revow, Williams and Hinton (1993) .
Each digit is modelled by a cubic B-spline whose shape is determined by the positions of the control points in the object-based frame. The models have eight control
points, except for the one model which has three, and the seven model which has
five. To generate an ideal example of a digit the control points are positioned at
their "home" locations. Deformed characters are produced by perturbing the control points away from their home locations. The home locations and covariance
matrix for each model were adapted in order to improve the performance.
The deformation energy only penalizes shape deformations. Affine transformations,
i.e., translation, rotation, dilation, elongation, and shear, do not change the underlying shape of an object so we want the deformation energy to be invariant under
them . We achieve this by giving each model its own "object-based frame" and
computing the deformation energy relative to this frame.
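As a rough sketch of how such a shape prior can be scored, the snippet below treats the deformation energy as a Gaussian (Mahalanobis) penalty on control-point displacements from their home locations, evaluated in the object-based frame so that affine pose is not penalised. The exact parameterisation used for the models may differ, so this form and the argument names are assumptions.

```python
import numpy as np

def deformation_energy(control_points, home_points, cov_inv):
    """E_def as a Gaussian penalty on control-point displacements, measured in the
    object-based frame (shape only; the affine pose has already been factored out).
    control_points, home_points: arrays of shape (n_points, 2)."""
    d = (control_points - home_points).ravel()
    return 0.5 * d @ cov_inv @ d
```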
¹This framework has been used by many authors, e.g. Grenander et al. (1991).
²The Gaussian approximation has been popularized in the neural net community by
MacKay (1992).
The data we used consists of binary-pixel images of segmented handwritten digits.
The general flavour of an imaging model for this problem is that there should be a
high probability of inked pixels close to the spline, and lower probabilities further
away. This can be achieved by spacing out a number of Gaussian "ink generators"
uniformly along the contour; we have found that it is also useful to have a uniform
background noise process over the area of the image that is able to account for
pixels that occur far away from the generators. The ink generators and background
process define a mixture model. Using the assumption that each data point is
generated independently given the instantiated model, P(I|z*, M) factors into the
product of the probability density of each black pixel under the mixture model.
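A minimal sketch of this imaging model as a negative log-likelihood is given below; the mixing proportion, the use of a single shared variance, and the argument names are illustrative assumptions rather than the paper's exact values.

```python
import numpy as np

def fit_energy(ink_pixels, beads, variance, image_area, noise_mix=0.05):
    """E_fit = -sum_i log p(pixel_i): each inked pixel is explained by a mixture of
    isotropic Gaussian "ink generators" (beads) placed along the spline plus a
    uniform background noise process over the image area."""
    d2 = ((ink_pixels[:, None, :] - beads[None, :, :]) ** 2).sum(axis=-1)
    gauss = np.exp(-d2 / (2.0 * variance)) / (2.0 * np.pi * variance)
    p = (1.0 - noise_mix) * gauss.mean(axis=1) + noise_mix / image_area
    return -np.log(p).sum()
```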
1.2 RECOGNIZING ISOLATED DIGITS
For each model, the aim of the search is to find the instantiation parameters that
minimize E_tot. The search starts with zero deformations and an initial guess for
the affine parameters which scales the model so as to lie over the data with zero
skew and rotation. A small number of generators with the same large variance are
placed along the spline, forming a broad, smooth ridge of high ink-probability along
the spline. We use a search procedure similar to the (iterative) Expectation Maximization (EM) method of fitting an unconstrained mixture of Gaussians, except
that (i) the Gaussians are constrained to lie on the spline (ii) there is a deformation energy term and (iii) the affine transformation must be recalculated on each
iteration. During the search the number of generators is gradually increased while
their variance decreases according to a predetermined "annealing" schedule³.
After fitting all the models to a particular image, we wish to evaluate which of the
models best "explains" the data. The natural measure is the sum of Ejit, Edej
and the volume factor. However, we have found that performance is improved by
including four additional terms which are easily obtained from the final fits of the
model to the image. These are (i) a measure which penalizes matches in which
there are beads far from any inked pixels (the "beads in white space" problem),
and (ii) the rotation, shear and elongation of the affine transform. It is hard to
decide in a principled way on the correct weightings for all of these terms in the
evaluation function. We estimated the weightings from the data by training a
simple postprocessing neural network. These inputs are connected directly to the
ten output units. The output units compete using the "softmax" function which
guarantees that they form a probability distribution, summing to one.
2 PREDICTING THE INSTANTIATION PARAMETERS
The search procedure described above is very time consuming. However, given many
examples of images and the corresponding instantiation parameters obtained by the
slow method, it is possible to train a neural network to predict the instantiation
parameters of novel images. These predictions provide better starting points, so the
search time can be reduced.
³The schedule starts with 8 beads increasing to 60 beads in six steps, with the variance
decreasing from 0.04 to 0.0006 (measured in the object frame). The scale is set in the
object-based frame so that each model is 1 unit high.
2.1 PREVIOUS WORK
Previous work on hypothesizing instantiation parameters can be placed into two
broad classes, correspondence based search and parameter space search. In correspondence based search, the idea is to extract features from the image and identify
corresponding features in the model. Using sufficient correspondences the instantiation parameters of the model can be determined. The problem is that simple, easily
detectable image features have many possible matches, and more complex features
require more computation and are more difficult to detect. Grimson (1990) shows
how to search the space of possible correspondences using an interpretation tree.
An alternative approach, which is used in Hough transform techniques, is to directly work in parameter space. The Hough transform was originally designed for
the detection of straight lines in images, and has been extended to cover a number
of geometric shapes, notably conic sections. Ballard (1981) further extended the
approach to arbitrary shapes with the Generalized Hough Transform . The parameter space for each model is divided into cells ("binned"), and then for each image
feature a vote is added to each parameter space bin that could have produced that
feature. After collecting votes from all image features we then search for peaks in
the parameter space accumulator array, and attempt to verify pose. The Hough
transform can be viewed as a crude way of approximating the logarithm of the
posterior distribution P(z|I, M) (e.g. Hunt et al., 1988).
However, these two techniques have only been used on problems involving rigid
models, and are not readily applicable to the digit recognition problem. For the
Hough space method, binning and vote collection is impractical in the high dimensional parameter space, and for the correspondence based approach there is a
lack of easily identified and highly discriminative features. The strengths of these
two techniques, namely their ability to deal with arbitrary scalings, rotations and
translations of the data, and their tolerance of extraneous features, are not really
required for a task where the input data is fairly well segmented and normalized.
Our approach is to use a neural network to predict the instantiation parameters for
each model, given an input image. Zemel and Hinton (1991) used a similar method
with simple 2-d objects, and more recently, Beymer et al (1993) have constructed
a network which maps from a face image to a 2-d parameter space spanning head
rotations and a smile/no-smile dimension. However, their method does not directly
map from images to instantiation parameters; they use a computer vision correspondence algorithm to determine the displacement field of pixels in a novel image
relative to a reference image, and then use this field as the input to the network.
This step limits the use of the approach to images that are sufficiently similar so
that the correspondence algorithm functions well.
2.2 INSTANTIATING DIGIT MODELS USING NEURAL NETWORKS
The network which is used to predict the model instantiation parameters is shown
in figure 1. The (unthinned) binary images are normalized to give 16 x 16 8-bit
greyscale images which are fed into the neural network. The network uses a standard
three-layer architecture; each hidden unit computes a weighted sum of its inputs,
and then feeds this value through a sigmoidal nonlinearity σ(x) = 1/(1 + e^{-x}). The
[Figure 1 diagram: ten groups of output units labeled "cps for 0 model" through "cps for 9 model".]
Figure 1: The prediction network architecture. "cps" stands for control points.
output values are a weighted linear combination of the hidden unit activities plus
output biases. The targets are the locations of the control points in the normalized
image, found from fitting models as described in section 1.2.
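A software sketch of such a prediction network is given below for a single digit model; the 256-20-16 shapes, the initialisation and the restriction to one model (the real network predicts control points for all ten models from one shared hidden layer) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ControlPointPredictor:
    """256-20-16 sketch: a 16x16 greyscale image in, logistic hidden units, and a
    linear output giving (x, y) for 8 control points of one digit model."""

    def __init__(self, n_in=256, n_hidden=20, n_out=16, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def predict(self, image):
        h = sigmoid(self.W1 @ image.ravel() + self.b1)  # hidden activities a_i(I)
        return self.W2 @ h + self.b2                    # control points in the normalized image
```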
The network was trained with backpropagation to minimize the squared error, using
900 training images and 200 validation images of each digit drawn from the br
set of the CEDAR CDROM 1 database of Cities, States, ZIP Codes, Digits, and
Alphabetic Characters⁴. Two test sets were used; one was obtained from data in the
br dataset, and the other was the (official) bs test set. After some experimentation
we chose a network with twenty hidden units, which means that the net has over
8,000 weights. With such a large number of weights it is important to regularize the
solution obtained by the network by using a complexity penalty; we used a weight
penalty λ Σ_j w_j² and optimized λ on a validation set. Targets were only set for the
correct digit at the output layer; nothing was backpropagated from the other output
units. The net took 440 epochs to train using the default conjugate gradient search
method in the Xerion neural network simulator⁵. It would be possible to construct
ten separate networks to carry out the same task as the net described above, but
this would intensify the danger of overfitting, which is reduced by giving the network
a common pool of hidden units which it can use as it decides appropriate.
For comparison with the prediction net described above, a trivial network which
just consisted of output biases was trained; this network simply learns the average
value of the control point locations. On a validation set the squared error of the
prediction net was over three times smaller than the trivial net. Although this is
encouraging, the acid test is to compare the performance of elastic models settled
from the predicted positions using a shortened annealing schedule; if the predictions
are good, then only a short amount of settling will be required.
⁴Made available by the United States Postal Service Office of Advanced Technology.
⁵Xerion was designed and implemented by Drew van Camp, Tony Plate and Geoffrey
Hinton at the University of Toronto.
Figure 2: A comparison of the initial instantiations due to the prediction net (top row)
and the trivial net (bottom row) on an image of a 2. Notice that for the two model the
prediction net is much closer to the data. The other digit models may or may not be greatly
affected by the input data; for example, the predictions from both nets seem essentially
the same for the zero, but for the seven the prediction net puts the model nearer to the
data.
The feedforward net predicts the position of the control points in the normalized
image. By inverting the normalization process, the positions of the control points
in the un-normalized image are determined. The model deformation and affine
transformation corresponding to these image control point locations can then be
determined by running a part of one iteration of the search procedure. Experiments
were then conducted with a number of shortened annealing schedules; for each one,
data obtained from settling on a part of the training data was used to train the
postprocessing net. The performance was then evaluated on the br test set.
The full annealing schedule has six stages. The shortened annealing schedules are:
1. No settling at all
2. Two iterations at the final variance of 0.0006
3. One iteration at 0.0025 and two at 0.0006
4. The full annealing schedule (for comparison)
The results on the br test set are shown in table 1. The general trends are that the
performance obtained using the prediction net is consistently better than the trivial
net, and that longer annealing schedules lead to better performance. A comparison
of schedules 3 and 4 in table 1 indicates that the performance of the prediction
net/schedule 3 combination is similar to (or slightly better than) that obtained
with the full annealing schedule, and is more than a factor of two faster. The
results with the full schedule are almost identical to the results obtained with the
default "box" initialization described in section 1.2. Figure 2 compares the outputs
of the prediction and trivial nets on a particular example. Judging from the weight
Schedule number   Trivial net   Prediction net   Average time required to settle one model (s)
1                 427           200              0.12
2                 329           58               0.25
3                 160           32               0.49
4                  40           36               1.11
Table 1: Errors on the internal test set of 2000 examples for different annealing schedules.
The timing trials were carried out on a R-4400 machine.
vectors and activity patterns of the hidden units, it does not seem that some of the
units are specialized for a particular digit class.
A run on the bs test set using schedule 3 gave an error rate of 4.76 % (129 errors),
which is very similar to the 125 errors obtained using the full annealing schedule
and the box initialization. A comparison of the errors made on the two runs shows
that only 67 out of the 129 errors were common to the two sets. This suggests that
it would be very sensible to reject cases where the two methods do not agree.
3 DISCUSSION
The prediction net used above can be viewed as an interpolation scheme in the
control point position space of each digit, z(I) = z_0 + Σ_i a_i(I) z_i, where z(I) is
the predicted position in the control point space, z_0 is the contribution due to the
biases, a_i is the activity of hidden unit i and z_i is its location in the control point
position space (learned from the data). If there are more hidden units than output
dimensions, then for any particular image there are an infinite number of ways to
make this equation hold exactly. However, the network will tend to find solutions
so that the a_i(I)'s will vary smoothly as the image is perturbed.
The nets described above output just one set of instantiation parameters for a
given model. However, it may be preferable to be able to represent a number of
guesses about model instantiation parameters; one way of doing this is to train a
network that has multiple sets of output parameters, as in the "mixture of experts"
architecture of Jacobs et al. (1991). The outputs can be interpreted as a mixture
distribution in the control point position space, conditioned on the input image.
Another approach to providing more information about the posterior distribution
is described in (Hinton, Williams and Revow, 1992b), where P(z|I) is approximated
using a fixed set of basis functions whose weighting depends on the input image I.
The strategies described above directly predict the instantiation parameters in parameter space. It is also possible to use neural networks to hypothesize correspondences, i.e. to predict an inked pixel's position on the spline given a local window
of context in the image. With sufficient matches it is then possible to compute
the instantiation parameters of the model. We have conducted some preliminary
experiments with this method (described in Williams, 1994), which indicate that
good performance can be achieved for the correspondence prediction task.
We have shown that we can obtain significant speedup using the prediction net.
The schemes outlined above which allow multimodal predictions in instantiation
parameter space may improve performance and deserve further investigation. We
are also interested in improving the performance of the prediction net, for example
by outputting a confidence measure which could be used to adjust the length of
the elastic models' search appropriately. We believe that using machine learning
techniques like neural networks to help reduce the amount of search required to fit
complex models to data may be useful for many other problems.
Acknowledgements
This research was funded by Apple and by the Ontario Information Technology Research
Centre. We thank Allan Jepson, Richard Durbin, Rich Zemel, Peter Dayan, Rob Tibshirani
and Yann Le Cun for helpful discussions. Geoffrey Hinton is the Noranda Fellow of the
Canadian Institute for Advanced Research.
References
Ballard, D. H. (1981). Generalizing the Hough transform to detect arbitrary shapes.
Pattern Recognition, 13(2):111-122.
Beymer, D., Shashua, A., and Poggio, T. (1993). Example Based Image Analysis and
Synthesis. AI Memo 1431, AI Laboratory, MIT.
Grenander, U., Chow, Y., and Keenan, D. M. (1991). Hands: A pattern theoretic study of
biological shapes. Springer-Verlag.
Grimson, W. E. L. (1990). Object recognition by computer. MIT Press, Cambridge, MA.
Hinton, G. E., Williams, C. K. I., and Revow, M. D. (1992a). Adaptive elastic models
for hand-printed character recognition. In Moody, J. E., Hanson, S. J., and Lippmann, R. P., editors, Advances in Neural Information Processing Systems 4. Morgan
Kaufmann.
Hinton, G. E., Williams, C. K. I., and Revow, M. D. (1992b). Combining two methods
of recognizing hand-printed digits. In Aleksander, I. and Taylor, J., editors, Artificial
Neural Networks 2. Elsevier Science Publishers.
Hunt, D. J., Nolte, L. W., and Ruedger, W. H. (1988). Performance of the Hough Transform and its Relationship to Statistical Signal Detection Theory. Computer Vision,
Graphics and Image Processing, 43:221-238.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive mixtures
of local experts. Neural Computation, 3(1).
MacKay, D. J. C. (1992). Bayesian Interpolation. Neural Computation, 4(3):415-447.
Revow, M. D., Williams, C. K. I., and Hinton, G. E. (1993). Using mixtures of deformable
models to capture variations in hand printed digits. In Srihari, S., editor, Proceedings
of the Third International Workshop on Frontiers in Handwriting Recognition, pages
142-152, Buffalo, New York, USA.
Williams, C. K. I. (1994). Combining deformable models and neural networks for handprinted digit recognition. PhD thesis, Dept. of Computer Science, University of
Toronto.
Zemel, R. S. and Hinton, G. E. (1991). Discovering viewpoint-invariant relationships that
characterize objects. In Lippmann, R. P., Moody, J. E., and Touretzky, D. S., editors, Advances in Neural Information Processing Systems 3, pages 299-305. Morgan
Kaufmann Publishers.
| 1002 |@word deformed:2 trial:1 determinant:1 covariance:1 jacob:2 carry:1 initial:2 current:1 nowlan:1 must:1 readily:1 tot:2 predetermined:1 shape:7 hypothesize:1 designed:2 instantiate:4 guess:2 discovering:1 short:1 postal:1 toronto:4 location:7 sigmoidal:1 five:1 zii:3 along:3 constructed:1 consists:1 fitting:6 allan:1 notably:1 simulator:1 decreasing:1 encouraging:1 window:1 increasing:1 underlying:1 interpreted:1 developed:1 finding:1 transformation:3 impractical:1 guarantee:1 fellow:1 collecting:1 exactly:1 preferable:1 uk:1 control:14 unit:12 service:1 local:3 timing:1 limit:1 shortened:3 interpolation:2 black:1 plus:1 chose:1 initialization:2 suggests:1 compile:1 hunt:2 accumulator:1 backpropagation:1 digit:20 procedure:4 displacement:1 danger:1 area:1 significantly:2 reject:1 printed:3 confidence:1 close:1 put:1 context:1 map:2 demonstrated:1 dz:1 williams:14 starting:3 independently:1 sharpness:1 array:1 regularize:1 variation:1 kauffmann:1 target:2 programming:1 us:1 trend:1 recognition:12 approximated:2 predicts:1 database:1 binning:1 bottom:1 capture:1 wj:1 connected:1 decrease:1 highest:1 principled:1 grimson:2 complexity:1 dynamic:1 trained:2 basis:1 easily:3 multimodal:1 train:4 zo:2 instantiated:3 artificial:1 zemel:3 whose:2 ability:1 gp:1 transform:6 la4:1 final:2 xerion:2 grenander:2 net:28 took:1 outputting:1 product:1 combining:1 achieve:1 ontario:2 deformable:11 alphabetic:1 assessing:1 object:9 help:1 pose:1 measured:1 implemented:1 predicted:2 indicate:1 correct:3 settle:1 bin:1 explains:1 require:1 really:1 preliminary:1 investigation:1 biological:1 im:2 frontier:1 hold:1 around:2 sufficiently:1 recalculated:1 predict:5 major:1 vary:1 birmingham:1 applicable:1 city:1 weighted:2 mit:2 gaussian:2 aim:3 unthinned:1 aleksander:1 office:1 zim:1 consistently:1 likelihood:1 indicates:1 greatly:1 detect:2 camp:1 helpful:1 elsevier:1 dayan:1 rigid:1 chow:1 hidden:7 interested:1 pixel:7 extraneous:1 constrained:1 softmax:1 mackay:2 fairly:1 field:2 construct:1 fuller:1 elongation:2 identical:1 broad:2 peaked:1 hypothesizing:1 spline:7 richard:1 few:1 attempt:1 detection:2 highly:1 evaluation:2 severe:1 adjust:1 mixture:7 integral:1 closer:1 poggio:1 tree:1 taylor:2 hough:7 penalizes:3 logarithm:1 deformation:8 isolated:1 increased:1 cover:1 maximization:1 cedar:1 uniform:2 recognizing:3 conducted:2 graphic:1 characterize:1 perturbed:1 density:1 peak:3 international:1 pool:1 michael:5 synthesis:1 moody:2 squared:2 thesis:1 settled:1 expert:2 account:1 depends:3 doing:1 characterizes:1 shashua:1 start:2 contribution:1 minimize:2 variance:4 acid:1 kaufmann:1 identify:1 modelled:1 handwritten:2 bayesian:1 produced:2 apple:1 m5s:1 straight:1 classified:1 touretzky:1 energy:5 associated:1 attributed:1 handwriting:1 dataset:1 treatment:1 knowledge:1 schedule:15 positioned:1 feed:1 originally:1 improved:1 evaluated:1 box:2 strongly:1 just:2 stage:1 hand:4 christopher:5 lack:1 quality:1 believe:1 usa:1 verify:1 normalized:5 consisted:1 laboratory:1 white:1 attractive:1 deal:1 during:1 inked:3 generalized:1 nonrigid:1 plate:1 ridge:1 demonstrate:1 theoretic:1 postprocessing:2 image:38 novel:2 recently:1 common:2 rotation:5 specialized:1 shear:2 overview:1 perturbing:1 b4:1 volume:3 interpretation:1 significant:1 cambridge:1 ai:6 unconstrained:1 outlined:1 mathematics:1 nonlinearity:1 centre:1 funded:1 longer:1 posterior:2 own:1 verlag:1 binary:2 morgan:2 minimum:3 additional:1 zip:1 determine:1 signal:1 ii:2 full:5 multiple:1 segmented:2 smooth:1 match:4 faster:1 divided:1 
prediction:18 involving:1 basic:1 instantiating:1 vision:2 expectation:1 mayor:1 essentially:1 iteration:4 normalization:1 represent:1 achieved:2 cell:1 cps:4 background:2 want:1 spacing:1 annealing:10 publisher:2 appropriately:1 tend:1 smile:2 seem:2 jordan:1 ideal:1 feedforward:1 iii:1 canadian:1 fit:3 gave:1 zi:2 architecture:3 identified:1 nolte:1 reduce:2 idea:2 br:4 intensive:2 six:2 penalty:2 peter:1 hessian:1 york:1 useful:2 amount:3 transforms:1 backpropagated:1 ten:2 ilz:2 reduced:3 continuation:1 generate:1 notice:1 judging:1 estimated:1 tibshirani:1 affected:1 four:1 drawn:1 imaging:5 sum:2 compete:1 run:2 distorted:1 zli:1 almost:1 decide:1 yann:1 home:3 mii:1 scaling:1 flavour:1 bit:1 layer:2 correspondence:9 durbin:1 activity:3 adapted:1 occur:1 binned:1 strength:1 comparision:1 integrand:3 speedup:1 department:2 according:1 popularized:1 combination:2 conjugate:1 smaller:1 slightly:1 em:1 character:5 cun:1 rob:1 b:2 invariant:2 gradually:1 computationally:2 equation:3 agree:1 skew:1 detectable:1 fed:1 available:1 gaussians:2 experimentation:1 eight:1 away:3 appropriate:1 alternative:1 top:1 remaining:1 tony:1 running:1 giving:2 approximating:1 ink:5 added:1 strategy:1 gradient:1 distance:1 separate:1 thank:1 sensible:1 seven:2 evaluate:1 trivial:6 spanning:1 code:1 length:1 relationship:2 providing:1 handprinted:1 difficult:1 greyscale:1 negative:1 memo:1 twenty:1 sing:1 buffalo:1 defining:1 hinton:17 variability:1 extended:2 head:1 frame:5 arbitrary:3 community:1 canada:1 inverting:1 namely:1 required:4 optimized:1 hanson:1 learned:1 nearer:1 address:1 able:2 deserve:1 pattern:3 cdrom:1 including:1 natural:1 settling:3 predicting:1 advanced:2 scheme:2 improve:2 aston:1 brief:1 technology:2 conic:1 carried:1 extract:1 prior:3 geometric:1 epoch:1 acknowledgement:1 keenan:1 relative:2 proportional:1 geoffrey:7 generator:5 validation:3 affine:6 sufficient:2 editor:4 viewpoint:1 translation:2 row:2 course:1 placed:2 iim:4 bias:3 allow:1 institute:1 taking:1 face:1 tolerance:1 van:1 dimension:3 default:2 stand:1 contour:1 computes:1 rich:1 author:1 collection:1 made:2 adaptive:2 far:2 lippmann:2 global:2 overfitting:1 instantiation:18 decides:1 summing:1 consuming:1 discriminative:1 noranda:1 search:20 iterative:1 bead:4 un:1 lthis:1 dilation:1 table:3 ballard:2 expanding:1 elastic:3 improving:1 complex:2 official:1 jepson:1 noise:2 nothing:1 unites:1 cubic:1 slow:1 position:9 wish:1 lie:2 crude:1 weighting:3 third:1 learns:1 down:1 evidence:2 workshop:1 gained:1 drew:1 phd:1 conditioned:1 smoothly:1 generalizing:1 simply:1 likely:1 beymer:2 forming:1 srihari:1 springer:1 ma:1 viewed:2 revow:11 considerable:1 change:1 hard:1 determined:4 except:2 uniformly:1 infinite:1 total:1 vote:3 formally:2 internal:1 dept:1 |
6 | 1,003 | Plasticity-Mediated Competitive Learning
Terrence J. Sejnowski
terry@salk.edu
Nicol N. Schraudolph
nici@salk.edu
Computational Neurobiology Laboratory
The Salk Institute for Biological Studies
San Diego, CA 92186-5800
and
Computer Science & Engineering Department
University of California, San Diego
La Jolla, CA 92093-0114
Abstract
Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited
to sparse representations, while decorrelation and factorization
schemes that support distributed representations are computationally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique
to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski,
1993); the same approach could be used to improve other learning
algorithms.
1 INTRODUCTION
Unsupervised neural networks frequently employ sets of nodes or subnetworks
with identical architecture and objective function. Some form of competitive interaction is then needed for these nodes to differentiate and efficiently complement
each other in their task.
Figure 1: Activity f and plasticity f' of a logistic node as a function of its net input
y. Vertical lines indicate those values of y whose pre-images in input space are
depicted in Figure 2.
Inhibition is the simplest competitive mechanism: the most active nodes suppress
the ability of their peers to learn, either directly or by depressing their activity.
Since inhibition can be implemented by diffuse, nonadaptive mechanisms, it is an
attractive solution from both neurobiological and computational points of view.
However, inhibition can only form either localized (unary) or sparse distributed
representations, in which each output has only one state with significant information content.
For fully distributed representations, schemes to decorrelate (Barlow and Foldiak,
1989; Leen, 1991) and even factorize (Schmidhuber, 1992; Bell and Sejnowski, 1995)
node activities do exist. Unfortunately these require specific, weighted lateral
connections whose adaptation is computationally expensive and may interfere
with feedforward learning. While they certainly have their place as competitive
learning algorithms, the capability of biological neurons to implement them seems
questionable.
In this paper, we suggest an alternative approach: we extend the advantages of
simple inhibition to distributed representations by decoupling the competition
from the activation vector. In particular, we use neural plasticity - the derivative
of a logistic activation function - as a medium for competition.
Plasticity is low for both high and low activation values but high for intermediate
ones (Figure 1); distributed patterns of activity may therefore have localized plasticity. If competition is controlled by plasticity, then simple competitive mechanisms
will constrain us to localized plasticity but allow representations with distributed
activity.
The next section reintroduces the binary information gain optimization (BINGO)
algorithm for a single node; we then discuss how plasticity-mediated competition
improves upon the decorrelation mechanism used in our original extension to
multiple nodes. Finally, we establish a close relationship between the plasticity
and the entropy of a logistiC node that provides an intuitive interpretation of
plasticity-mediated competitive learning in this context.
2 BINARY INFORMATION GAIN OPTIMIZATION
In (Schraudolph and Sejnowski, 1993), we proposed an unsupervised learning rule
that uses logistic nodes to seek out binary features in its input. The output
z = f(y) ,   where   f(y) = 1/(1 + e^{-y})   and   y = w · x    (1)
of each node is interpreted stochastically as the probability that a given feature is
present. We then search for informative directions in weight space by maximizing
the information gained about an unknown binary feature through observation of
z. This binary information gain is given by

ΔH(z) = H(z̄) - H(z) ,    (2)

where H(z) is the entropy of a binary random variable with probability z, and z̄
is a prediction of z based on prior knowledge. Gradient ascent in this objective
results in the learning rule
Δw ∝ f'(y) · (y - ȳ) · x ,    (3)

where ȳ is a prediction of y. In the simplest case, ȳ is an empirical average ⟨y⟩ of past
activity, computed either over batches of input data or by means of an exponential
trace; this amounts to a nonlinear version of the covariance rule (Sejnowski, 1977).
Using just the average as prediction introduces a strong preference for splitting the
data into two equal-sized clusters. While such a bias is appropriate in the initial
phase of learning, it fails to take the nonlinear nature of f into account. In order
to discount data in the saturated regions of the logistic function appropriately, we
weigh the average by the node's plasticity f'(y):

ȳ = ⟨y · f'(y)⟩ / (⟨f'(y)⟩ + c) ,    (4)
where c is a very small positive constant introduced to ensure numerical stability
for large values of y. Now the bias for splitting the data evenly is gradually relaxed
as the network's weights grow and data begins to fall into saturated regions of f.
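A compact NumPy sketch of this single-node rule (equations 3 and 4) follows; the learning rate, the trace decay and the use of exponential traces for the averages ⟨y·f'(y)⟩ and ⟨f'(y)⟩ are illustrative choices rather than values taken from the paper.

```python
import numpy as np

def bingo_step(w, x, y_trace, plast_trace, lr=0.05, decay=0.99, c=1e-6):
    """One single-node BINGO update (equations 3 and 4).  The traces are
    exponential averages of y*f'(y) and f'(y)."""
    y = w @ x
    z = 1.0 / (1.0 + np.exp(-y))
    plast = z * (1.0 - z)                          # f'(y), the node's plasticity
    y_trace = decay * y_trace + (1.0 - decay) * y * plast
    plast_trace = decay * plast_trace + (1.0 - decay) * plast
    y_bar = y_trace / (plast_trace + c)            # plasticity-weighted prediction, eq. (4)
    w = w + lr * plast * (y - y_bar) * x           # gradient ascent on information gain, eq. (3)
    return w, y_trace, plast_trace
```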
3 PLASTICITY-MEDIATED COMPETITION
For multiple nodes the original BINGO algorithm used a decorrelating predictor
as the competitive mechanism:
ȳ = y + (Q_y - 2I)(y - ⟨y⟩) ,    (5)

where Q_y is the autocorrelation matrix of y, and I the identity matrix. Note that
Q_y is computationally expensive to maintain; in connectionist implementations it
Figure 2: The "three cigars" problem. Each plot shows the pre-image of zero net
input, superimposed on a scatter plot of the data set, in input space. The two
flanking lines delineate the "plastic region" where the logistic is not saturated,
providing an indication of weight vector size. Left, two-node BINGO network
using decorrelation (Equations 3 & 5) fails to separate the three data clusters. Right,
same network using plasticity-mediated competition (Equations 4 & 6) succeeds.
is often approximated by lateral anti-Hebbian connections whose adaptation must
occur on a faster time scale than that of the feedforward weights (Equation 3) for
reasons of stability (Leen, 1991). In practice this means that learning is slowed
significantly.
In addition, decorrelation can be inappropriate when nonlinear objectives are optimized - in our case, two prominent binary features may well be correlated.
Consider the "three cigars" problem illustrated in Figure 2: the decorrelating predictor (left) forces the two nodes into a near-orthogonal arrangement, interfering
with their ability to detect the parallel gaps separating the data clusters.
For our purposes, decorrelation is thus too strong a constraint on the discriminants:
all we require is that the discovered features be distinct. We achieve this by reverting
to the simple predictor of Equation 4 while adding a global, plasticity-mediated
excitation¹ factor to the weight update:

Δw_i ∝ f'(y_i) · (y_i - ȳ_i) · x · Σ_j f'(y_j)    (6)
As Figure 2 (right) illustrates, this arrangement solves the "three cigars" problem. In the high-dimensional environment of hand-written digit recognition, this
algorithm discovers a set of distributed binary features that preserve most of the
information needed to classify the digits, even though the network was never given
any class labels (Figure 3).
¹The interaction is excitatory rather than inhibitory since a node's plasticity is inversely
correlated with the magnitude of its net input.
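For comparison with the single-node sketch given earlier, the multi-node update (equations 4 and 6) only adds the shared excitation term Σ_j f'(y_j); the constants are again illustrative.

```python
import numpy as np

def bingo_layer_step(W, x, y_trace, plast_trace, lr=0.05, decay=0.99, c=1e-6):
    """Multi-node update (equations 4 and 6): each row of W is one node's weight
    vector, and every update is scaled by the shared excitation sum_j f'(y_j)."""
    y = W @ x
    z = 1.0 / (1.0 + np.exp(-y))
    plast = z * (1.0 - z)
    y_trace = decay * y_trace + (1.0 - decay) * y * plast
    plast_trace = decay * plast_trace + (1.0 - decay) * plast
    y_bar = y_trace / (plast_trace + c)
    delta = plast * (y - y_bar) * plast.sum()      # plasticity-mediated excitation
    W = W + lr * np.outer(delta, x)
    return W, y_trace, plast_trace
```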
Figure 3: Weights found by a four-node network running the improved BINGO
algorithm (Equations 4 & 6) on a set of 1200 handwritten digits due to (Guyon et aI.,
1989). Although the network is unsupervised, its four-bit output conveys most of
the information necessary to classify the digits.
4 PLASTICITY AND BINARY ENTROPY
It is possible to establish a relationship between the plasticity f' of a logistic node
and its entropy that provides an intuitive account of plasticity-mediated competition as applied to BINGO. Consider the binary entropy
H(z) = -z log z - (1 - z) log(1 - z)    (7)
A well-known quadratic approximation is

H̃(z) = 8e^{-1} z (1 - z) ≈ H(z)    (8)
Now observe that the plasticity of a logistic node
f'(y) = ∂/∂y [1/(1 + e^{-y})] = ··· = z(1 - z)    (9)
is in fact proportional to H̃(z) - that is, a logistic node's plasticity is in effect
a convenient quadratic approximation to its binary output entropy. The overall
entropy in a layer of such nodes equals the sum of individual entropies less their
redundancy:
H(z) = Σ_j H(z_j) - R(z)    (10)
The plasticity-mediated excitation factor in Equation 6
Σ_j f'(y_j) ∝ Σ_j H̃(z_j) ≈ Σ_j H(z_j) ≥ H(z)    (11)
is thus proportional to an approximate upper bound on the entropy of the layer,
which in turn indicates how much more information remains to be gained by
learning from a particular input. In the context of BINGO, plasticity-mediated
competition thus scales weight changes according to a measure of the network's
ignorance: the less it is able to identify a given input in terms of its set of binary
features, the more it tries to learn doing so.
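A quick numerical check of this correspondence (equations 7-9), showing that the plasticity z(1 - z) tracks the binary entropy up to the constant 8e^{-1}:

```python
import numpy as np

y = np.linspace(-6.0, 6.0, 7)
z = 1.0 / (1.0 + np.exp(-y))
plasticity = z * (1.0 - z)                              # f'(y) = z(1 - z), eq. (9)
H = -(z * np.log(z) + (1.0 - z) * np.log(1.0 - z))      # binary entropy, eq. (7)
H_quad = 8.0 * np.exp(-1.0) * z * (1.0 - z)             # quadratic approximation, eq. (8)
print(np.round(np.column_stack([y, plasticity, H, H_quad]), 3))
```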
5 CONCLUSION
By using the derivative of a logistic activation function as a medium for competitive
interaction, we were able to obtain differentiated, fully distributed representations
without resorting to computationally expensive decorrelation schemes. We have
demonstrated this plasticity-mediated competition approach on the BINGO feature
extraction algorithm, which is significantly improved by it. A close relationship
between the plasticity of a logistic node and its binary output entropy provides an
intuitive interpretation of this unusual form of competition.
Our general approach of using a nonmonotonic function of activity - rather than
activity itself - to control competitive interactions may prove valuable in other
learning schemes, in particular those that seek distributed rather than local representations.
Acknowledgements
We thank Rich Zemel and Paul Viola for stimulating discussions, and the McDonnell-Pew Center for Cognitive Neuroscience in San Diego for financial support.
References
Barlow, H. B. and Foldiak, P. (1989). Adaptation and decorrelation in the cortex. In
Durbin, R. M., Miall, C., and Mitchison, G. J., editors, The Computing Neuron,
chapter 4, pages 54-72. Addison-Wesley, Wokingham.
Bell, A. J. and Sejnowski, T. J. (1995). A non-linear information maximisation
algorithm that performs blind separation. In Advances in Neural Information
Processing Systems, volume 7, Denver 1994.
Guyon, I., Poujaud, I., Personnaz, L., Dreyfus, G., Denker, J., and Le Cun, Y. (1989).
Comparing different neural network architectures for classifying handwritten
digits. In Proceedings of the International Joint Conference on Neural Networks,
volume II, pages 127-132. IEEE.
Leen, T. K. (1991). Dynamics of learning in linear feature-discovery networks.
Network, 2:85-105.
Schmidhuber, J. (1992). Learning factorial codes by predictability minimization.
Neural Computation, 4(6):863-879.
Schraudolph, N. N. and Sejnowski, T. J. (1993). Unsupervised discrimination of
clustered data via optimization of binary information gain. In Hanson, S. J.,
Cowan, J. D., and Giles, C. L., editors, Advances in Neural Information Processing Systems, volume 5, pages 499-506, Denver 1992. Morgan Kaufmann, San
Mateo.
Sejnowski, T. J. (1977). Storing covariance with nonlinearly interacting neurons.
Journal of Mathematical Biology, 4:303-321.
| 1003 |@word version:1 seems:1 seek:2 covariance:2 decorrelate:1 initial:1 past:1 comparing:1 activation:4 scatter:1 must:1 written:1 numerical:1 informative:1 plasticity:27 plot:2 update:1 discrimination:1 provides:3 node:22 preference:1 mathematical:1 prove:1 autocorrelation:1 frequently:1 inappropriate:1 begin:1 medium:2 interpreted:1 differentiation:1 questionable:1 control:1 positive:1 engineering:1 local:1 mateo:1 limited:1 factorization:1 yj:1 practice:1 maximisation:1 implement:1 digit:5 logz:1 empirical:1 poujaud:1 bell:2 significantly:2 convenient:1 pre:2 suggest:1 close:2 context:2 demonstrated:1 center:1 maximizing:1 bingo:7 splitting:2 rule:3 financial:1 stability:2 diego:3 us:1 expensive:3 approximated:1 recognition:1 region:3 valuable:1 weigh:1 environment:1 dynamic:1 upon:1 basis:1 joint:1 chapter:1 distinct:1 sejnowski:11 zemel:1 nonmonotonic:1 peer:1 whose:3 ability:2 itself:1 differentiate:1 advantage:1 indication:1 net:3 interaction:5 adaptation:3 achieve:1 intuitive:3 competition:11 cluster:3 ij:1 strong:2 solves:1 implemented:1 indicate:1 direction:1 require:2 clustered:1 biological:2 extension:1 purpose:1 label:1 infonnation:1 weighted:1 minimization:1 rather:3 superimposed:1 indicates:1 detect:1 unary:1 overall:1 equal:2 never:1 extraction:2 identical:1 biology:1 unsupervised:4 connectionist:1 simplify:1 employ:1 preserve:1 individual:1 phase:1 maintain:1 certainly:1 saturated:3 introduces:1 necessary:1 orthogonal:1 classify:2 giles:1 predictor:3 too:1 international:1 terrence:4 stochastically:1 cognitive:1 derivative:2 li:1 account:2 blind:1 ated:1 view:1 try:1 doing:1 competitive:13 capability:1 parallel:1 kaufmann:1 efficiently:1 identify:1 handwritten:2 plastic:1 conveys:1 gain:5 knowledge:1 improves:1 wesley:1 improved:2 decorrelating:2 depressing:1 leen:3 delineate:1 though:1 just:1 hand:1 nonlinear:3 interfere:1 logistic:11 effect:1 barlow:2 laboratory:1 illustrated:1 ignorance:1 attractive:1 excitation:2 prominent:1 performs:1 image:2 dreyfus:1 discovers:1 fi:4 discriminants:1 denver:2 volume:3 extend:1 interpretation:2 significant:1 ai:1 pew:1 resorting:1 cortex:1 inhibition:4 foldiak:2 jolla:1 reintroduces:1 schmidhuber:2 binary:15 yi:2 morgan:1 relaxed:1 nici:1 ii:1 multiple:2 hebbian:1 faster:1 schraudolph:7 controlled:1 qg:3 prediction:3 achieved:1 addition:1 grow:1 appropriately:1 ascent:1 med:1 cowan:1 near:1 feedforward:2 intermediate:1 architecture:2 factorial:1 amount:1 discount:1 simplest:2 exist:1 zj:1 inhibitory:2 neuroscience:1 redundancy:1 four:2 nonadaptive:2 sum:1 place:1 guyon:2 separation:1 bit:1 layer:2 bound:1 quadratic:2 durbin:1 activity:9 occur:1 constraint:1 constrain:1 diffuse:2 department:1 tv:1 according:1 mcdonnell:1 wi:1 cun:1 slowed:1 gradually:1 flanking:1 computationally:4 equation:6 remains:1 discus:1 turn:1 mechanism:6 needed:2 reverting:1 letting:1 addison:1 subnetworks:1 unusual:1 denker:1 observe:1 appropriate:1 differentiated:1 alternative:2 batch:1 original:2 running:1 ensure:1 establish:2 personnaz:1 objective:3 arrangement:2 gradient:1 separate:1 thank:1 lateral:2 separating:1 evenly:1 reason:1 code:1 relationship:3 providing:1 unfortunately:1 trace:1 suppress:1 implementation:1 unknown:1 upper:1 vertical:1 neuron:3 observation:1 anti:1 viola:1 neurobiology:1 discovered:1 interacting:1 introduced:1 complement:1 nonlinearly:1 connection:2 optimized:1 hanson:1 california:1 able:2 pattern:1 terry:1 decorrelation:7 force:1 scheme:4 improve:2 inversely:1 conventionally:1 mediated:11 prior:1 acknowledgement:1 discovery:1 
nicol:4 fully:3 proportional:2 localized:3 editor:2 classifying:1 storing:1 interfering:1 excitatory:1 bias:2 allow:1 institute:1 fall:1 sparse:2 distributed:10 rich:1 san:4 miall:1 approximate:1 neurobiological:1 global:1 active:1 factorize:1 mitchison:1 search:1 learn:2 nature:1 ca:2 decoupling:1 correlated:2 cigar:3 paul:1 mediate:1 salk:3 predictability:1 fails:2 exponential:1 specific:1 unattractive:1 adding:1 gained:2 magnitude:1 illustrates:1 gap:1 entropy:10 depicted:1 stimulating:1 sized:1 identity:1 content:1 change:1 la:1 succeeds:1 support:2 ex:1 |
7 | 1,004 | ICEG Morphology Classification using an
Analogue VLSI Neural Network
Richard Coggins, Marwan Jabri, Barry Flower and Stephen Pickard
Systems Engineering and Design Automation Laboratory
Department of Electrical Engineering J03,
University of Sydney, 2006, Australia.
Email: richardc@sedal.su.oz.au
Abstract
An analogue VLSI neural network has been designed and tested
to perform cardiac morphology classification tasks. Analogue techniques were chosen to meet the strict power and area requirements
of an Implantable Cardioverter Defibrillator (ICD) system. The robustness of the neural network architecture reduces the impact of
noise, drift and offsets inherent in analogue approaches. The network is a 10:6:3 multi-layer perceptron with on chip digital weight
storage, a bucket brigade input to feed the Intracardiac Electrogram (ICEG) to the network and has a winner take all circuit
at the output. The network was trained in loop and included a
commercial ICD in the signal processing path. The system has successfully distinguished arrhythmia for different patients with better
than 90% true positive and true negative detections for dangerous
rhythms which cannot be detected by present ICDs. The chip was
implemented in 1.2um CMOS and consumes less than 200nW maximum average power in an area of 2.2 x 2.2mm2.
1 INTRODUCTION
To the present time, most ICDs have used timing information from ventricular
leads only to classify rhythms, which has meant some dangerous rhythms cannot
be distinguished from safe ones, limiting the use of the device. Even two lead
Figure 1: The Morphology of ST and VT retrograde 1:1.
atrial/ventricular systems fail to distinguish some rhythms when timing information alone is used [Leong and Jabri, 1992]. A case in point is the separation of Sinus Tachycardia (ST) from Ventricular Tachycardia with 1:1 retrograde conduction.
ST is a safe arrhythmia which may occur during vigorous exercise and is characterised by a heart rate of approximately 120 beats/minute. VT retrograde 1:1 also
occurs at the same low rate but can be a potentially fatal condition. False negative
detections can cause serious heart muscle injury while false positive detections deplete the batteries, cause patient suffering and may lead to costly transplantation
of the device. Figure 1 shows, however, the way in which the morphology changes
on the ventricular lead for these rhythms. Note that the morphology change is
predominantly in the "QRS complex" where the letters QRS are the conventional
labels for the different points in the conduction cycle during which the heart is
actually pumping blood.
For a number of years, researchers have studied template matching schemes in order
to try and detect such morphology changes. However, techniques such as correlation
waveform analysis [Lin et. al., 1988], though quite successful are too computationally intensive to meet power requirements. In this paper, we demonstrate that
an analogue VLSI neural network can detect such morphology changes while still
meeting the strict power and area requirements of an implantable system. The
advantages of an analogue approach are born out when one considers that an energy efficient analogue to digital converter such as [Kusumoto et. al., 1993] uses
1.5nJ per conversion implying 375nW power consumption for analogue to digital
conversion of the ICEG alone. Hence, the integration of a bucket brigade device and
analogue neural network provides a very efficient way of interfacing to the analogue
domain. Further, since the network is trained in loop with the ICD in real time,
the effects of device offsets, noise, QRS detection jitter and signal distortion in the
analogue circuits are largely alleviated.
The next section discusses the chip circuit designs. Section 3 describes the method
ICEG Morphology Classification Using an Analogue VLSI Neural Network
733
AowAcId. . .
1axl Syna.... AIRy
"-
Column
AoIcIr.-
I
o.ta Reglsl...
IClkcMmux
I
Bu1I...
I WTAI
10 DOD DO
Figure 2: Floor Plan and Photomicrograph of the chip
used to train the network for the morphology classification task. Section 4 describes
the classifier performance on seven patients with arrhythmia which can not be
distinguished using the heart rate only. Section 5 summarises the results, remaining
problems and future directions for the work .
2
ARCHITECTURE
The neural network chip consists of a 10:6:3 multilayer perceptron, an input bucket
brigade device (BBD) and a winner take all (WTA) circuit at the output. A floor
plan and photomicrograph of the chip appears in figure 2. The BBD samples the
incoming ICEG at a rate of 250Hz. For three class problems, the winner take all
circuit converts the winning class to a digital signal. For the two class problem
considered in this paper , a simple thresholding function suffices. The following
subsections briefly describe the functional elements of the chip . The circuit diagrams
for the chip building blocks appear in figure 3.
2.1
BUCKET BRIGADE DEVICE
One stage of the bucket brigade circuit is shown in figure 3. The BBD uses a
two phase clock to shift charge from cell to cell and is based on a design by
Leong [Leong, 1992] . The BBD operates by transferring charge deficits from S
to D in each of the cells. PHIl and PHI2 are two phase non-overlapping clocks.
The cell is buffered from the synapse array to maintain high charge transfer efficiency. A sample and hold facility is provided to store the input on the gates of the
synapses. The BBD clocks are generated off chip and are controlled by the QRS
complex detector in the lCD.
2.2
SYNAPSE
This synapse has been used on a number of neural network chips previously.
e.g . [Coggins et. al., 1994] . The synapse has five bits plus sign weight storage which
734
Richard Coggins, Marwan Jabri, Barry Flower, Stephen Pickard
NEURON
.-----------------------------------------------------------,,,
,,
~ !
BUJIOIII'
00
BUCKET BRIGADE ClLL
"
Figure 3: Neuron, Bucket Brigade and Synapse Circuit Diagrams.
sets the bias to a differential pair which performs the multiplication. The bias references for the weights are derived from a weighted current source in the corner of
the chip. A four quadrant multiplication is achieved by the four switches at the top
of the differential pair.
2.3
NEURON
Due to the low power requirements, the bias currents of the synapse arrays are of
the order of hundreds of nano amps, hence the neurons must provide an effective
resistance of many mega ohms to feed the next synapse layer while also providing
gain control. Without special high resistance polysilicon, simple resistive neurons
use prohibitive area, However, for larger networks with fan-in much greater than
ten, an additional problem of common mode cancellation is encountered, That is,
as the fan-in increases, a larger common mode range is required or a cancellation
scheme using common mode feedback is needed.
The neuron of figure 3 implements such a cancellation scheme, The mirrors MO/M2
and Ml/M3 divide the input current and facilitate the sum at the drain of M7.
M7/M8 mirrors the sum so that it may be split into two equal currents by the
mirrors formed by M4, M5 and M6 which are then subtracted from the input
currents. Thus, the differential voltage vp - Vm is a function of the transistor
transconductances, the common mode input current and the feedback factor , The
gain of the neuron can be controlled by varying the width to length ratio of the
mirror transistors MO and Ml. The implementation in this case allows seven gain
combinations, using a three bit RAM cell to store the gain,
ICEG Morphology Classification Using an Analogue VLSI Neural Network
735
Implantable
C.cio?erlor
DefibrillalOr
RunnngMUME
Ne .....1
Nelwa'1<
Chip
Figure 4: Block Diagram of the Training and Testing System.
The importance of a common mode cancellation scheme for large networks can
be seen when compared to the straight forward approach of resistive or switched
capacitor neurons. This may be illustrated by considering the energy usage of
the two approaches. Firstly, we need to define the required gain of the neuron
as a function of its fan-in . If we assume that useful inputs to the network are
mostly sparse, i.e. with a small fraction of non-zero values, then the gain is largely
independent of the fan-in, yet the common mode signal increases linearly with fanin. For the case of a neuron which does not cancel the common mode, the power
supply voltage must be increased to accommodate the common mode signal, thus
leading to a quadratic increase in energy use with fan-in. A common mode cancelling
neuron on the other hand , suffers only a linear increase in energy use with fan-in
since extra voltage range is not required and the increased energy use arises only
due to the linear increase in common mode current.
3
TRAINING SYSTEM
The system used to train and test the neural network is shown in figure 4. Control
of training and testing takes place on the PC. The PC uses a PC-LAB card to
provide analogue and digital I/O . The PC plays the ICEG signal to the input of
the commercial ICD in real time. Note, that the PC is only required for initially
training the network and in this case as a source of the heart signal. The commercial
ICD performs the function of QRS complex detection using analogue circuits. The
QRS complex detection signal is then used to freeze the BBD clocks of the chip, so
that a classification can take place.
When training, a number of examples of the arrhythmia to be classified are selected
from a single patient data base recorded during an electrophysiological study and
previously classified by a cardiologist. Since most of the morphological information
is in the QRS complex, only these segments of the data are repeatedly presented to
736
Richard Coggins. Marwan Jabri. Barry Flower. Stephen Pickard
Patient
1
2
3
4
5
6
7
% Training Attempts Converged
Run ~
Run 1
H=3
80
80
0
60
100
100
80
H= 6
10
100
0
10
80
40
100
H=3
60
0
0
40
0
60
40
H=6
60
10
10
40
60
60
100
Average
Iterations
62
86
101
77
44
46
17
Table 1: Training Performance of the system on seven patients.
the network. The weights are adjusted according to the training algorithm running
on the PC using the analogue outputs of the network to reduce the output error .
The PC writes weights to the chip via the digital I/Os of the PC-LAB card and the
serial weight bus of network. The software package implementing the training and
testing, called MUME [Jabri et. al ., 1992], provides a suite of training algorithms
and control options. Online training was used due to its success in training small
networks and because the presentation of the QRS complexes to the network was
the slowest part of the training procedure. The algorithm used for weight updates
in this paper was summed weight node perturbation [Flower and Jabri, 1993].
The system was trained on seven different patients separately all of whom had
VT with 1: 1 retrograde conduction. Note, that patient independent training has
been tried but with mixed results [Tinker, 1992] . Table 1 summarises the training
statistics for the seven patients. For each patient and each architecture, five training
runs were performed starting from a different random initial weight set. Each
of the patients was trained with eight of each class of arrhythmia. The network
architecture used was 10:H:1, where H is the number of hidden layer neurons and
the unused neurons being disabled by setting their input weights to zero. Two sets
of data were collected denoted Run 1 and Run 2. Run 1 corresponded to output
target values of ?0.6V within margin 0.45V and Run 2 to output target values of
?0.2V within margin 0.05V. A training attempt was considered to have converged
when the training set was correctly classified within two hundred training iterations.
Once the morphologies to be distinguished have been learned for a given patient,
the remainder of the patient data base is played back in a continuous stream and
the outputs of the classifier at each QRS complex are logged and may be compared
to the classifications of a cardiologist. The resulting generalisation performance is
discussed in the next section.
4
MORPHOLOGY CLASSIFIER GENERALISATION
PERFORMANCE
Table 2 summarises the generalisation performance of the system on the seven
patients for the training attempts which converged. Most of the patients show a
correct classification rate better than 90% for at least one architecture on one of the
ICEG Morphology Classification Using an Analogue VLSI Neural Network
Patient
1
2
3
4
5
6
7
No. of
Complexes
ST
VT
440
61
57
94
67
146
166
65
61
96
61
99
28
80
1
2
3
4
5
6
7
440
94
67
166
61
61
28
61
57
146
65
96
99
80
737
% Correct Classifications Run 1
H = 6
H - i3
VT
ST
ST
VT
89?10 89?3
58?0
99?0
99?1
99?1
100?0 99?1
66?44 76?37
99?1
50?3
82?1 75?13
89?9
94?6
84?8
97?1
90?5
99?1
97?3
98?5
99?1
99?1
% Correct Classifications Run 2
86?14 99?1
88?2
99?1
94?6
94?3
84?2
99?1
76?18 59?2
87?7 100?0
88?2
49?5
84?1
82?5
92?6 90?10
99?1
99?1
94?3
99?0
94?3
92?3
Table 2: Generalisation Performance of the system on seven patients.
runs, whereas, a timing based classifier can not separate these arrhythmia at all.
For each convergent weight set the network classified the test set five times. Thus,
the "% Correct" columns denote the mean and standard deviation of the classifier
performance with respect to both training and testing variations. By duty cycling
the bias to the network and buffers, the chip dissipates less than 200n W power for
a nominal heart rate of 120 beats/minute during generalisation.
5
DISCUSSION
Referring to table 1 we see that the patient 3 data was relatively difficult to train.
However, for the one occasion when training converged generalisation performance
was quite acceptable. Inspection of this patients data showed that typically, the
morphologies of the two rhythms were very similar. The choice of output targets,
margins and architecture appear to be patient dependent and possibly interacting
factors. Although larger margins make training easier for some patients they appear
to also introduce more variability in generalisation performance. This may be due
to the non-linearity of the neuron circuit. Further experiments are required to
optimise the architecture for a given patient and to clarify the effect of varying
targets, margins and neuron gain. Penalty terms could also be added to the error
function to minimise the possibility of missed detections of the dangerous rhythm.
The relatively slow rate of the heart results in the best power consumption being
obtained by duty cycling the bias currents to the synapses and the buffers. Hence,
the bias settling time of the weighted current source is the limiting factor for reducing power consumption further for this design. By modifying the connection of the
current source to the synapses using a bypassing technique to reduce transients in
Riclulrd Coggins, Marwan Jabri, Barry Flower, Stephen Pickard
738
the weighted currents, still lower power consumption could be achieved.
6
CONCLUSION
The successful classification of a difficult cardiac arrhythmia problem has been
demonstrated using. an analogue VLSI neural network approach. Furthermore, the
chip developed has shown very low power consumption of less than 200n W, meeting the requirements of an implantable system. The chip has performed well, with
over 90% classification performance for most patients studied and has proved to be
robust when the real world influence of analogue QRS detection jitter is introduced
by a commercial implantable cardioverter defibrillator placed in the signal path to
the classifier.
Acknowledgements
The authors acknowledge the funding for the work in this paper provided under
Australian Generic Technology Grant Agreement No. 16029 and thank Dr. Phillip
Leong of the University of Sydney and Dr. Peter Nickolls of Telectronics Pacing
Systems Ltd., Australia for their helpful suggestions and advice.
References
[Castro et. al., 1993] H.A. Castro, S.M. Tam, M.A. Holler, "Implementation and
Performance of an analogue Nonvolatile Neural Network," Analogue Integrated
Circuits and Signal Processing, vol. 4(2), pp. 97-113, September 1993.
[Lin et. al., 1988] D. Lin, L.A. Dicarlo, and J .M. Jenkins, "Identification of Ventricular Tachycardia using Intracavitary Electrograms: analysis of time and frequency domain patterns," Pacing (3 Clinical Electrophysiology, pp. 1592-1606,
November 1988.
[Leong, 1992] P.H.W. Leong, Arrhythmia Classification Using Low Power VLSI,
PhD Thesis, University of Sydney, Appendix B, 1992.
[ Kusumoto et. al., 1993] K. Kusumoto et. al., "A lObit 20Mhz 30mW Pipelined
Interpolating ADC," ISSCC, Digest of Technical Papers, pp. 62-63, 1993.
[Leong and Jabri, 1992] P.H.W. Leong and M. Jabri, "MATIC - An Intracardiac Tachycardia Classification System", Pacing (3 Clinical Electrophysiology,
September 1992.
[Coggins et. al., 1994] R.J. Coggins and M.A. Jabri, "WATTLE: A Trainable Gain
Analogue VLSI Neural Network", NIPS6, Morgan Kauffmann Publishers, 1994.
[Jabri et. al., 1992] M.A. Jabri, E.A. Tinker and L. Leerink, "MUME- A MultiNet-Multi-Architecture Neural Simulation Environment", Neural Network Simulation Environments, Kluwer Academic Publications, January, 1994.
[Flower and Jabri, 1993] B. Flower and M. Jabri, "Summed Weight Neuron Perturbation: an O(N) improvement over Weight Perturbation," NIPS5, Morgan
Kauffmann Publishers, pp. 212-219, 1993.
[Tinker, 1992] E.A. Tinker, "The SPASM Algorithm for Ventricular Lead Timing and Morphology Classification," SEDAL ICEG-RPT-016-92, Department of
Electrical Engineering, University of Sydney, 1992.
| 1004 |@word briefly:1 simulation:2 tried:1 accommodate:1 initial:1 born:1 amp:1 current:11 yet:1 must:2 icds:2 designed:1 update:1 alone:2 implying:1 prohibitive:1 device:6 selected:1 inspection:1 provides:2 node:1 ron:1 firstly:1 tinker:4 five:3 differential:3 m7:2 supply:1 consists:1 resistive:2 isscc:1 introduce:1 arrhythmia:8 morphology:15 multi:2 m8:1 considering:1 provided:2 linearity:1 circuit:11 developed:1 adc:1 nj:1 suite:1 axl:1 charge:3 um:1 classifier:6 control:3 grant:1 appear:3 positive:2 engineering:3 timing:4 pumping:1 meet:2 path:2 approximately:1 plus:1 au:1 studied:2 range:2 testing:4 block:2 implement:1 writes:1 procedure:1 area:4 matching:1 alleviated:1 quadrant:1 cannot:1 pipelined:1 storage:2 influence:1 conventional:1 demonstrated:1 phil:1 starting:1 m2:1 array:2 variation:1 kauffmann:2 limiting:2 target:4 commercial:4 play:1 nominal:1 us:3 agreement:1 element:1 electrical:2 mume:2 cycle:1 morphological:1 consumes:1 environment:2 battery:1 lcd:1 trained:4 segment:1 efficiency:1 chip:17 train:3 describe:1 effective:1 detected:1 corresponded:1 quite:2 larger:3 distortion:1 tested:1 fatal:1 transplantation:1 statistic:1 online:1 advantage:1 transistor:2 remainder:1 cancelling:1 loop:2 oz:1 requirement:5 cmos:1 sydney:4 implemented:1 australian:1 direction:1 safe:2 waveform:1 correct:4 modifying:1 australia:2 transient:1 implementing:1 suffices:1 pacing:3 coggins:8 adjusted:1 clarify:1 hold:1 bypassing:1 considered:2 nw:2 mo:2 label:1 successfully:1 weighted:3 interfacing:1 i3:1 varying:2 voltage:3 publication:1 derived:1 improvement:1 multinet:1 slowest:1 detect:2 helpful:1 dependent:1 typically:1 transferring:1 integrated:1 initially:1 hidden:1 vlsi:9 classification:16 denoted:1 plan:2 integration:1 special:1 summed:2 equal:1 once:1 mm2:1 cancel:1 future:1 richard:4 inherent:1 serious:1 implantable:5 m4:1 phase:2 maintain:1 attempt:3 detection:8 possibility:1 pc:8 divide:1 increased:2 classify:1 column:2 bbd:6 injury:1 mhz:1 deviation:1 hundred:2 dod:1 successful:2 too:1 ohm:1 j03:1 conduction:3 referring:1 defibrillator:2 st:6 spasm:1 off:1 vm:1 holler:1 thesis:1 recorded:1 possibly:1 nano:1 dr:2 corner:1 tam:1 leading:1 automation:1 stream:1 cardioverter:2 performed:2 try:1 lab:2 option:1 cio:1 formed:1 largely:2 percept:1 pickard:5 vp:1 identification:1 researcher:1 straight:1 classified:4 converged:4 detector:1 synapsis:3 suffers:1 email:1 energy:5 pp:4 frequency:1 gain:8 proved:1 subsection:1 electrophysiological:1 actually:1 back:1 appears:1 feed:2 ta:1 synapse:7 though:1 furthermore:1 stage:1 correlation:1 clock:4 hand:1 su:1 o:1 overlapping:1 mode:10 disabled:1 usage:1 effect:2 phillip:1 facilitate:1 true:2 building:1 facility:1 hence:3 laboratory:1 rpt:1 illustrated:1 during:4 width:1 rhythm:7 m5:1 occasion:1 demonstrate:1 performs:2 funding:1 predominantly:1 common:10 functional:1 brigade:7 winner:3 discussed:1 kluwer:1 sedal:2 buffered:1 freeze:1 cancellation:4 had:1 clll:1 base:2 showed:1 store:2 buffer:2 success:1 vt:6 meeting:2 muscle:1 seen:1 morgan:2 greater:1 additional:1 floor:2 barry:5 signal:10 stephen:5 reduces:1 technical:1 polysilicon:1 academic:1 clinical:2 lin:3 serial:1 controlled:2 impact:1 multilayer:1 patient:23 intracardiac:2 sinus:1 iteration:2 achieved:2 cell:5 whereas:1 separately:1 diagram:3 source:4 publisher:2 extra:1 strict:2 hz:1 capacitor:1 mw:1 unused:1 leong:8 split:1 m6:1 switch:1 architecture:8 converter:1 reduce:2 intensive:1 shift:1 minimise:1 icd:5 duty:2 ltd:1 penalty:1 peter:1 resistance:2 cause:2 repeatedly:1 
useful:1 ten:1 sign:1 per:1 mega:1 correctly:1 vol:1 four:2 blood:1 photomicrograph:2 retrograde:4 ram:1 fraction:1 year:1 convert:1 sum:2 run:10 package:1 letter:1 jitter:2 logged:1 place:2 separation:1 missed:1 acceptable:1 appendix:1 bit:2 layer:3 distinguish:1 played:1 convergent:1 fan:6 syna:1 quadratic:1 encountered:1 dangerous:3 occur:1 software:1 ventricular:6 relatively:2 department:2 according:1 combination:1 describes:2 cardiac:2 qrs:10 wta:1 castro:2 bucket:7 heart:7 computationally:1 previously:2 bus:1 discus:1 fail:1 needed:1 jenkins:1 eight:1 generic:1 distinguished:4 subtracted:1 robustness:1 ho:1 gate:1 top:1 remaining:1 running:1 transconductances:1 summarises:3 added:1 occurs:1 digest:1 costly:1 cycling:2 september:2 deficit:1 card:2 separate:1 thank:1 consumption:5 seven:7 whom:1 considers:1 collected:1 richardc:1 length:1 dicarlo:1 providing:1 ratio:1 difficult:2 mostly:1 potentially:1 negative:2 design:4 implementation:2 perform:1 conversion:2 neuron:16 acknowledge:1 november:1 beat:2 january:1 variability:1 interacting:1 perturbation:3 drift:1 introduced:1 pair:2 required:5 connection:1 learned:1 flower:8 pattern:1 atrial:1 optimise:1 analogue:22 power:13 settling:1 dissipates:1 electrogram:1 scheme:4 technology:1 leerink:1 ne:1 acknowledgement:1 drain:1 multiplication:2 mixed:1 suggestion:1 digital:6 switched:1 thresholding:1 placed:1 bias:6 perceptron:1 template:1 nickolls:1 wattle:1 sparse:1 feedback:2 world:1 forward:1 author:1 ml:2 incoming:1 marwan:5 matic:1 continuous:1 table:5 transfer:1 robust:1 complex:8 interpolating:1 jabri:15 domain:2 tachycardia:4 linearly:1 noise:2 suffering:1 advice:1 slow:1 nonvolatile:1 winning:1 exercise:1 minute:2 offset:2 false:2 importance:1 mirror:4 phd:1 margin:5 easier:1 fanin:1 vigorous:1 electrophysiology:2 cardiologist:2 presentation:1 telectronics:1 iceg:9 change:4 included:1 characterised:1 generalisation:7 operates:1 reducing:1 called:1 m3:1 arises:1 meant:1 trainable:1 |
NIPS
Some measurable characteristics of the dataset:
- D — number of documents
- W — modality dictionary size (number of unique tokens)
- len D — average document length in modality tokens (number of tokens)
- len D uniq — average document length in unique modality tokens (number of unique tokens)
|       | D    | @word W     | @word len D | @word len D uniq |
|-------|------|-------------|-------------|------------------|
| value | 7241 | 1.18333e+07 | 1634.21     | 644.49           |
Information about document lengths in modality tokens:
|      | len_total@word | len_uniq@word |
|------|----------------|---------------|
| mean | 1634.21        | 644.49        |
| std  | 481.923        | 162.31        |
| min  | 0              | 0             |
| 25%  | 1249           | 524           |
| 50%  | 1663           | 641           |
| 75%  | 1978           | 755           |
| max  | 6000           | 1513          |
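
The statistics above can be recomputed directly from the `vw_text` column shown in the preview, where each entry follows the `doc_id |@word token:count ...` layout. The sketch below is a minimal, unofficial example of doing so in Python; it assumes the rows are available as plain `vw_text` strings, and the function names, the placeholder Hub repo id in the commented usage, and the handling of tokens without an explicit count are illustrative assumptions rather than part of any released tooling for this dataset.

```python
from collections import Counter

def parse_vw_line(vw_line: str) -> Counter:
    """Parse one vw_text entry of the form 'doc_id |@word tok:cnt tok:cnt ...'
    into a Counter of token counts for the @word modality."""
    counts = Counter()
    _, _, body = vw_line.partition("|")  # drop the leading document id
    for block in body.split("|"):
        block = block.strip()
        if not block.startswith("@word"):
            continue  # skip any other modalities
        for pair in block[len("@word"):].split():
            token, sep, cnt = pair.rpartition(":")
            if not sep:                      # bare token, no explicit count (assumed count of 1)
                token, cnt = pair, "1"
            counts[token] += int(cnt) if cnt.isdigit() else 1
    return counts

def dataset_stats(vw_lines) -> dict:
    """Recompute D, W, len D and len D uniq for the @word modality."""
    vocab = set()
    total_tokens = 0
    total_uniq = 0
    for line in vw_lines:
        counts = parse_vw_line(line)
        vocab.update(counts)
        total_tokens += sum(counts.values())
        total_uniq += len(counts)
    d = len(vw_lines)
    return {
        "D": d,
        "W": len(vocab),
        "len D": total_tokens / d,
        "len D uniq": total_uniq / d,
    }

# Hypothetical usage (the Hub repo id is a placeholder, not taken from this card):
# from datasets import load_dataset
# ds = load_dataset("<namespace>/NIPS", split="train")
# print(dataset_stats(list(ds["vw_text"])))
```

Summing token counts per document before averaging reproduces `len D` (total token occurrences), while counting distinct tokens per document reproduces `len D uniq`, matching the distinction used in the tables above.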
Several versions of this dataset have been used in other works.