Beating a Defender in Robotic Soccer:
Memory-Based Learning of a Continuous Function
Peter Stone
Department of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Manuela Veloso
Department of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
Learning how to adjust to an opponent's position is critical to
the success of having intelligent agents collaborating towards the
achievement of specific tasks in unfriendly environments. This paper describes our work on a memory-based technique for choosing
an action based on a continuous-valued state attribute indicating
the position of an opponent. We investigate the question of how an
agent performs in nondeterministic variations of the training situations. Our experiments indicate that when the random variations
fall within some bound of the initial training, the agent performs
better with some initial training rather than from a tabula-rasa.
1 Introduction
One of the ultimate goals subjacent to the development of intelligent agents is to
have multiple agents collaborating in the achievement of tasks in the presence of
hostile opponents. Our research works towards this broad goal from a Machine
Learning perspective. We are particularly interested in investigating how an intelligent agent can choose an action in an adversarial environment. We assume that
the agent has a specific goal to achieve. We conduct this investigation in a framework where teams of agents compete in a game of robotic soccer. The real system
of model cars remotely controlled from off-board computers is under development.
Our research is currently conducted in a simulator of the physical system.
Both the simulator and the real-world system are based closely on systems designed by the Laboratory for Computational Intelligence at the University of British
Columbia [Sahota et al., 1995, Sahota, 1993]. The simulator facilitates the control
of any number of cars and a ball within a designated playing area. Care has been
taken to ensure that the simulator models real-world responses (friction, conservation of momentum, etc.) as closely as possible. Figure 1(a) shows the simulator
graphics.
Figure 1: (a) The graphic view of our simulator. (b) The initial position for all
of the experiments in this paper. The teammate (black) remains stationary, the
defender (white) moves in a small circle at different speeds, and the ball can move
either directly towards the goal or towards the teammate. The position of the ball
represents the position of the learning agent.
We focus on the question of learning to choose among actions in the presence of
an adversary. This paper describes our work on applying memory-based supervised
learning to acquire strategy knowledge that enables an agent to decide how to
achieve a goal. For other work in the same domain, please see [Stone and Veloso,
1995b]. For an extended discussion of other work on incremental and memory-based learning [Aha and Salzberg, 1994, Kanazawa, 1994, Kuh et al., 1991, Moore,
1991, Salganicoff, 1993, Schlimmer and Granger, 1986, Sutton and Whitehead, 1993,
Wettschereck and Dietterich, 1994, Winstead and Christiansen, 1994], particularly
as it relates to this paper, please see [Stone and Veloso, 1995a].
The input to our learning task includes a continuous-valued range of the position
of the adversary. This raises the question of how to discretize the space of values
into a set of learned features. Due to the cost of learning and reusing a large set of
specialized instances, we notice a clear advantage to having an appropriate degree
of generalization. For more details please see [Stone and Veloso, 1995a].
Here, we address the issue of the effect of differences between past episodes and the
current situation. We performed extensive experiments, training the system under
particular conditions and then testing it (with learning continuing incrementally) in
nondeterministic variations of the training situation. Our results show that when
the random variations fall within some bound of the initial training, the agent
performs better with some initial training rather than from a tabula-rasa. This
intuitive fact is interestingly well-supported by our empirical results.
2 Learning Method
The learning method we develop here applies to an agent trying to learn a function
with a continuous domain. We situate the method in the game of robotic soccer.
We begin each trial by placing a ball and a stationary car acting as the "teammate"
in specific places on the field. Then we place another car, the "defender," in front of
the goal. The defender moves in a small circle in front of the goal at some speed and
begins at some random point along this circle. The learning agent must take one
of two possible actions: shoot straight towards the goal, or pass to the teammate so
that the ball will rebound towards the goal. A snapshot of the experimental setup
is shown graphically in Figure 1(b).
The task is essentially to learn two functions, each with one continuous input variable, namely the defender's position. Based on this position, which can be represented unambiguously as the angle at which the defender is facing, φ, the agent tries
to learn the probability of scoring when shooting, P_s*(φ), and the probability of scoring when passing, P_p*(φ).¹ If these functions were learned completely, which would
only be possible if the defender's motion were deterministic, then both functions
would be binary partitions: P_s*, P_p* : [0.0, 360.0) → {−1, 1}.² That is, the agent
would know without doubt for any given φ whether a shot, a pass, both, or neither
would achieve its goal. However, since the agent cannot have had experience for
every possible φ, and since the defender may not move at the same speed each time,
the learned functions must be approximations: P_s, P_p : [0.0, 360.0) → [−1.0, 1.0].
In order to enable the agent to learn approximations to the functions P_s* and P_p*,
we gave it a memory in which it could store its experiences and from which it could
retrieve its current approximations P_s(φ) and P_p(φ). We explored and developed
appropriate methods of storing to and retrieving from memory and an algorithm
for deciding what action to take based on the retrieved values.
2.1 Memory Model
Storing every individual experience in memory would be inefficient both in terms
of amount of memory required and in terms of generalization time. Therefore, we
store P_s and P_p only at discrete, evenly spaced values of φ. That is, for a memory
of size M (with M dividing evenly into 360 for simplicity), we keep values of P_p(θ)
and P_s(θ) for θ ∈ {360n/M | 0 ≤ n < M}. We store memory as an array "Mem"
of size M such that Mem[n] has values for both P_p(360n/M) and P_s(360n/M).
Using a fixed memory size precludes using memory-based techniques such as k-Nearest-Neighbors (kNN) and kernel regression, which require that every experience
be stored, choosing the most relevant only at decision time. Most of our experiments
were conducted with memories of size 360 (low generalization) or of size 18 (high
generalization), i.e. M = 18 or M = 360. The memory size had a large effect on
the rate of learning [Stone and Veloso, 1995a].
2.1.1 Storing to Memory
With M discrete memory storage slots, the problem then arises as to how a specific
training example should be generalized. Training examples are represented here as
E_{φ,a,r}, consisting of an angle φ, an action a, and a result r, where φ is the initial
position of the defender, a is "s" or "p" for "shoot" or "pass," and r is 1 or
−1 for "goal" or "miss" respectively. For instance, E_{72.345,p,1} represents a pass
resulting in a goal for which the defender started at position 72.345° on its circle.

Each experience with θ − 360/2M ≤ φ < θ + 360/2M affects Mem[θ] in proportion to the distance |θ − φ|. In particular, Mem[θ] keeps running sums of the
magnitudes of scaled results, Mem[θ].total-a-results, and of scaled positive results,
Mem[θ].positive-a-results, affecting P_a(θ), where "a" stands for "s" or "p" as before. Then at any given time, P_a(θ) = −1 + 2 * positive-a-results / total-a-results.
The "−1" is for the lower bound of our probability range, and the "2*" is to scale
the result to this range. Call this our adaptive memory storage technique:

Adaptive Memory Storage of E_{φ,a,r} in Mem[θ]:
• r' = r * (1 − |φ − θ| / (360/M))
• Mem[θ].total-a-results += |r'|
• If r' > 0 Then Mem[θ].positive-a-results += r'
• P_a(θ) = −1 + 2 * positive-a-results / total-a-results

For example, E_{110,p,1} would set both total-p-results and positive-p-results for
Mem[120] (and Mem[100]) to 0.5 and consequently P_p(120) (and P_p(100)) to 1.0.
But then E_{125,p,−1} would increment total-p-results for Mem[120] by 0.75, while
leaving positive-p-results unchanged. Thus P_p(120) becomes −1 + 2 * (0.5/1.25) = −0.2.

This method of storing to memory is effective both for time-varying concepts and
for concepts involving random noise. It is able to deal with conflicting examples
within the range of the same memory slot.

¹As per convention, P* represents the target (optimal) function.
²Although we think of P_s* and P_p* as functions from angles to probabilities, we will use −1 rather than 0 as the lower bound of the range. This representation simplifies many of our illustrative calculations.
Notice that each example influences 2 different memory locations. This memory
storage technique is similar to the kNN and kernel regression function approximation
techniques, which estimate f(φ) based on f(θ), possibly scaled by the distance from
θ to φ, for the k nearest values of θ. In our linear continuum of defender position,
our memory generalizes training examples to the 2 nearest memory locations.³
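As a concrete illustration, the storage rule can be sketched in Python. The function names and dictionary layout below are our own; the paper specifies only the update arithmetic. The worked example E_{110,p,1} followed by E_{125,p,−1} with M = 18 reproduces the values given in the text.

```python
# Sketch of the adaptive memory storage technique (data layout is illustrative).

def make_memory(M):
    """M evenly spaced slots; slot n covers the angle theta = 360*n/M."""
    return [{"total": {"s": 0.0, "p": 0.0},
             "positive": {"s": 0.0, "p": 0.0}} for _ in range(M)]

def store(mem, M, phi, action, result):
    """Generalize example E(phi, action, result) to the 2 nearest slots."""
    slot_width = 360.0 / M
    lo = int(phi // slot_width)                      # slot just below phi
    for n in (lo % M, (lo + 1) % M):
        theta = 360.0 * n / M
        # Distance on the circle from phi to this slot's angle:
        dist = min(abs(phi - theta), 360.0 - abs(phi - theta))
        r_scaled = result * (1.0 - dist / slot_width)
        mem[n]["total"][action] += abs(r_scaled)     # magnitudes of scaled results
        if r_scaled > 0:
            mem[n]["positive"][action] += r_scaled   # scaled positive results

def P(mem, n, action):
    """Current estimate P_a(theta) in [-1, 1]; 0 means no evidence yet."""
    total = mem[n]["total"][action]
    if total == 0:
        return 0.0
    return -1.0 + 2.0 * mem[n]["positive"][action] / total

mem = make_memory(18)
store(mem, 18, 110.0, "p", 1)    # slots at 100 and 120 each get 0.5
store(mem, 18, 125.0, "p", -1)   # slot at 120 gains |r'| = 0.75 in its total
# P(mem, 6, "p") is now -1 + 2*(0.5/1.25) = -0.2, as in the text.
```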
2.1.2 Retrieving from Memory
Since individual training examples affect multiple memory locations, we use a simple
technique for retrieving P_a(φ) from memory when deciding whether to shoot or to
pass. We round φ to the nearest θ for which Mem[θ] is defined, and then take P_a(θ)
as the value of P_a(φ). Thus, each Mem[θ] represents P_a(φ) for θ − 360/2M ≤ φ <
θ + 360/2M. Notice that retrieval is much simpler when using this technique than
when using kNN or kernel regression: we look directly to the closest fixed memory
position, thus eliminating the indexing and weighting problems involved in finding
the k closest training examples and (possibly) scaling their results.
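This rounding retrieval amounts to one line of arithmetic; a minimal sketch (the function name is ours):

```python
# Sketch of retrieval by rounding to the nearest stored angle.

def nearest_slot(phi, M):
    """Index n of the slot whose angle 360*n/M is closest to phi (wrapping at 360)."""
    return round(phi * M / 360.0) % M

# With M = 18 (20-degree slots): phi = 112 falls in the slot at angle 120 (index 6),
# phi = 107 in the slot at 100 (index 5), and phi = 355 wraps around to the slot at 0.
```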
2.2 Choosing an Action
The action selection method is designed to make use of memory to select the action
most probable to succeed, and to fill memory when no useful memories are available.
For example, when the defender is at position φ, the agent begins by retrieving P_p(φ)
and P_s(φ) as described above. Then, it acts according to the following function:

If P_p(φ) = P_s(φ) (no basis for a decision), shoot or pass randomly.
else If P_p(φ) > 0 and P_p(φ) > P_s(φ), pass.
else If P_s(φ) > 0 and P_s(φ) > P_p(φ), shoot.
else If P_p(φ) = 0 (no previous passes), pass.
else If P_s(φ) = 0 (no previous shots), shoot.
else (P_p(φ), P_s(φ) < 0) shoot or pass randomly.
An action is only selected based on the memory values if these values indicate that
one action is likely to succeed and that it is better than the other. If, on the other
hand, neither value P_p(φ) nor P_s(φ) indicates a positive likelihood of success, then
an action is chosen randomly. The only exception to this last rule is when one of
³For particularly large values of M it is useful to generalize training examples to more
memory locations, particularly at the early stages of learning. However, for the values of
M considered in this paper, we always generalize to the 2 nearest memory locations.
the values is zero,⁴ suggesting that there have not yet been any training examples for
that action at that memory location. In this case, there is a bias towards exploring
the untried action in order to fill out memory.
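The selection rule can be transcribed directly; a minimal sketch (the function name and string action labels are ours):

```python
import random

# Sketch of the action-selection function of Section 2.2.
# p_pass and p_shoot are the retrieved P_p(phi) and P_s(phi) in [-1, 1];
# a value of 0 means "no training examples yet" for that action.

def choose_action(p_pass, p_shoot):
    if p_pass == p_shoot:                    # no basis for a decision
        return random.choice(["shoot", "pass"])
    if p_pass > 0 and p_pass > p_shoot:      # passing likely to succeed, and better
        return "pass"
    if p_shoot > 0 and p_shoot > p_pass:     # shooting likely to succeed, and better
        return "shoot"
    if p_pass == 0:                          # no previous passes: explore passing
        return "pass"
    if p_shoot == 0:                         # no previous shots: explore shooting
        return "shoot"
    return random.choice(["shoot", "pass"])  # both negative: choose randomly
```

Note the ordering: a positive, larger value always wins before the exploration cases, so exploration of an untried action only happens when the tried action looks unpromising.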
3 Experiments and Results
In this section, we present the results of our experiments. We explore our agent's
ability to learn time-varying and nondeterministic defender behavior.
While examining the results, keep in mind that even if the agent used the functions
P_s* and P_p* to decide whether to shoot or to pass, the success rate would be significantly less than 100% (it would differ for different defender speeds): there were
many defender starting positions for which neither shooting nor passing led to a
goal (see Figure 2). For example, from our experiments with the defender moving
Figure 2: For different defender starting positions (solid rectangle), the agent can
score when (a) shooting, (b) passing, (c) neither, or (d) both.
at a constant speed of 50,⁵ we found that an agent acting optimally scores 73.6%
of the time; an agent acting randomly scores only 41.3% of the time. These values
set good reference points for evaluating our learning agent's performance.
3.1 Coping with Changing Concepts
Figure 3 demonstrates the effectiveness of adaptive memory when the defender's
speed changes. In all of the experiments represented in these graphs, the agent
[Figure 3 graphs: Success Rate vs. Defender Speed, for memory size 360 (left) and memory size 18 (right), with curves for the first 1000 trials, the next 1000 trials, and the theoretical optimum, over defender speeds 10 to 100.]
Figure 3: For all trials shown in these graphs, the agent began with a memory
trained for a defender moving at constant speed 50.
started with a memory trained by attempting a single pass and a single shot with
the defender starting at each position θ for which Mem[θ] is defined and moving in
⁴Recall that a memory value of 0 is equivalent to a probability of .5, representing no
reason to believe that the action will succeed or fail.
⁵In the simulator, "50" represents 50 cm/s. Subsequently, we omit the units.
its circle at speed 50. We tested the agent's performance with the defender moving
at various (constant) speeds.
With adaptive memory, the agent is able to unlearn the training that no longer
applies and approach optimal behavior: it re-learns the new setup. During the first
1000 trials the agent suffers from having practiced in a different situation (especially
for the less generalized memory, M = 360), but then it is able to approach optimal
behavior over the next 1000 trials. Remember that optimal behavior, represented
in the graph, leads to roughly a 70% success rate, since at many starting positions,
neither passing nor shooting is successful.
From these results we conclude that our adaptive memory can effectively deal with
time-varying concepts. It can also perform well when the defender's motion is
nondeterministic, as we show next.
3.2 Coping with Noise
To model nondeterministic motion by the defender, we set the defender's speed
randomly within a range. For each attempt this speed is constant, but it varies from
attempt to attempt. Since the agent observes only the defender's initial position,
from the point of view of the agent, the defender's motion is nondeterministic.
This set of experiments was designed to test the effectiveness of adaptive memory
when the defender's speed was both nondeterministic and different from the speed
used to train the existing memory. The memory was initialized in the same way as
in Section 3.1 (for defender speed 50). We ran experiments in which the defender's
speed varied between 10 and 50. We compared an agent with trained memory
against an agent with initially empty memories as shown in Figure 4.
[Figure 4 graph: Success Rate vs. Trial # for M = 18 and defender speed 10–50, over trials 50 to 500, comparing an agent with no initial memory against one with full initial memory.]
Figure 4: A comparison of the effectiveness of starting with an empty memory
versus starting with a memory trained for a constant defender speed (50) different
from that used during testing. Success rate is measured as goal percentage thus far.
The agent with full initial memory outperformed the agent with initially empty
memory in the short run. The agent learning from scratch did better over time
since it did not have any training examples from when the defender was moving
at a fixed speed of 50; but at first, the training examples for speed 50 were better
than no training examples. Thus, when an agent must perform well immediately
upon entering a novel setting, adaptive memory allows training in related situations
to be effective without permanently reducing learning capacity.
4 Conclusion
Our experiments demonstrated that online, incremental, supervised learning can be
effective at learning functions with continuous domains. We found that adaptive
memory made it possible to learn both time-varying and nondeterministic concepts.
We empirically demonstrated that short-term performance was better when acting
with a memory trained on a concept related to but different from the testing concept than when starting from scratch. This paper reports experimental results on
our work towards multiple learning agents, both cooperative and adversarial, in a
continuous environment.
Future work on our research agenda includes simultaneous learning of the defender
and the controlling agent in an adversarial context. We will also explore learning
methods with several agents where teams are guided by planning strategies. In
this way we will simultaneously study cooperative and adversarial situations using
reactive and deliberative reasoning.
Acknowledgements
We thank Justin Boyan and the anonymous reviewers for their helpful suggestions. This research is
sponsored by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF,
and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. The views
and conclusions contained in this document are those of the authors and should not be interpreted
as necessarily representing the official policies or endorsements, either expressed or implied, of Wright
Laboratory or the U. S. Government.
References
[Aha and Salzberg, 1994] David W. Aha and Steven L. Salzberg. Learning to catch: Applying nearest
neighbor algorithms to dynamic control tasks. In P. Cheeseman and R. W. Oldford, editors, Selecting
Models from Data: Artificial Intelligence and Statistics IV. Springer-Verlag, New York, NY, 1994.
[Kanazawa, 1994] Keiji Kanazawa. Sensible decisions: Toward a theory of decision-theoretic information
invariants. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 973–978, 1994.
[Kuh et al., 1991] A. Kuh, T. Petsche, and R.L. Rivest. Learning time-varying concepts. In Advances
in Neural Information Processing Systems 3, pages 183–189. Morgan Kaufman, December 1991.
[Moore, 1991] A.W. Moore. Fast, robust adaptive control by learning only forward models. In Advances
in Neural Information Processing Systems 3. Morgan Kaufman, December 1991.
[Sahota et al., 1995] Michael K. Sahota, Alan K. Mackworth, Rod A. Barman, and Stewart J. Kingdon.
Real-time control of soccer-playing robots using off-board vision: the dynamite testbed. In IEEE
International Conference on Systems, Man, and Cybernetics, pages 3690-3663, 1995.
[Sahota, 1993] Michael K. Sahota. Real-time intelligent behaviour in dynamic environments: Soccer-playing robots. Master's thesis, University of British Columbia, August 1993.
[Salganicoff, 1993] Marcos Salganicoff. Density-adaptive learning and forgetting. In Proceedings of the
Tenth International Conference on Machine Learning, pages 276–283, 1993.
[Schlimmer and Granger, 1986] J.C. Schlimmer and R.H. Granger. Beyond incremental processing:
Tracking concept drift. In Proceedings of the Fifth National Conference on Artificial Intelligence,
pages 502–507. Morgan Kaufman, Philadelphia, PA, 1986.
[Stone and Veloso, 1995a] Peter Stone and Manuela Veloso. Beating a defender in robotic soccer:
Memory-based learning of a continuous function. Technical Report CMU-CS-95-222, Computer Science Department, Carnegie Mellon University, 1995.
[Stone and Veloso, 1995b] Peter Stone and Manuela Veloso. Broad learning from narrow training: A case
study in robotic soccer. Technical Report CMU-CS-95-207, Computer Science Department, Carnegie
Mellon University, 1995.
[Sutton and Whitehead, 1993] Richard S. Sutton and Steven D. Whitehead. Online learning with random representations. In Proceedings of the Tenth International Conference on Machine Learning,
pages 314–321, 1993.
[Wettschereck and Dietterich, 1994] Dietrich Wettschereck and Thomas Dietterich. Locally adaptive
nearest neighbor algorithms. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in
Neural Information Processing Systems 6, pages 184–191, San Mateo, CA, 1994. Morgan Kaufmann.
[Winstead and Christiansen, 1994] Nathaniel S. Winstead and Alan D. Christiansen. Pinball: Planning
and learning in a dynamic real-time environment. In AAAI-94 Fall Symposium on Control of the
Physical World by Intelligent Agents, pages 153–157, New Orleans, LA, November 1994.
APPLICATIONS OF ERROR BACK-PROPAGATION
TO PHONETIC CLASSIFICATION
Hong C. Leung & Victor W. Zue
Spoken Language Systems Group
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139
ABSTRACT
This paper is concerned with the use of error back-propagation
in phonetic classification. Our objective is to investigate the basic characteristics of back-propagation, and study how the framework of multi-layer perceptrons can be exploited in phonetic recognition. We explore issues such as integration of heterogeneous
sources of information, conditions that can affect performance of
phonetic classification, internal representations, comparisons with
traditional pattern classification techniques, comparisons of different error metrics, and initialization of the network. Our investigation is performed within a set of experiments that attempts to recognize the 16 vowels in American English independent of speaker.
Our results are comparable to human performance.
Early approaches in phonetic recognition fall into two major extremes: heuristic
and algorithmic. Both approaches have their own merits and shortcomings. The
heuristic approach has the intuitive appeal that it focuses on the linguistic information in the speech signal and exploits acoustic-phonetic knowledge. However, the
weak control strategy used for utilizing our knowledge has been grossly inadequate.
At the other extreme, the algorithmic approach relies primarily on the powerful control strategy offered by well-formulated pattern recognition techniques. However,
relatively little is known about how our speech knowledge accumulated over the
past few decades can be incorporated into the well-formulated algorithms. We feel
that artificial neural networks (ANN) have some characteristics that can potentially
enable them to bridge the gap between these two extremes. On the one hand, our
speech knowledge can provide guidance to the structure and design of the network.
On the other hand, the self-organizing mechanism of ANN can provide a control
strategy for utilizing our knowledge.
In this paper, we extend our earlier work on the use of artificial neural networks
for phonetic recognition [2]. Specifically, we focus our investigation on the following
sets of issues. First, we describe the use of the network to integrate heterogeneous
sources of information. We will see how classification performance improves as more
information is available. Second, we discuss several important factors that can substantially affect the performance of phonetic classification. Third, we examine the
internal representation of the network. Fourth, we compare the network with two
traditional classification techniques: K-nearest neighbor and Gaussian classification. Finally, we discuss our specific implementations of back-propagation that
yield improved performance and more efficient learning time.
EXPERIMENTS
Our investigation is performed within the context of a set of experiments that
attempts to recognize the 16 vowels in American English independent of speaker.
The vowels are excised from continuous speech and they can be preceded and followed by any phonemes, thus providing a rich environment to study contextual
influence. We assume that the locations of the vowels have been detected. Given a
time region, the network determines which one of the 16 vowels was spoken.
CORPUS
As Table 1 shows, our training set consists of 20,000 vowel tokens, excised from
2,500 continuous sentences spoken by 500 male and female speakers. The test set
consists of about 2,000 vowel tokens, excised from 250 sentences spoken by 50 different speakers. All the data are extracted from the TIMIT database, which has a
wide range of American dialectical variations [1]. The speech signal is represented
by spectral vectors obtained from an auditory model [4]. Speaker and energy normalization are also performed [5].
          Tokens   Sentences   Speakers (M/F)
Training  20,000   2,500       500 (350/150)
Testing   2,000    250         50 (33/17)
Table 1: Corpus extracted from the TIMIT database.
NETWORK STRUCTURE
The structure of the network we have examined most extensively has 1 hidden
layer as shown in Figure 1. It has 16 output units, with one unit for each of the 16
vowels. In order to capture dynamic information, the vowel region is divided into
three equal subregions. An average spectrum is then computed in each subregion.
These 3 average spectra are then applied to the first 3 sets of input units. Additional
sources of information, such as duration and local phonetic contexts, can also be
made available to the network. While spectral and durational inputs are continuous
and numerical, the contextual inputs are discrete and symbolic.
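The subregion averaging described above can be sketched as follows; the 6-frame, 4-coefficient region is a made-up example, while the actual tokens use the 100-dimensional spectral vectors:

```python
import numpy as np

def subregion_averages(vowel_frames, n_sub=3):
    """Divide a vowel region (frames x spectral coefficients) into n_sub
    equal subregions and average the spectrum within each, producing the
    network's first n_sub input vectors."""
    frames = np.asarray(vowel_frames, dtype=float)
    chunks = np.array_split(frames, n_sub, axis=0)
    return np.stack([c.mean(axis=0) for c in chunks])

# Made-up region: 6 frames of a 4-coefficient "spectrum"
region = np.arange(24, dtype=float).reshape(6, 4)
inputs = subregion_averages(region)   # 3 averaged spectra
```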
Leung and Zue
[Figure omitted: the network takes the output of the auditory model (the synchrony spectrogram) as its input.]
Figure 1: Basic structure of the network.
HETEROGENEOUS INFORMATION INTEGRATION
In our earlier study, we have examined the integration of the Synchrony Envelopes and the phonetic contexts [2]. The Synchrony Envelopes, an output of
the auditory model, have been shown to enhance the formant information. In this
study, we add additional sources of information. Figure 2 shows the performance
as heterogeneous sources of information are made available to the network. The
performance is about 60% when only the Synchrony Envelopes are available. The
performance improves to 64% when the Mean Rate Response, a different output of
the auditory model which has been shown to enhance the temporal aspects of the
speech signal, is also available. We can also see that the performance improves consistently to 77% as durational and contextual inputs are provided to the network.
This experiment suggests that the network is able to make use of heterogeneous
sources of information, which can be numerical and/or symbolic.
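One plausible way to present such mixed inputs to a network is to concatenate the continuous values with one-hot codes for the symbolic contexts. The paper does not specify its encoding, so the scheme and the small phoneme inventory below are purely illustrative:

```python
import numpy as np

PHONEMES = ["b", "t", "s", "m"]   # hypothetical context inventory

def build_input(spectra, duration, left_ctx, right_ctx):
    """Concatenate continuous inputs (averaged spectra, duration) with
    one-hot encodings of the discrete left/right phonetic contexts.
    The encoding is illustrative; the paper does not specify one."""
    def one_hot(p):
        v = np.zeros(len(PHONEMES))
        v[PHONEMES.index(p)] = 1.0
        return v
    return np.concatenate([np.ravel(spectra), [duration],
                           one_hot(left_ctx), one_hot(right_ctx)])

# 3 averaged spectra of 4 coefficients, a duration, and a b_t context
vec = build_input(np.ones((3, 4)), 0.12, "b", "t")
```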
Error Back-Propagation to Phonetic Classification
One may ask how well human listeners can recognize the vowels. Experiments
have been performed to study how well human listeners agree with each other when
they can only listen to sequences of 3 phonemes, i.e. the phoneme before the vowel,
the vowel itself, and the phoneme after the vowel [3]. Results indicate that the
average agreement among the listeners on the identities of the vowels is between
65% and 70%.
[Figure omitted: percent correct rises as sources of information are added, from Synchrony Envelopes alone, to + Mean Rate Response, + Duration, and + Phonetic Context.]
Figure 2: Integration of heterogeneous sources of information.
PERFORMANCE RESULTS
We have seen that one of the important factors for the network performance
is the amount of information available to the network. To gain additional insights
about how the network performs under different conditions, several experiments
were conducted using different databases. In these and the subsequent experiments
we describe in this paper, only the Synchrony Envelopes are available to the network.
Table 2 shows the performance results for several recognition tasks. In each
of these tasks, the network is trained and tested with independent sets of speech
data. The first task recognizes vowels spoken by one speaker and excised from the
/b/-vowel-/t/ environment, spoken in isolation. This recognition task is relatively
straightforward, resulting in perfect performance. In the second experiment, vowel
tokens are extracted from the same phonetic context, but spoken by 17 male and
female speakers. Due to inter-speaker variability, the accuracy degrades to 86%.
The third task recognizes vowels spoken by one speaker and excised from an unrestricted context, spoken continuously. We can see that the accuracy decreases
further to 70%. Finally, data from the TIMIT database are used, spoken by multiple speakers. The accuracy drops to 60%. These results indicate that a substantial
difference in performance can be expected under different conditions, depending on
whether the task is speaker-independent, what is the restriction on the phonetic
Speakers (M/F)    Context    Training Tokens    Percent Correct    Remark
1 (1/0)           b - t      64                 100                isolated
17 (8/9)          b - t      256                86                 isolated
1 (1/0)           * - *      3,000              70                 continuous
500 (350/150)     * - *      20,000             60                 continuous

Table 2: Performance for different tasks, using only the synchrony spectral information. "*" stands for any phonetic contexts.
contexts, whether the speech material is spoken continuously, and how much data
are used to train the network.
INTERNAL REPRESENTATION
To understand how the network makes use of the input information, we examined the connection weights of the network. A vector is formed by extracting the
connections from all the hidden units to one output unit as shown in Figure 3a. The
same process is repeated for all output units to obtain a total of 16 vectors. The
correlations among these vectors are then examined by measuring the inner products or the angles between them. Figure 3b shows the distribution of the angles
after the network is trained, as a function of the number of hidden units. The circles
represent the mean of the distribution and the vertical bars stand for one standard
deviation away from the mean. As the number of hidden units increases, the distribution becomes more and more concentrated and the vectors become increasingly
orthogonal to each other.
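The angle measurement used here can be sketched as follows; note that with many hidden units, even randomly drawn weight vectors are close to orthogonal, which is the effect discussed next:

```python
import numpy as np

def pairwise_angles(w_out):
    """Angles (in degrees) between the weight vectors feeding each output
    unit.  w_out has shape (n_hidden, n_outputs); each column is the
    vector of connections from all hidden units to one output unit."""
    v = np.asarray(w_out, dtype=float)
    v = v / np.linalg.norm(v, axis=0, keepdims=True)  # unit-length columns
    cosines = np.clip(v.T @ v, -1.0, 1.0)
    iu = np.triu_indices(v.shape[1], k=1)             # distinct pairs only
    return np.degrees(np.arccos(cosines[iu]))

# 128 hidden units, 16 output units, random (untrained) weights
rng = np.random.default_rng(0)
angles = pairwise_angles(rng.standard_normal((128, 16)))
```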
The correlations of the connection weights before training were also examined,
as shown in Figure 3c. Comparing parts (b) and (c) of Figure 3, we can see that
the distributions before and after training overlap more and more as the number of
hidden units increases. With 128 hidden units, the two distributions are actually
quite similar. This leads us to suspect that perhaps the connection weights between
the hidden and the output layer need not be trained if we have a sufficient number
of hidden units.
Figure 4a shows the performance of recognizing the 16 vowels using three different techniques: (i) train all the connections in the network, (ii) fix the connections
between the hidden and output layers after random initialization and train only
the connections between the input and hidden layers, and (iii) fix the connections
between the input and hidden layers and train only the connections between the
hidden and output layers. We can see that with enough hidden units, training only
the connections between the input and the hidden layers achieves almost the same
performance as training all the connections in the network. We can also see that
for the same number of hidden units, training only the connections between the
input and the hidden layer can achieve higher performance than training only the
connections between the hidden and the output layer.
Figure 4b compares the three training techniques for 8 vowels, resulting in 8
output units only. We can see similar characteristics in both parts (a) and (b) of
Figure 4.
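A minimal sketch of the three training schemes, on a toy task rather than the vowel data, might look like this; the network size, learning rate, and task are illustrative assumptions:

```python
import numpy as np

def train(x, y, n_hidden, freeze="none", epochs=500, lr=0.5, seed=0):
    """One-hidden-layer sigmoid network trained by back-propagation on
    mean square error, illustrating the three schemes compared in the
    text:
      freeze="none"  train all connections,
      freeze="out"   fix hidden->output weights after random init,
      freeze="in"    fix input->hidden weights after random init."""
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((x.shape[1], n_hidden))
    w2 = rng.standard_normal((n_hidden, y.shape[1]))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(x @ w1)                      # hidden activations
        out = sig(h @ w2)                    # output activations
        d_out = (out - y) * out * (1 - out)  # error at output pre-activations
        d_h = (d_out @ w2.T) * h * (1 - h)   # back-propagated to hidden
        if freeze != "out":
            w2 -= lr * (h.T @ d_out)
        if freeze != "in":
            w1 -= lr * (x.T @ d_h)
    return sig(sig(x @ w1) @ w2)

# Toy XOR task: 2 inputs, 1 output
x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
pred = train(x, y, n_hidden=8)
```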
[Figure omitted: (a) schematic of the input, hidden, and output layers; (b) and (c) plot the angle in degrees against the number of hidden units.]
Figure 3: (a) Correlations of the vectors from the hidden to output layers are examined. (b) Distribution of the angles between these vectors after training. (c) Distribution of the angles between these vectors before training.
COMPARISONS WITH TRADITIONAL TECHNIQUES
One of the appealing characteristics of back-propagation is that it does not assume any probability distributions or distance metrics. To gain further insights, we
compare with two traditional pattern classification techniques: K-nearest neighbor
(KNN) and multi-dimensional Gaussian classifiers.
[Figure omitted: percent correct against the number of hidden units for the three training schemes.]
Figure 4: Performance of recognizing (a) 16 vowels, (b) 8 vowels when (i) all the connections in the network are trained, (ii) only the connections between the input and hidden layers are trained, and (iii) only the connections between the hidden and output layers are trained.
Figure 5a compares the performance results of the network with those of KNN,
for different amounts of training tokens. Again, only the Synchrony Envelopes
are made available to the network, resulting in input vectors of 100 dimensions.
Each cluster of crosses corresponds to performance results of ten networks, each
one randomly initialized differently. Due to different initialization, a fluctuation of
2% to 3% is observed even for the same training size. For comparison, we perform
KNN using the Euclidean distance metric. For each training size, we run KNN 6
times, each one with a different K, which is chosen to be proportional to the square
root of the number of training tokens, N. For simplicity, Figure 5a shows results for
only 3 different values of K: (i) K = √N, (ii) K = 10√N, and (iii) K = 1. In this
experiment, we have found that the performance is the best when K = √N and is
the worst when K = 1. We have also found that up to 20,000 training tokens, the
network consistently compares favorably to KNN. It is possible that the network is
able to find its own distance metric to achieve better performance.
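A straightforward KNN classifier with the Euclidean metric, defaulting to K near the square root of the number of training tokens as in this experiment, can be sketched as:

```python
import numpy as np

def knn_classify(train_x, train_y, test_x, k=None):
    """K-nearest-neighbor classification with the Euclidean distance
    metric.  As in the experiment described above, k defaults to the
    square root of the number of training tokens."""
    train_x = np.asarray(train_x, dtype=float)
    train_y = np.asarray(train_y)
    if k is None:
        k = max(1, int(round(np.sqrt(len(train_x)))))
    preds = []
    for p in np.asarray(test_x, dtype=float):
        dist = np.linalg.norm(train_x - p, axis=1)
        nearest = train_y[np.argsort(dist)[:k]]     # labels of k nearest
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])     # majority vote
    return np.array(preds)

# Two well-separated synthetic classes (not the vowel data)
xs = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
ys = np.array([0, 0, 0, 1, 1, 1])
out = knn_classify(xs, ys, [[0.2, 0.2], [5.5, 5.5]])
```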
Since the true underlying probability distribution is unknown, we assume a multidimensional Gaussian distribution in the second experiment. (i) We use the full
covariance matrix, which has 100x100 elements. To avoid problems with singularity,
we obtain results only for large numbers of training tokens. (ii) We use the diagonal
covariance matrix which has non-zero elements only along the diagonal. We can
see from Figure 5b that the network compares favorably to the Gaussian classifiers.
Our results also suggest that the Gaussian assumption is invalid.
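The two Gaussian classifiers (full and diagonal covariance) can be sketched as follows; the demo data are synthetic 2-D points rather than the 100-dimensional spectral vectors, and equal class priors are an assumption:

```python
import numpy as np

class GaussianClassifier:
    """Per-class multivariate Gaussian classifier with either a full or a
    diagonal covariance matrix (a sketch of the two variants compared in
    the text; equal class priors are assumed)."""
    def __init__(self, diagonal=False):
        self.diagonal = diagonal

    def fit(self, x, y):
        x, y = np.asarray(x, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            xc = x[y == c]
            cov = np.cov(xc, rowvar=False)
            if self.diagonal:
                cov = np.diag(np.diag(cov))   # keep only the diagonal
            self.params_[c] = (xc.mean(axis=0), np.linalg.inv(cov),
                               np.log(np.linalg.det(cov)))
        return self

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        scores = []
        for c in self.classes_:
            mu, prec, logdet = self.params_[c]
            d = x - mu   # log-likelihood per sample, up to a constant
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, prec, d)
                                  + logdet))
        return self.classes_[np.argmax(scores, axis=0)]

# Synthetic 2-D demo data (the real experiment uses 100 dimensions)
rng = np.random.default_rng(1)
x = np.vstack([rng.standard_normal((30, 2)),
               rng.standard_normal((30, 2)) + 5.0])
y = np.array([0] * 30 + [1] * 30)
full = GaussianClassifier().fit(x, y)
diag = GaussianClassifier(diagonal=True).fit(x, y)
```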
[Figure omitted: percent correct against the number of training tokens.]
Figure 5: (a) Comparison with KNN for different values of K (see text). (b) Comparison with Gaussian classification when using the (i) full covariance matrix, and (ii) diagonal covariance matrix. Each cluster of 10 crosses corresponds to the results of 10 different networks, each one randomly initialized.
ERROR METRIC AND INITIALIZATION
In order to take into account the classification performance of the network more
explicitly, we have introduced a weighted mean square error metric [2]. By modulating the mean square error with weighting factors that depend on the classification performance, we have shown that the rank order statistics can be improved.
Like simulated annealing, gradient descent takes relatively big steps when the performance is poor, and takes smaller and smaller steps as the performance of the
network improves.
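A minimal sketch of such a weighted error follows, with a hypothetical weighting rule; the actual weighting factors of [2] are not reproduced here:

```python
import numpy as np

def weighted_mse(output, target, weight):
    """Weighted mean square error sketch: the usual MSE is modulated by a
    scalar weight that depends on current classification performance, so
    gradient steps shrink as performance improves."""
    err = np.asarray(output, dtype=float) - np.asarray(target, dtype=float)
    return weight * float(np.mean(err ** 2))

def performance_weight(error_rate, floor=0.1):
    """Hypothetical weighting: proportional to the error rate, with a
    floor so learning never stops completely."""
    return max(floor, error_rate)

# Same raw error, larger effective loss while performance is still poor
loss_bad = weighted_mse([0.9, 0.1], [0.0, 1.0], performance_weight(0.5))
loss_good = weighted_mse([0.9, 0.1], [0.0, 1.0], performance_weight(0.05))
```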
Results also indicate that it is more likely for a unit output to be initially in the
saturation regions of the sigmoid function if the network is randomly initialized.
This is not desirable since learning is slow when a unit output is in a saturation
region. Let the sigmoid function go from -1 to 1. If the connection weights
between the input and the hidden layers are initialized with zero weights, then all
the hidden unit outputs in the network will initially be zero, which in turn results in
zero output values for all the output units. In other words, all the units will initially
operate at the center of the transition region of the sigmoid function, where learning
is the fastest. We call this method center initialization (CI).
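Center initialization can be demonstrated directly: with a sigmoid ranging from -1 to 1 (tanh here) and zero input-to-hidden weights, every unit output starts exactly at the center of the transition region, regardless of the output-layer weights. The layer sizes below are arbitrary:

```python
import numpy as np

def forward(x, w1, w2):
    """Forward pass with a sigmoid ranging from -1 to 1 (tanh)."""
    return np.tanh(np.tanh(x @ w1) @ w2)

# Center initialization (CI): zero input->hidden weights drive every
# hidden unit output to 0, and hence every output unit to 0 -- the
# center of the transition region, where the slope is largest.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 10))   # 5 arbitrary input patterns
w1 = np.zeros((10, 8))             # CI: zero input->hidden weights
w2 = rng.standard_normal((8, 16))  # output weights may stay random
out = forward(x, w1, w2)
```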
Parts (a) and (b) of Figure 6 compare the learning speed and performance,
respectively, of the 3 different techniques: (i) mean square error (MSE), (ii) weighted
mean square error (WMSE), and (iii) center initialization (CI) with WMSE. We can
see that both WMSE and CI seem to be effective in improving the learning time
and the performance of the network.
[Figure omitted: (a) percent correct against the number of training iterations; (b) percent correct against the number of training tokens.]
Figure 6: Comparisons of the (a) learning characteristics and (b) performance results, for the 3 different techniques: (i) MSE, (ii) WMSE, and (iii) CI with WMSE. Each point corresponds to the average of 10 different networks, each one initialized randomly.
SUMMARY
In summary, we have described a set of experiments that were designed to help
us get a better understanding of the use of back-propagation in phonetic classification. Our results are encouraging and we are hopeful that artificial neural networks
may provide an effective framework for utilizing our acoustic-phonetic knowledge
in speech recognition.
References
[1] Fisher, W.E., Doddington, G.R., and Goudie-Marshall, K.M., "The DARPA Speech Recognition Research Database: Specifications and Status," Proceedings of the DARPA Speech Recognition Workshop, Report No. SAIC-86/1546, February 1986.
[2] Leung, H.C., "Some phonetic recognition experiments using artificial neural nets," Proc. ICASSP-88, 1988.
[3] Phillips, M.S., "Speaker independent classification of vowels and diphthongs in continuous speech," Proc. of the 11th International Congress of Phonetic Sciences, Estonia, USSR, 1987.
[4] Seneff, S., "A computational model for the peripheral auditory system: application to speech recognition research," Proc. ICASSP, Tokyo, 1986.
[5] Seneff, S., "Vowel recognition based on 'line-formants' derived from an auditory-based spectral representation," Proc. of the 11th International Congress of Phonetic Sciences, Estonia, USSR, 1987.
?
? { ? ? ~ ? ? } ~ ? } ? ? z ? ? ? ? ?z } e ? ]"? } ? |
#
?
} ? G ?? ? ? ? { ? ? Y??B? ? ? ? ?? ? ? { ? ? ? ?? ? ? ? ? ? ? ?? V Y ??|~ ? ?
{? ? } ? ??" ? ? ? zh? ? ? ? ? ? ? ? ? ? ? ?? {
` ? z ? z } ? ? ? ~ ? z?{ } ? |} ? ( ? ?5??|?
z?|? ? ? } ? ? X? ? ? ? ~ | ?|? ?|? " ? '? ]"~ ? @
? ?
? ? ?
? ? ~ } ? ? }~ ? ~ o e ?
?|? , { ?? ??
?
y
!"$#&%('
*)+$
,(-*.0/124365 78.&9;:<3=9>5 ?@: 5 A =9CBEDFHG . 9;.6I6JKL5 M@N6. /O J 9PQ.R*S ?8/: TL9;GU3=;9;V 5 W : T =9;/(XY=I : Z . 3 P;3&[L.\E] I .&.
`;f&9hgi jk mo 5 p@:HI .rq 1s .&/ :E=U: ` J :<O J : I 5 W Rutwv+xzy|{ 5 }8/
>P;9J O 5 ^ 36/(5 _ 9 : ` . KL5 a@:b.&c J : 1*d .
e =9 .<? c .&/ ln
. /: . I6 .&??? o|??????H? ??5 ??: `
.&9;: . > S ? 9U? ? J I 36?;/<?
=~ /5 A :E5 M@ . V;.6?9;5 ?@: .?$??` 5 ? 3`?5 ?8/(: WQ`;.
k ? S ? 94:`*.65 ?8I???=V;.&? ? ]?= d : - .0?(? O = >. [?E? ? : =?? .Q36P;36K?e ?H? d .&.?0? = I.6= frI o ? Z .&9
J[L[??? ??
?U?w? :` . / 5 ? G? = S ? > J ? ]Y?;9 36: 5 ? = 9;/ : . 9 >4: = :Z . / 5 ?8G 9 1 O?? ?;9*? : 5 ? =?9??? ] 5 W@94?` 5 ?8/ ?r? . g? j;?+?
]?=I / = ? f??*?*.rV ~ =/b5 W8: 5 M . ?? :` . ?6=9 > 5 ? : 5 ? =?U?EDF I f?>1*36.&/ : =Q: ? J :?:`;. ??.6S ? GZ : O J: c 5 ? R4? ? ?*.
~ =/5 ? ? ? ? f >;. ? 9;5 ?@: . ? ??Z*? ? 3 ` 5 ?8?(: Z*.H=9*. 5 ? 9U?E?H= [ .6/ ?|?6?(????? o m ??? F ? . R 36. ~ :H? ` J :(5 ? 94: ` . [ J? : .&I
3 ? . t 9;.&. V ? . ~ =/ 5 W8: 5 a@ .>;. ? 9;5 ? : . = 9*? P =9 : -*.H/ . :0?? m ?? o??????
?8?
Z .&9U?
I . /1 2 365 ?8.&9 : ? ? O J ? ? ? : `*. O J : I 5 a R 5 ? 9U?E? ? ??5 ? ?L? ? .0V?= O 5 ? 9 J : .?> ? P45 ?@: / ~ =/ 5 7@:E5 .
eV;5 ? J G=9 J ?$? ?*.&J 9: I 5 ?@. /?J 9 >4? .rP?
3 = O . ~ =/ 5 ? : 5 ?? .<> .? 9 S ? : . ? 9 ] J 36: o
7
?
??;
? ??? ?
? ????
?
? ?n? ?
?
?H?$? ?$??Y? ??????? ???*? ? ? ?<?? ?<?0? ?
7
j
j
5
? $? '
? ? ! ? ? ?#? " F ? %
? +-, ? t , /.0
j * ? ? 1 3? 2? 4 ? ? !??6?
6 ? 9g 8 : ? ?? ? ? ; ? ?*? =? < ,
? &)( 7 " * ?Y??
? M ? ? ? > 2?N (=O#P
? I ? ? $ J K , ! ? ?LC
> ? ? ? ? ? F ??R? ?Q @ ?6? AC? + B? ? ?CD D ? ? g ? E 3 ?F!
? G ? ? ? F=H ? g ? E ? ? ; F
S >
. : t WY
V X[Z]\X ? .UJ?u= I: `;= G =_9 ^?K(>;. 3 = O ~ =/5??: 5 ?a` ? = ] t ?o : Z^: 5 ? / o 5 _@/ J
> 5 ? J G =9 ?UJ ?T O J :EI S ? R ? =I ??.r> ?cb : Z . . S } G . 9 J ? 1;f&/ = ] t J? > X 5 ? / /E= O .(= c :`;=cd ` 9 J [ O \ J : I S ? R
??5 W@: Z 5 ? : /?: I J 9 / ~ =/. X Z ?? Xfe {
g ??Z*.3= ? >5 h@: S ? ` ? ? D ? 5 i8/?.
j1 S ? J ??f
k : :E=4: ` J : : `*.0> S l ^ d =;9 J ?
O J : I5 m R
'
\ v ? G ?on? ? n y|{#pq{ 5 / ? ` ` / S r@? ?h .H> . ? xzw{? S s : cf ?
??` . [ J? .6I s = 9 > ? ? : S s = ? 3 Jo
9 t . ] 1 u w? v . > ?P . x ? : Z .&I 3 ?$>5 A : 5 y8= 9 ? = ? x ? 5 ? ? z ? Z;f 3 ` k 36??1 / 5 ?a` ?
? }I = ~ =/ 5 ?@: 5 s =9
? ?
? `*. | ] ` ? ? =??/?] c =;~
s
?
??Z ? ? / 3= RI `?L? J I b 5 s O ~ ?L5 _8f ? : Z J : S s ] : Z .(? . 5 ? d -;: )
? ^ : I 5 ? R4t 5 }8/?]L= d ? f >0J 3 s `c>0? z 9;GQ? ` g :R? .[??. ?*?
d 1;? . ? 3 = 9;/: I 136: f >Q5 ?89Q
? ? ` ? ? .&? > om ?? G?? o : Z . k : Z f ? = V;.&??T / s P*s??=?E] I . .c? , `;5 ?8/ 5 /?? ? ? . s6Jc? / f
t 5 ?8/ J9 =1 : .?I Q ~ cE= >1 3&: t k?
? ?f? ????? = ] J 3 = ?L?L.r36: S ? =?k4= ] / `;O .)? ? ?4. ? `d P .F? : ` I /
???
? ??? ? ? ? ? m =? ? ? <o J 9 > 5 W@:?5 s / ? ` / 5 W :E5 ?@ .<Z >#. ??*5 :. ?
?
?????C?'?Y???%?U?Y???C???)?0? ??????C???f?0???
`;fr9 J KL?g ? j lk?? o : `;.<3 ` 9>5 ?@: 5 _ =9 ? DF 5 789??I = ~ =/ x ?8:R? ? ` k t . s `??. /
e
? g
p e { 5 / ? ` / 5 i8: 5? .<>f#?9*5 ? : . y
: `;f0? J : d 5 ? R ? v?? ?
g
? - S ? /?5 s /H: Z*.?=9*f G 5 ? f ? 5 W k ? e J?k G ??? ? ;? ? o? ? ? G ? : Z_^ : .k /1 c . ? s=`?k ? x ? :R? ? ? P?=]: Z ? ? ,
O = >f ? ? ? ? S ?@: - ? Z;f3? ? O
` >. ? ? m F `?? ? ` s ^ ? ? P? ? : ` : ?? ?? >P;9J O?xh s6/ J d ` 10k > ? R ? > ? =5 ? 9;:E/
] =I/ P;? ?Qfr:bI 5 } s ?*. :E??;` cR? / } ?? . s ` ? / 5 ? /: .k s P O frJ? /H: Z J : J ? P4? R.r> ~ ` 5 ? 9:Z ? . R*J 3&: ?LP
: `*. / J ? . ? P O ~ : ` ? ? ? s/ : J? 5 } ? S ? ? P 5 }8? ? = : Z ? ? J k > ? m ? ? ] : ` .&/ . :E? = ? ` >;.&? ?J I .?s ` 9 ? S _ / :. ? :
S ? ?C? Z;5 ? /Hc .6G J I6> o J ? R*. >?*? = 5 s 9;:?5 ?8/ J ? J : : I J s6:E=I ? / J V*>;?L. ? =5 ' 9;:?=IHI. ~ . ?L??=I o I . / ~ .rs6: 5 ha? .&? P ?
= ]??
? F?5 i ]|J 9 V?=;9 [?P?
? .&I /
S ? ] 5 ?8: 5 ? / J 9 J : : c J s6:E=I ? /^ >;> ? . ? =5 ?@9: = c I . ~ . ?w? =I#? ` ? ? m { ' ??Z;5 }8/ J *k /R?
? ` . 5 _ / /=;? . d J S ? ? .?> 5 ? 9 ? ?4J c 361;/ ? ? e . / : .&I6 . ?L: o m ? ? ? F?=9 ??` P4J /: J ? ?L. ? R . V ? =5 ' ? : =c?
? ?m
5 ? / J [?/ =?/ : J?? ?L.H5 a 9 ?
G F ? 5 h ]|J / ? .r365 ? 3< . c / 5 i =9?= ]: `*.<3 = k > x W@? S l ` k ?R?c? 5 ?8/ O . : ;` c / P ?4?Q.r: d 5 ???s
9 .&: ? = I=? / ? :`;. 3=?k / 5 ? /:R? 9 3P 3 =9? 5 _ :E5 i ` 9 ` ? : ` . v =*3 J ?? P;? ? : ` : 5 ? sH>P;9 J O m]5 ??s6? / ? .r:E? .&. ? :?;.
? ?zJ 9> ?(? O =*? .&?L/(: ? d 9;/H=1*:H:= ? . J s =;9 / 5 8? /: . 9 36P 3 ` 9 > 5 8? : 5 ? ` 9 ? .&: ? .? ? : Z .r? = 9 : Z .
G [ = ?*J ?3 = 9 . I G.;k :> P 9 J ?Q5 _?s / ? ??.6?L? :?5 ? /(3 . I : J 5 _ ;k ??P x? ?: . c . ? : ? ? k d :E=?/.&. 5 s ] : ` 5 ? / : P ? . =X
I .&? J : 5 ? =9 / Z;5 ? ~ ? . :E? .&. 9 : Z . ??= 3 J ?J9*V4?]
G? ? ` ? J?s = 9;/ 5 8h / :b. 9*365 8? . /036J? ? f . ? : . ? ;? .?>U:E= d .*k .&I J ?
? 9=;9 Q /P?????. : I 5 s 3 F 9*.6:E? = cR? / ?
?
???????_???????????????f?0??? ?C?0?????????)???? ?
?E9?? J 9?PQs ? . / o ? Z f<s =?$>5 } : x ??` ? ?ED ? x ? k ? IE= ~ =/ 5 ? ? S s ` ? S ? /??L/ ` 9fr36f ? / J c P?] = d?:=?
:E= ? f3 b 3&[?. Q ? I.&. ? ??`*5 8? /?s J??? . J >;>0E? .&?/ f?>Q] d
` ?J<? x s ? 1 d 3&J: S ? ` ? ? `0x s ?;: ` ] ?;5 ?@f&?Ut
.<9*.&: ? ` I=?
P : I . J : 5 W ?;G
!#" %$&('*) $ +, '.-
/1020
346587:9<;9<= 5> 5?@BAC D2EGF2H IKJ.L ;NM9 > HOQP2R 7 9;9S 5 > 5 ; @ @@5R >#H U 9(V*V*WYX > 465 M P<Z\[ H ] > H UQP2^`_acbed Hfg 5@ihYP
; P2P S J.P2j 5k H lm@ > 5R M 5nP<o 7 5 ; H U Pp[<qr2P2s6tvu*HwR2xyFvzfQ{| ;M} ">T ~] P2Re??8??H ?M 4 H I @B3 4 5?@P2s6; M 5?P2J d 5RY?j9 > z ?mR d
? P2@@H ?K?:?K?y??? M? M? 5??
????\???e???*???? ?\??????N?e???6???<?N?#? ?? ?#? ? ?(????<? ? ? ? ? ??Y?B? C ??? ???????? ?#? D ???? ? ????.?Y? C ?6?
U<?????*?? ?<??
??6??? ???#? ? ? ??
???#?c?B????? ?6?? ? ?? ? ??? C ? I
?
? ? ?B?? ? ?? ??? ?\??? ? ? ??? ? ???#? ?8???? ?p??? ? ? ?????? ? ? ??? ? ????????? ? ????? ?? ? ? ? ?N? ?? ?? ?? ??? ? ? ? D ??
? ? ? ?? ?????#? ? ?? ?#??? ? ? ?<?? ?N?K?????
? ? ?? ? ? 5 R:5 ? _ A
? A ? _ ? ?
? ~I @ 7 P @H > z !#" 5y[25%$ RvH " > 5'&)(* 5 >
@ P2J > 4 54 ? > L67 V 5@ H lKR57698 5 M9 Lv@5
? r 5?RYP > 5 >4 5 7 ?
; P,+ fK5 M > H-QP2R P2J >/. 5 ? 0 132 M P S 76P ?
R65? R >
?
>
]
]
>
?
H
@
5
D
4
C
4
>
z
@
7
?
<
9
M
z
Q
E
n
@
5
H
5
6
4
5 5R > HI ;5 HI R > 5; g 9 ? ? ? XGFH
P j 9 R P27p5 R
C
?
S
C
?
C
;
:<
=
;
:
B
:
A
?
6
I RY3 ?jJ"c9 u ? ? XK C ? L ? J.P ;M = 5 ? ; K C L ; F lON P > H E M 5?> 4 9 > ? H !Q@ 9 k 5[ 7YP2zI R23 P2J >JP 5 ^65 > ? PY?RQTS
U . 5V 9<M PYF)WS 9 R P2Je3 P 5 IXZY 5;9 > z ] " 5 S?9 7 @BH U R _3[ bB9 > >4 5 $ k 5[ 7 P2H ! R > ? H 6 @
\ ? b^] ]
_`_2b
?
a P > HX M 5 >JP 9<3 3bP 5`M P2Rp[2H 0 > HE P ^ _a ? HKc @ 5e<d L2H E g 9V 5 R > > P > 4Y9 >?> 4:5 5H E d25?R g 9'f L65 @nP2J _ ?hgiBj ?k
;5 9<V V 7YP2@o > o g 5< ? 4qpU M 4 z Qr @t?s | ; >JP 5 ; 5 d L6zu g 9V 5RY3 > P >JP 9 > 34:5V 9'v P FYH 9 h _ _xw y E 9cV*V
5H Qz d2m
5R l g 9<V n L 5 @|{~} ?,?
? J? ? _ ? Fb? ? ? > 4:5 S P6[Y5?f . E RYP v W MNV 5@ X 92MNM P2;[ H " ^ d > P?,? ; Ph?2? @H?> H E ? ^??6X J??2; n h W
? : ? A? ? A _ ? X,Fb?
P??B5 g 5 j XH0 { ?
_ ? ? ? L b J.P2; @#P S 5 ? ? 9 R[ ? C ? ?? F X? ? S 5
5H U d 5 RhU "c!?9 z u*L65 P2J > 4Y5V29cMPYFYr?zU 9 ? R ? _<b8F 5 M P S 5@ V 5? @@ > 4 9 R ?? ? P 5R : C ? 5 k M 5 5
[2@ >4Y5?? ? 34 j 5 @4x?
P2u.[ K?? L,? L ? zf R d > 4vH Q? @ M P2L ; @5 P2JN? 4 9 RYx I Q! R d : C D X > 4:5?RY53??P ; Q L6Rp[25? d P 5 @?x<5^65
? H " M9 V*V.W?9
765e? IX P r?? [ P2L t ???] R d FYz0 JLv; M9 > HI P2R _/? L65 u u*5 XTF_'?'? b X ;5 @| V > H R d I? R?? S ?; d 5R M 5 <? J @#P S?? ? ? v%Y? M u*5N@G?
? 4YLY@?cz0 R > 4 H ] @ ME 5 >JP 5 M? R r H ! > H U P<^?_ acbHE R ? ; P ? P<@ H UK> zS P?R?? H ?Q@ 9cV*@P h65 M 5@@ 9 ?/? > ? ? ;? g 5RY3 > ? ~E @
> W 7 5 P<J?? 5 jzU P6[?? [<P LFY? H E R d F2H U J LY; M 9 > H? PvR J ; P)? y 9 ? 765?hYH U R d?9 ? P LYR r q? 5 [ 7 P IE R > @ 9 ^p[?4 5^ Nv ?
? P?5 ? Hc S H- R 9 > 5 7 2? @@H U tYH? V H? > W P2Jed?5R65;9 > H E R d ? ? ? W v *f 5@G?
?
? k 6 9,? 7?? 5@ P J ? C ??? @?@ 9 3#HX @ J?W H? R d 4 W 7 P2>b. 5@ 5@yP?J? j P 7vP2@H r > HwQPYRD? 9 ; 5 > 9 R64??? ? "" ??? ? ? F X,Fb?
? H > 4 > 9 R 4? ? ?J??? ?? > 9 R:4 _?? ? ? ? w
? ??? ? ???D? ????????5? ??? ???? ?????'? ? ??? ???B?
?????? ?B??? ? ? ?<??
? P2R6@ I! [?5; 9 FYu*5 ; 5@ ?9;%v 47P E F 55R MP2Rpr L M > 5[?Ph? LY@ z0 R2d ? F b ~f R RY5 L ;R? u v P S ?6| > 9 > H ] ?cR6@?L vy
E @PY? g H? R2d?PY7 > H UZ??H ]Z? 9 > H fQP R 7 ; PvF V 5 S @ 9 ?)? ; P k Hw S?? > 5 V W XYE @55 _ ? ?; >? X ?? ? ? ? X?? F?'_? X?? P ? ? > 5
?
[Y6 bBJ P ;9 R P<g 5 j "Yz?K5? ? ? s 3#5R X > 4 5 RY5 L ; P2R d29 z UK^Y@ ? ? ? 9(; 5 9V @P S P[ H ???p5[y?. H V ? > y 5?^:5 > ?O?2;
I @?5 g?PY? g H KR2x O 7 P 76| V 9;y9<V d P ; H >JP ?<J >4 ~? @ Q<H U R[ s @5@7? 5 n h $:5 V [ 9Z6h ? n V ~uK^ d _
?
_Je? 5 >5 ; @ P2R
R6[ 5 ; @#PvR e_'?'? ? > P?@ P?u g 5
P ? > z E ? pu?? 9 3#H E ? ^?7 ; PYF V 5 S @X?Huy? ? M . @S?9<VV h 5 L ; ?
d 9 I] RY@89cj 5 LY@ 5 [ H 6 R H ]K>#H f 9<6 V V Y? 9 R[ z QRM ? 5 E 5 [ d ? 9 [ L 9<V*V ? ] H lKS H lQV 9 ?
@ HE >| ? > p ( ?2^ @ n V @P?4 n ? ? 5 ^?H ]KR
@ P S 5|f 5 9<; R H Qr R d 9 ? d P2; H > 4 S @ U
? ^ 7 ;9<M 3#HX M 5 ? 9 r H E @ M; 5 > H U ? 5[ S Pp[25f @ s M 4 E _ ?e~ @ L @ 5?[ H ? ^6@ > 5 9?[ U ? ; ?c76P2@H S > H ? ? h ? d H l g(5@ @P S??
? ; I ? 5;H @ P?R?? R 4 P<? > P M . P2P @ 5 >4 5 ? [Yz ?Q@ M ; 5 > H@ ? 9 > z ? ?2^ @ ( >57 ? @ ~ ? 5 @ : C E J | R M > H?QPYRY@|? J ? C ? ] ? J
5 M o ? RM W2 o32? ? ? k 9 S 6? ? ? X ?B? j? > y 5 7 9 ; 9 S 2? LYR > ? P2^6@ H [25 ;9 3 Il P?^eX?P2R ? ??zUKx ? > ? 9 ^ >>#P"!4 ?)? @5
A?C D E f 9cj#d2??E 7YP2@@zf F)f 5?? 4YH l u*5 5RY@ | ; H ] R d > 4 9 33/. 5?@L# M H? 5R > M P2R[2H? > Hu PYR ?2Jk? j ? ??<@ H U > H? P2R ?
H U @8S 5 > 6
$ 465?5&% 5 M > P<J M 4 9 R d H ? R d
PYR >465 V 9; d 5@ > @L# M ~UZ? d(' M9 R*) 5 5 k 9S H + ^65 [ E J?? V*V ? ? @ S
P<;
@ H0 S 7 ? HE M H6 > WY M P2Rv@H? [Y5 j >4 5 ? M, 5 ? P 5 ;5 9V Yf R 5L ; P R d29 H lQh @ ? 5.- L 9V ? R[ 9V*V ' C ? 5./<L ? V > ? ' ?
0 5321 C ? ? r ?
] _65 ? : ?87 _ :29 C D ? ? 9 ? ?
X?p?F65 >4 5 j 5 @75 M > H E g43 @L 7 ; 5 S L S @ P<J : @e@? L M 4 > P 9 > ? l ?
:? P2@H U > ?! g 5?[25 R6zU ? ? ? 4 5 ^?>JP 5?Z:5 L ; P h d?9 HKE R @ ? 9<;5 5 ;d : 9 f > P > ?
P?[ H <>=p5 ; 5h > g ? V.s 5@ 9? 9 R[
' ()+*-,./102-3/
4
7`_YaYb. X!9 c b. 5 0 (-ed
U:y . s 7!z 9 { aZK|h} y~
? 7? _??? i 5 . 4 U a
? 7 U ? s ] y 9 ? .-./5 ~
?[???C?Y??Y?K???
!"!#
$%
&
0 (65 7!8:9 ;<5:=?>A@CB-D 0EF G @ 0EH 2 IKJLNM B DO0 (P @ 0 ( H (RQS+T .RU:U:9 V<WYX[Z LD H i ( G!\ E] 8:^.
B0 2 G @ 0 2fgZh@CB+D 0 2 F:Fikjml .R7n o > ] 0R( 9 p<5Kq n a qYrs U 9 t a/
7 l UCa[U ^. n:.Ru 9 q s a
v 7!w a!x
5R] y ./g??? ?<5+5 ? 7???C]R? u 7!/g?. U 7!? . / ? 7 s X. s UC? 7 /g? ^ .Wg??9 V 5h??3 s X. i } ^Y9 V 5
. bRa o?b. U ?Y? /. U a s ? .? u 9 ? ./ 8 ? >O9 ??/ i U y .??
. X 9 ?<//Y9 ? /YX 7!/??5 ? a 9 V?U ?!a /
s 9 ??/X?U:^7 U-?R? u > u ??.567 s ./Y.?!.Rn s . U n:9 .Rb. 4 |
i
i
i
? <? ???+? ? ? ????? ?h? 7 /YX V @?R? ?!B F6? 8:7 ?Y9 V ? 9 V 8 > a =??Y?. ? q a 9 / U 5?7/
4 q . s 9 ? a?Y9 u[a s ?Y9 U 5[7!W 4
?
?
_ 9 ; x ? s u 7 U 9 {<a/ 5m9 ? /A7 /
7? a X /. ? s 7 ? W .R8 a s ? 5?m??????R? ? ??C?K???R??? ?!??R?Y?!??Y?
i
???<? {!? s a / ? M??!? B Fk? 7 U s:? ?<? U:.5 U 5 x?r?k?.s 9 ? a 4 ?K? / 4 B-??9 ??9 V Uu > u o .5 9 ? / 4 9 ? 5u n . U ? U y s . 5 y a o?4
/ . U a n ? 5R?m? ? ?+?????? ?? ? ?O?? ??:?????h??? ? ?m???+?? ?R?????? ? ???Y?!??? ? ?!??Y?!?? i
? ? ? s ? u ? cK?? ?? I / UC? . u a / b. s X . / u . q s a q ? s U 9 ? .5+a= U y . a . o?4 ? a 4 . o ?
K? ? ?
?? ? ?+? ?
? ? i ? ? ? ? ?
V
?
?
w
.
5
a X.Ro???7 ? a ? ? ??. ?k. ? ? .X s ? !?/#"+@ ? ? ? ?!I ? . u s .%$g9 &?.W. s:X> = ? / uU ? ' a W 5)(
? a
c
7?UCa a ?= a s 5 U:? 4Y> 9 V /YX U y s .5 ya o?4 /.U a s+*5%,)- ? . ?R?? ?/ ?1022?3 4 ?5[? ? ?76 ?8g? ??? 9 ? ? ? ? B;:<<B=
??>YB ? ? !
i
. 5R)+@ ? ?? B A 9 ? ? 4 a 9 ? Uh_ . ? 7 b!9 B Da C s a =
U y s .5 yYa F? Eg= ? / uU 9 GIH!W 5 a / 7 ;? J ? U ? 5 . U ? ?DK 0 ?
w
a
V @
? -Q3 . ? ?R? ? / ? ???? W7XDYZ ? ;? [ ?<<?B!%? \? ] ?
LDM O? NQPQRIS`? T;U 7? 6 ??? 4 V
?
I _-/ U 9 c 5 > ?A??. U s 9 ? u 7 ? / . ~ s 7!o / . bU a a +n c!
5 de? 4 ? ? ???CQ
? f 2 2h? g 9 ? 5ji?%? k X ? ? ? /73 l ? ? ?
" a o .5 @ ? ? ? ^ `
] m? ?@
? n
? o? p"
] _t ? s a XY^ v
? |6Y?? Y?? ~?b? ??? x ???
? u ? @ ? 7 ? ?g. s i @ ?!?
?I Kw ?C?? Y 5x ??C? y Y {
? z Y ? 6Y}
? Vq . s OU rs
W . 5 ? .>
? ?!? 2;x b? ? ? ? ? ?;? ?
_6? 4 9 ?<5 r!D
?
V
i
?
s 5 u y m @ ? ? ? ? A j a/Yb . s X./YUK7 uU 9 ? b7 U 9 ? a / 4 > ?7 ?g9 u 5k9 " / u a U 9 ? W ? r ? 5 U ? < ??. /. U a s ?5 i
9
?
? x?+??? q ? ? ? ?+?!?? ? ] ? ?? ]]
?+?D]? ?
??
V ?
!
?
F ? . C s 7 l / . U a s ? 567
4gqY^ >5 ? ! u 7o 5 > 5U:.? 5m 9 U ^ u r??? . uU 9 " b . u a ? ? ?
? ?<? ;i ? a q ? .Ro 4 " ? @ ? ? ? B )
U 7 U 9 ? aY/ 7!o7 _ 9 ? ? 9 U 9 ??.5?`
? ? ? ? ? NV/76 ? ?? ??? ? ? ? ??#f ? ? 5 ?????? N ? ?? ? ? ? ? ?? ?? ? 0?? ? ? "? B? ??? ? ? ? ?
?
?
? @ ? ?!? ??1? . C s a/5 9 U ^?X s 7 4 . ? ? .5 a /Y5 . y 7b . v a o?o . v U 9 c b.?u
?
a
?
?
U
7
9
?
a
/
7
?
? ? c q a q? .w 4?
?
U
q s a q . s z 9 p .5???9 ? ? . OU ? a 5 .?a = U `
? 5 8:7 U .?/. ? s a /Y5
K? ? ? O? N?7? 6?[??? z ? l Y ? ?#
?
06?? ? 5!?????b? N ? ? 3 4
c
? ? +? ??? ? ? f ]? i" ] !?!%? ?D] ? ? ?
i ? a 9 ? s 7!/ M? ?`? ? I ? >/
3??? ? u 5 a x 4 9 i 5 u s . U . U 9 ???g. ? u a U ? ! / ?Ya ? 565 U 7 U:. Da ? .R??41/. 7U ? a ? ?!5
?
!
q
V
? ?
x ? ? ?h? ?? 2 x!? ? ??? 4 ? ? ? = ? ? ?%? ? ^ ?
???
?
j ?
? 7 s ?u C 5 ? u ?
? ? . 5 U:. ? b . ? U i @? ? ? ?!I ? > W
7??9 ??u 5 a )
= ? ? U . s 7 U . 4 ? ?A3 } . ? s 3
? . U
a ? n ?!c 5 ?`? 6 ?R? ? ? ? ? R????
?? ? ? ? ?{? f? ? ?? ?o ??? ? ? ?
j ? ? . U . s 5 a /O? ? <Iu i _ / 4 .n 5:a #
V
I ? . ? s 7 ? /.U a s c5-" 7 4 ) ? ? u a ? ? ? 8:.?a q U 9 ? ??9 ? r7 ?
?U 9 a / q n a _ o . ? 5 ?]7 q .n = a s ??7!/ / v "m.?@ 5 ? U ?!C ?!4 ? > )
? Y ? 2? ?? ?
a / U yY? X ? 7 y _ 9 5 . ? U a /Nq s aY_ ??. ? ? )
? ?R??:????] B <? ? ?? ? ? i
? 5?? ?? 3 ? ? ? ? ;? Z??)? . x?? ? ?C? . Y???6??? ? i
? ? u ~ .? ? . i M? ?!?!?? ? ? ? ? ? ? w /???O? N - ? ? ? ? ? ? b? 3 ? ?!?? ? ?
_u 74 . ? ! Q
u s ? 5 5 ;? ? / u ?
i
? ? ? 7 / XO? ? ? ? ? ? ? i M ?? ? B F ? 9 5 u s . U . ?CU 9 ?V ?g. b. s 5 ? 5 u a /U J ? / ? a? b5 ? U 9 ????.?/ . ? s 7 o / ?RU
? Z ? ?R??C?? ?
? g ? ? ? ??? D? ? ? m? +? ? ?? ?
? a s c 5 i L ? ? N[? V ?? 2 x!?:? ? ? ;
`? u i
F ? /7 ? a X /. ~ s 7!?+/ . U a s c5m9 V?U ^O? au 7
7 ? X y ?
? u ? ? .5 U . s ? .??U @ ?? ? ] 1
? ? aY?
?
> /7 ? ? 9 ;?u 5 7 ? / ? 5 U 7 _ 9 ? ? ? ??U ? > "
6 ??? ? ?? ??? ?R? . ?R?
. U 9 ? U:9 ? a ? /
?
? ? ?Y? ? ?? B ? ? ? ? ] = <
?
??? ?
?
!
"
#
| 1090 |@word cu:1 hu:3 d2:2 bn:1 ld:1 od:1 dx:1 j1:1 nq:1 jkj:1 uca:2 lx:2 vxw:1 xye:1 ra:1 xz:1 ry:4 ol:2 uz:2 jm:1 nyq:3 z:1 acbed:1 ve1:6 l2h:1 w8:2 y3:1 act:1 p2j:7 ro:4 um:1 uk:3 zl:2 ly:4 xv:1 wyk:1 acl:1 r4:1 bi:1 ihi:1 vu:1 r65:2 vlx:2 bh:1 py:3 go:1 c57:1 uhz:6 y8:1 q:1 wvv:1 s6:4 it2:1 jk:1 p5:2 mz:1 pvf:1 rq:3 pd:2 q5:2 uh:1 po:2 h0:2 wg:2 g1:1 gq:1 p4:1 tu:2 qr:4 g9:2 m21:1 p:1 zp:1 bba:1 tk:1 ac:5 qt:5 b0:1 p2:7 kb:1 qtl:2 hx:3 dxd:1 cb:3 bj:2 cfm:1 lm:1 wl:1 ck:2 cr:3 jnt:1 q3:2 lon:1 wf:4 nn:1 vl:1 lj:1 w:2 fhg:1 j5:6 iu:1 uc:2 f3:1 lyi:1 y6:1 r7:1 np:1 e2f:1 abt:1 dg:1 bw:2 tq:1 ab:2 sh:1 hg:3 xy:3 bq:1 iv:2 ar:1 a6:2 kq:1 acb:1 sv:1 my:1 gd:1 ie:2 l5:1 v4:1 bu:1 e_:1 na:2 nm:1 r2d:2 f5:1 e9:1 yp:1 li:7 f6:1 m9:2 de:1 pqs:1 ad:1 aca:1 hf:1 p2p:2 om:1 il:1 zy:1 lli:1 ed:4 pp:2 pyr:2 di:1 ut:1 cj:2 ou:2 ujg:1 fih:1 yb:2 r23:1 p6:2 xzy:1 ei:1 y9:7 a7:1 yf:1 b3:1 y2:1 eg:1 x5:1 ue:2 elr:1 q_:3 ay:2 qti:1 tn:1 naf:1 qp:5 ji:1 jp:7 he:4 cv:3 uv:1 fk:1 i6:3 aq:3 pq:3 f0:2 ikjml:1 pu:1 xzw:1 xe:1 ldm:1 mr:1 bra:1 fri:1 rv:2 d0:2 xf:1 af:1 a1:1 va:1 bz:1 dfehg:1 w2:1 nv:9 n7:10 vw:1 mw:1 ryx:1 zi:2 cn:2 o0:1 b5:3 n7q:1 k9:1 hke:1 e3:1 jj:1 u5:1 ph:2 gih:1 zj:1 rb:1 yy:1 iz:2 ibj:1 jv:1 k4:1 ce:1 h2r:1 v1:3 h5:1 hjj:1 wu:1 ki:1 hi:4 frj:1 bp:1 ri:1 x7:1 yp2:2 qb:1 uf:1 cbed:1 jr:1 fio:1 ekj:1 lp:1 ikj:1 pr:1 xo:1 jlk:1 ln:1 bbj:1 wvu:2 ge:1 pbq:1 h6:1 k5:1 rp:1 n9:1 cf:1 qsr:1 yx:3 gqy:1 xw:2 yz:3 uj:2 nyi:1 hq:2 lio:1 y5:6 fy:1 r64:1 ru:3 z3:1 cq:2 fp2:1 fe:1 wfk:4 ba:3 bde:1 l01:1 zf:2 av:2 t:3 dc:2 rn:1 wyx:3 kl:2 pvu:1 fgf:2 zlx:2 wy:8 mnm:1 ev:1 yc:1 ia:1 eh:1 hkc:1 lk:1 ryp:2 n6:1 gf:1 kj:1 xby:1 vh:1 w03:1 zh:2 lv:4 h2:1 ivn:1 o8:2 dd:1 i8:2 cd:2 lo:3 l8:1 lln:1 umz:1 fg:3 avev:2 p2l:2 fb:3 vp2:1 c5:1 sz:1 ml:1 b1:2 a_:1 un:1 qz:1 ku:2 zk:1 pvr:2 lks:1 e5:4 hc:2 uwv:2 z_:1 da:2 lvi:1 sp:1 n2:1 je:1 lc:1 pv:1 xh:1 r6:1 yh:1 ix:2 hw:2 z0:1 zu:3 r8:1 dk:1 a3:1 dl:1 boe:1 kr:1 o9:1 p2r:6 tc:3 lt:3 ux:1 bo:1 ya:1 e6:3 wq:2 h21:1 qts:5 c9:1 ex:1 |
Independent Component Analysis
of Electroencephalographic Data
Scott Makeig
Naval Health Research Center
P.O. Box 85122
San Diego CA 92186-5122
Anthony J. Bell
Computational Neurobiology Lab
The Salk Institute, P.O. Box 85800
San Diego, CA 92186-5800
scott@cplJmmag.nhrc.navy.mil
tony@salk.edu
Tzyy-Ping Jung
Naval Health Research Center and
Computational Neurobiology Lab
The Salk Institute, P.O. Box 85800
San Diego, CA 92186-5800
Terrence J. Sejnowski
Howard Hughes Medical Institute and
Computational Neurobiology Lab
The Salk Institute, P.O. Box 85800
San Diego, CA 92186-5800
jung@salk.edu
terry@salk.edu
Abstract
Because of the distance between the skull and brain and their different resistivities, electroencephalographic (EEG) data collected from
any point on the human scalp includes activity generated within
a large brain area. This spatial smearing of EEG data by volume
conduction does not involve significant time delays, however, suggesting that the Independent Component Analysis (ICA) algorithm
of Bell and Sejnowski [1] is suitable for performing blind source separation on EEG data. The ICA algorithm separates the problem of
source identification from that of source localization. First results
of applying the ICA algorithm to EEG and event-related potential
(ERP) data collected during a sustained auditory detection task
show: (1) ICA training is insensitive to different random seeds. (2)
ICA may be used to segregate obvious artifactual EEG components
(line and muscle noise, eye movements) from other sources. (3) ICA
is capable of isolating overlapping EEG phenomena, including alpha and theta bursts and spatially-separable ERP components, to
separate ICA channels. (4) Nonstationarities in EEG and behavioral state can be tracked using ICA via changes in the amount of
residual correlation between ICA-filtered output channels.
S. MAKEIG, A. J. BELL, T.-P. JUNG, T. J. SEJNOWSKI

1 Introduction

1.1 Separating What from Where in EEG Source Analysis
The joint problems of EEG source segregation, identification, and localization are
very difficult, since the problem of determining brain electrical sources from potential patterns recorded on the scalp surface is mathematically underdetermined.
Recent efforts to identify EEG sources have focused mostly on performing spatial
segregation and localization of source activity [4]. By applying the ICA algorithm
of Bell and Sejnowski [1], we attempt to completely separate the twin problems of
source identification (What) and source localization (Where). The ICA algorithm
derives independent sources from highly correlated EEG signals statistically and
without regard to the physical location or configuration of the source generators.
Rather than modeling the EEG as a unitary output of a multidimensional dynamical system, or as "the roar of the crowd" of independent microscopic generators, we
suppose that the EEG is the output of a number of statistically independent but
spatially fixed potential-generating systems which may either be spatially restricted
or widely distributed.
1.2 Independent Component Analysis

Independent Component Analysis (ICA) [1, 3] is the name given to techniques for
finding a matrix, $W$, and a vector, $\mathbf{w}$, so that the elements, $\mathbf{u} = (u_1, \ldots, u_N)^T$, of
the linear transform $\mathbf{u} = W\mathbf{x} + \mathbf{w}$ of the random vector, $\mathbf{x} = (x_1, \ldots, x_N)^T$, are statistically independent. In contrast with decorrelation techniques such as Principal
Components Analysis (PCA), which ensure that $\langle u_i u_j \rangle = 0, \forall ij$, ICA imposes the
much stronger criterion that the multivariate probability density function (p.d.f.)
of $\mathbf{u}$ factorizes: $f_u(\mathbf{u}) = \prod_{i=1}^{N} f_{u_i}(u_i)$. Finding such a factorization involves making the mutual information between the $u_i$ go to zero: $I(u_i, u_j) = 0, \forall ij$. Mutual
information is a measure which depends on all higher-order statistics of the $u_i$, while
decorrelation only takes account of 2nd-order statistics.
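The gap between decorrelation and full independence can be illustrated with a small numeric sketch (the signals and the crude histogram estimator below are our own illustration, not data from this study): a signal and its square are uncorrelated yet completely dependent, which a second-order method such as PCA cannot detect.

```python
import numpy as np

rng = np.random.default_rng(0)
u1 = rng.uniform(-1.0, 1.0, 100_000)
u2 = u1 ** 2  # a deterministic function of u1: fully dependent

# Second-order view: covariance is E[u1^3] - E[u1] E[u1^2] = 0
corr = np.corrcoef(u1, u2)[0, 1]

# Higher-order view: a crude histogram estimate of mutual information
joint, _, _ = np.histogram2d(u1, u2, bins=20)
p = joint / joint.sum()
p1 = p.sum(axis=1, keepdims=True)   # marginal of u1
p2 = p.sum(axis=0, keepdims=True)   # marginal of u2
nz = p > 0
mi = np.sum(p[nz] * np.log(p[nz] / (p1 @ p2)[nz]))

print(f"correlation = {corr:.3f}")                    # near zero: decorrelated
print(f"mutual information estimate = {mi:.2f} nats")  # far from zero: dependent
```

Decorrelation alone would leave such a pair untouched; the ICA criterion, which drives the mutual information itself toward zero, would not.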
In [1], a new algorithm was proposed for carrying out ICA. The only prior assumption is that the unknown independent components, $u_i$, each have the same form of
cumulative density function (c.d.f.) after scaling and shifting, and that we know this
form, call it $F_u(u)$. ICA can then be performed by maximizing the entropy, $H(\mathbf{y})$,
of a non-linearly transformed vector: $\mathbf{y} = F_u(\mathbf{u})$. This yields stochastic gradient
ascent rules for adjusting $W$ and $\mathbf{w}$:

$$\Delta W \propto [W^T]^{-1} + \hat{\mathbf{y}} \mathbf{x}^T, \qquad \Delta\mathbf{w} \propto \hat{\mathbf{y}} \qquad (1)$$

where $\hat{\mathbf{y}} = (\hat{y}_1, \ldots, \hat{y}_N)^T$, the elements of which are:

$$\hat{y}_i = \frac{\partial}{\partial y_i} \frac{\partial y_i}{\partial u_i} \quad \Big[\text{which, if } \mathbf{y} = F_u(\mathbf{u})\Big] \quad = \frac{\partial f_u(u_i)}{\partial F_u(u_i)} \qquad (2)$$

It can be shown that an ICA solution is a stable point of the relaxation of eqs. (1-2).
In practical tests on separating mixed speech signals, good results were found when
using the logistic function, $y_i = (1 + e^{-u_i})^{-1}$, instead of the known c.d.f., $F_u$, of
the speech signals. In this case $\hat{y}_i = 1 - 2y_i$, and the algorithm has a simple form.
These results were obtained despite the fact that the p.d.f. of the speech signals was
not exactly matched by the gradient of the logistic function. In the experiments in
this paper, we also used the speedup technique of prewhitening described in [2].
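A minimal numerical sketch of logistic-infomax ICA with pre-whitening on a toy two-source mixture (the Laplacian sources, the fixed mixing matrix, the learning-rate schedule, and the natural-gradient form of the update, used here for numerical stability, are all our own choices, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 2, 10_000

# Toy super-Gaussian sources and a fixed mixing matrix (stand-ins for EEG)
s = rng.laplace(size=(n, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s

# Pre-whitening speedup: remove first- and second-order statistics
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
sphere = E @ np.diag(d ** -0.5) @ E.T
xw = sphere @ x

W = np.eye(n)
lrate = 0.05
for _ in range(1000):
    u = W @ xw
    y_hat = 1.0 - 2.0 / (1.0 + np.exp(-u))   # \hat{y}_i = 1 - 2 y_i (logistic)
    # Natural-gradient variant of the update: same fixed points, better conditioned
    W = W + lrate * (np.eye(n) + (y_hat @ u.T) / T) @ W
    lrate *= 0.998                            # annealed learning rate

# The net transform should invert the mixing up to scale and permutation:
# each row of (W @ sphere @ A) is dominated by a single entry
P = W @ sphere @ A
```

With the per-sample rule of eq. (1) the fixed points are the same; the batch natural-gradient form above simply avoids the explicit matrix inverse at every step.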
1.3 Applying ICA to EEG Data
The ICA technique appears ideally suited for performing source separation in domains where: (1) the sources are independent, (2) the propagation delays of the
'mixing medium' are negligible, (3) the sources are analog and have p.d.f.'s not too
unlike the gradient of a logistic sigmoid, and (4) the number of independent signal
sources is the same as the number of sensors, meaning if we employ N sensors,
using the ICA algorithm we can separate N sources. In the case of EEG signals,
N scalp electrodes pick up correlated signals and we would like to know what effectively 'independent brain sources' generated these mixtures. If we assume that
the complexity of EEG dynamics can be modeled, at least in part, as a collection
of a modest number of statistically independent brain processes, the EEG source
analysis problem satisfies ICA assumption (1). Since volume conduction in brain
tissue is effectively instantaneous, ICA assumption (2) is also satisfied. Assumption
(3) is plausible, but assumption (4), that the EEG is a linear mixture of exactly N
sources, is questionable, since we do not know the effective number of statistically
independent brain signals contributing to the EEG recorded from the scalp. The
foremost problem in interpreting the output of ICA is, therefore, determining the
proper dimension of input channels, and the physiological and/or psychophysiological significance of the derived ICA source channels.
Although the ICA model of the EEG ignores the known variable synchronization of
separate EEG generators by common subcortical or corticocortical influences [5], it
appears promising for identifying concurrent signal sources that are either situated
too close together, or are too widely distributed to be separated by current localization techniques. Here, we report a first application of the ICA algorithm to analysis
of 14-channel EEG and ERP recordings during sustained eyes-closed performance
of an auditory detection task, and give evidence suggesting that the ICA algorithm
may be useful for identifying psychophysiological state transitions.
2 Methods
EEG and behavioral data were collected to develop a method of objectively monitoring the alertness of operators of complex systems [8]. Ten adult volunteers participated in three or more half-hour sessions, during which they pushed one button
whenever they detected an above-threshold auditory target stimulus (a brief increase in the level of the continuously-present background noise). To maximize the
chance of observing alertness decrements, sessions were conducted in a small, warm,
and dimly-lit experimental chamber, and subjects were instructed to keep their eyes
closed. Auditory targets were 350 ms increases in the intensity of a 62 dB white
noise background, 6 dB above their threshold of detectability, presented at random
time intervals at a mean rate of 10/min, and superimposed on a continuous 39-Hz
click train evoking a 39-Hz steady-state response (SSR). Short, task-irrelevant
probe tones of two frequencies (568 and 1098 Hz) were interspersed between the
target noise bursts at 2-4 s intervals. EEG was collected from thirteen electrodes
located at sites of the International 10-20 System, referred to the right mastoid, at
a sampling rate of 312.5 Hz. A bipolar diagonal electrooculogram (EOG) channel
was also recorded for use in eye movement artifact correction and rejection. Target Hits were defined as targets responded to within a 100-3000 ms window, while
Lapses were targets not responded to. Two sessions each from three of the subjects
were selected for analysis based on their containing at least 50 response Lapses.
A continuous performance measure, local error rate, was computed by convolving
the irregularly-sampled performance index time series (Hit=0/Lapse=1) with a 95
s smoothing window advanced in 1.64 s steps.
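The local error rate computation above can be sketched as follows (the rectangular, centred window shape and the toy event times are our own assumptions; the text fixes only the 95 s window length and the 1.64 s step):

```python
import numpy as np

def local_error_rate(event_times, misses, t_end, win=95.0, step=1.64):
    """Smooth an irregularly-sampled Hit=0 / Lapse=1 series onto a regular
    grid by averaging the miss indicator over all targets falling inside a
    95 s window centred on each grid point."""
    grid = np.arange(0.0, t_end, step)
    rate = np.full(grid.shape, np.nan)
    for k, t in enumerate(grid):
        in_win = np.abs(event_times - t) <= win / 2.0
        if in_win.any():
            rate[k] = misses[in_win].mean()
    return grid, rate

# Toy session: a target every 6 s, all detected early, all missed late
events = np.arange(3.0, 600.0, 6.0)
misses = (events > 300.0).astype(float)
grid, rate = local_error_rate(events, misses, 600.0)
```

The resulting curve moves smoothly from 0 (all Hits) to 1 (all Lapses), passing through intermediate values around the transition.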
The ICA algorithm in eqs. (1-2) was applied to the 14 EEG recordings. The time
index was permuted to ensure signal stationarity, and the 14-dimensional time point
vectors were presented to a 14 → 14 ICA network one at a time. To speed convergence, we first pre-whitened the data to remove first- and second-order statistics.
The learning rate was annealed from 0.03 to 0.0001 during convergence. After each
pass through the whole training set, we checked the amount of correlation between
the ICA output channels and the amount of change in the weight matrix, and stopped
the training procedure when, (1) the mean correlation among all channel pairs was
below 0.05, and (2) the ICA weights had stopped changing appreciably.
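The two stopping conditions can be written down directly (the 0.05 correlation tolerance comes from the text; the weight-change tolerance below is our own illustrative choice):

```python
import numpy as np

def mean_pairwise_corr(u):
    """Mean absolute correlation over all distinct pairs of output channels."""
    c = np.corrcoef(u)
    iu = np.triu_indices(c.shape[0], k=1)
    return np.abs(c[iu]).mean()

def training_converged(u, w_new, w_old, corr_tol=0.05, dw_tol=1e-4):
    # Stop when (1) outputs are nearly decorrelated and (2) the ICA
    # weights have effectively stopped changing between passes
    return (mean_pairwise_corr(u) < corr_tol
            and np.max(np.abs(w_new - w_old)) < dw_tol)
```

After each pass through the training set, one would call `training_converged(W @ x, W, W_prev)` and halt the annealing loop once it returns True.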
3 Results
A small (4.5 s) portion of the resulting ICA-transformed EEG time series is shown
in Figure 1. As expected, correlations between the ICA traces are close to zero. The
dominant theta wave (near 7 Hz) spread across many EEG channels (left panel) is
more or less isolated to ICA trace 1 (upper right), both in the epoch shown and
throughout the session. Alpha activity (near 10 Hz) not obvious in the EEG data
is uncovered in ICA trace 2, which here and throughout the session contains alpha
bursts interspersed with quiescent periods. Other ICA traces (3-8) contain brief
oscillatory bursts which are not easy to characterize, but clearly display different
dynamics from the activity in ICA trace 1, which dominates the raw EEG record.
ICA trace 10 contains near-DC changes associated with slow eye movements in the
EOG and most frontal (Fpz) EEG channels. ICA trace 13 contains mostly line
noise (60 Hz), while ICA traces 9 and 14 have a broader high frequency (50-100
Hz) spectrum, suggesting that their source is likely to be high-frequency activity
generated by scalp muscles.
Apparently, the ICA source solution for this data does not depend strongly on
learning rate or initial conditions. When the same portion of one session was used to
train two ICA networks with different random starting weights, data presentation
orders, and learning rates, the two final ICA weight matrices were very close to
one another. Filtering another segment of EEG data from the same session using
each ICA matrix produced two ICA source transforms in which 11 of the 14 best-correlated output channel pairs correlated above 0.95 and none correlated less than
0.894.
While ICA training minimized mutual information, and therefore also correlations
between output channels during the initial (alert) ICA training period, output data
channels filtered by the same ICA weight matrix became more correlated during the drowsy portion of the session, and then reverted to their initial levels of
(de)correlation when the subject again became alert. Conversely, filtering the same
session's data with an ICA weight matrix trained on the drowsy portion of the session produced output channels that were more correlated during the alert portions
of the session than during the drowsy training period. Presumably, these changes
in residual correlation among ICA outputs reflect changes in the dynamics and
topographic structure of the EEG signals in alert and drowsy brain states.
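Such residual correlation can be tracked over time with a sliding-window computation (the window length, step size, and the synthetic "alert"/"drowsy" segments below are our own illustration, not the study's data):

```python
import numpy as np

def windowed_residual_corr(x, W, win=1000, step=500):
    """Mean absolute pairwise correlation of the ICA outputs u = W x,
    computed in sliding time windows; rising values flag epochs whose
    dynamics no longer match the data the weights were trained on."""
    u = W @ x
    iu = np.triu_indices(u.shape[0], k=1)
    vals = []
    for t0 in range(0, u.shape[1] - win + 1, step):
        c = np.corrcoef(u[:, t0:t0 + win])
        vals.append(np.abs(c[iu]).mean())
    return np.array(vals)

# Synthetic two-channel record: independent at first, then strongly coupled
rng = np.random.default_rng(2)
alert = rng.normal(size=(2, 4000))
shared = rng.normal(size=4000)
drowsy = np.vstack([shared, shared]) + 0.3 * rng.normal(size=(2, 4000))
x = np.hstack([alert, drowsy])

rc = windowed_residual_corr(x, np.eye(2))
```

The rise in the windowed values marks the point where the fixed spatial filter stops matching the data's dynamics, analogous to the alert-to-drowsy transitions described above.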
An important problem in human electrophysiology is to determine a means of objectively identifying overlapping ERP subcomponents. Figure 2 (right panel) shows an
ICA decomposition of (left panel) ERPs to detected (Hit) and undetected (Lapse)
targets by the same subject. ICA spatial filtering produces two channels (S[1-2])
separating out the 39-Hz steady-state response (SSR) produced by the continuous
39-Hz click stimulation during the session. Note the stimulus-induced perturbation in SSR amplitude previously identified in [6]. Three channels (H[1-3]) pass
time-limited components of the detected target response, while four others (L[1-4])
[Figure 1: waveform traces not reproduced. Left: scalp channels Fz, Cz, Pz, F3, F4, C3, C4, T4, P3, P4, Fpz, EOG; right: ICA traces 1-14; scale bar: 1 sec.]

Figure 1: Left: 4.5 seconds of 14-channel EEG data. Right: an ICA transform of
the same data, using weights trained on 6.5 minutes of similar data from the same
session.
S. MAKEIG, A. J. BELL, T.-P. JUNG, T. J. SEJNOWSKI
[Figure 2 here: left panel, scalp ERP traces at channels Fz, Cz, Pz, Oz, F3, F4, C3, C4, T3, T4, P3, P4, Fpz and EOG; right panel, the ICA-filtered ERP channels (S, H, L and U components); time axis 0-0.8 sec; legend: detected targets vs. undetected targets.]
Figure 2: Left panel: Event-related potentials (ERPs) in response to undetected
(bold traces) and detected (faint traces) noise targets during two half-hour sessions.
Right panel: Same ERP signals filtered using an ICA weight matrix trained on the
ERP data.
components of the (larger) undetected target response. We suggest these represent
the time course of the locus (either focal or distributed) of brain response activity,
and may represent a solution to the longstanding problem of objectively dividing
evoked responses into neurobiologically meaningful, temporally overlapping subcomponents.
4 Conclusions
ICA appears to be a promising new analysis tool for human EEG and ERP research.
It can isolate a wide range of artifacts to a few output channels while removing them
from remaining channels. These may in turn represent the time course of activity
in long-lasting or transient independent 'brain sources' on which the algorithm converges reliably. By incorporating higher-order statistical information, ICA avoids
the non-uniqueness associated with decorrelating decompositions. The algorithm
also appears to be useful for decomposing evoked response data into spatially distinct subcomponents, while measures of nonstationarity in the ICA source solution
may be useful for observing brain state changes.
Acknowledgments
This report was supported in part by a grant (ONR.Reimb .30020.6429) to the Naval
Health Research Center by the Office of Naval Research. The views expressed in
this article are those of the authors and do not reflect the official policy or position
of the Department of the Navy, Department of Defense, or the U.S. Government.
Dr. Bell is supported by grants from the Office of Naval Research and the Howard
Hughes Medical Institute.
References
[1] A.J. Bell & T.J. Sejnowski (1995). An information-maximization approach to
blind separation and blind deconvolution, Neural Computation 7:1129-1159.
[2] A.J. Bell & T.J. Sejnowski (1995). Fast blind separation based on information theory, in Proc. Intern. Symp. on Nonlinear Theory and Applications
(NOLTA), Las Vegas, Dec. 1995.
[3] P. Comon (1994) Independent component analysis, a new concept? Signal
Processing 36:287-314.
[4] A.M. Dale & M.I. Sereno (1993) EEG and MEG source localization: a linear
approach. J. Cogn. Neurosci. 5:162.
[5] R. Galambos & S. Makeig. (1989) Dynamic changes in steady-state potentials. In Erol Basar (ed.), Dynamics of Sensory and Cognitive Processing of
the Brain, 102-122. Berlin:Springer-Verlag.
[6] S. Makeig & R. Galambos. (1989) The CERP: Event-related perturbations
in steady-state responses. In E. Basar & T.H. Bullock (ed.), Brain Dynamics:
Progress and Perspectives, 375-400. Berlin:Springer-Verlag.
[7] T-P. Jung, S. Makeig, M. Stensmo, & T. Sejnowski. Estimating alertness from
the EEG power spectrum. Submitted for publication.
[8] S. Makeig & M. Inlow (1993) Lapses in alertness: Coherence of fluctuations
in performance and EEG spectrum. Electroenceph. clin. Neurophysiol. 86:23-35.
Clustering data through an analogy to
the Potts model
Marcelo Blatt, Shai Wiseman and Eytan Domany
Department of Physics of Complex Systems,
The Weizmann Institute of Science, Rehovot 76100, Israel
Abstract
A new approach for clustering is proposed. This method is based
on an analogy to a physical model; the ferromagnetic Potts model
at thermal equilibrium is used as an analog computer for this hard
optimization problem . We do not assume any structure of the underlying distribution of the data. Phase space of the Potts model is
divided into three regions; ferromagnetic, super-paramagnetic and
paramagnetic phases. The region of interest is that corresponding
to the super-paramagnetic one, where domains of aligned spins appear. The range of temperatures where these structures are stable
is indicated by a non-vanishing magnetic susceptibility. We use a
very efficient Monte Carlo algorithm to measure the susceptibility and the spin spin correlation function. The values of the spin
spin correlation function, at the super-paramagnetic phase, serve
to identify the partition of the data points into clusters.
Many natural phenomena can be viewed as optimization processes, and the drive to
understand and analyze them yielded powerful mathematical methods. Thus when
wishing to solve a hard optimization problem, it may be advantageous to apply these
methods through a physical analogy. Indeed, recently techniques from statistical
physics have been adapted for solving hard optimization problems (see e.g. Yuille
and Kosowsky, 1994). In this work we formulate the problem of clustering in terms
of a ferromagnetic Potts spin model. Using the Monte Carlo method we estimate
physical quantities such as the spin spin correlation function and the susceptibility,
and deduce from them the number of clusters and cluster sizes.
Cluster analysis is an important technique in exploratory data analysis and is applied in a variety of engineering and scientific disciplines. The problem of partitional
clustering can be formally stated as follows. With every one of i = 1, 2, ..., N patterns represented as a point x_i in a d-dimensional metric space, determine the
partition of these N points into M groups, called clusters, such that points in a
cluster are more similar to each other than to points in different clusters. The value
of M also has to be determined.
The two main approaches to partitional clustering are called parametric and nonparametric. In parametric approaches some knowledge of the clusters' structure is
assumed (e.g . each cluster can be represented by a center and a spread around
it) . This assumption is incorporated in a global criterion. The goal is to assign the
data points so that the criterion is minimized . A typical example is variance minimization (Rose, Gurewitz, and Fox, 1993) . On the other hand, in non-parametric
approaches a local criterion is used to build clusters by utilizing local structure of
the data. For example, clusters can be formed by identifying high-density regions
in the data space or by assigning a point and its K -nearest neighbors to the same
cluster. In recent years many parametric partitional clustering algorithms rooted
in statistical physics were presented (see e.g. Buhmann and Kühnel, 1993). In the
present work we use methods of statistical physics in non-parametric clustering.
Our aim is to use a physical problem as an analog to the clustering problem. The
notion of clusters comes very naturally in Potts spin models (Wang and Swendsen,
1990) where clusters are closely related to ordered regions of spins. We place a Potts
spin variable s_i at each point x_i (that represents one of the patterns), and introduce
a short range ferromagnetic interaction J_ij between pairs of spins, whose strength
decreases as the inter-spin distance ||x_i - x_j|| increases. The system is governed by
the Hamiltonian (energy function)
H = - Σ_{<i,j>} J_ij δ_{s_i,s_j},    s_i = 1, ..., q,    (1)
where the notation < i, j > stands for neighboring points i and j in a sense that is
defined later. Then we study the ordering properties of this inhomogeneous Potts
model.
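As a concrete reading of Eq. (1) (function and variable names are mine), the energy of a spin configuration on a given neighbor graph is:

```python
import numpy as np

def potts_energy(spins, edges, couplings):
    """H = -sum over neighbor pairs <i,j> of J_ij * delta(s_i, s_j)."""
    i, j = edges[:, 0], edges[:, 1]
    return -np.sum(couplings * (spins[i] == spins[j]))

# three points on a line, nearest-neighbor bonds with J = 1
edges = np.array([[0, 1], [1, 2]])
J = np.ones(2)
aligned = np.array([1, 1, 1])   # all spins equal: both bonds satisfied, H = -2
mixed = np.array([1, 2, 2])     # only the second bond satisfied, H = -1
```

Lower energy for aligned neighbors is what drives the ordering discussed next.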
As a concrete example, place a Potts spin at each of the data points of fig. 1.
[Figure 1 here: scatter plot of the data set described in the caption; horizontal axis roughly -30 to 30.]
Figure 1: This data set is made of three rectangles, each consisting of 800 points
uniformly distributed, and a uniform rectangular background of lower density, also
consisting of 800 points. Points classified (with T_clus = 0.08 and θ = 0.5) as
belonging to the three largest clusters are marked by crosses, squares and x's. The
fourth cluster is of size 2 and all others are single point clusters marked by triangles.
At high temperatures the system is in a disordered (paramagnetic) phase. As
the temperature is lowered, larger and larger regions of high density of points (or
spins) exhibit local ordering, until a phase transition occurs and spins in the three
rectangular high density regions become completely aligned (i.e. within each region
all s_i take the same value - super-paramagnetic phase).
The aligned regions define the clusters which we wish to identify. As the temperature
is further lowered, a pseudo-transition occurs and the system becomes completely
ordered (ferromagnetic).
1 A mean field model
To support our main idea, we analyze an idealized set of points where the division
into natural classes is distinct. The points are divided into M groups. The distance
between any two points within the same group is d 1 while the distance between any
two points belonging to different groups is d2 > d1 (d can be regarded as a similarity
index). Following our main idea, we associate a Potts spin with each point and an
interaction J_1 between points separated by distance d_1 and an interaction J_2 between points
separated by d_2, where 0 ≤ J_2 < J_1. Hence the Hamiltonian (1) becomes:
H = - (J_1/N) Σ_ν Σ_{i<j} δ_{s_i^ν, s_j^ν} - (J_2/N) Σ_{μ<ν} Σ_{i,j} δ_{s_i^μ, s_j^ν},    s_i^ν = 1, ..., q,    (2)
where s_i^ν denotes the ith spin (i = 1, ..., N/M) of the νth group (ν = 1, ..., M).
From standard mean field theory for the Potts model (Wu , 1982) it is possible to
show that the transition from the ferromagnetic phase to the paramagnetic phase
is at T_c = (q-2) [J_1 + (M-1) J_2] / (2M (q-1) log(q-1)). The average spin spin correlation
function, ⟨δ_{s_i,s_j}⟩, at the paramagnetic phase is 1/q for all points x_i and x_j; i.e. the spin
value at each point is independent of the others. The ferromagnetic phase is further
divided into two regions. At low temperatures, with high probability, all spins are
aligned; that is ⟨δ_{s_i,s_j}⟩ ≈ 1 for all i and j. At intermediate temperatures, between T*
and T_c, only spins of the same group ν are aligned with high probability, ⟨δ_{s_i^ν, s_j^ν}⟩ ≈ 1,
while spins belonging to different groups, μ and ν, are independent: ⟨δ_{s_i^μ, s_j^ν}⟩ ≈ 1/q.
The spin spin correlation function at the super-paramagnetic phase can be used
to decide whether or not two spins belong to the same cluster. In contrast with
the mere inter-point distance, the spin spin correlation function is sensitive to the
collective behavior of the system and is therefore a suitable quantity for defining
collective structures (clusters).
The transition temperature T* may be calculated and shown to be proportional to
J_2; T* = a(N, M, q) J_2. In figure 2 we present the phase diagram, in the
(J_2/J_1, T/J_1) plane, for the case M = 4, N = 1000 and q = 6.
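A small helper makes the mean-field picture concrete (names are mine; the prefactor follows the mean-field expression quoted in the text and should be treated as illustrative): at fixed J_1, T_c grows linearly with J_2, consistent with the phase diagram below.

```python
import math

def mean_field_tc(q, M, j1, j2):
    # T_c = (q-2) [J1 + (M-1) J2] / (2M (q-1) log(q-1)), mean-field estimate
    return (q - 2) * (j1 + (M - 1) * j2) / (2 * M * (q - 1) * math.log(q - 1))

# the case of the phase diagram: q = 6, M = 4
tc_small = mean_field_tc(6, 4, 1.0, 1e-4)   # nearly decoupled groups
tc_large = mean_field_tc(6, 4, 1.0, 1e-1)   # stronger inter-group coupling
```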
[Figure 2 here: log-log phase diagram in the (J_2/J_1, T/J_1) plane showing the paramagnetic, super-paramagnetic and ferromagnetic regions; J_2/J_1 runs from about 1e-05 to 1e+00 and T/J_1 from about 1e-04 to 1e-01.]
Figure 2: Phase diagram of the mean field Potts model (2) for the case M = 4,
N = 1000 and q = 6. The critical temperature T_c is indicated by the solid line,
and the transition temperature T*, by the dashed line.
The phase diagram fig. 2 shows that the existence of natural classes can manifest
itself in the thermodynamic properties of the proposed Potts model. Thus our
approach is supported, provided that a correct choice of the interaction strengths
is made.
2 Definition of local interaction
In order to minimize the inter-cluster interaction it is convenient to allow an interaction only between "neighbors". In common with other "local methods", we assume
that there is a 'local length scale' a, which is defined by the high density regions
and is smaller than the typical distance between points in the low density regions.
This property can be expressed in the ordering properties of the Potts system by
choosing a short range interaction . Therefore we consider that each point interacts
only with its neighbors with interaction strength
J_ij = J_ji = (1/K̂) exp( - ||x_i - x_j||² / 2a² ).    (3)
Two points, x_i and x_j, are defined as neighbors if they have a mutual neighborhood
value K; that is, if x_i is one of the K nearest neighbors of x_j and vice-versa. This
definition ensures that J_ij is symmetric; the number of bonds of any site is less
than K. We chose the "local length scale", a, to be the average of all distances
||x_i - x_j|| between pairs i and j with a mutual neighborhood value K. K̂ is the
average number of neighbors per site; i.e. it is twice the number of non-vanishing
interactions J_ij divided by the number of points N (this careful normalization of
the interaction strength enables us to estimate the critical temperature T_c for any
data sample).
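The neighbor graph and the couplings of equation (3) can be sketched as follows (a brute-force O(N²) construction; the function name is mine, and the normalization by the average number of neighbors per site follows the text):

```python
import numpy as np

def mutual_knn_couplings(points, K):
    """Mutual-K-nearest-neighbor edges and Gaussian couplings as in Eq. (3)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    order = np.argsort(d, axis=1)[:, :K]        # each row: K nearest neighbors
    knn = [set(row) for row in order]
    # keep a pair only if each point is among the other's K nearest neighbors
    edges = np.array([(i, j) for i in range(n)
                      for j in knn[i] if j > i and i in knn[j]])
    dist = d[edges[:, 0], edges[:, 1]]
    a = dist.mean()                 # local length scale: mean neighbor distance
    k_hat = 2 * len(edges) / n      # average number of neighbors per site
    J = np.exp(-dist**2 / (2 * a**2)) / k_hat
    return edges, J

rng = np.random.default_rng(1)
pts = rng.random((30, 2))
edges, J = mutual_knn_couplings(pts, K=5)
```

By construction the graph is symmetric, every coupling is positive, and no site carries more than K bonds.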
3 Calculation of thermodynamic quantities
The ordering properties of the system are reflected by the susceptibility and the
spin spin correlation function ⟨δ_{s_i,s_j}⟩, where ⟨·⟩ stands for a thermal average. These
quantities can be estimated by averaging over the configurations generated by a
Monte Carlo procedure. We use the Swendsen-Wang (Wang and Swendsen, 1990)
Monte Carlo algorithm for the Potts model (1) not only because of its high efficiency,
but also because it utilizes the SW clusters. As will be explained the SW clusters
are strongly connected to the clusters we wish to identify. A layman's explanation
of the method is as follows. The SW procedure stochastically identifies clusters
of aligned spins, and then flips whole clusters simultaneously. Starting from a
given spin configuration, SW go over all the bonds between neighboring points,
and either "freeze" or delete them. A bond connecting two neighboring sites i and
j, is deleted with probability p^d_ij = exp(-(J_ij/T) δ_{s_i,s_j}) and frozen with probability
p^f_ij = 1 - p^d_ij. Having gone over all the bonds, all spins which have a path of
frozen bonds connecting them are identified as being in the same SW cluster. Note
that, according to the definition of p^d_ij, only spins of the same value can be frozen
in the same SW cluster. Now a new spin configuration is generated by drawing, for
each cluster, randomly a value s = 1, ... q, which is assigned to all its spins. This
procedure defines one Monte Carlo step and needs to be iterated in order to obtain
thermodynamic averages.
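One such SW update can be written compactly (variable names are mine; a small union-find collects the frozen-bond clusters before each cluster is redrawn):

```python
import numpy as np

def sw_step(spins, edges, J, T, q, rng):
    """One Swendsen-Wang update: freeze/delete bonds, then flip whole clusters."""
    i, j = edges[:, 0], edges[:, 1]
    aligned = spins[i] == spins[j]
    p_delete = np.exp(-J / T)          # deletion probability for an aligned bond
    frozen = aligned & (rng.random(len(J)) >= p_delete)

    parent = np.arange(len(spins))     # union-find over frozen bonds
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges[frozen]:
        parent[find(a)] = find(b)

    roots = np.array([find(x) for x in range(len(spins))])
    new_value = {r: rng.integers(1, q + 1) for r in set(roots)}
    return np.array([new_value[r] for r in roots])

rng = np.random.default_rng(2)
edges = np.array([[0, 1], [1, 2], [2, 3]])
J = np.ones(3)
spins = np.array([1, 1, 1, 1])
# at very low temperature every aligned bond freezes, so the chain stays one
# cluster and remains uniform (at some new spin value) after the flip
new_spins = sw_step(spins, edges, J, T=0.01, q=20, rng=rng)
```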
At temperatures where large regions of correlated spins occur, local methods (e. g.
Metropolis), which flip one spin at a time, become very slow. The SW method overcomes this difficulty by flipping large clusters of aligned spins simultaneously. Hence
the SW method exhibits much smaller autocorrelation times than local methods.
The strong connection between the SW clusters and the ordering properties of the
Potts spins is manifested in the relation
⟨δ_{s_i,s_j}⟩ = ((q-1) ⟨n_ij⟩ + 1) / q,    (4)
M. BLATI, S. WISEMAN, E. DOMANY
420
where n_ij = 1 whenever s_i and s_j belong to the same SW-cluster and n_ij = 0
otherwise. Thus, ⟨n_ij⟩ is the probability that s_i and s_j belong to the same SW-cluster.
The r.h.s. of (4) has a smaller variance than its l.h.s., so that the probabilities ⟨n_ij⟩
provide an improved estimator of the spin spin correlation function.
4 Locating the super-paramagnetic phase
In order to locate the temperature range in which the system is in the super-paramagnetic phase we measure the susceptibility of the system, which is proportional to the variance of the magnetization:
χ = (N/T) (⟨m²⟩ - ⟨m⟩²).    (5)
The magnetization, m, is defined as
m = (q N_max/N - 1) / (q - 1),    (6)
where N_max = max_μ N_μ and N_μ is the number of spins with the value μ.
In the ferromagnetic phase the fluctuations of the magnetization are negligible,
so the susceptibility, χ, is small. As the temperature is raised, a sudden increase of the susceptibility occurs at the transition from the ferromagnetic to the
super-paramagnetic phase. The susceptibility is non-vanishing only in the super-paramagnetic phase, which is the only phase where large fluctuations in the magnetization can occur. The point where the susceptibility vanishes again is an upper
bound for the transition temperature from the super-paramagnetic to the paramagnetic phase.
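Eqs. (5) and (6) translate directly into code; the sketch below (names are mine) computes m for one configuration and χ from a sample of configurations:

```python
import numpy as np

def magnetization(spins, q):
    """m = (q * Nmax/N - 1) / (q - 1), Eq. (6)."""
    n_max = np.bincount(spins).max()   # N_max: size of the majority spin value
    return (q * n_max / len(spins) - 1) / (q - 1)

def susceptibility(configs, q, T):
    """chi = (N/T) * (<m^2> - <m>^2), Eq. (5), over sampled configurations."""
    m = np.array([magnetization(c, q) for c in configs])
    n = len(configs[0])
    return n / T * (np.mean(m**2) - np.mean(m)**2)

q = 4
ordered = np.full(100, 2)          # all spins equal -> m = 1
m_ord = magnetization(ordered, q)
# identical configurations have no magnetization fluctuations -> chi = 0
chi_zero = susceptibility([ordered, ordered, ordered], q, T=0.5)
```

A fully random configuration would instead give N_max ≈ N/q and hence m ≈ 0, so χ picks out exactly the phase with large magnetization fluctuations.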
5 The clustering procedure
Our method consists of two main steps. First we identify the range of temperatures
where the clusters may be observed (that corresponding to the super-paramagnetic
phase) and choose a temperature within this range. Secondly, the clusters are
identified using the information contained in the spin spin correlation function at
this temperature. The procedure is summarized here, leaving discussion concerning
the choice of the parameters to a later stage.
(a) Assign to each point x_i a q-state Potts spin variable s_i. q was chosen equal to
20 in the example that we present in this work.
(b) Find all the pairs of points having mutual neighborhood value K. We set K = 10.
(c) Calculate the strength of the interactions using equation (3).
(d) Use the SW procedure with the Hamiltonian (1) to calculate the susceptibility χ
for various temperatures. The transition temperature from the paramagnetic phase
can be roughly estimated by T_c ≈ 1 / (4 log(1 + √q)).
(e) Identify the range of temperatures of non-vanishing χ (the super-paramagnetic
phase). Identify the temperature T_max where the susceptibility χ is maximal, and
the temperature T_vanish, where χ vanishes at the high temperature side. The optimal temperature to identify the clusters lies between these two temperatures. As a
rule of thumb we chose the "clustering temperature" T_clus = (T_vanish + T_max)/2, but the
results depend only weakly on T_clus, as long as T_clus is in the super-paramagnetic
range, T_max < T_clus < T_vanish.
(f) At the clustering temperature T_clus, estimate the spin spin correlation ⟨δ_{s_i,s_j}⟩ for
all neighboring pairs of points x_i and x_j, using (4).
(g) Clusters are identified according to a thresholding procedure. The spin spin
correlation function ⟨δ_{s_i,s_j}⟩ of points x_i and x_j is compared with a threshold, θ; if
⟨δ_{s_i,s_j}⟩ > θ they are defined as "friends". Then all mutual friends (including friends
of friends, etc.) are assigned to the same cluster. We chose θ = 0.5.
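The "friends" construction in step (g) amounts to connected components of the thresholded correlation graph; a compact sketch (θ and the per-edge correlation estimates follow the text, names are mine):

```python
def threshold_clusters(n_points, edges, corr, theta=0.5):
    """Link pairs with correlation > theta; return a cluster label per point."""
    parent = list(range(n_points))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for (i, j), c in zip(edges, corr):
        if c > theta:                       # i and j become "friends"
            parent[find(i)] = find(j)
    # friends-of-friends end up with the same representative label
    return [find(x) for x in range(n_points)]

edges = [(0, 1), (1, 2), (2, 3)]
corr = [0.9, 0.8, 0.1]     # the 2-3 bond falls below the threshold
labels = threshold_clusters(4, edges, corr)
```

Points 0, 1 and 2 are joined transitively even though 0 and 2 share no edge, while point 3 stays in its own single-point cluster.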
In order to show how this algorithm works, let us consider the distribution of points
presented in figure 1. Because of the overlap of the larger sparse rectangle with the
smaller rectangles, and due to statistical fluctuations, the three dense rectangles
actually contain 883, 874 and 863 points.
Going through steps (a) to (d) we obtained the susceptibility as a function of the
temperature as presented in figure 3. The susceptibility χ is maximal at T_max = 0.03
and vanishes at T_vanish = 0.13. In figure 1 we present the clusters obtained according to steps (f) and (g) at T_clus = 0.08. The size of the largest clusters in descending
order is 900 , 894, 877, 2 and all the rest are composed of only one point. The three
biggest clusters correspond to the clusters we are looking for, while the background
is decomposed into clusters of size one.
[Figure 3 here: plot of the susceptibility density χ/N versus temperature T, with T from 0.00 to 0.16.]
Figure 3: The susceptibility density as a function of the temperature.
Let us discuss the effect of the parameters on the procedure. The number of Potts
states, q, determines mainly the sharpness of the transition and the critical temperature. The higher q, the sharper the transition. On the other hand, it is necessary
to perform more statistics (more SW sweeps) as the value of q increases. From our
simulations, we conclude that the influence of q is very weak. The maximal number
of neighbors, K, also affects the results very little; we obtained quite similar results
for a wide range of K (5 ≤ K ≤ 20).
No dramatic changes were observed in the classification, when choosing clustering
temperatures T_clus other than that suggested in (e). However this choice is clearly
ad-hoc and a better choice should be found. Our method does not provide a natural way to choose a threshold θ for the spin spin correlation function. In practice
though, the classification is not very sensitive to the value of θ, and values in the
range 0.2 < θ < 0.8 yield similar results. The reason is that the frequency distribution of the values of the spin spin correlation function exhibits two peaks, one
close to 1/q and the other close to 1, while for intermediate values it is very close
to zero. In figure (4) we present the average size of the largest SW cluster as a
function of the temperature, along with the size of the largest cluster obtained by
the thresholding procedure (described in (g)) using three different threshold values
θ = 0.2, 0.5, 0.9. Note the agreement between the largest cluster size defined by the
threshold θ = 0.5 and the average size of the largest SW cluster for all temperatures
(this agreement holds for the smaller clusters as well). It supports our thresholding
procedure as a sensible one at all temperatures.
[Figure 4 here: plot of cluster size versus temperature, T from 0.00 to 0.16.]
Figure 4: Average size of the largest SW cluster as a function of the temperature,
is denoted by the solid line. The triangles, x's and squares denote the size of the
largest cluster obtained with thresholds θ = 0.2, 0.5 and 0.9 respectively.
6 Discussion
Other methods that were proposed previously, such as Fukunaga's (1990) , can be
formulated as a Metropolis relaxation of a ferromagnetic Potts model at T = O.
The clusters are then determined by the points having the same spin value at the
local minima of the energy at which the relaxation process terminates. Clearly this
procedure depends strongly on the initial conditions. There is a high probability of
getting stuck in a metastable state that does not correspond to the desired answer.
Such a T = 0 method does not provide any way to distinguish between "good" and
"bad" metastable states. We applied Fukunaga's method on the data set of figure
(1) using many different initial conditions. The right answer was never obtained.
In all runs, domain walls that broke a cluster into two or more parts appeared.
Our method generalizes Fukunaga's method by introducing a finite temperature at
which the division into clusters is stable. In addition, the SW dynamics are completely insensitive to the initial conditions and extremely efficient .
Work in progress shows that our method is especially suitable for hierarchical clustering. This is done by identifying clusters at several temperatures which are chosen
according to features of the susceptibility curve. In particular our method is successful in dealing with "real life" problems such as the Iris data and Landsat data.
Acknowledgments
We thank 1. Kanter for many useful discussions. This research has been supported
by the US-Israel Bi-national Science Foundation (BSF) , and the Germany-Israel
Science Foundation (GIF).
References
J.M. Buhmann and H. Kühnel (1993); Vector quantization with complexity costs,
IEEE Trans. Inf. Theory 39, 1133.
K. Fukunaga (1990); Introduction to Statistical Pattern Recognition, Academic Press.
K. Rose, E. Gurewitz, and G.C. Fox (1993); Constrained clustering as an optimization method, IEEE Trans. on Patt. Anal. and Mach. Intel. PAMI 15, 785.
S. Wang and R.H. Swendsen (1990); Cluster Monte Carlo alg., Physica A 167, 565.
F.Y. Wu (1982); The Potts model, Rev. Mod. Phys. 54, 235.
A.L. Yuille and J.J. Kosowsky (1994); Statistical algorithms that converge, Neural
Computation 6, 341.
Plasticity of Center-Surround Opponent
Receptive Fields in Real and Artificial
Neural Systems of Vision
S. Yasui
Kyushu Institute of Technology
lizuka 820, Japan
T. Furukawa
Kyushu Institute of Technology
lizuka 820, Japan
M. Yamada
Electrotechnical Laboratory
Tsukuba 305, Japan
T. Saito
Tsukuba University
Tsukuba 305, Japan
Abstract
Despite the phylogenic and structural differences, the visual systems of different species, whether vertebrate or invertebrate, share
certain functional properties. The center-surround opponent receptive field (CSRF) mechanism represents one such example. Here,
analogous CSRFs are shown to be formed in an artificial neural
network which learns to localize contours (edges) of the luminance
difference. Furthermore, when the input pattern is corrupted by
a background noise, the CSRFs of the hidden units become shallower and broader with decrease of the signal-to-noise ratio (SNR).
The same kind of SNR-dependent plasticity is present in the CSRF
The same kind of SNR-dependent plasticity is present in the CSRF
of real visual neurons; in bipolar cells of the carp retina as is shown
here experimentally, as well as in large monopolar cells of the fly
compound eye as was described by others. Also, analogous SNR-dependent plasticity is shown to be present in the biphasic flash
responses (BPFR) of these artificial and biological visual systems .
Thus, the spatial (CSRF) and temporal (BPFR) filtering properties with which a wide variety of creatures see the world appear to
be optimized for detectability of changes in space and time.
1
INTRODUCTION
A number of learning algorithms have been developed to make synthetic neural
machines be trainable to function in certain optimal ways. If the brain and nervous
systems that we see in nature are the best answers of the evolutionary process, then
one might be able to find some common 'softwares' in real and artificial neural
systems. This possibility is examined in this paper, with respect to a basic visual
S. YASUI, T. FURUKAWA, M. YAMADA, T. SAITO
160
mechanism relevant to detection of brightness contours (edges). In most visual
systems of vertebrate and invertebrate, one finds interneurons which possess center-surround opponent receptive fields (CSRFs). CSRFs underlie the mechanism of
lateral inhibition which produces edge enhancement effects such as Mach band. It
has also been shown in the fly compound eye that the CSRF of large monopolar cells
(LMCs) changes its shape in accordance with SNR; the CSRF becomes wider with
increase of the noise level in the sensory environment. Furthermore, whereas CSRFs
describe a filtering function in space, an analogous observation has been made
in LMCs as regards the filtering property in the time domain; the biphasic flash
response (BPFR) lasts longer as the noise level increases (Dubs, 1982; Laughlin,
1982).
A question that arises is whether similar SNR-dependent spatia-temporal filtering
properties might be present in vertebrate visual cells. To investigate this, we made
an intracellular recording experiment to measure the CSRF and BPFR profiles of
bipolar cells in the carp retina under appropriate conditions, and the results are
described in the first part of this paper. In the second part, we ask the same
question in a 3-layer feedforward artificial neural network (ANN) trained to detect
and localize spatial and temporal changes in simulated visual inputs corrupted by
noise. In this case, the ANN wiring structure evolves from an initial random state so
as to minimize the detection error, and we look into the internal ANN organization
that emerges as a result of training. The findings made in the real and artificial
neural systems are compared and discussed in the final section.
In this study, the backpropagation learning algorithm was applied to update the
synaptic parameters of the ANN. This algorithm was used as a means for the computational optimization. Accordingly, the present choice is not necessarily relevant
to the question of whether the error backpropagation pathway actually might exist
in real neural systems( d. Stork & Hall, 1989).
2
THE CASE OF A REAL NEURAL SYSTEM:
RETINAL BIPOLAR CELL
Bipolar cells occur as second-order neurons in the vertebrate retina, and they
provide a good example of a CSRF. Here we are interested in the possibility that the CSRF
and BPFR of bipolar cells might change their size and shape as a function of the
visual environment, particularly as regards the dark- versus light-adapted retinal
states which correspond to low versus high SNR conditions as explained later. Thus,
the following intracellular recording experiment was carried out .
2.1
MATERIAL AND METHOD
The retina was isolated from the carp which had been kept in complete darkness
for 2 hrs before being pithed for sacrifice. The specimen was then mounted on
a chamber with the receptor side up, and it was continuously superfused with a
Ringer solution composed of (in mM) 102 NaCl, 28 NaHCO3, 2.6 KCl, 1 CaCl2, 1
MgCl2 and 5 glucose, maintained at pH = 7.6 and aerated with a gas mixture of 95%
O2 and 5% CO2. Glass micropipettes filled with 3 M KCl and having tip resistances
of about 150 MΩ were used to record the membrane potential. Identification of
bipolar cell units was made on the basis of the presence or absence of a CSRF. For this
preliminary test, the center and peripheral responses were examined by using flashes
of a small centered spot and a narrow annular ring. To map their receptive field
profile, the stimulus was given as flashes of a narrow slit presented at discrete
positions 60 µm apart on the retina. The slit of white light was 4 mm long and 0.17
mm wide, and its flash had an intensity of 7.24 µW/cm² and a duration of 250 msec.
The CSRF measurement was made under dark- and light- adapted conditions. A
[Figure 1 appears here: panels (a)-(c); legend: Light vs. Dark; scale bars: 5 mV, 60 µm, 10 sec, 1 sec.]
Figure 1: (a) Intracellular recordings from an ON-center bipolar cell of the carp
retina with moving slit stimuli under light and dark adapted condition. (b) The
receptive field profiles plotted from the recordings. (c) The response recorded when
the slit was positioned at the receptive field center.
steady background light of 0.29 µW/cm² was provided for light adaptation.
2.2
RESULTS
Fig.1a shows a typical set of records obtained from a bipolar cell. The response
to each flash of slit was biphasic (i.e., BPFR), consisting of a depolarization (ON)
followed by a hyperpolarization(OFF) . The ON response was the major component
when the slit was positioned centrally on the receptive field, whereas the OFF
response was dominant at peripheral locations and somewhat sluggish. The CSRF
pattern was portrayed by plotting the response membrane potential measured at
the time just prior to the cessation of each test flash. The result compiled from
the data of Fig.1a is presented in Fig.1b, showing that the CSRF of the dark-adapted state was shallow and broad as opposed to the sharp profile produced during
light adaptation. The records with the slit positioned at the receptive field center
are enlarged in Fig.1c, indicating that the OFF part of the BPFR waveform was
shallower and broader when the retina was dark adapted than when light adapted.
3
THE CASE OF ARTIFICIAL NEURAL NETWORKS
Visual pattern recognition and imagery data processing have been a traditional
application area of ANNs. There are also ANNs that deal with time series signals.
These both types of ANNs are considered here, and they are trained to detect and
localize spatial or temporal changes of the input signal corrupted by noise.
3.1
PARADIGMS AND METHODS
The ANN models we used are illustrated in Figs.2. The model of Fig.2a deals
with one-dimensional spatial signals. It consists of three layers (input, hidden,
output), each having the same number of 12 or 20 neuronal units. The pattern
given to the input layer represents the brightness distribution of light. The network
was trained by means of the standard backpropagation algorithm, to detect and
localize step-wise changes (edges) which were distributed on each training pattern
in a random fashion with respect to the number, position and height. The mean
level of the whole pattern was varied randomly as well. In addition, there was
a background noise (not illustrated in Figs.2); independent noise signals of the
same statistics were given to the all input units, and the maximum noise amplitude
(NL: noise level) remained constant throughout each training session. The teacher
signal was the "true" edge positions which were subject to obscuration due to the
background noise; the learning was supervised such that each output unit would
respond with 1 when a step-wise change not due to the background noise occurred
at the corresponding position, and respond with -1 otherwise. The value of each
synaptic weight parameter was given randomly at the outset and updated by using
the backpropagation algorithm after presentation of each training pattern. The
training session was terminated when the mean square error stopped decreasing.
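As a concrete illustration, the spatial training loop just described can be sketched as follows. The layer size (12 units) and the presence of background noise come from the text; the tanh transfer function, learning rate, edge probability, and iteration count are illustrative assumptions, since the paper does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12        # units per layer, as in the spatial model of Fig. 2a
NL = 0.4      # maximum background-noise amplitude (one of the NLs of Fig. 5)
eta = 0.05    # learning rate (assumed; not stated in the paper)

def make_example():
    """Step-wise pattern with random edges, a random mean level, and
    independent background noise on every input unit."""
    level = rng.uniform(-1.0, 1.0)
    signal, target = [], []
    for i in range(N):
        if i > 0 and rng.random() < 0.2:   # a step-wise change ("edge") here
            level += rng.uniform(-1.0, 1.0)
            target.append(1.0)             # teacher: respond with +1 at an edge
        else:
            target.append(-1.0)            # ... and with -1 otherwise
        signal.append(level)
    x = np.array(signal) + rng.uniform(-NL, NL, size=N)
    return x, np.array(target)

# Two weight layers of N x N connections; tanh stands in for the
# (unspecified) saturating transfer function.
W1 = rng.normal(scale=0.1, size=(N, N)); b1 = np.zeros(N)
W2 = rng.normal(scale=0.1, size=(N, N)); b2 = np.zeros(N)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return h, np.tanh(W2 @ h + b2)

def mean_error(trials=500):
    errs = []
    for _ in range(trials):
        x, t = make_example()
        _, y = forward(x)
        errs.append(np.mean((y - t) ** 2))
    return float(np.mean(errs))

before = mean_error()
for _ in range(20000):                      # one pattern per update
    x, t = make_example()
    h, y = forward(x)
    d_out = (y - t) * (1.0 - y ** 2)        # backpropagated error signals
    d_hid = (W2.T @ d_out) * (1.0 - h ** 2)
    W2 -= eta * np.outer(d_out, h); b2 -= eta * d_out
    W1 -= eta * np.outer(d_hid, x); b1 -= eta * d_hid
after = mean_error()
print(f"mean squared error per output unit: {before:.3f} -> {after:.3f}")
```

After training, the rows of W1 (the input weights of each hidden unit) can be inspected for the center-surround profiles discussed in the results section.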
To process time series inputs, the ANN model of Fig.2b was constructed with the
backpropagation learning algorithm. This temporal model also has three layers,
but the meaning of this is quite different from the spatial network model of Fig.2a.
That is, whereas each unit of each layer in the spatial model is an anatomical
entity, this is not the case with respect to the temporal model. Thus, each layer
represents a single neuron so that there are actually only three neuronal elements,
i.e., a receptor, an interneuron, and an output cell. And, the units in the same
layer represent activity states of one neuron at different time slices; the rightmost
unit for the present time, the next one for one time unit ago, and so on. As is
apparent from Fig.2b, therefore, there is no convergence from the future (right) to
the past (left). Each cell has a memory of T time units. Accordingly, the network
requires 2T - 1 units in the input layer, T units in the hidden layer and 1 unit in
the output layer to calculate the output at present time. The input was a discrete
time series in which step-wise changes took place randomly in a manner analogous
to the spatial input of Fig.2a. As in the spatial case, there was a background noise
Figure 2: The neural network architectures. Spatial (a) and temporal model (b).
[Figure 3 appears here: (a) hidden-unit weight profiles after 2000, 4000, 10000, and 30000 iterations; (b) mean square error (0.0-0.2) versus iterations (up to 8.0x10^4).]
Figure 3: Development of receptive fields. Synaptic weights (a) and mean square
error (b), both as a function of the number of iterations.
added to the input. The network was trained to respond with +1/ -1 when the
original input signal increased/decreased, and to respond with 0 otherwise.
3.2
RESULTS
Spatial case: Emergence of CSRFs with SNR-dependent plasticity
As regards the edge detection learning by the
ANN model of Fig.2a, the results without
the background noise are described first (Furukawa & Yasui, 1990; Joshi & Lee, 1993).
[Figure 4 appears here: a sample activity pattern of each layer (input, hidden, and output layers).]
Fig.3a illustrates how the synaptic connections developed from the initial random state.
If the final distribution of synaptic weight parameters is examined from input units to any
hidden unit and also from hidden units to any output unit, then it can be seen in
either case that the central and peripheral connections are opposite in the polarity of their
weight parameters; the central group had either positive (ON-center) or negative (OFF-center) values, but the reversed profiles are
shown in the drawing of Fig.3a for the OFF-center case. In any event, CSRFs were
formed inside the network as a result of the edge detection learning. Fig.3b shows
the performance improvement during a learning session. FigA shows the activation
pattern of each layer in response to a sample input, and edge enhancement like
the Mach band effect can be observed in the hidden layer. Fig.5a presents sample
input patterns corrupted by the background noise of various NL values, and Fig.5b
shows how a hidden unit was connected to the input layer at the end of training.
CSRFs were still formed when the environment suffered from the noise. However,
the structure of the center-surround antagonism changed as a function of NL; the
CSRFs became shallow and broad as NL increased, i.e., as the SNR decreased.
Temporal case: Emergence of BPFRs with SNR-dependent plasticity
With reference to the learning paradigm of Fig.2b, Fig.5c reveals how a representative hidden unit made synaptic connections with the input units as a function of
NL; the weight parameters are plotted against the elapsed time. Each trace would
correspond to the response of the hidden unit to a flash of light, and it consists of
two phases of ON and OFF, i.e., BPFRs (biphasic flash responses) emerged in this
ANN as a result of learning, and the biphasic time course changed depending on
NL; the negative-going phase became shallower and longer with decrease of SNR.
4
DISCUSSION: Common Receptive Field Properties in
Vertebrate, Invertebrate and Artificial Systems
A CSRF profile emerges after differentiating a small patch of light twice in space,
and the CSRF is a kind of point spread function. Accordingly, the response to any
input distribution can be obtained by convolving the input pattern with CSRF.
The double differentiation of this spatial filtering acts to locate edge positions. On
the other hand, the waveform of BPFR appears by differentiating once in time a
short flash of light. Thus, the BPFR is an impulse response function with which
to convolve the given input time series to obtain the response waveform. This is
a derivative filtering, which subserves detection of temporal changes in the input
visual signal. While both CSRF and BPFR occur in visual neurons of a wide variety
of vertebrates and invertebrates, the first part of the present study shows that these
spatial and temporal filtering functions can develop autonomously in our ANNs.
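The convolution view can be checked with a toy one-dimensional example. The three-tap kernel below is an illustrative discrete second spatial derivative with a positive center and negative surround, not a kernel taken from the paper:

```python
import numpy as np

# A 1-D luminance step ("edge") between positions 9 and 10.
signal = np.concatenate([np.zeros(10), np.ones(10)])

# Minimal ON-center/OFF-surround kernel: discrete second derivative.
kernel = np.array([-0.5, 1.0, -0.5])

# Convolving the input with the CSRF-like kernel yields a biphasic
# response localized at the edge; flat regions give zero output.
# (The value at the last position is an array-boundary artifact.)
response = np.convolve(signal, kernel, mode="same")
print(response.round(2))
```

The output is zero over the flat regions and has a negative/positive pair straddling the step, the 1-D analogue of the Mach-band-like edge enhancement seen in the hidden layer.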
The neural system of visual signal processing encounters various kinds of noise.
There are non-biological ones such as a background noise in the visual input itself
and the photon noise which cannot be ignored when the light intensity is low. Endogenous sources of noise include spontaneous photoisomerization in photoreceptor
cells, quantal transmitter release at synaptic sites, open/close activities of ion channels and so on. Generally speaking, therefore, since the surroundings are dim when
the retina is dark adapted, SNR in the neuronal environment tends to be low during
dark adaptation. According to the present experiment on the carp retina, the CSRF
of bipolar cells widens in space and the BPFR is prolonged in time when the retina
is dark adapted, that is, when SNR is presumably low. Interestingly, the same
SNR-dependent properties have also been described in connection with the CSRF
and BPFR of large monopolar cells in the fly compound eye. These spatial and
temporal observations are both in accord with a notion that a method to remove
noise is smoothing which requires averaging for a sufficiently long interval. In other
words, when SNR is low , the signal averaging takes place over a large portion of
the spatio-temporal domain comprised of CSRF and BPFR. Smoothing and differentiation are entirely opposite in the signal processing role. The SNR dependency
of the CSRF and BPFR profiles can be viewed as a compromise between these two
operations, for the need to detect signal changes in the presence of noise. These
[Figure 5 appears here: panels (a)-(c).]
Figure 5: (a) A sample set of training patterns with different background noise
levels (NLs). The NLs are 0.0, 0.4, 1.0 from bottom to top. The receptive field
profiles (b) and flash responses (c) after training with each NL. The ordinate scale
is linear but in arbitrary unit, with the zero level indicated by dotted lines.
points parallel the results of information-theoretic analysis by Atick and Redlich
(1992) and by Laughlin (1982).
5
CONCLUDING REMARKS
We have learnt from this study that the same software is at work for the SNR-dependent control of the spatio-temporal visual receptive field in entirely different
hardware; namely, vertebrate, invertebrate and artificial neural systems. In other
words, the plasticity scheme represents nature's optimum answer to the visual functional demand, not a result of compromise with other factors such as metabolism
or morphology. Some mention needs to be made of the standard regularization
theory. If the theory is applied to the edge detection problem, then one obtains
the Laplacian-of-Gaussian filter, which is a well-known CSRF example (Torre & Poggio, 1986). And the shape of this spatial filter can be made wide or narrow by
manipulating the value of a constant usually referred to as the regularization parameter . This parameter choice corresponds to the compromise that our ANN finds
autonomously between smoothing and differentiation. The present type of research
aided by trainable artificial neural networks seems to be a useful top-down approach
to gain insight into the brain and neural mechanisms. Earlier , Lehky and Sejnowski
(1988) were able to create neuron-like units similar to the complex cells of the visual
cortex by using the backpropagation algorithm, however, the CSRF mechanism was
given a priori to an early stage in their ANN processor. It should also be noted that
Linsker (1986) succeeded in self-organization of CSRFs in an ANN model that operates under the learning law of Hebb. Perhaps, it remains to be examined whether
the CSRFs formed in such an unsupervised learning paradigm might also possess
an SNR-dependent plasticity similar to that described in this paper.
References
Atick, J. J. & Redlich, A. N. (1992) What does the retina know about natural scenes?
Neural Computation, 4, 196-210.
Dubs, A. (1982) The spatial integration of signals in the retina and lamina of the fly
compound eye under different conditions of luminance. J. Comp. Physiol. A, 146,
321-334.
Furukawa, T. & Yasui, S. (1990) Development of center-surround opponent receptive
fields in a neural network through backpropagation training. Proc. Int. Conf. Fuzzy
Logic & Neural Networks (Iizuka, Japan) 473-490.
Joshi, A. & Lee, C. H. (1993) Backpropagation learns Marr's operator. Biol. Cybern.,
70, 65-73.
Laughlin, S. B. (1982) Matching coding to scenes to enhance efficiency. In Braddick
OJ, Sleigh AC(eds) The physical and biological processing of images (pp.42-52).
Springer, Berlin, Heidelberg New York.
Lehky, S. R. & Sejnowski, T. J. (1988) Network model of shape-from shading: neural
function arises from both receptive and projective fields . Nature, 333, 452-454.
Linsker, R. (1986) From basic network principles to neural architecture: Emergence
of spatial-opponent cells. Proc. Natl. Acad. Sci. USA, 83, 7508-7512.
Stork, D. G. & Hall, J. (1989) Is backpropagation biologically plausible? International Joint Conf. Neural Networks, II (Washington DC), 241-246.
Torre, V. & Poggio, T. A. (1986) On edge detection. IEEE Trans. Pattern Anal.
Machine Intell., PAMI-8, 147-163.
PART III
THEORY
Worst-case Loss Bounds
for Single Neurons
David P. Helmbold
Department of Computer Science
University of California, Santa Cruz
Santa Cruz, CA 95064
USA
Jyrki Kivinen
Department of Computer Science
P.O. Box 26 (Teollisuuskatu 23)
FIN-00014 University of Helsinki
Finland
Manfred K. Warmuth
Department of Computer Science
University of California, Santa Cruz
Santa Cruz, CA 95064
USA
Abstract
We analyze and compare the well-known Gradient Descent algorithm and a new algorithm, called the Exponentiated Gradient
algorithm, for training a single neuron with an arbitrary transfer
function. Both algorithms are easily generalized to larger neural
networks, and the generalization of Gradient Descent is the standard back-propagation algorithm. In this paper we prove worst-case loss bounds for both algorithms in the single neuron case.
Since local minima make it difficult to prove worst-case bounds
for gradient-based algorithms, we must use a loss function that
prevents the formation of spurious local minima. We define such
a matching loss function for any strictly increasing differentiable
transfer function and prove a worst-case loss bound for any such transfer function and
transfer function and its corresponding matching loss. For example, the matching loss for the identity function is the square loss
and the matching loss for the logistic sigmoid is the entropic loss.
The different structure of the bounds for the two algorithms indicates that the new algorithm out-performs Gradient Descent when
the inputs contain a large number of irrelevant components.
D. P. HELMBOLD, J. KIVINEN, M. K. WARMUTH
310
1
INTRODUCTION
The basic element of a neural network, a neuron, takes in a number of real-valued
input variables and produces a real-valued output. The input-output mapping of
a neuron is defined by a weight vector w ∈ R^N, where N is the number of input
variables, and a transfer function φ. When presented with input given by a vector
x ∈ R^N, the neuron produces the output y = φ(w · x). Thus, the weight vector
regulates the influence of each input variable on the output, and the transfer function
can produce nonlinearities in the input-output mapping. In particular, when the
transfer function is the commonly used logistic function, φ(p) = 1/(1 + e^(-p)), the
outputs are bounded between 0 and 1.
be unbounded, it is often convenient to use the identity function as the transfer
function, in which case the neuron simply computes a linear mapping. In this
paper we consider a large class of transfer functions that includes both the logistic
function and the identity function, but not discontinuous (e.g. step) functions.
The goal of learning is to come up with a weight vector w that produces a
desirable input-output mapping. This is achieved by considering a sequence
S = ((x1, y1), ..., (xℓ, yℓ)) of examples, where for t = 1, ..., ℓ the value yt ∈ R
is the desired output for the input vector xt, possibly distorted by noise or other
errors. We call xt the t-th instance and yt the t-th outcome. In what is often called
batch learning, all ℓ examples are given at once and are available during the whole
training session. As noise and other problems often make it impossible to find a
weight vector w that would satisfy φ(w · xt) = yt for all t, one instead introduces a
loss function L, such as the square loss given by L(y, ŷ) = (y − ŷ)²/2, and finds a
weight vector w that minimizes the empirical loss (or training error)
Loss(w, S) = Σ_{t=1}^{ℓ} L(yt, φ(w · xt)).    (1)
With the square loss and identity transfer function φ(p) = p, this is the well-known
linear regression problem. When φ is the logistic function and L is the entropic loss
given by L(y, ŷ) = y ln(y/ŷ) + (1 − y) ln((1 − y)/(1 − ŷ)), this can be seen as a
special case of logistic regression. (With the entropic loss, we assume 0 ≤ yt, ŷt ≤ 1
for all t, and use the convention 0 ln 0 = 0 ln(0/0) = 0.)
In this paper we use an on-line prediction (or life-long learning) approach to the
learning problem. It is well known that on-line performance is closely related to
batch learning performance (Littlestone, 1989; Kivinen and Warmuth, 1994).
Instead of receiving all the examples at once, the training algorithm begins with
some fixed start vector w_1, and produces a sequence w_1, …, w_{ℓ+1} of weight vectors.
The new weight vector w_{t+1} is obtained by applying a simple update rule to the
previous weight vector w_t and the single example (x_t, y_t). In the on-line prediction
model, the algorithm uses its tth weight vector, or hypothesis, to make the prediction
ŷ_t = φ(w_t · x_t). The training algorithm is then charged a loss L(y_t, ŷ_t) for this tth
trial. The performance of a training algorithm A that produces the weight vectors
w_t on an example sequence S is measured by its total (cumulative) loss

    Loss(A, S) = Σ_{t=1}^{ℓ} L(y_t, φ(w_t · x_t)) .    (2)
Our main results are bounds on the cumulative losses for two on-line prediction
algorithms. One of these is the standard Gradient Descent (GD) algorithm. The
other one, which we call EG±, is also based on the gradient but uses it in a different
Worst-case Loss Bounds for Single Neurons
311
manner than GD. The bounds are derived in a worst-case setting: we make no assumptions about how the instances are distributed or the relationship between each
instance x_t and its corresponding outcome y_t. Obviously, some assumptions are
needed in order to obtain meaningful bounds. The approach we take is to compare
the total losses, Loss(GD, S) and Loss(EG±, S), to the least achievable empirical
loss, inf_w Loss(w, S). If the least achievable empirical loss is high, the dependence
between the instances and outcomes in S cannot be tracked by any neuron using
the transfer function, so it is reasonable that the losses of the algorithms are also
high. More interestingly, if some weight vector achieves a low empirical loss, we
also require that the losses of the algorithms are low. Hence, although the algorithms always predict based on an initial segment of the example sequence, they
must perform almost as well as the best fixed weight vector for the whole sequence.
The choice of loss function is crucial for the results that we prove. In particular,
since we are using gradient-based algorithms, the empirical loss should not have spurious local minima. This can be achieved for any differentiable increasing transfer
function φ by using the loss function L_φ defined by

    L_φ(y, ŷ) = ∫_{φ⁻¹(y)}^{φ⁻¹(ŷ)} (φ(z) − y) dz .    (3)

For y < ŷ, the value L_φ(y, ŷ) is the area in the z × φ(z) plane below the function
φ(z), above the line φ(z) = y, and to the left of the line z = φ⁻¹(ŷ). We call L_φ the
matching loss function for transfer function φ, and will show that for any example
sequence S, if L = L_φ then the mapping from w to Loss(w, S) is convex. For
example, if the transfer function is the logistic function, the matching loss function
is the entropic loss, and if the transfer function is the identity function, the matching
loss function is the square loss. Note that using the logistic activation function with
the square loss can lead to a very large number of local minima (Auer et al., 1996).
Even in the batch setting there are reasons to use the entropic loss with the logistic
transfer function (see, for example, Solla et al., 1988).
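As a quick numerical sanity check of these two claims (our own illustration, not from the paper; the function names below are ours), the matching loss of definition (3) can be integrated numerically and compared against the entropic and square losses:

```python
import math

def matching_loss(phi, phi_inv, y, y_hat, steps=100_000):
    """Numerically integrate L_phi(y, y_hat) = integral from phi^{-1}(y)
    to phi^{-1}(y_hat) of (phi(z) - y) dz, using the midpoint rule."""
    a, b = phi_inv(y), phi_inv(y_hat)
    h = (b - a) / steps
    return sum((phi(a + (i + 0.5) * h) - y) * h for i in range(steps))

logistic = lambda p: 1.0 / (1.0 + math.exp(-p))
logit = lambda q: math.log(q / (1.0 - q))
identity = lambda p: p

y, y_hat = 0.2, 0.7
# logistic transfer function -> entropic loss
entropic = y * math.log(y / y_hat) + (1 - y) * math.log((1 - y) / (1 - y_hat))
assert abs(matching_loss(logistic, logit, y, y_hat) - entropic) < 1e-6
# identity transfer function -> square loss
assert abs(matching_loss(identity, identity, y, y_hat) - (y - y_hat) ** 2 / 2) < 1e-6
```

Both assertions pass, matching the correspondence between transfer functions and their matching losses stated above.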
How much our bounds on the losses of the two algorithms exceed the least empirical
loss depends on the maximum slope of the transfer function we use. More importantly, they depend on various norms of the instances and the vector w for which
the least empirical loss is achieved. As one might expect, neither of the algorithms
is uniformly better than the other. Interestingly, the new EG± algorithm is better
when most of the input variables are irrelevant, i.e., when some weight vector w
with w_i = 0 for most indices i has a low empirical loss. On the other hand, the
GD algorithm is better when the weight vectors with low empirical loss have many
nonzero components, but the instances contain many zero components.
The bounds we derive concern only single neurons, and one often combines a number
of neurons into a multilayer feedforward neural network. In particular, applying
the Gradient Descent algorithm in the multilayer setting gives the famous back
propagation algorithm. Also the EG± algorithm, being gradient-based, can easily
be generalized for multilayer feedforward networks. Although it seems unlikely
that our loss bounds will generalize to multilayer networks, we believe that the
intuition gained from the single neuron case will provide useful insight into the
relative performance of the two algorithms in the multilayer case. Furthermore, the
EG± algorithm is less sensitive to large numbers of irrelevant attributes. Thus it
might be possible to avoid multilayer networks by introducing many new inputs,
each of which is a non-linear function of the original inputs. Multilayer networks
remain an interesting area for future study.
Our work follows the path opened by Littlestone (1988) with his work on learning
D. P. HELMBOLD, J. KIVINEN, M. K. WARMUTH
312
thresholded neurons with sparse weight vectors. More immediately, this paper is
preceded by results on linear neurons using the identity transfer function (Cesa-Bianchi et al., 1996; Kivinen and Warmuth, 1994).
2 THE ALGORITHMS
This section describes how the Gradient Descent training algorithm and the new
Exponentiated Gradient training algorithm update the neuron's weight vector.
For the remainder of this paper, we assume that the transfer function φ is increasing
and differentiable, and Z is a constant such that φ′(p) ≤ Z holds for all p ∈ R. For
the loss function L_φ defined by (3) we have

    ∂L_φ(y, φ(w · x)) / ∂w_i = (φ(w · x) − y) x_i .    (4)

Treating L_φ(y, φ(w · x)) for fixed x and y as a function of w, we see that the Hessian
H of this function is given by H_ij = φ′(w · x) x_i x_j. Then vᵀHv = φ′(w · x)(v · x)² ≥ 0,
so H is positive semidefinite. Hence, for an arbitrary fixed S, the empirical loss
Loss(w, S) defined in (1) as a function of w is convex and thus has no spurious local minima.
We first describe the Gradient Descent (GD) algorithm, which for multilayer networks leads to the back-propagation algorithm. Recall that the algorithm's prediction at trial t is ŷ_t = φ(w_t · x_t), where w_t is the current weight vector and x_t is
the input vector. By (4), performing gradient descent in weight space on the loss
incurred in a single trial leads to the update rule

    w_{t+1} = w_t − η(ŷ_t − y_t) x_t .

The parameter η is a positive learning rate that multiplies the gradient of the loss
function with respect to the weight vector w_t. In order to obtain worst-case loss
bounds, we must carefully choose the learning rate η. Note that the weight vector
w_t of GD always satisfies w_t = w_1 + Σ_{i=1}^{t−1} a_i x_i for some scalar coefficients a_i.
Typically, one uses the zero initial vector w_1 = 0.
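A minimal sketch of one GD trial in plain Python (our own illustration; the identity transfer function and the learning rate below are arbitrary placeholders):

```python
def gd_step(w, x, y, phi, eta):
    """One on-line Gradient Descent trial for a single neuron:
    predict y_hat = phi(w . x), then apply w <- w - eta*(y_hat - y)*x,
    which by (4) is gradient descent on the matching loss L_phi."""
    y_hat = phi(sum(wi * xi for wi, xi in zip(w, x)))
    w_new = [wi - eta * (y_hat - y) * xi for wi, xi in zip(w, x)]
    return w_new, y_hat

w = [0.0, 0.0]                              # zero start vector w_1
examples = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]
for x, y in examples:
    w, y_hat = gd_step(w, x, y, phi=lambda p: p, eta=0.5)
print(w)  # -> [0.5, -0.5]
```

Since each x_t enters the update scaled by a scalar, the loop makes the additive structure w_t = w_1 + Σ a_i x_i noted above explicit.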
A more recent training algorithm, called the Exponentiated Gradient (EG) algorithm (Kivinen and Warmuth, 1994), uses the same gradient in a different way. This
algorithm makes multiplicative (rather than additive) changes to the weight vector,
and the gradient appears in the exponent. The basic version of the EG algorithm
also normalizes the weight vector, so the update is given by

    w_{t+1,i} = w_{t,i} e^{−η(ŷ_t − y_t) x_{t,i}} / Σ_{j=1}^{N} w_{t,j} e^{−η(ŷ_t − y_t) x_{t,j}} .
The start vector is usually chosen to be uniform, w_1 = (1/N, …, 1/N). Notice that
it is the logarithms of the weights produced by the EG training algorithm (rather
than the weights themselves) that are essentially linear combinations of the past
examples. As can be seen from the update, the EG algorithm maintains the constraints w_{t,i} > 0 and Σ_i w_{t,i} = 1. In general, of course, we do not expect that such
constraints are useful. Hence, we introduce a modified algorithm EG± by employing
a linear transformation of the inputs. In addition to the learning rate η, the EG±
algorithm has a scaling factor U > 0 as a parameter. We define the behavior of
EG± on a sequence of examples S = ((x_1, y_1), …, (x_ℓ, y_ℓ)) in terms of the EG
algorithm's behavior on a transformed example sequence S′ = ((x′_1, y_1), …, (x′_ℓ, y_ℓ)),
where x′ = (U x_1, …, U x_N, −U x_1, …, −U x_N). The EG algorithm uses the uniform
start vector (1/(2N), …, 1/(2N)) and the learning rate supplied by the EG± algorithm.
At each time t the N-dimensional weight vector w of EG± is defined in terms
of the 2N-dimensional weight vector w′ of EG as

    w_{t,i} = U(w′_{t,i} − w′_{t,N+i}) .

Thus EG± with scaling factor U can learn any weight vector w ∈ R^N with ‖w‖₁ ≤ U
by having the embedded EG algorithm learn the appropriate 2N-dimensional
(nonnegative and normalized) weight vector w′.
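The EG update and the EG± reduction can be sketched as follows (our own illustration; the function names are ours, and the identity transfer function is just a placeholder):

```python
import math

def eg_step(w, x, y, phi, eta):
    """One EG trial: multiplicative update followed by renormalization,
    so the weights stay positive and sum to 1."""
    y_hat = phi(sum(wi * xi for wi, xi in zip(w, x)))
    unnorm = [wi * math.exp(-eta * (y_hat - y) * xi) for wi, xi in zip(w, x)]
    z = sum(unnorm)
    return [wi / z for wi in unnorm], y_hat

def eg_pm_step(w2n, x, y, phi, eta, U):
    """One EG± trial on N inputs, run as EG on the 2N-dimensional
    transformed instance x' = (U*x, -U*x)."""
    x2n = [U * xi for xi in x] + [-U * xi for xi in x]
    return eg_step(w2n, x2n, y, phi, eta)

N, U = 3, 2.0
w2n = [1.0 / (2 * N)] * (2 * N)             # uniform start vector
w2n, y_hat = eg_pm_step(w2n, [1.0, -1.0, 0.5], 0.8, phi=lambda p: p, eta=0.1, U=U)
w = [U * (w2n[i] - w2n[N + i]) for i in range(N)]   # effective N weights
print(abs(sum(w2n) - 1.0) < 1e-9)  # -> True (normalization is preserved)
```

The effective weights w can take either sign and any one-norm up to U, even though the embedded EG weights remain a probability vector.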
3 MAIN RESULTS
The loss bounds for the GD and EG± algorithms can be written in similar forms
that emphasize how different algorithms work well for different problems. When
L = L_φ, we write Loss_φ(w, S) and Loss_φ(A, S) for the empirical loss of a weight
vector w and the total loss of an algorithm A, as defined in (1) and (2). We give
the upper bounds in terms of various norms. For x ∈ R^N, the 2-norm ‖x‖₂ is the
Euclidean length of the vector x, the 1-norm ‖x‖₁ the sum of the absolute values
of the components of x, and the ∞-norm ‖x‖_∞ the maximum absolute value of
any component of x. For the purposes of setting the learning rates, we assume
that before training begins the algorithm gets an upper bound for the norms of the
instances. The GD algorithm gets a parameter X₂ and EG± a parameter X_∞ such
that ‖x_t‖₂ ≤ X₂ and ‖x_t‖_∞ ≤ X_∞ hold for all t. Finally, recall that Z is an upper
bound on φ′(p). We can take Z = 1 when φ is the identity function and Z = 1/4
when φ is the logistic function.
Our first upper bound is for GD. For any sequence of examples S and any weight
vector u ∈ R^N, when the learning rate is η = 1/(2X₂²Z) we have

    Loss_φ(GD, S) ≤ 2 Loss_φ(u, S) + 2(‖u‖₂ X₂)² Z .

Our upper bounds on the EG± algorithm require that we restrict the one-norm of
the comparison class: the set of weight vectors competed against. The comparison
class contains all weight vectors u such that ‖u‖₁ is at most the scaling factor
U. For any scaling factor U, any sequence of examples S, and any weight vector
u ∈ R^N with ‖u‖₁ ≤ U, we have

    Loss_φ(EG±, S) ≤ (4/3) Loss_φ(u, S) + (16/3)(U X_∞)² Z ln(2N)

when the learning rate is η = 1/(4(U X_∞)² Z).
Note that these bounds depend on both the unknown weight vector u and some
norms of the input vectors. If the algorithms have some further prior information
on the sequence S they can make a more informed choice of η. This leads to bounds
with a constant of 1 before the Loss_φ(u, S) term at the cost of an additional
square-root term (for details see the full paper, Helmbold et al., 1996).
It is important to realize that we bound the total loss of the algorithms over any
adversarially chosen sequence of examples where the input vectors satisfy the norm
bound. Although we state the bounds in terms of loss on the data, they imply that
the algorithms must also perform well on new unseen examples, since the bounds
still hold when an adversary adds these additional examples to the end of the
sequence. A formal treatment of this appears in several places (Littlestone, 1989;
Kivinen and Warmuth, 1994). Furthermore, in contrast to standard convergence
proofs (e.g. Luenberger, 1984), we bound the loss on the entire sequence of examples
instead of studying the convergence behavior of the algorithm when it is arbitrarily
close to the best weight vector.
Comparing these loss bounds we see that the bound for the EG± algorithm grows
with the maximum component of the input vectors and the one-norm of the best
weight vector from the comparison class. On the other hand, the loss bound for the
GD algorithm grows with the two-norm (Euclidean length) of both vectors. Thus
when the best weight vector is sparse, having few significant components, and the
input vectors are dense, with several similarly-sized components, the bound for the
EG± algorithm is better than the bound for the GD algorithm. More formally,
consider the noise-free situation where Loss_φ(u, S) = 0 for some u. Assume x_t ∈
{−1, 1}^N and u ∈ {−1, 0, 1}^N with only k nonzero components in u. We can
then take X₂ = √N, X_∞ = 1, ‖u‖₂ = √k, and U = k. The loss bounds
become (16/3)k² Z ln(2N) for EG± and 2kNZ for GD, so for N ≫ k the EG±
algorithm clearly wins this comparison. On the other hand, the GD algorithm has
the advantage over the EG± algorithm when each input vector is sparse and the best
weight vector is dense, having its weight distributed evenly over its components. For
example, if the inputs x_t are the rows of an N × N unit matrix and u ∈ {−1, 1}^N,
then X₂ = X_∞ = 1, ‖u‖₂ = √N, and U = N. Thus the upper bounds become
(16/3)N² Z ln(2N) for EG± and 2NZ for GD, so here GD wins the comparison.
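Both comparisons are easy to reproduce numerically (our own arithmetic check of the noise-free bounds quoted above):

```python
import math

def eg_pm_bound(U, X_inf, Z, N):
    """Noise-free EG± bound: (16/3) * (U * X_inf)^2 * Z * ln(2N)."""
    return (16.0 / 3.0) * (U * X_inf) ** 2 * Z * math.log(2 * N)

def gd_bound(u_2norm, X_2, Z):
    """Noise-free GD bound: 2 * (||u||_2 * X_2)^2 * Z."""
    return 2.0 * (u_2norm * X_2) ** 2 * Z

N, k, Z = 10_000, 5, 1.0
# sparse target, dense instances: the EG± bound is far smaller
assert eg_pm_bound(U=k, X_inf=1.0, Z=Z, N=N) < gd_bound(math.sqrt(k), math.sqrt(N), Z)
# dense target, sparse instances (rows of the identity matrix): GD wins
assert gd_bound(math.sqrt(N), 1.0, Z) < eg_pm_bound(U=N, X_inf=1.0, Z=Z, N=N)
```

With N = 10000 and k = 5 the sparse-target EG± bound is roughly 1.3 × 10³ against 10⁵ for GD, while in the dense-target case the ordering reverses by several orders of magnitude.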
Of course, a comparison of the upper bounds is meaningless unless the bounds are
known to be reasonably tight. Our experiments with artificial random data suggest
that the upper bounds are not tight. However, the experimental evidence also
indicates that EG± is much better than GD when the best weight vector is sparse.
Thus the upper bounds do predict the relative behaviors of the algorithms.
The bounds we give in this paper are very similar to the bounds Kivinen and
Warmuth (1994) obtained for the comparison class of linear functions and the square
loss. They observed how the relative performances of the GD and EG± algorithms
relate to the norms of the input vectors and the best weight vector in the linear
case.
Our methods are direct generalizations of those applied for the linear case (Kivinen
and Warmuth, 1994). The key notion here is a distance function d for measuring
the distance d(u, w) between two weight vectors u and w. Our main distance
measures are the squared Euclidean distance ½‖u − w‖₂² and the relative entropy
distance (or Kullback-Leibler divergence) Σ_{i=1}^{N} u_i ln(u_i/w_i). The analysis exploits
an invariant over t and u of the form

    a L_φ(y_t, φ(w_t · x_t)) − b L_φ(y_t, φ(u · x_t)) ≤ d(u, w_t) − d(u, w_{t+1}) ,

where a and b are suitably chosen constants. This invariant implies that at each
trial, if the loss of the algorithm is much larger than that of an arbitrary vector
u, then the algorithm updates its weight vector so that it gets closer to u. By
summing the invariant over all trials we can bound the total loss of the algorithms
in terms of Loss_φ(u, S) and d(u, w_1). Full details will be contained in a technical
report (Helmbold et al., 1996).
4 OPEN PROBLEMS
Although the presence of local minima in multilayer networks makes it difficult
to obtain worst case bounds for gradient-based algorithms, it may be possible to
analyze slightly more complicated settings than just a single neuron. One likely
candidate is to generalize the analysis to logistic regression with more than two
classes. In this case each class would be represented by one neuron.
As noted above, the matching loss for the logistic transfer function is the entropic
loss, so this pair does not create local minima. No bounded transfer function
matches the square loss in this sense (Auer et al., 1996), and thus it seems impossible to get the same kind of strong loss bounds for a bounded transfer function
and the square loss as we have for any (increasing and differentiable) transfer function and its matching loss function.
As the bounds for EG± depend only logarithmically on the input dimension, the
following approach may be feasible. Instead of using a multilayer net, use a single
(linear or sigmoided) neuron on top of a large set of basis functions. The logarithmic
growth of the loss bounds in the number of such basis functions means that large
numbers of basis functions can be tried.
Note that the bounds of this paper are only worst-case bounds and our experiments
on artificial data indicate that the bounds may not be tight when the input values
and best weights are large. However, we feel that the bounds do indicate the relative
merits of the algorithms in different situations. Further research needs to be done
to tighten the bounds. Nevertheless, this paper gives the first worst-case upper
bounds for neurons with nonlinear transfer functions.
References
P. Auer, M. Herbster, and M. K. Warmuth (1996). Exponentially many local minima for single neurons. In Advances in Neural Information Processing Systems 8.
N. Cesa-Bianchi, P. Long, and M. K. Warmuth (1996). Worst-case quadratic loss bounds for on-line prediction of linear functions by gradient descent. IEEE Transactions on Neural Networks. To appear. An extended abstract appeared in COLT '93, pp. 429-438.
D. P. Helmbold, J. Kivinen, and M. K. Warmuth (1996). Worst-case loss bounds for single neurons. Technical Report UCSC-CRL-96-2, Univ. of Calif. Computer Research Lab, Santa Cruz, CA, 1996. In preparation.
J. Kivinen and M. K. Warmuth (1994). Exponentiated gradient versus gradient descent for linear predictors. Technical Report UCSC-CRL-94-16, Univ. of Calif. Computer Research Lab, Santa Cruz, CA, 1994. An extended abstract appeared in STOC '95, pp. 209-218.
N. Littlestone (1988). Learning when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318.
N. Littlestone (1989). From on-line to batch learning. In Proc. 2nd Annual Workshop on Computational Learning Theory, pages 269-284. Morgan Kaufmann, San Mateo, CA.
D. G. Luenberger (1984). Linear and Nonlinear Programming. Addison-Wesley, Reading, MA.
S. A. Solla, E. Levin, and M. Fleisher (1988). Accelerated learning in layered neural networks. Complex Systems, 2:625-639.
Predictive Q-Routing: A Memory-based
Reinforcement Learning Approach to
Adaptive Traffic Control
Samuel P.M. Choi, Dit-Yan Yeung
Department of Computer Science
Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{pmchoi,dyyeung}@cs.ust.hk
Abstract
In this paper, we propose a memory-based Q-learning algorithm
called predictive Q-routing (PQ-routing) for adaptive traffic control. We attempt to address two problems encountered in Q-routing
(Boyan & Littman, 1994), namely, the inability to fine-tune routing policies under low network load and the inability to learn new
optimal policies under decreasing load conditions. Unlike other
memory-based reinforcement learning algorithms in which memory is used to keep past experiences to increase learning speed,
PQ-routing keeps the best experiences learned and reuses them
by predicting the traffic trend. The effectiveness of PQ-routing
has been verified under various network topologies and traffic conditions. Simulation results show that PQ-routing is superior to
Q-routing in terms of both learning speed and adaptability.
1 INTRODUCTION
The adaptive traffic control problem is to devise routing policies for controllers (i.e.
routers) operating in a non-stationary environment to minimize the average packet
delivery time. The controllers usually have no or only very little prior knowledge of
the environment. While only local communication between controllers is allowed,
the controllers must cooperate among themselves to achieve the common, global
objective. Finding the optimal routing policy in such a distributed manner is very
difficult. Moreover, since the environment is non-stationary, the optimal policy
varies with time as a result of changes in network traffic and topology.
In (Boyan & Littman, 1994), a distributed adaptive traffic control scheme based
946
S. P. M. CHOI, D. YEUNG
on reinforcement learning (RL), called Q-routing, is proposed for the routing of
packets in networks with dynamically changing traffic and topology. Q-routing is a
variant of Q-learning (Watkins, 1989), which is an incremental (or asynchronous)
version of dynamic programming for solving multistage decision problems. Unlike
the original Q-learning algorithm, Q-routing is distributed in the sense that each
communication node has a separate local controller, which does not rely on global
information of the network for decision making and refinement of its routing policy.
2 EXPLORATION VERSUS EXPLOITATION
As in other RL algorithms, one important issue Q-routing must deal with is the
tradeoff between exploration and exploitation. While exploration of the state space
is essential to learning good routing policies, continual exploration without putting
the learned knowledge into practice is of no use. Moreover, exploration is not done
at no cost. This dilemma is well known in the RL community and has been studied
by some researchers, e.g. (Thrun, 1992).
One possibility is to divide learning into an exploration phase and an exploitation
phase. The simplest exploration strategy is random exploration, in which actions
are selected randomly without taking the reinforcement feedback into consideration.
After the exploration phase, the optimal routing policy is simply to choose the next
network node with minimum Q-value (i.e. minimum estimated delivery time). In
so doing, Q-routing is expected to learn to avoid congestion along popular paths.
Although Q-routing is able to alleviate congestion along popular paths by routing
some traffic over other (possibly longer) paths, two problems are reported in (Boyan
& Littman, 1994). First, Q-routing is not always able to find the shortest paths
under low network load. For example, if there exists a longer path which has a
Q-value less than the (erroneous) estimate of the shortest path, a routing policy
that acts as a minimum selector will not explore the shortest path and hence will
not update its erroneous Q-value. Second, Q-routing suffers from the so-called
hysteresis problem, in that it fails to adapt to the optimal (shortest) path again
when the network load is lowered. Once a longer path is selected due to increase in
network load, a minimum selector is no longer able to notice the subsequent decrease
in traffic along the shortest path. Q-routing continues to choose the same (longer)
path unless it also becomes congested and has a Q-value greater than some other
path. Unless Q-routing continues to explore, the shortest path cannot be chosen
again even though the network load has returned to a very low level. However, as
mentioned in (Boyan & Littman, 1994), random exploration may have very negative
effects on congestion, since packets sent along a suboptimal path tend to increase
queue delays, slowing down all the packets passing through this path.
Instead of having two separate phases for exploration and exploitation, one alternative is to mix them together, with the emphasis shifting gradually from the former
to the latter as learning proceeds. This can be achieved by a probabilistic scheme for
choosing next nodes. For example, the Q-values may be related to probabilities by
the Boltzmann-Gibbs distribution, involving a randomness (or pseudo-temperature)
parameter T. To guarantee sufficient initial exploration and subsequent convergence, T usually has a large initial value (giving a uniform probability distribution)
and decreases towards 0 (degenerating to a deterministic minimum selector) during
the learning process. However, for a continuously operating network with dynamically changing traffic and topology, learning must be continual and hence cannot be
controlled by a prespecified decay profile for T. An algorithm which automatically
adapts between exploration and exploitation is therefore necessary. It is this very
reason which led us to develop the algorithm presented in this paper.
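The Boltzmann-Gibbs selection scheme sketched above can be written down directly (our own illustration; lower Q-values, i.e. shorter estimated delivery times, receive higher probability):

```python
import math
import random

def boltzmann_choice(q_values, T):
    """Pick an index with probability proportional to exp(-Q/T).
    Large T approaches uniform random exploration; T -> 0 approaches
    a deterministic minimum selector."""
    m = min(q_values)                        # shift for numerical stability
    weights = [math.exp(-(q - m) / T) for q in q_values]
    r, acc = random.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1

print(boltzmann_choice([5.0, 1.0, 3.0], T=1e-3))  # -> 1 (acts as minimum selector)
```

Decaying T over time interpolates between the two phases, but, as noted, a fixed decay profile is unsuitable for a continuously operating network.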
3 PREDICTIVE Q-ROUTING
A memory-based Q-learning algorithm called predictive Q-routing (PQ-routing) is
proposed here for adaptive traffic control. Unlike Dyna (Peng & Williams, 1993)
and prioritized sweeping (Moore & Atkeson, 1993) in which memory is used to keep
past experiences to increase learning speed, PQ-routing keeps the best experiences
(best Q-values) learned and reuses them by predicting the traffic trend. The idea
is as follows. Under low network load, the optimal policy is simply the shortest
path routing policy. However, when the load level increases, packets tend to queue
up along the shortest paths and the simple shortest path routing policy no longer
performs well. If the congested paths are not used for a period of time, they will
recover and become good candidates again. One should therefore try to utilize these
paths by occasionally sending packets along them. We refer to such controlled
exploration activities as probing. The probing frequency is crucial, as frequent
probes will increase the load level along the already congested paths while infrequent
probes will make the performance little different from Q-routing. Intuitively, the
probing frequency should depend on the congestion level and the processing speed
(recovery rate) of a path. The congestion level can be reflected by the current
Q-value, but the recovery rate has to be estimated as part of the learning process.
At first glance, it seems that the recovery rate can be computed simply by dividing
the difference in Q-values from two probes by the elapsed time. However, the recovery
rate changes over time and depends on the current network traffic and the possibility
of link/node failure. In addition, the elapsed time does not truly reflect the actual
processing time a path needs. Thus this noisy recovery rate should be adjusted for
every packet sent. It is important to note that the recovery rate in the algorithm
should not be positive, otherwise it may increase the predicted Q-value without
bound and hence the path can never be used again.
Predictive Q-Routing Algorithm
TABLES:
Q_x(d, y) - estimated delivery time from node x to node d via neighboring node y
B_x(d, y) - best estimated delivery time from node x to node d via neighboring node y
R_x(d, y) - recovery rate for path from node x to node d via neighboring node y
U_x(d, y) - last update time for path from node x to node d via neighboring node y
TABLE UPDATES: (after a packet arrives at node y from node x)
ΔQ = (transmission delay + queueing time at y + min_z{Q_y(d, z)}) − Q_x(d, y)
Q_x(d, y) ← Q_x(d, y) + αΔQ
B_x(d, y) ← min(B_x(d, y), Q_x(d, y))
if (ΔQ < 0) then
    ΔR ← ΔQ / (current time − U_x(d, y))
    R_x(d, y) ← R_x(d, y) + βΔR
else if (ΔQ > 0) then
    R_x(d, y) ← γ R_x(d, y)
end if
U_x(d, y) ← current time
ROUTING POLICY: (packet is sent from node x to node ŷ)
Δt = current time − U_x(d, y)
Q′_x(d, y) = max(Q_x(d, y) + Δt · R_x(d, y), B_x(d, y))
ŷ ← argmin_y {Q′_x(d, y)}
There are three learning parameters in the PQ-routing algorithm. α is the Q-function learning parameter as in the original Q-learning algorithm. In PQ-routing,
this parameter should be set to 1 or else the accuracy of the recovery rate may be
affected. β is used for learning the recovery rate. In our experiments, the value of
0.7 is used. γ is used for controlling the decay of the recovery rate, which affects
the probing frequency in a congested path. Its value is usually chosen to be larger
than β. In our experiments, the value of 0.9 is used.
PQ-learning is identical to Q-learning in the way the Q-function is updated. The
major difference is in the routing policy. Instead of selecting actions based solely
on the current Q-values, the recovery rates are used to yield better estimates of
the Q-values before the minimum selector is applied. This is desirable because the
Q-values on which routing decisions are based may become outdated due to the
ever-changing traffic.
4 EMPIRICAL RESULTS

4.1 A 15-NODE NETWORK
To demonstrate the effectiveness of PQ-routing, let us first consider a simple 15-node network (Figure 1(a)) with three sources (nodes 12 to 14) and one destination
(node 15). Each node can process one packet per time step, except nodes 7 to 11
which are two times faster than the other nodes. Each link is bidirectional and has
a transmission delay of one time unit. It is not difficult to see that the shortest
paths are 12 → 1 → 4 → 15 for node 12, 13 → 2 → 4 → 15 for node 13, and
14 → 3 → 4 → 15 for node 14. However, since each node along these paths can
process only one packet per time step, congestion will soon occur in node 4 if all
source nodes send packets along the shortest paths.
One solution to this problem is that the source nodes send packets along different
paths which share no common nodes. For instance, node 12 can send packets along
path 12 → 1 → 5 → 6 → 15 while node 13 along 13 → 2 → 7 → 8 → 9 →
10 → 11 → 15 and node 14 along 14 → 3 → 4 → 15. The optimal routing policy
depends on the traffic from each source node. If the network load is not too high,
the optimal routing policy is to alternate between the upper and middle paths in
sending packets.
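The congestion argument can be checked with a breadth-first search over the topology. The edge list below is a partial reconstruction assembled only from the paths named in the text, so it may omit links that appear in Figure 1(a):

```python
from collections import deque

# Links taken from the paths named in the text (a sketch, not the full figure).
edges = [(12, 1), (1, 4), (4, 15), (13, 2), (2, 4), (14, 3), (3, 4),
         (1, 5), (5, 6), (6, 15),
         (2, 7), (7, 8), (8, 9), (9, 10), (10, 11), (11, 15)]
adj = {}
for a, b in edges:  # links are bidirectional
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def hops(src, dst):
    """Breadth-first search: minimum number of links between src and dst."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

# Every source reaches node 15 in 3 hops only via node 4, which is why
# all three sources initially pile onto node 4 and congest it.
assert [hops(s, 15) for s in (12, 13, 14)] == [3, 3, 3]
```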
4.1.1 PERIODIC TRAFFIC PATTERNS UNDER LOW LOAD
For the convenience of empirical analysis, we first consider periodic traffic in which
each source node generates the same traffic pattern over a period of time. Figure 1(b) shows the average delivery time for Q-routing and PQ-routing. PQ-routing
performs better than Q-routing after the initial exploration phase (25 time steps),
despite some slight oscillations. Such oscillations are due to the occasional probing activity of the algorithm. When we examine Q-routing more closely, we can
find that after the initial learning, all the source nodes try to send packets along
the upper (shortest) path, leading to congestion in node 4. When this occurs, both
nodes 12 and 13 switch to the middle path, which subsequently leads to congestion
in node 5. Later, nodes 12 and 13 detect this congestion and then switch to the
lower path. Since the nodes along this path have higher (two times) processing
speed, the Q-values become stable and Q-routing will stay there as long as the load
level does not increase. Thus, Q-routing fails to fine-tune the routing policy to
improve it. PQ-routing, on the other hand, is able to learn the recovery rates and
alternate between the upper and middle paths.
Predictive Q-Routing
949
[Plots omitted: average delivery time for 'q' (Q-routing) vs 'pq' (PQ-routing)]
(a) Network
(b) Periodic traffic patterns under low load
(c) Aperiodic traffic patterns under high load
(d) Varying traffic patterns and network load
Figure 1: A 15-Node Network and Simulation Results
4.1.2 APERIODIC TRAFFIC PATTERNS UNDER HIGH LOAD
It is not realistic to assume that network traffic is strictly periodic. In reality,
the time interval between two packets sent by a node varies. To simulate varying
intervals between packets, a probability of 0.8 is imposed on each source node for
generating packets. In this case, the average delivery time for both algorithms
oscillates. Figure 1(c) shows the performance of Q-routing and PQ-routing under
high network load. The difference in delivery time between Q-routing and PQ-routing becomes less significant, as there is less available bandwidth in the shortest
path for interleaving. Nevertheless, it can be seen that the overall performance of
PQ-routing is still better than that of Q-routing.
4.1.3 VARYING TRAFFIC PATTERNS AND NETWORK LOAD
In the more complicated situation of varying traffic patterns and network load,
PQ-routing also performs better than Q-routing. Figure 1(d) shows the hysteresis
problem in Q-routing under gradually changing traffic patterns and network load.
After an initial exploration phase of 25 time steps, the load level is set to medium
from time step 26 to 200. From step 201 to 500, node 14 ceases to send packets
and nodes 12 and 13 slightly increase their load level. In this case, although the
shortest path becomes available again, Q-routing is not able to notice the change in
traffic and still uses the same routing policy, but PQ-routing is able to utilize the
optimal paths. After step 500, node 13 also ceases to send packets. PQ-routing is
successful in adapting to the optimal path 12 → 1 → 4 → 15.
4.2 A 6x6 GRID NETWORK
Experiments have been performed on some larger networks, including a 32-node
hypercube and some random networks, with results similar to those above. Figures 2(b) and 2(c) depict results for Boyan and Littman's 6x6 grid network (Figure 2(a)) under varying traffic patterns and network load.
[Plots omitted: average delivery time for 'q' (Q-routing) vs 'pq' (PQ-routing)]
(a) Network
(b) Varying traffic patterns and network load
(c) Varying traffic patterns and network load
Figure 2: A 6x6 Grid Network and Simulation Results
In Figure 2(b), after an initial exploration for 50 time steps, the load level is set to
low. From step 51 to 300, the load level increases to medium but with the same
periodic traffic patterns. PQ-routing performs slightly better. From step 301 to
1000, the traffic patterns change dramatically under high network load. Q-routing
cannot learn a stable policy in this (short) period of time, but PQ-routing becomes
more stable after about 200 steps. From step 1000 onwards, the traffic patterns
change again and the load level returns to low. PQ-routing still performs better.
In Figure 2(c), the first 100 time steps are for initial exploration. After this period,
packets are sent from the bottom right part of the grid to the bottom left part with
low network load. PQ-routing is found to be as good as the shortest path routing
policy, while Q-routing is slightly poorer than PQ-routing. From step 400 to 1000,
packets are sent from both the left and right parts of the grid to the opposite sides
at high load level. Both bottleneck paths become congested and hence the
average delivery time increases for both algorithms. From time step 1000 onwards,
the network load decreases to a more manageable level. We can see that PQ-routing
is faster than Q-routing in adapting to this change.
5 DISCUSSIONS
PQ-learning is generally better than Q-learning under both low and varying network
load conditions. Under high load conditions, they give comparable performance. In
general, Q-routing prefers stable routing policies and tends to send packets along
paths with higher processing power, regardless of the actual packet delivery time.
This strategy is good under extremely high load conditions, but may not be optimal
under other situations. PQ-routing, on the contrary, is more aggressive. It tries
to minimize the average delivery time by occasionally probing the shortest paths.
If the load level remains extremely high with the patterns unchanged, PQ-routing
will gradually degenerate to Q-routing, until the traffic changes again. Another advantage PQ-routing has over Q-routing is that a shorter adaptation time is generally
needed when the traffic patterns change, since the routing policy of PQ-routing depends not only on the current Q-values but also on the recovery rates. In terms of
memory requirement, PQ-routing needs more memory for recovery rate estimation.
It should be noted, however, that extra memory is needed only for the visited states.
In the worst case, it is still in the same order as that of the original Q-routing algorithm. In terms of computational cost, recovery rate estimation is computationally
quite simple. Thus the overhead for implementing PQ-routing should be minimal.
References
J.A. Boyan & M.L. Littman (1994). Packet routing in dynamically changing networks: a
reinforcement learning approach. Advances in Neural Information Processing Systems 6,
671-678. Morgan Kaufmann, San Mateo, California.
M. Littman & J. Boyan (1993). A distributed reinforcement learning scheme for network routing. Proceedings of the First International Workshop on Applications of Neural
Networks to Telecommunications, 45-51. Lawrence Erlbaum, Hillsdale, New Jersey.
A.W. Moore & C.G. Atkeson (1993). Memory-based reinforcement learning: efficient computation with prioritized sweeping. Advances in Neural Information Processing Systems
5, 263-270. Morgan Kaufmann, San Mateo, California.
A.W. Moore & C.G. Atkeson (1993). Prioritized sweeping: reinforcement learning with
less data and less time. Machine Learning, 13:103-130.
J. Peng & R.J. Williams (1993). Efficient learning and planning within the Dyna framework. Adaptive Behavior, 1:437-454.
S. Thrun (1992). The role of exploration in learning control. In Handbook of Intelligent
Control: Neural, Fuzzy, and Adaptive Approaches, D.A. White & D.A. Sofge (eds). Van
Nostrand Reinhold, New York.
C.J.C.H. Watkins (1989). Learning from delayed rewards. PhD Thesis, University of Cambridge, England.
Using Unlabeled Data for Supervised Learning
Geoffrey Towell
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
Abstract
Many classification problems have the property that the only costly
part of obtaining examples is the class label. This paper suggests
a simple method for using distribution information contained in
unlabeled examples to augment labeled examples in a supervised
training framework. Empirical tests show that the technique described in this paper can significantly improve the accuracy of a
supervised learner when the learner is well below its asymptotic
accuracy level.
1 INTRODUCTION
Supervised learning problems often have the following property: unlabeled examples
have little or no cost while class labels have a high cost. For example, it is trivial to
record hours of heartbeats from hundreds of patients. However, it is expensive to
hire cardiologists to label each of the recorded beats. One response to the expense of
class labels is to squeeze the most information possible out of each labeled example.
Regularization and cross-validation both have this goal. A second response is to
start with a small set of labeled examples and request labels of only those currently
unlabeled examples that are expected to provide a significant improvement in the
behavior of the classifier (Lewis & Catlett, 1994; Freund et al., 1993).
A third response is to tap into a largely ignored potential source of information;
namely, unlabeled examples. This response is supported by the theoretical work
of Castelli and Cover (1995) which suggests that unlabeled examples have value in
learning classification problems. The algorithm described in this paper, referred to
as SULU (Supervised learning Using Labeled and Unlabeled examples), takes this third
G. TOWELL
648
path by using distribution information from unlabeled examples during supervised
learning. Roughly, SULU uses the centroid of labeled and unlabeled examples in the
neighborhood of a labeled example as a new training example. In this way, SULU
extracts information about the local variability of the input from unlabeled data.
SULU is described in Section 2.
In its use of unlabeled examples to alter labeled examples, SULU is reminiscent of
techniques for adding noise to networks during training (Hanson, 1990; Matsuoka,
1992). SULU is also reminiscent of instantiations of the EM algorithm that attempt
to fill in missing parts of examples (Ghahramani & Jordan, 1994). The similarity
of SULU to these, and other, works is explored in Section 3.
SULU is intended to work on classification problems for which there is insufficient labeled training data to allow a learner to approach its asymptotic accuracy level. To
explore this problem, the experiments described in Section 4 focus on the early parts
of the learning curves of six datasets (described in Section 4.1). The results show
that SULU consistently, and statistically significantly, improves classification accuracy over systems trained with only the labeled data. Moreover, SULU is consistently
more accurate than an implementation of the EM-algorithm that was specialized
for the task of filling in missing class labels. From these results, it is reasonable to
conclude that SULU is able to use the distribution information in unlabeled examples
to improve classification accuracy.
2 THE ALGORITHM
SULU uses standard neural-network supervised training techniques except that it
occasionally replaces a labeled example with a synthetic example. In addition, the
criterion to stop training is slightly modified to require that the network correctly
classify almost every labeled example and a majority of the synthetic examples. For
instance, the experiments reported in Section 4 generate synthetic examples 50% of
the time; the stopping criterion requires that 80% of the examples seen in a single
epoch are classified correctly. The main function in Table 1 provides pseudocode
for this process.
The synthesize function in Table 1 describes the process through which an example is
synthesized. Given a labeled example to use as a seed, synthesize collects neighboring
examples and returns an example that is the centroid of the collected examples
with the label of the starting point. synthesize collects neighboring examples until
reaching one of the following three stopping points. First, the maximum number of
points is reached; the goal of SULU is to get information about the local variance
around known points, this criterion guarantees locality. Second, the next closest
example to the seed is a labeled example with a different label; this criterion prevents
the inclusion of obviously incorrect information in synthetic examples. Third, the
next closest example to the seed is an unlabeled example and the closest labeled
example to that unlabeled example has a different label from the seed; this criterion
is intended to detect borders between classification areas in example space.
The call to synthesize from main effectively samples with replacement from a space
defined by a labeled example and its neighbors. As such, there are many ways in
which main and synthesize could be written. The principal consideration in this
implementation is memory; the space around the labeled examples can be huge.
Table 1: Pseudocode for SULU

RANDOM(min, max):
    return a uniformly distributed random integer between min and max, inclusive

MAIN(B, M):
    /* B - in [0..100], controls the rate of example synthesis */
    /* M - controls neighborhood size during synthesis */
    Let: E  /* a set of labeled examples */
         U  /* a set of unlabeled examples */
         N  /* an appropriate neural network */
    Repeat
        Permute E
        Foreach e in E
            if RANDOM(0, 100) > B then
                e ← SYNTHESIZE(e, E, U, RANDOM(2, M))
            TRAIN N using e
    Until a stopping criterion is reached

SYNTHESIZE(e, E, U, m):
    Let: C  /* will hold a collection of examples */
    For i from 1 to m
        c ← ith nearest neighbor of e in E union U
        if (c is labeled) and (label of c not equal to label of e) then STOP
        if c is not labeled
            cc ← nearest neighbor of c in E
            if label of cc not equal to label of e then STOP
        add c to C
    return an example whose input is the centroid of the inputs of the examples
    in C and has the class label of e.
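A rough Python rendering of the synthesize step is given below. The Euclidean distance and the inclusion of the seed in the centroid are simplifying assumptions; the paper's actual implementation uses a mismatch count over symbolic inputs.

```python
import math

def synthesize(seed, labeled, unlabeled, m):
    """Sketch of SULU's synthesize: average the seed with up to m nearby
    inputs, stopping at evidence of a different class (see Table 1)."""
    x_seed, y_seed = seed
    pool = [(x, y) for x, y in labeled if x != x_seed] + \
           [(x, None) for x in unlabeled]
    pool.sort(key=lambda p: math.dist(p[0], x_seed))
    collected = [x_seed]  # simplification: seed contributes to the centroid
    for x, y in pool[:m]:
        if y is not None and y != y_seed:
            break  # nearest example is labeled with a different class
        if y is None:
            nearest = min(labeled, key=lambda p: math.dist(p[0], x))
            if nearest[1] != y_seed:
                break  # unlabeled point sits closest to a foreign class
        collected.append(x)
    centroid = [sum(coord) / len(collected) for coord in zip(*collected)]
    return centroid, y_seed
```

For instance, a seed at (0, 0) labeled 'a' with nearby unlabeled points at (0.5, 0) and (1.0, 0) yields the synthetic example ((0.5, 0.0), 'a').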
3 RELATED WORK

SULU is similar to two methods of exploring the input space beyond the boundaries of
the labeled examples: example generation and noise addition. Example generation
commonly uses a model of how a space deforms and an example of the space to
generate new examples. For instance, in training a vehicle to turn, Pomerleau
(1993) used information about how the scene shifts when a car is turned to generate
examples of turns. The major problem with example generation is that deformation
models are uncommon.
By contrast to example generation, noise addition is a model-free procedure. In
general, the idea is to add a small amount of noise to either inputs (Matsuoka,
1992), link weights (Hanson, 1990), or hidden units (Judd & Munro, 1993). For
example, Hanson (1990) replaces link weights with a Gaussian. During a forward
pass, the Gaussian is sampled to determine the link weight. Training affects both
the mean and the variance of the Gaussian. In so doing, Hanson's method uses
distribution information in the labeled examples to estimate the global variance of
each input dimension. By contrast, SULU uses both labeled and unlabeled examples
to make local variance estimates. (Experiments, results not shown, with Hanson's
method indicate that it cannot improve classification results as much as SULU.)
Finally, there has been some other work on using unclassified examples during
training. de Sa (1994) uses the co-occurrence of inputs in multiple sensor modali-
650
G. TOWELL
ties to substitute for missing class information. However, sensor data from multiple
modalities is often not available. Another approach is to use the EM algorithm
(Ghahramani & Jordan, 1994) which iteratively guesses the value of missing information (both input and output) and builds structures to predict the missing
information. Unlike SULU, EM uses global information in this process so it may
not perform well on highly disjunctive problems. Also SULU may have an advantage
over EM in domains in which only the class label is missing as that is SULU'S specific
focus.
4 EXPERIMENTS
The experiments reported in this section explore the behavior of SULU on six
datasets. Each of the datasets has been used previously so they are only briefly
described in the first subsection. The results of the experiments reported in the
last part of this section show that SULU significantly and consistently improves
classification results.
4.1 DATASETS
The first two datasets are from molecular biology. Each take a DNA sequence and
encode it using four bits per nucleotide. The first problem, promoter recognition
(Opitz & Shavlik, 1994), is: given a sequence of 57 DNA nucleotides, determine if
a promoter begins at a particular position in the sequence. Following Opitz and
Shavlik, the experiments in this paper use 234 promoters and 702 non-promoters.
The second molecular biology problem, splice-junction determination (Towell &
Shavlik, 1994), is: given a sequence of 60 DNA nucleotides, determine if there is a
splice-junction (and the type of the junction) at the middle of the sequence. The
data consist of 243 examples of one junction type (acceptors), 228 examples of the
other junction type (donors) and 536 examples of non-junctions. For both of these
problems, the best randomly initialized neural networks have a small number of
hidden units in a single layer (Towell & Shavlik, 1994).
The remaining four datasets are word sense disambiguation problems (i.e., determine the intended meaning of the word "pen" in the sentence "the box is in the
pen"). The problems are to learn to distinguish between six noun senses of "line"
or four verb senses of "serve" using either topical or local encodings (Leacock et al.,
1993) of a context around the target word. The line dataset contains 349 examples
of each sense. Topical encoding, retaining all words that occur more than twice,
requires 5700 position vectors. Local encoding, using three words on either side
of line, requires 4500 position vectors. The serve dataset contains 350 examples of
each sense. Under the same conditions as line, topical encoding requires 4400 position vectors while local encoding requires 4500 position vectors. The best neural
networks for these problems have no hidden units (Leacock et al., 1993).
4.2 METHODOLOGY
The following methodology was used to test SULU on each dataset. First, the
data was split into three sets, 25 percent was set aside to be used for assessing
generalization, 50 percent had the class labels stripped off, and the remaining 25
percent was to be used for training. To create learning curves, the training set was
Table 2: Endpoints of the learning curves for standard neural networks and the
best result for each of the six datasets.

Training Set Size   Promoter   Splice Junction   Serve Local   Serve Topical   Line Local   Line Topical
smallest            74.7       66.4              53.9          41.8            38.7         40.6
largest             90.3       85.4              71.7          63.0            58.8         63.3
asymptotic          95.8       94.4              83.1          75.5            70.1         79.2
further subdivided into sets containing 5, 10, 15, 20 and 25 percent of the data such
that smaller sets were always subsets of larger sets. Then, a single neural network
was created and copied 25 times. At each training set size, a new copy of the network
was trained under each of the following conditions: 1) using SULU, 2) using SULU
but supplying only the labeled training examples to synthesize, 3) standard network
training, 4) using a variant of the EM algorithm that has been specialized to the task
of filling in missing class labels, and 5) using standard network training but with
the 50% unlabeled portion as it was prior to stripping the labels. This procedure was repeated eleven
times to average out the effects of example selection and network initialization.
When SULU was used, synthetic examples replaced labeled examples 50 percent of
the time. Networks using the full SULU (case 1) were trained until 80 percent of
the examples in a single epoch were correctly classified. All other networks were
trained until at least 99.5% of the examples were correctly classified. Stopping
criteria intended to prevent overfitting were investigated, but not used because
they never improved generalization.
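The splitting protocol above can be sketched as follows; the function and variable names are illustrative, not from the paper:

```python
import random

def make_splits(n_examples, seed=0):
    """Sketch of the protocol: 25% held out for testing, 50% stripped of
    labels, 25% for training, with nested 5/10/15/20/25% training subsets."""
    idx = list(range(n_examples))
    random.Random(seed).shuffle(idx)
    n_test = n_examples // 4
    n_unlabeled = n_examples // 2
    held_out = idx[:n_test]
    unlabeled = idx[n_test:n_test + n_unlabeled]
    train_pool = idx[n_test + n_unlabeled:]
    # Smaller training sets are always subsets of the larger ones.
    fractions = [0.05, 0.10, 0.15, 0.20, 0.25]
    train_sets = [train_pool[:int(n_examples * f)] for f in fractions]
    return held_out, unlabeled, train_sets
```

With 100 examples this gives a 25-example test set, 50 label-stripped examples, and nested training sets of 5, 10, 15, 20, and 25 examples.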
4.3 RESULTS & DISCUSSION
Figure 1 and Table 2 summarize the results of these experiments. The graphs
in Figure 1 show the efficacy of each algorithm. Except for the largest training
set on the splice junction problem, SULU always results in a statistically significant
improvement over the standard neural network with at least 97.5 percent confidence
(according to a one-tailed paired-sample t-test). Interestingly, SULU'S improvement
is consistently a sizable fraction of that achieved by labeling the unlabeled examples.
This result contrasts Castelli and Cover's (1995) analysis which suggests that labeled
examples are exponentially more valuable than unlabeled examples.
In addition, SULU is consistently and significantly superior to the instantiation of the
EM-algorithm when there are very few labeled samples. As the number of labeled
samples increases the advantage of SULU decreases. At the largest training set sizes
tested, the two systems are roughly equally effective.
A possible criticism of SULU is that it does not actually need the unlabeled examples; the procedure may be as effective using only the labeled training data. This
hypothesis is incorrect. As shown in Figure 1, SULU when given no unlabeled examples is consistently and significantly inferior to SULU when given a large number of
unlabeled examples. In addition, SULU with no unlabeled examples is consistently,
although not always significantly, inferior to a standard neural network.
The failure of SULU with only labeled examples points to a significant weakness
[Plots omitted: six learning-curve panels, one per dataset]
Figure 1: The effect of five training procedures on each of six learning problems. In
each of the above graphs, the effect of standard neural learning has been subtracted
from all results to suppress the increase in accuracy that results simply from an
increase in the number of labeled training examples. Observations marked by a '0'
or a '+' respectively indicate that the point is statistically significantly inferior or
superior to a network trained using SULU.
in its current implementation. Specifically, SULU finds the nearest neighbors of
an example using a simple mismatch counting procedure. Tests of this procedure
as an independent classification technique (results not shown) indicate that it is
consistently much worse than any of the methods plotted in in Figure 1. Hence, its
use imparts a downward bias to the generalization results.
A second indication of room for improvement in SULU is the difference in generalization between SULU and a network trained using data in which the unlabeled
examples provided to SULU have labels (case 5 above). On every dataset , the gain
from labeling the examples is statistically significant. The accuracy of a network
trained with all labeled examples is an upper bound for SULU, and one that is likely
not reachable. However, the distance between the upper bound and SULU'S current
performance indicate that there is room for improvement.
5 CONCLUSIONS
This paper has presented the SULU algorithm that combines aspects of nearest neighbor classification with neural networks to learn using both labeled and unlabeled
examples. The algorithm uses the labeled and unlabeled examples to construct
synthetic examples that capture information about the local characteristics of the
example space. In so doing, the range of examples seen by the neural network
during its supervised learning is greatly expanded, which results in improved generalization. Results of experiments on six real-world datasets indicate that SULU can
significantly improve generalization when there is little labeled data. Moreover, the results indicate that SULU is consistently more effective at using unlabeled
examples than the EM-algorithm when there is very little labeled data. The results
suggest that SULU will be effective given the following conditions: 1) there is little
labeled training data, 2) unlabeled training data is essentially free, 3) the accuracy
of the classifier when trained with all of the available data is below the level which
is expected to be achievable. On problems with all of these properties SULU may
significantly improve the generalization accuracy of inductive classifiers.
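The synthesis step summarized above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the paper's SULU locates neighbors by mismatch counting over symbolic features, while this toy uses Euclidean distance, and all names (`synthesize`, `nearest_labeled`, `k`) are ours.

```python
import math

def nearest_labeled(point, labeled, k):
    """Return the k labeled examples (x, y) closest to point (Euclidean)."""
    return sorted(labeled, key=lambda ex: math.dist(point, ex[0]))[:k]

def synthesize(unlabeled, labeled, k=3):
    """For each unlabeled point, build a synthetic labeled example that
    summarizes its local neighborhood: centroid features, majority label."""
    synthetic = []
    for u in unlabeled:
        neigh = nearest_labeled(u, labeled, k)
        centroid = [sum(x[i] for x, _ in neigh) / len(neigh)
                    for i in range(len(u))]
        labels = [y for _, y in neigh]
        majority = max(set(labels), key=labels.count)
        synthetic.append((centroid, majority))
    return synthetic
```

Training a network on the union of the labeled and synthetic examples then stands in for the expanded training set described above.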
References
Castelli, V. & Cover, T. (1995). The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. Technical Report 86, Department
of Statistics, Stanford University.
de Sa, V. (1994). Learning classification with unlabeled data. Advances in Neural Information Processing Systems, 6.
Freund, Y., Seung, H. S., Shamir, E., & Tishby, N. (1993). Information, prediction and
query by committee. Advances in Neural Information Processing Systems, 5.
Ghahramani, Z. & Jordan, M. I. (1994). Supervised learning from incomplete data via
an EM approach. Advances in Neural Information Processing Systems, 6.
Hanson, S. J. (1990). A stochastic version of the delta rule. Physica D, 42, 265-272.
Judd, J. S. & Munro, P. W. (1993). Nets with unreliable hidden units learn error-correcting codes. Advances in Neural Information Processing Systems, 5.
Leacock, C., Towell, G., & Voorhees, E. M. (1993). Towards building contextual representations of word senses using statistical models. Proceedings of SIGLEX Workshop:
Acquisition of Lexical Knowledge from Text. Association for Computational Linguistics.
Lewis, D. D. & Catlett, J. (1994). Heterogeneous uncertainty sampling for supervised
learning. Eleventh International Machine Learning Conference.
Matsuoka, K. (1992). Noise injection into inputs in back-propagation learning. IEEE
Transactions on Systems, Man and Cybernetics, 22, 436-440.
Opitz, D. W. & Shavlik, J. W. (1994). Using genetic search to refine knowledge-based
neural networks. Eleventh International Machine Learning Conference.
Pomerleau, D. A. (1993). Neural Network Perception for Mobile Robot Guidance. Boston:
Kluwer.
Towell, G. G. & Shavlik, J. W. (1994). Knowledge-based artificial neural networks. Artificial Intelligence, 70, 119-165.
Discovering Structure in Continuous
Variables Using Bayesian Networks
Reimar Hofmann and Volker Tresp*
Siemens AG, Central Research
Otto-Hahn-Ring 6
81730 Munchen, Germany
Abstract
We study Bayesian networks for continuous variables using nonlinear conditional density estimators. We demonstrate that useful structures can be extracted from a data set in a self-organized
way and we present sampling techniques for belief update based on
Markov blanket conditional density models.
1 Introduction
One of the strongest types of information that can be learned about an unknown
process is the discovery of dependencies and -even more important- of independencies. A superior example is medical epidemiology where the goal is to find the
causes of a disease and exclude factors which are irrelevant. Whereas complete
independence between two variables in a domain might be rare in reality (which
would mean that the joint probability density of variables A and B can be factored:
p(A, B)
p(A)p(B)), conditional independence is more common and is often a
result from true or apparent causality: consider the case that A is the cause of B
and B is the cause of C, then p(CIA, B)
p(CIB) and A and C are independent
under the condition that B is known. Precisely this notion of cause and effect and
the resulting independence between variables is represented explicitly in Bayesian
networks. Pearl (1988) has convincingly argued that causal thinking leads to clear
knowledge representation in form of conditional probabilities and to efficient local
belief propagating rules.
Bayesian networks form a complete probabilistic model in the sense that they represent the joint probability distribution of all variables involved. Two of the powerful
Reimar.Hofmann@zfe.siemens.de Volker.Tresp@zfe.siemens.de
features of Bayesian networks are that any variable can be predicted from any subset of known other variables and that Bayesian networks make explicit statements
about the certainty of the estimate of the state of a variable. Both aspects are particularly important for medical or fault diagnosis systems. More recently, learning
of structure and of parameters in Bayesian networks has been addressed allowing
for the discovery of structure between variables (Buntine, 1994, Heckerman, 1995).
Most of the research on Bayesian networks has focused on systems with discrete
variables, linear Gaussian models or combinations of both. Except for linear models, continuous variables pose a problem for Bayesian networks. In Pearl's words
(Pearl, 1988): "representing each [continuous] quantity by an estimated magnitude
and a range of uncertainty, we quickly produce a computational mess. [Continuous
variables] actually impose a computational tyranny of their own." In this paper we
present approaches to applying the concept of Bayesian networks towards arbitrary
nonlinear relations between continuous variables. Because they are fast learners we
use Parzen windows based conditional density estimators for modeling local dependencies. We demonstrate how a parsimonious Bayesian network can be extracted
out of a data set using unsupervised self-organized learning. For belief update we
use local Markov blanket conditional density models which - in combination with
Gibbs sampling- allow relatively efficient sampling from the conditional density of
an unknown variable.
2 Bayesian Networks
This brief introduction of Bayesian networks follows closely Heckerman, 1995. Considering a joint probability density¹ p(x) over a set of variables {x_1, ..., x_N} we can
decompose using the chain rule of probability
\( p(x) = \prod_{i=1}^{N} p(x_i \mid x_1, \ldots, x_{i-1}). \)   (1)
For each variable x_i, let the parents of x_i, denoted by P_i ⊆ {x_1, ..., x_{i-1}}, be a set
of variables² that renders x_i and {x_1, ..., x_{i-1}} independent, that is
\( p(x_i \mid x_1, \ldots, x_{i-1}) = p(x_i \mid P_i). \)   (2)
Note that P_i does not need to include all elements of {x_1, ..., x_{i-1}}, which indicates conditional independence between those variables not included in P_i and x_i
given that the variables in P_i are known. The dependencies between the variables
are often depicted as directed acyclic³ graphs (DAGs) with directed arcs from the
members of Pi (the parents) to Xi (the child). Bayesian networks are a natural
description of dependencies between variables if they depict causal relationships between variables. Bayesian networks are commonly used as a representation of the
knowledge of domain experts. Experts both define the structure of the Bayesian
network and the local conditional probabilities. Recently there has been great
¹For simplicity of notation we will only treat the continuous case. Handling mixtures
of continuous and discrete variables does not impose any additional difficulties.
²Usually the smallest set will be used. Note that P_i is defined with respect to a
given ordering of the variables.
³i.e. not containing any directed loops.
R. HOFMANN. V. TRESP
emphasis on learning structure and parameters in Bayesian networks (Heckerman,
1995). Most of previous work concentrated on models with only discrete variables
or on linear models of continuous variables where the probability distribution of all
continuous given all discrete variables is a multidimensional Gaussian. In this paper
we use these ideas in context with continuous variables and nonlinear dependencies.
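Before turning to the continuous case, the factorization of Equations 1 and 2 can be made concrete with a toy discrete example (our own, with hand-picked numbers; the paper itself works with continuous density estimators):

```python
# Toy DAG over binary variables: A -> B -> C, so by Equations 1 and 2
# p(a, b, c) = p(a) * p(b | a) * p(c | b).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}

def joint(a, b, c):
    """Joint probability as a product of local conditionals."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# The factorization is a proper distribution: all eight configurations sum to 1.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(round(total, 10))  # 1.0
```

Here C is independent of A given B, which is exactly the kind of conditional independence the parent sets P_i encode.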
3 Learning Structure and Parameters in Nonlinear Continuous Bayesian Networks
Many of the structures developed in the neural network community can be used to
model the conditional density distribution of continuous variables p(x_i | P_i). Under
the usual signal-plus-independent-Gaussian-noise model a feedforward neural network NN(.) is a conditional density model such that p(x_i | P_i) = G(x_i; NN(P_i), σ²),
where G(x; c, σ²) is our notation for a normal density centered at c and with variance
σ². More complex conditional densities can, for example, be modeled by mixtures
of experts or by Parzen windows based density estimators, which we used in our experiments (Section 5). We will use p^M(x_i | P_i) for a generic conditional probability
model. The joint probability model is then
\( p^M(x) = \prod_{i=1}^{N} p^M(x_i \mid P_i). \)   (3)
following Equations 1 and 2. Learning Bayesian networks is usually decomposed
into the problems of learning structure (that is, the arcs in the network) and of
learning the conditional density models p^M(x_i | P_i) given the structure⁴. First assume the structure of the network is given. If the data set only contains complete
data, we can train conditional density models p^M(x_i | P_i) independently of each
other since the log-likelihood of the model decomposes conveniently into the individual likelihoods of the models for the conditional probabilities. Next, consider
two competing network structures. We are basically faced with the well-known
bias-variance dilemma: if we choose a network with too many arcs, we introduce
large parameter variance and if we remove too many arcs we introduce bias. Here,
the problem is even more complex since we also have the freedom to reverse arcs.
In our experiments we evaluate different network structures based on the model
likelihood using leave-one-out cross-validation which defines our scoring function
for different network structures. More explicitly, the score for network structure
S is Score = log(p(S)) + L_cv, where p(S) is a prior over the network structures
and L_cv = \( \sum_{k=1}^{K} \log p^M(x^k \mid S, X \setminus \{x^k\}) \) is the leave-one-out cross-validation log-likelihood (later referred to as cv-log-likelihood). X = {x^k}_{k=1}^{K} is the set of training
samples, and p^M(x^k | S, X \ {x^k}) is the probability density of sample x^k given the
structure S and all other samples. Each of the terms p^M(x^k | S, X \ {x^k}) can be
computed from local densities using Equation 3.
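The scoring function just described can be sketched directly. This is our own sketch, not the paper's code: the local density model is left abstract, and the per-arc prior of Section 4 stands in for log p(S); the function and parameter names are ours.

```python
import math

def cv_log_likelihood(samples, density_given_rest):
    """Leave-one-out cv-log-likelihood: sum over samples of the log density
    of each sample under a model trained on the remaining samples.
    density_given_rest(x, rest) must return p^M(x | S, rest)."""
    total = 0.0
    for k, x in enumerate(samples):
        rest = samples[:k] + samples[k + 1:]
        total += math.log(density_given_rest(x, rest))
    return total

def score(samples, density_given_rest, n_arcs, alpha=0.1):
    """Score = log p(S) + L_cv, with the per-arc prior log p(S) = -alpha * N_A."""
    return -alpha * n_arcs + cv_log_likelihood(samples, density_given_rest)
```

A naive implementation refits once per held-out sample; the paper instead exploits the fact that each term decomposes into local densities, so only the models touched by an arc change need recomputation.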
Even for small networks it is computationally impossible to calculate the score for all
possible network structures and the search for the global optimal network structure
4Differing from Heckerman we do not follow a fully Bayesian approach in which priors
are defined on parameters and structure; a fully Bayesian approach is elegant if the occurring integrals can be solved in closed form which is not the case for general nonlinear
models or if data are incomplete.
is NP-hard. In Section 5 we describe a heuristic search which is closely related to
search strategies commonly used in discrete Bayesian networks (Heckerman, 1995).
4 Prior Models
In a Bayesian framework it is useful to provide means for exploiting prior knowledge,
typically introducing a bias for simple structures. Biasing models towards simple
structures is also useful if the model selection criteria is based on cross-validation,
as in our case, because of the variance in this score. In the experiments we added
a penalty per arc to the log-likelihood, i.e. log p(S) ∝ −αN_A, where N_A is the
number of arcs and the parameter α determines the weight of the penalty. Given
more specific knowledge in form of a structure defined by a domain expert we
can alternatively penalize the deviation in the arc structure (Heckerman, 1995).
Furthermore, prior knowledge can be introduced in form of a set of artificial training
data. These can be treated identical to real data and loosely correspond to the
concept of a conjugate prior.
5 Experiment
In the experiment we used Parzen windows based conditional density estimators to
model the conditional densities p^M(x_j | P_j) from Equation 2, i.e.
\( p^M(x_j \mid P_j) = \frac{\sum_{k=1}^{K} G((x_j, P_j); (x_j^k, P_j^k), \sigma_j^2)}{\sum_{k=1}^{K} G(P_j; P_j^k, \sigma_j^2)}, \)   (4)
where {x^k}_{k=1}^{K} is the training set. The Gaussians in the numerator are centered
at (x_j^k, P_j^k), which is the location of the k-th sample in the joint input/output (or
parent/child) space, and the Gaussians in the denominator are centered at (P_j^k),
which is the location of the k-th sample in the input (or parent) space. For each
conditional model, σ_j was optimized using leave-one-out cross-validation⁵.
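A direct transcription of this estimator can be sketched as follows (our sketch, with spherical Gaussians of one shared width as a stand-in for the paper's per-model σ_j; all names are ours):

```python
import math

def gauss(x, c, var):
    """Spherical Gaussian density at vector x, centered at c, variance var."""
    d = len(x)
    sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-sq / (2 * var)) / ((2 * math.pi * var) ** (d / 2))

def conditional_parzen(x, parents, train, var):
    """p(x | parents): ratio of joint-space to parent-space Parzen sums,
    where train is a list of (x^k, parents^k) pairs (Equation 4)."""
    num = sum(gauss([x] + parents, [xk] + pk, var) for xk, pk in train)
    den = sum(gauss(parents, pk, var) for xk, pk in train)
    return num / den
```

With a single training pair the estimate collapses to a Gaussian around that pair's output value, which gives a quick sanity check of the ratio form.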
The unsupervised structure optimization procedure starts with a complete Bayesian
model corresponding to Equation 1, i.e. a model where there is an arc between
any pair of variables 6 ? Next, we tentatively try all possible arc direction changes,
arc removals and arc additions which do not produce directed loops and evaluate
the change in score. After evaluating all legal single modifications, we accept the
change which improves the score the most. The procedure stops if every arc change
decreases the score. This greedy strategy can get stuck in local minima which
could in principle be avoided if changes which result in worse performance are also
accepted with a nonzero probability 7 (such as in annealing strategies, Heckerman,
1995). Calculating the new score at each step requires only local computation.
The removal or addition of an arc corresponds to a simple removal or addition of
the corresponding dimension in the Gaussians of the local density model. However,
⁵Note that if we maintained a global σ for all density estimators, we would maintain
likelihood equivalence, which means that each network displaying the same independence
model gets the same score on any test set.
6The order of nodes determining the direction of initial arcs is random.
7 In our experiments we treated very small changes in score as if they were exactly zero
thus allowing small decreases in score.
[Figure 1 plots omitted: left panel, x-axis "Number of Iterations"; right panel, x-axis "Number of inputs".]
Figure 1: Left: evolution of the cv-log-likelihood (dashed) and of the log-likelihood
on the test set (continuous) during structure optimization. The curves are averages
over 20 runs with different partitions of training and test sets and the likelihoods
are normalized with respect to the number of cv- or test-samples, respectively. The
penalty per arc was α = 0.1. The dotted line shows the Parzen joint density model
commonly used in statistics, i.e. assuming no independencies and using the same
width for all Gaussians in all conditional density models. Right: log-likelihood
of the local conditional Parzen model for variable 3 (p^M(x_3 | P_3)) on the test set
(continuous) and the corresponding cv-log-likelihood (dashed) as a function of the
number of parents (inputs).
1 crime rate
2 percent land zoned for lots
3 percent nonretail business
4 located on Charles river?
5 nitrogen oxide concentration
6 average number of rooms
7 percent built before 1940
8 weighted distance to employment center
9 access to radial highways
10 tax rate
11 pupil/teacher ratio
12 percent black
13 percent lower-status population
14 median value of homes
Figure 2: Final structure of a run on the full data set.
after each such operation the widths of the Gaussians σ_i in the affected local models
have to be optimized. An arc reversal is simply the execution of an arc removal
followed by an arc addition.
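The greedy loop described above can be written abstractly. This sketch (names ours) hill-climbs with any structure-neighborhood generator and any scoring function and stops at the first local optimum; it omits the paper's tolerance for near-zero score changes:

```python
def greedy_search(initial, neighbors, score):
    """Hill-climb over structures: repeatedly evaluate every candidate in
    neighbors(structure) and commit the best-scoring change until no
    single change improves the score."""
    current = initial
    current_score = score(current)
    while True:
        candidates = [(score(s), s) for s in neighbors(current)]
        if not candidates:
            return current
        best_score, best = max(candidates, key=lambda t: t[0])
        if best_score <= current_score:
            return current
        current, current_score = best, best_score
```

In the paper's setting, `neighbors` would enumerate all legal single-arc additions, removals, and reversals that keep the graph acyclic, and `score` would be the cv-based score of Section 3.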
In our experiment, we used the Boston housing data set, which contains 506 samples. Each sample consists of the housing price and 13 variables which supposedly
influence the housing price in a Boston neighborhood (Figure 2). Figure 1 (left)
shows an experiment where one third of the samples was reserved as a test set to
monitor the process. Since the algorithm never sees the test data the increase in
likelihood of the model on the test data is an unbiased estimator for how much
the model has improved by the extraction of structure from the data. The large
increase in the log-likelihood can be understood by studying Figure 1 (right). Here
we picked a single variable (node 3) and formed a density model to predict this variable from the remaining 13 variables. Then we removed input variables in the order
of their significance. After the removal of a variable, σ_3 is optimized. Note that the
cv-log-likelihood increases until only three input variables are left due to the fact
that irrelevant variables or variables which are well represented by the remaining
input variables are removed. The log-likelihood of the fully connected initial model
is therefore low (Figure 1 left).
We did a second set of 15 runs with no test set. The scores of the final structures
had a standard deviation of only 0.4. However, comparing the final structures in
terms of undirected arcs8 the difference was 18% on average. The structure from one
of these runs is depicted in Figure 2 (right). In comparison to the initial complete
structure with 91 arcs, only 18 arcs are left and 8 arcs have changed direction.
One of the advantages of Bayesian networks is that they can be easily interpreted.
The goal of the original Boston housing data experiment was to examine whether
the nitrogen oxide concentration (5) influences the housing price (14). Under the
structure extracted by the algorithm, 5 and 14 are dependent given all other variables because they have a common child, 13. However, if all variables except 13 are
known then they are independent. Another interesting question is what the relevant quantities are for predicting the housing price, i.e. which variables have to be
known to render the housing price independent from all other variables. These are
the parents, children, and children's parents of variable 14, that is variables 8, 10,
11, 6, 13 and 5. It is well known that in Bayesian networks, different constellations
of directions of arcs may induce the same independencies, i.e. that the direction
of arcs is not uniquely determined. It can therefore not be expected that the arcs
actually reflect the direction of causality.
6 Missing Data and Markov Blanket Conditional Density Model
Bayesian networks are typically used in applications where variables might be missing. Given partial information (i. e. the states of a subset of the variables) the goal
is to update the beliefs (i. e. the probabilities) of all unknown variables. Whereas
there are powerful local update rules for networks of discrete variables without
(undirected) loops, the belief update in networks with loops is in general NP-hard.
A generally applicable update rule for the unknown variables in networks of discrete
or continuous variables is Gibbs sampling. Gibbs sampling can be roughly described
as follows: for all variables whose state is known, fix their states to the known values. For all unknown variables choose some initial states. Then pick a variable Xi
which is not known and update its value following the probability distribution
\( p(x_i \mid \{x_1, \ldots, x_N\} \setminus \{x_i\}) \propto p(x_i \mid P_i) \prod_{j:\, x_i \in P_j} p(x_j \mid P_j). \)   (5)
Do this repeatedly for all unknown variables. Discard the first samples. Then,
the samples which are generated are drawn from the probability distribution of the
unknown variables given the known variables. Using these samples it is easy to
calculate the expected value of any of the unknown variables, estimate variances,
covariances and other statistical measures such as the mutual information between
variables.
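The Gibbs procedure above can be sketched generically (our names; `sample_conditional` must draw from the univariate distribution of Equation 5, which is exactly the step the Markov blanket conditional models described below make cheap):

```python
def gibbs(state, known, sample_conditional, sweeps=1000, burn_in=100):
    """state: dict variable -> current value; known: set of clamped variables.
    Each sweep resamples every unknown variable given the rest; after
    burn_in sweeps, one sample of the unknowns is collected per sweep."""
    unknown = [v for v in state if v not in known]
    samples = []
    for t in range(sweeps):
        for v in unknown:
            state[v] = sample_conditional(v, state)
        if t >= burn_in:
            samples.append({v: state[v] for v in unknown})
    return samples
```

Expectations, variances, or mutual information estimates for the unknown variables are then plain averages over the returned samples.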
8 Since the direction of arcs is not unique we used the difference in undirected arcs to
compare two structures. We used the number of arcs present in one and only one of the
structures normalized with respect to the number of arcs in a fully connected network.
Gibbs sampling requires sampling from the univariate probability distribution in
Equation 5 which is not straightforward in our model since the conditional density does not have a convenient form. Therefore, sampling techniques such as
importance sampling have to be used. In our case they typically produce many
rejected samples and are therefore inefficient. An alternative is sampling based
on Markov blanket conditional density models. The Markov blanket of x_i, M_i, is
the smallest set of variables such that p(x_i | {x_1, ..., x_N} \ {x_i}) = p(x_i | M_i) (given a
Bayesian network, the Markov blanket of a variable consists of its parents, its children and its children's parents). The idea is to form a conditional density model
p^M(x_i | M_i) ≈ p(x_i | M_i) for each variable in the network instead of computing it
according to Equation 5. Sampling from this model is simple using conditional
Parzen models: the conditional density is a mixture of Gaussians from which we
can sample without rejection⁹. Markov blanket conditional density models are also
interesting if we are only interested in always predicting one particular variable, as
in most neural network applications. Assuming that a signal-plus-noise model is a
reasonably good model for the conditional density, we can train an ordinary neural
network to predict the variable of interest. In addition, we train a model for each
input variable predicting it from the remaining variables. In addition to having obtained a model for the complete data case, we can now also handle missing inputs
and do backward inference using Gibbs sampling.
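Rejection-free sampling from a conditional Parzen model exploits its mixture-of-Gaussians form: choose a training point with probability proportional to its denominator (parent-space) Gaussian, then sample the matching Gaussian in the output coordinate. A hedged one-dimensional sketch (names ours):

```python
import math
import random

def sample_conditional_parzen(parents, train, var):
    """Draw x from the conditional Parzen model p(x | parents): a Gaussian
    mixture whose k-th weight is proportional to G(parents; parents^k, var),
    where train is a list of (x^k, parents^k) pairs."""
    weights = [math.exp(-sum((p - q) ** 2 for p, q in zip(parents, pk))
                        / (2 * var)) for _, pk in train]
    # Select a mixture component, then sample its Gaussian in x.
    xk, _ = random.choices(train, weights=weights)[0]
    return random.gauss(xk, math.sqrt(var))
```

Plugged into the Gibbs sweep of the previous section, this provides the per-variable sampling step without any rejected proposals.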
7 Conclusions
We demonstrated that Bayesian models of local conditional density estimators form
promising nonlinear dependency models for continuous variables. The conditional
density models can be trained locally if training data are complete. In this paper
we focused on the self-organized extraction of structure. Bayesian networks can
also serve as a framework for a modular construction of large systems out of smaller
conditional density models. The Bayesian framework provides consistent update
rules for the probabilities i.e. communication between modules. Finally, consider
input pruning or variable selection in neural networks. Note, that our pruning
strategy in Figure 1 can be considered a form of variable selection by not only
removing variables which are statistically independent of the output variable but
also removing variables which are represented well by the remaining variables. This
way we obtain more compact models. If input values are missing then the indirect
influence of the pruned variables on the output will be recovered by the sampling
mechanism.
References
Buntine, W. (1994). Operations for learning with graphical models. Journal of Artificial
Intelligence Research 2: 159-225.
Heckerman, D. (1995). A tutorial on learning Bayesian networks. Microsoft Research,
TR. MSR-TR-95-06, 1995.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. San Mateo, CA: Morgan
Kaufmann.
9There are, however, several open issues concerning consistency between the conditional
models.
110 | 1,099 | Adaptive Mixture of Probabilistic Transducers
Yoram Singer
AT&T Bell Laboratories
singer@research.att.com
Abstract
We introduce and analyze a mixture model for supervised learning of
probabilistic transducers. We devise an online learning algorithm that
efficiently infers the structure and estimates the parameters of each model
in the mixture. Theoretical analysis and comparative simulations indicate
that the learning algorithm tracks the best model from an arbitrarily large
(possibly infinite) pool of models. We also present an application of the
model for inducing a noun phrase recognizer.
1 Introduction
Supervised learning of a probabilistic mapping between temporal sequences is an important
goal of natural sequence analysis and classification, with a broad range of applications such
as handwriting and speech recognition, natural language processing, and DNA analysis. Research efforts in supervised learning of probabilistic mappings have been almost exclusively
focused on estimating the parameters of a predefined model. For example, in [5] a second
order recurrent neural network was used to induce a finite state automata that classifies
input sequences and in [1] an input-output HMM architecture was used for similar tasks.
In this paper we introduce and analyze an alternative approach based on a mixture model
of a new subclass of probabilistic transducers, which we call suffix tree transducers. The
mixture of experts architecture has been proved to be a powerful approach both theoretically
and experimentally. See [4,8,6, 10,2, 7] for analyses and applications of mixture models,
from different perspectives such as connectionism, Bayesian inference and computational
learning theory. By combining techniques used for compression [13] and unsupervised
learning [12], we devise an online algorithm that efficiently updates the mixture weights
and the parameters of all the possible models from an arbitrarily large (possibly infinite)
pool of suffix tree transducers. Moreover, we employ the mixture estimation paradigm to
the estimation of the parameters of each model in the pool and achieve an efficient estimate
of the free parameters of each model. We present theoretical analysis, simulations and
experiments with real data which show that the learning algorithm indeed tracks the best
model in a growing pool of models, yielding an accurate approximation of the source. All
proofs are omitted due to the lack of space.
2 Mixture of Suffix Tree Transducers
Let Σ_in and Σ_out be two finite alphabets. A Suffix Tree Transducer T over (Σ_in, Σ_out) is a
rooted, |Σ_in|-ary tree where every internal node of T has one child for each symbol in Σ_in.
The nodes of the tree are labeled by pairs (s, γ_s), where s is the string associated with the path
(the sequence of symbols in Σ_in) that leads from the root to that node, and γ_s : Σ_out → [0, 1]
is the output probability function. A suffix tree transducer (stochastically) maps arbitrarily
long input sequences over Σ_in to output sequences over Σ_out as follows. The probability
382
Y. SINGER
that T will output a string y_1, y_2, ..., y_n in Σ_out^n given an input string x_1, x_2, ..., x_n in
Σ_in^n, denoted by P_T(y_1, y_2, ..., y_n | x_1, x_2, ..., x_n), is $\prod_{k=1}^{n} \gamma_{s^k}(y_k)$, where s^1 = x_1 and,
for 1 ≤ j ≤ n−1, s^{j+1} is the string labeling the deepest node reached by taking the path
corresponding to x_{j+1}, x_j, x_{j−1}, ... starting at the root of T. A suffix tree transducer is
therefore a probabilistic mapping that induces a measure over the possible output strings
given an input string. Examples of suffix tree transducers are given in Fig. 1.
Figure 1: A suffix tree transducer (left) over (Σ_in, Σ_out) = ({0, 1}, {a, b, c}) and two of its possible
sub-models (subtrees). The strings labeling the nodes are the suffixes of the input string used to predict
the output string. At each node there is an output probability function defined for each of the possible
output symbols. For instance, using the suffix tree transducer depicted on the left, the probability of
observing the symbol b given that the input sequence is ..., 0, 1, 0, is 0.1. The probability of the
current output, when each transducer is associated with a weight (prior), is the weighted sum of the
predictions of each transducer. For example, assume that the weights of the trees are 0.7 (left tree), 0.2
(middle), and 0.1. Then the probability that the output y_n = a given that (x_{n−2}, x_{n−1}, x_n) = (0, 1, 0)
is 0.7·P_{T1}(a|010) + 0.2·P_{T2}(a|10) + 0.1·P_{T3}(a|0) = 0.7·0.8 + 0.2·0.7 + 0.1·0.5 = 0.75.
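As a concrete illustration, the mapping just described can be sketched in a few lines of Python. The class and method names here are our own, not from the paper: prediction walks from the root along x_n, x_{n−1}, ... to the deepest stored suffix and uses that node's output distribution.

```python
# Minimal sketch of a suffix tree transducer (illustrative names). Each node is
# keyed by the suffix string that labels it; gamma maps a suffix to the output
# probability function at that node.

class SuffixTreeTransducer:
    def __init__(self, gamma):
        self.gamma = gamma  # dict: suffix string -> {output symbol: probability}

    def deepest_suffix(self, context):
        # Walk x_n, then x_{n-1} x_n, ... while a matching node exists.
        s = ""
        for sym in reversed(context):
            if sym + s not in self.gamma:
                break
            s = sym + s
        return s

    def predict(self, y, context):
        # gamma_s(y) at the deepest node reached for this input suffix.
        return self.gamma[self.deepest_suffix(context)].get(y, 0.0)

    def prob(self, ys, xs):
        # P_T(y_1 ... y_n | x_1 ... x_n) = prod_k gamma_{s^k}(y_k)
        p = 1.0
        for k in range(len(xs)):
            p *= self.predict(ys[k], xs[: k + 1])
        return p
```

With a tree holding nodes for the suffixes "", "0", and "10", an input ending ..., 1, 0 is predicted from the node "10".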
Given a suffix tree transducer T we are interested in the prediction of the mixture of all
possible subtrees of T. We associate with each subtree (including T) a weight which can be
interpreted as its prior probability. We later show how the learning algorithm of a mixture
of suffix tree transducers adapts these weights in accordance with the performance (the
evidence, in Bayesian terms) of each subtree on past observations. Direct calculation of the
mixture probability is infeasible since there might be exponentially many such subtrees.
However, the technique introduced in [13] can be generalized and applied to our setting.
Let T' be a subtree of T. Denote by n1 the number of internal nodes of T' and by
n2 the number of leaves of T' which are not leaves of T. For example, n1 = 2 and
n2 = 1 for the tree depicted on the right part of Fig. 1, assuming that T is the tree
depicted on the left part of the figure. The prior weight of a tree T', denoted by P_0(T'), is
defined to be (1 − α)^{n1} α^{n2}, where α ∈ (0, 1). Denote by Sub(T) the set of all possible
subtrees of T including T itself. It can be easily verified that this definition of the weights
is a proper measure, i.e., $\sum_{T' \in \mathrm{Sub}(T)} P_0(T') = 1$. This distribution over trees can be
extended to unbounded trees assuming that the largest tree is an infinite |Σ_in|-ary suffix tree
transducer and using the following randomized recursive process. We start with a suffix
tree that includes only the root node. With probability α we stop the process and with
probability 1 − α we add all of the |Σ_in| sons of the node and continue the process
recursively for each of the sons. Using this recursive prior over suffix tree transducers, we
can calculate the prediction of the mixture at step n in time that is linear in n, as follows:

$$\alpha\,\gamma_\epsilon(y_n) + (1-\alpha)\big(\alpha\,\gamma_{x_n}(y_n) + (1-\alpha)\big(\alpha\,\gamma_{x_{n-1}x_n}(y_n) + (1-\alpha)\cdots$$
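The randomized recursive process above can be sketched directly. This is a hypothetical helper, and the depth cap is our own addition to keep the recursion finite:

```python
import random

# Sketch of the recursive prior over suffix trees: with probability alpha a node
# stops as a leaf; with probability 1 - alpha it expands all |Sigma_in| sons.
# Returns the leaf suffixes of the sampled tree; max_depth is an artificial cap.
def sample_tree(alphabet, alpha, prefix="", max_depth=10, rng=random):
    if max_depth == 0 or rng.random() < alpha:
        return [prefix]                      # leaf: contributes a factor alpha
    leaves = []                              # internal node: factor (1 - alpha)
    for sym in alphabet:
        leaves += sample_tree(alphabet, alpha, sym + prefix, max_depth - 1, rng)
    return leaves
```

With alpha = 1 the sampled tree is just the root; as alpha shrinks, deeper trees become more likely, matching the prior weight (1 − α)^{n1} α^{n2}.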
Therefore, the prediction time of a single symbol is bounded by the maximal depth of T,
or the length of the input sequence if T is infinite. Denote by γ̃_s(y_n) the prediction of the
mixture of subtrees rooted at s, and let Leaves(T) be the set of leaves of T. The above
sum equals γ̃_ε(y_n), and can be evaluated recursively as follows:¹
$$\tilde{\gamma}_s(y_n) = \begin{cases} \gamma_s(y_n) & s \in \mathrm{Leaves}(T) \\ \alpha\,\gamma_s(y_n) + (1-\alpha)\,\tilde{\gamma}_{(x_{n-|s|}\cdot s)}(y_n) & \text{otherwise} \end{cases} \qquad (1)$$
For example, given that the input sequence is ..., 0, 1, 1, 0, the probabilities of the
mixtures of subtrees for the tree depicted on the left part of Fig. 1, for y_n = b and given
that α = 1/2, are γ̃_{110}(b) = 0.4, γ̃_{10}(b) = 0.5·γ_{10}(b) + 0.5·0.4 = 0.3, γ̃_0(b) =
0.5·γ_0(b) + 0.5·0.3 = 0.25, and γ̃_ε(b) = 0.5·γ_ε(b) + 0.5·0.25 = 0.25.
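The recursion of Equ. (1) and the worked example can be checked with a short sketch. The γ values below are inferred from the example's equations (γ_ε(b) = 0.25, γ_0(b) = 0.2, γ_{10}(b) = 0.2, γ_{110}(b) = 0.4), not stated verbatim in the text:

```python
# Sketch of the recursive mixture prediction of Equ. (1). gamma maps each node's
# suffix string to its output distribution; alpha weighs the node against the
# mixture of the nodes below it.

def mixture_predict(gamma, alpha, context, y):
    def tilde(s, remaining):
        g = gamma[s][y]
        # Leaf case: no deeper node exists for the next input symbol.
        if not remaining or remaining[-1] + s not in gamma:
            return g
        return alpha * g + (1 - alpha) * tilde(remaining[-1] + s, remaining[:-1])
    return tilde("", context)

# Node output probabilities inferred from the worked example (alpha = 1/2,
# input suffix ..., 0, 1, 1, 0):
gamma = {"": {"b": 0.25}, "0": {"b": 0.2}, "10": {"b": 0.2}, "110": {"b": 0.4}}
```

Calling `mixture_predict(gamma, 0.5, "0110", "b")` reproduces γ̃_ε(b) = 0.25 from the text.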
3 An Online Learning Algorithm
We now describe an efficient learning algorithm for a mixture of suffix tree transducers.
The learning algorithm uses the recursive priors and the evidence to efficiently update
the posterior weight of each possible subtree. In this section we assume that the output
probability functions are known. Hence, we need to evaluate the following,
$$\sum_{T' \in \mathrm{Sub}(T)} P(y_n \mid T')\, P(T' \mid (x_1, y_1), \ldots, (x_{n-1}, y_{n-1})) \;\stackrel{\mathrm{def}}{=}\; \sum_{T' \in \mathrm{Sub}(T)} P(y_n \mid T')\, P_n(T') \qquad (2)$$
where P_n(T') is the posterior weight of T'. Direct calculation of the above sum requires
exponential time. However, using the idea of recursive calculation as in Equ. (1), we
can efficiently calculate the prediction of the mixture. Similar to the definition of the
recursive prior α, we define q_n(s) to be the posterior weight of a node s compared to the
mixture of all nodes below s. We can compute the prediction of the mixture of suffix tree
transducers rooted at s by simply replacing the prior weight α with the posterior weight
q_{n−1}(s), as follows:

$$\tilde{\gamma}_s(y_n) = \begin{cases} \gamma_s(y_n) & s \in \mathrm{Leaves}(T) \\ q_{n-1}(s)\,\gamma_s(y_n) + (1 - q_{n-1}(s))\,\tilde{\gamma}_{(x_{n-|s|}\cdot s)}(y_n) & \text{otherwise} \end{cases} \qquad (3)$$
In order to update q_n(s) we introduce one more variable, denoted by r_n(s). Setting
r_0(s) = log(α/(1−α)) for all s, r_n(s) is updated as follows:

$$r_n(s) = r_{n-1}(s) + \log(\gamma_s(y_n)) - \log(\tilde{\gamma}_{(x_{n-|s|}\cdot s)}(y_n)) \qquad (4)$$
Therefore, r_n(s) is the log-likelihood ratio between the prediction of s and the prediction
of the mixture of all nodes below s in T. The new posterior weights q_n(s) are calculated
from r_n(s):

$$q_n(s) = \frac{1}{1 + e^{-r_n(s)}} \qquad (5)$$
In summary, for each new observation pair, we traverse the tree by following the path that
corresponds to the input sequence x_n, x_{n−1}, x_{n−2}, .... The predictions of each sub-mixture are
calculated using Equ. (3). Given these predictions, the posterior weights of each sub-mixture
are updated using Equ. (4) and Equ. (5). Finally, the probability of y_n induced by the whole
mixture is the prediction propagated out of the root node, as stated by Lemma 3.1.
mixture is the prediction propagated out of the root node, as stated by Lemma 3.1.
Lemma3.1
LT'ESub(T)
P(YnlT')Pn(T') = 'Ye(Yn).
Let Loss_n(T) be the logarithmic loss (negative log-likelihood) of a suffix tree transducer T
after n input-output pairs. That is, $\mathrm{Loss}_n(T) = \sum_{i=1}^{n} -\log(P(y_i \mid T))$. Similarly, the loss
of the mixture is defined to be $\mathrm{Loss}_n^{mix} = \sum_{i=1}^{n} -\log(\tilde{\gamma}_\epsilon(y_i))$. The advantage of using
a mixture of suffix tree transducers over a single suffix tree is due to the robustness of the
solution, in the sense that the prediction of the mixture is almost as good as the prediction
of the best suffix tree in the mixture.

¹A similar derivation still holds even if there is a different prior α at each node s of T. For the
sake of simplicity we assume that α is constant.
Theorem 1  Let T be a (possibly infinite) suffix tree transducer, and let
(x_1, y_1), ..., (x_n, y_n) be any possible sequence of input-output pairs. The loss of the
mixture is at most Loss_n(T') − log(P_0(T')), for each possible subtree T'. The running
time of the algorithm is O(Dn), where D is the maximal depth of T, or O(n²) when T is infinite.

The proof is based on a technique introduced in [4]. Note that the additional loss is constant;
hence the normalized loss per observation pair is −log(P_0(T'))/n, which decreases like O(1/n).
Given a long sequence of input-output pairs or many short sequences, the structure of the
suffix tree transducer is inferred as well. This is done by updating the output functions, as
described in the next section, while adding new branches to the tree whenever the suffix
of the input sequence does not appear in the current tree. The update of the weights,
the parameters, and the structure ends when the maximal depth is reached, or when the
beginning of the input sequence is encountered.
4
Parameter Estimation
In this section we describe how the output probability functions are estimated. Again, we
devise an online scheme. Denote by C_s^n(y) the number of times the output symbol y was
observed out of the n times the node s was visited. A commonly used estimator smoothes
each count by adding a constant ε as follows:

$$\hat{\gamma}_s^n(y) = \frac{C_s^n(y) + \epsilon}{n + \epsilon\,|\Sigma_{out}|} \qquad (6)$$

The special case of ε = 1/2 is termed Laplace's modified rule of succession or the add-1/2
estimator. In [9], Krichevsky and Trofimov proved that the loss of the add-1/2 estimator, when
applied sequentially, has a bounded logarithmic loss compared to the best (maximum-likelihood) estimator calculated after observing the entire input-output sequence. The
additional loss of the estimator after n observations is (|Σ_out| − 1)/2 · log(n) + |Σ_out| − 1.
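A sequential add-1/2 estimator at a single node is only a few lines (a sketch; the class name is ours):

```python
# Sketch of the sequential add-1/2 (Krichevsky-Trofimov) estimator at one node.
class AddHalf:
    def __init__(self, alphabet):
        self.counts = {a: 0 for a in alphabet}
        self.n = 0

    def predict(self, y):
        # (C(y) + 1/2) / (n + |alphabet|/2)
        return (self.counts[y] + 0.5) / (self.n + 0.5 * len(self.counts))

    def update(self, y):
        self.counts[y] += 1
        self.n += 1
```

Before any data each symbol gets probability 1/|Σ_out|; after one observation of y, its probability rises to (1 + 1/2)/(1 + |Σ_out|/2).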
When the output alphabet Σ_out is rather small, we approximate γ_s(y) by γ̂_s(y) using Equ. (6)
and increment the count of the corresponding symbol every time the node s is visited. We
predict by replacing γ with its estimate γ̂ in Equ. (3). The loss of the mixture with estimated
output probability functions, compared to any subtree T' with known parameters, is now
bounded as follows:

$$\mathrm{Loss}_n^{mix} \le \mathrm{Loss}_n(T') - \log(P_0(T')) + \tfrac{1}{2}\,|T'|\,(|\Sigma_{out}|-1)\log(n/|T'|) + |T'|\,(|\Sigma_{out}|-1),$$
where IT'I is the number of leaves in T'. This bound is obtained by combining the bound
on the prediction of the mixture from Thm. 1 with the loss of the smoothed estimator while
applying Jensen's inequality [3].
When |Σ_out| is fairly large or the sample size is fairly small, the smoothing of the output
probabilities is too crude. However, in many real problems, only a small subset of the
output alphabet is observed in a given context (a node in the tree). For example, when
mapping phonemes to phones [11], for a given sequence of input phonemes the phones that
can be pronounced are limited to a few possibilities. Therefore, we would like to devise an
estimation scheme that statistically depends on the effective local alphabet and not on the
whole alphabet. Such an estimation scheme can be devised by employing again a mixture
of models, one model for each possible subset Σ'_out of Σ_out. Although there are 2^{|Σ_out|}
subsets of Σ_out, we next show that if the estimators depend only on the size of each subset
then the whole mixture can be maintained in time linear in |Σ_out|.
Denote by γ̂_s^n(y | |Σ_out^s| = i) the estimate of γ_s(y) after n observations given that the
alphabet Σ_out^s is of size i. Using the add-1/2 estimator, γ̂_s^n(y | |Σ_out^s| = i) = (C_s^n(y) +
1/2)/(n + i/2). Let Σ_out^n(s) be the set of different output symbols observed at node s, i.e.,

$$\Sigma_{out}^n(s) = \{\,u \mid u = y_{i_k},\; s = (x_{i_k-|s|+1}, \ldots, x_{i_k}),\; 1 \le k \le n\,\},$$

and define Σ_out^0(s) to be the empty set. There are $\binom{|\Sigma_{out}| - |\Sigma_{out}^n(s)|}{i - |\Sigma_{out}^n(s)|}$ possible alphabets of
size i. Thus, the prediction of the mixture of all possible subsets of Σ_out is
$$\hat{\gamma}_s^n(y) = \sum_{j=|\Sigma_{out}^n(s)|}^{|\Sigma_{out}|} \binom{|\Sigma_{out}| - |\Sigma_{out}^n(s)|}{j - |\Sigma_{out}^n(s)|}\, w_j^n\, \hat{\gamma}_s^n(y \mid j), \qquad (7)$$

where w_i^n is the posterior probability of an alphabet of size i. Evaluation of this sum
requires O(|Σ_out|) operations (and not O(2^{|Σ_out|})). We can compute Equ. (7) in an online
fashion as follows. Let

$$\tilde{w}_i^n = \binom{|\Sigma_{out}| - |\Sigma_{out}^n(s)|}{i - |\Sigma_{out}^n(s)|}\, w_i^0 \prod_{k=1}^{n} \hat{\gamma}_s^{k-1}(y_{i_k} \mid i). \qquad (8)$$
Without loss of generality, let us assume a uniform prior over the possible alphabet sizes.
Then,

$$P_0(\Sigma_{out}^s) = P_0(|\Sigma_{out}^s| = i) \;\Rightarrow\; w_i^0 = 1 \Big/ \left(|\Sigma_{out}|\,\binom{|\Sigma_{out}|}{i}\right).$$

Thus, for all i, $\tilde{w}^0(i) = 1/|\Sigma_{out}|$. $\tilde{w}^{n+1}(i)$ is updated from $\tilde{w}^n(i)$ as follows:
$$\tilde{w}^{n+1}(i) = \tilde{w}^n(i) \times \begin{cases} 0 & \text{if } |\Sigma_{out}^{n+1}(s)| > i \\[4pt] \dfrac{C_s^n(y_{i_{n+1}}) + 1/2}{n + i/2} & \text{if } |\Sigma_{out}^{n+1}(s)| \le i \text{ and } y_{i_{n+1}} \in \Sigma_{out}^n(s) \\[4pt] \dfrac{i - |\Sigma_{out}^n(s)|}{|\Sigma_{out}| - |\Sigma_{out}^n(s)|} \cdot \dfrac{1/2}{n + i/2} & \text{if } |\Sigma_{out}^{n+1}(s)| \le i \text{ and } y_{i_{n+1}} \notin \Sigma_{out}^n(s) \end{cases}$$
Informally: if the number of different symbols observed so far exceeds a given size, then all
alphabets of this size are eliminated from the mixture by slashing their posterior probability
to zero. Otherwise, if the next symbol was observed before, the output probability is the
prediction of the add-1/2 estimator. Lastly, if the next symbol is entirely new, we need to sum
the predictions of all the alphabets of size i which agree on the first |Σ_out^n(s)| symbols and
for which y_{i_{n+1}} is one of their i − |Σ_out^n(s)| (yet) unobserved symbols. Furthermore, we need to multiply by
the a priori probability of observing y_{i_{n+1}}. Assuming a uniform prior over the unobserved
symbols, this probability equals 1/(|Σ_out| − |Σ_out^n(s)|). Applying Bayes' rule again, the
prediction of the mixture of all possible subsets of the output alphabet is:
prediction of the mixture of all possible subsets of the output alphabet is,
IIo."
IIo.,1
.y~(Yin+l)
= 2: ~+l(i) / 2: ~(i)
i=l
?
(9)
i=l
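The size-mixture update can be sketched as follows. This is our own incremental formulation under a uniform prior over sizes; the class name and storage are illustrative:

```python
# Sketch of the mixture over output-alphabet sizes (Equs. 7-9): one add-1/2
# estimator per candidate size i, with unnormalized weights w[i] multiplied at
# each step by the probability that size i assigned to the observed symbol.
class SizeMixture:
    def __init__(self, full_alphabet):
        self.full = len(full_alphabet)
        self.w = {i: 1.0 / self.full for i in range(1, self.full + 1)}
        self.counts = {}
        self.n = 0

    def _pred_given_size(self, y, i):
        seen = len(self.counts)
        if seen > i:
            return 0.0  # size i is inconsistent with the observed symbols
        if y in self.counts:
            return (self.counts[y] + 0.5) / (self.n + i / 2.0)
        # New symbol: average over which unseen slot of the size-i alphabet it
        # occupies, times the a priori chance it is in the alphabet at all.
        return (i - seen) / (self.full - seen) * 0.5 / (self.n + i / 2.0)

    def predict_and_update(self, y):
        new_w = {i: self.w[i] * self._pred_given_size(y, i) for i in self.w}
        p = sum(new_w.values()) / sum(self.w.values())   # Equ. (9)
        self.w = new_w
        self.counts[y] = self.counts.get(y, 0) + 1
        self.n += 1
        return p
```

On the first observation every size predicts 1/|Σ_out|, and sizes smaller than the number of distinct symbols seen so far are slashed to zero weight, as the text describes.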
Applying the online mixture estimation technique twice, first for the structure and then
for the parameters, yields an efficient and robust online algorithm. For a sample of size
n, the time complexity of the algorithm is O(D|Σ_out|n) (or O(|Σ_out|n²) if T is infinite). The
predictions of the adaptive mixture are almost as good as those of any suffix tree transducer with any
set of parameters. The logarithmic loss of the mixture depends on the number of non-zero
parameters as follows:

$$\mathrm{Loss}_n^{mix} \le \mathrm{Loss}_n(T') - \log(P_0(T')) + \tfrac{1}{2}\,|N_z|\log(n) + O(|T'|\,|\Sigma_{out}|),$$

where |N_z| is the number of non-zero parameters of the transducer T'. If |N_z| ≪ |T'||Σ_out|,
then the performance of the above scheme, when employing a mixture model for the
parameters as well, is significantly better than using the add-1/2 rule with the full alphabet.
5 Evaluation and Applications
In this section we briefly present evaluation results of the model and its learning algorithm.
We also discuss and present results obtained from learning syntactic structure of noun
phrases. We start with an evaluation of the estimation scheme for a multinomial source.
In order to check the convergence of a mixture model for a multinomial source, we simulated
a source whose output symbols belong to an alphabet of size 10 and set the probabilities of
observing any of the last five symbols to zero. Therefore, the actual alphabet is of size 5.
The posterior probabilities for the sum of all possible subsets of Σ_out of size i (1 ≤ i ≤ 10)
were calculated after each iteration. The results are plotted on the left part of Fig. 2. The
very first observations rule out alphabets of size lower than 5 by slashing their posterior
probability to zero. After few observations, the posterior probability is concentrated around
the actual size, yielding an accurate online estimate of the multinomial source.
The simplicity of the learning algorithm and the online update scheme enable evaluation of
the algorithm on millions of input-output pairs in a few minutes. For example, the average
update time for a suffix tree transducer of a maximal depth 10 when the output alphabet
is of size 4 is about 0.2 millisecond on a Silicon Graphics workstation. A typical result is
shown in Fig. 2 on the right. In the example, Σ_out = Σ_in = {1, 2, 3, 4}. The description
of the source is as follows: if x_n ≥ 3 then y_n is uniformly distributed over Σ_out; otherwise
(x_n ≤ 2), y_n = x_{n−5} with probability 0.9 and y_n = 4 − x_{n−5} with probability 0.1. The
input sequence x_1, x_2, ... was created entirely at random. This source can be implemented
by a sparse suffix tree transducer of maximal depth 5. Note that the actual size of the
alphabet is only 2 at half of the leaves of the tree. We used a suffix tree transducer of
maximal depth 20 to learn the source. The negative of the logarithm of the predictions
(normalized per symbol) is shown for (a) the true source, (b) a mixture of suffix tree
transducers and their parameters, (c) a mixture of only the possible suffix tree transducers
(the parameters are estimated using the add-1/2 scheme), and (d) a single (overestimated)
model of depth 8. Clearly, the mixture models converge to the entropy of the source much
faster than the single model. Moreover, employing the mixture estimation technique twice
results in an even faster convergence.
[Figure 2 plots. Left: posterior probabilities of the alphabet sizes versus the number of examples. Right: normalized log-loss curves for (a) the source, (b) the mixture of models and parameters, (c) the mixture of models, and (d) a single overestimated model; x-axis: Number of Examples, 50-500.]
Figure 2: Left: Example of the convergence of the posterior probability of a mixture model for
a multinomial source with a large number of possible outcomes when the actual number of observed
symbols is small. Right: Performance comparison of the predictions of a single model, two mixture
models, and the true underlying transducer.
We are currently exploring the applicative possibilities of the algorithm. Here we briefly
discuss and demonstrate how to induce an English noun phrase recognizer. Recognizing
noun phrases is an important task in automatic natural text processing, for applications
such as information retrieval, translation tools and data extraction from texts. A common
practice is to recognize noun phrases by first analyzing the text with a part-of-speech tagger,
which assigns the appropriate part-of-speech (verb, noun, adjective etc.) for each word in
context. Then, noun phrases are identified by manually defined regular expression patterns
that are matched against the part-of-speech sequences. We took an alternative route by
building a suffix tree transducer based on a labeled data set from the UPENN tree-bank
corpus. We defined Σ_in to be the set of possible part-of-speech tags and set Σ_out = {0, 1},
where the output symbol given its corresponding input symbol (the part-of-speech tag of
the current word) is 1 iff the word is part of a noun phrase. We used over 250,000 marked
tags and tested the performance on more than 37,000 tags. The test phase was performed
by freezing the model structure, the mixture weights, and the estimated parameters. The
suffix tree transducer was of maximal depth 15; hence very long phrases can be statistically
identified. By thresholding the output probability we classified the tags in the test data
and found that fewer than 2.4% of the words were misclassified. A typical result is given
in Table 1. We are currently investigating methods to incorporate linguistic knowledge
into the model and its learning algorithm and compare the performance of the model with
traditional techniques.
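The thresholding step described above reduces to a one-liner once the transducer's per-word probabilities of the output symbol 1 are in hand (a hypothetical helper, not from the paper):

```python
# Sketch: mark each word as inside (1) / outside (0) a noun phrase by
# thresholding the transducer's probability for output symbol 1.
def mark_noun_phrase(probs_of_one, threshold=0.5):
    return [1 if p >= threshold else 0 for p in probs_of_one]
```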
Sentence:    Tom   Smith  group  chief  executive  of    U.K.  will  become  chairman
POS tag:     PNP   PNP    NN     NN     NN         IN    PNP   MD    VB      NN
Class:       1     1      1      1      1          0     1     0     0       1
Prediction:  0.99  0.99   0.98   0.98   0.98       0.02  0.99  0.03  0.01    0.81

Sentence:    metals  and   industrial  materials  maker
POS tag:     NNS     CC    JJ          NNS        NN
Class:       1       1     1           1          1
Prediction:  0.99    0.67  0.96        0.99       0.96
Table 1: Extraction of noun phrases using a suffix tree transducer. In this typical example, two long
noun phrases were identified correctly with high confidence.
Acknowledgments
Thanks to Y. Bengio, Y. Freund, F. Pereira, D. Ron, R. Schapire, and N. Tishby for helpful discussions.
The work on syntactic structure induction is done in collaboration with I. Dagan and S. Engelson.
This work was done while the author was at the Hebrew University of Jerusalem.
References
[1] Y. Bengio and P. Frasconi. An input output HMM architecture. In NIPS-7, 1994.
[2] N. Cesa-Bianchi, Y. Freund, D. Haussler, D.P. Helmbold, R.E. Schapire, and M.K. Warmuth.
How to use expert advice. In STOC-24, 1993.
[3] T.M. Cover and J.A. Thomas. Elements of information theory. Wiley, 1991.
[4] A. DeSantis, G. Markowski, and M.N. Wegman. Learning probabilistic prediction functions.
In Proc. of the 1st Wksp. on Comp. Learning Theory, pages 312-328, 1988.
[5] C.L. Giles, C.B. Miller, D. Chen, G.Z. Sun, H.H. Chen, and Y.C. Lee. Learning and extracting
finite state automata with second-order recurrent neural networks. Neural Computation, 4:393-405, 1992.
[6] D. Haussler and A. Barron. How well do Bayes methods work for on-line prediction of {+1, -1}
values? In The 3rd NEC Symp. on Comput. and Cogn., 1993.
[7] D.P. Helmbold and R.E. Schapire. Predicting nearly as well as the best pruning of a decision
tree. In COLT-8, 1995.
[8] R.A. Jacobs, M.I. Jordan, S.J. Nowlan, and G.E. Hinton. Adaptive mixture of local experts.
Neural Computation, 3:79-87, 1991.
[9] R.E. Krichevsky and V.K. Trofimov. The performance of universal encoding. IEEE Trans. on
Inform. Theory, 1981.
[10] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and
Computation, 108:212-261, 1994.
[11] M.D. Riley. A statistical model for generating pronunciation networks. In Proc. of IEEE Conf.
on Acoustics, Speech and Signal Processing, pages 737-740, 1991.
[12] D. Ron, Y. Singer, and N. Tishby. The power of amnesia. In NIPS-6, 1993.
[13] F.M.J. Willems, Y.M. Shtarkov, and T.J. Tjalkens. The context tree weighting method: Basic
properties. IEEE Trans. Inform. Theory, 41(3):653-664, 1995.
111 | 11 | 515
MICROELECTRONIC IMPLEMENTATIONS OF CONNECTIONIST
NEURAL NETWORKS
Stuart Mackie, Hans P. Graf, Daniel B. Schwartz, and John S. Denker
AT&T Bell Labs, Holmdel, NJ 07733
Abstract
In this paper we discuss why special purpose chips are needed for useful
implementations of connectionist neural networks in such applications as pattern
recognition and classification. Three chip designs are described: a hybrid
digital/analog programmable connection matrix, an analog connection matrix with
adjustable connection strengths, and a digital pipe lined best-match chip. The common
feature of the designs is the distribution of arithmetic processing power amongst the
data storage to minimize data movement.
[Figure 1 graph: number of nodes per chip versus node complexity (no. of transistors, 1 to 10^9). Memories (RAMs) sit at the many-simple-nodes extreme, conventional CPUs at the one-complex-node extreme, and distributed-computation chips in between.]
Figure 1. A schematic graph of addressable node complexity and size for conventional
computer chips. Memories can contain millions of very simple nodes, each
with very few transistors but with no processing power. CPU chips are
essentially one very complex node. Neural network chips are in the
distributed computation region, where chips contain many simple fixed-instruction
processors local to data storage. (After Reece and Treleaven¹)
© American Institute of Physics 1988
Introduction
It is clear that conventional computers lag far behind organic computers when it
comes to dealing with very large data rates in problems such as computer vision and
speech recognition. Why is this? The reason is that the brain performs a huge number
of operations in parallel whereas in a conventional computer there is a very fast
processor that can perform a variety of instructions very quickly, but operates on only
two pieces of data at a time.
The rest of the many megabytes of RAM is idle during any instruction cycle. The
duty cycle of the processor is close to 100%, but that of the stored data is very close to
zero. If we wish to make better use of the data, we have to distribute processing
power amongst the stored data, in a similar fashion to the brain. Figure 1 illustrates
where distributed computation chips lie in comparison to conventional computer chips
as regards the number and complexity of addressable nodes per chip.
In order for a distributed strategy to work, each processing element must be small
in order to accommodate many on a chip, and communication must be local and hardwired. Whereas the processing element in a conventional computer may be able to
execute many hundred different operations, in our scheme the processor is hard-wired
to perform just one. This operation should be tailored to some particular application.
In neural network and pattern recognition algorithms, the dot products of an input
vector with a series of stored vectors (referred to as features or memories) is often
required. The general calculation is:
Sum of Products

V · F(i) = Σ_j v_j f_ij
where V is the input vector and F(i) is one of the stored feature vectors. Two
variations of this are of particular interest. In feature extraction, we wish to find all the
features for which the dot product with the input vector is greater than some threshold
T, in which case we say that such features are present in the input vector.
Feature Extraction

V · F(i) = Σ_j v_j f_ij > T
In pattern classification we wish to find the stored vector that has the largest dot
product with the input vector, and we say that the input is a member of the class
represented by that feature, or simply that that stored vector is closest to input vector.
Classification

max_i V · F(i) = max_i Σ_j v_j f_ij
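In software terms (an illustrative sketch, not the chips' interface), the three operations reduce to dot products against the stored feature set:

```python
def sum_of_products(v, f):
    # V . F(i) = sum_j v_j * f_ij for one stored feature vector F(i)
    return sum(vj * fj for vj, fj in zip(v, f))

def extract_features(v, features, threshold):
    # Feature extraction: every stored vector whose dot product with
    # the input exceeds the threshold T is "present" in the input.
    return [i for i, f in enumerate(features)
            if sum_of_products(v, f) > threshold]

def classify(v, features):
    # Classification: the stored vector with the largest dot product
    # is closest to the input; return its index.
    return max(range(len(features)),
               key=lambda i: sum_of_products(v, features[i]))
```

The chips described below evaluate these sums for all stored features in parallel; the loops here are serial stand-ins for that parallelism.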
The chips described here are each designed to perform one or more of the above
functions with an input vector and a number of feature vectors in parallel. The overall
strategy may be summed up as follows: we recognize that in typical pattern recognition
applications, the feature vectors need to be changed infrequently compared to the input
vectors, and that the calculation performed is fixed and low-precision; we therefore
distribute simple fixed-instruction processors throughout the data storage area, thus
minimizing the data movement and optimizing the use of silicon. Our ideal is to have
every transistor on the chip doing something useful during every instruction cycle.
Analog Sum-of-Products
Using an idea slightly reminiscent of synapses and neurons from the brain, in two
of the chips we store elements of features as connections from input wires on which the
elements of the input vectors appear as voltages, to summing wires where a sum-of-products is performed. The voltage resulting from the current summing is applied to
the input of an amplifier whose output is then read to determine the result of the
calculation. A schematic arrangement is shown in Figure 2 with the vertical inputs
connected to the horizontal summing wires through resistors chosen such that the
conductance is proportional to the magnitude of the feature element. When both
positive and negative values are required, inverted input lines are also necessary.
Resistor matrices have been fabricated using amorphous silicon connections and metal
linewidths. These were programmed during fabrication by electron beam lithography
to store names using the distributed feedback method described by Hopfield [2,3]. This
work is described more fully elsewhere [4,5]. Hard-wired resistor matrices are very
compact, but also very inflexible. In many applications it is desirable to be able to
reprogram the matrix without having to fabricate a new chip. For this reason, a series
of programmable chips has been designed.
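A behavioral sketch of the resistor-matrix computation follows (illustrative only, not a circuit simulation). Conductances proportional to |w| connect each summing wire to the true input line for positive elements and to the inverted line for negative ones, and the amplifier thresholds the summed current:

```python
def summing_wire(weights, inputs, threshold):
    """One horizontal summing wire of the resistor matrix."""
    current = 0.0
    for w, v in zip(weights, inputs):
        if w > 0:
            current += w * v        # conductance to the true input line
        elif w < 0:
            current += (-w) * (-v)  # conductance to the inverted line
    # the amplifier thresholds the summed current
    return 1 if current > threshold else 0
```

With weights [1, -1] and inputs [1, -1], both connections contribute positive current and the amplifier fires; this is how inverted input lines realize negative feature elements without negative conductances.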
[Figure 2 graphic: four horizontal summing wires (Features 1-4) crossed by vertical input lines, with resistive connections at the intersections]
Figure 2. A schematic arrangement for calculating parallel sum-of-products with a
resistor matrix. Features are stored as connections along summing wires and
the input elements are applied as voltages on the input wires. The voltage
generated by the current summing is thresholded by the amplifier whose
output is read out at the end of the calculation. Feedback connections may be
made to give mutual inhibition and allow only one feature amplifier to turn
on, or allow the matrix to be used as a distributed feedback memory.
Programmable Connection Matrix
Figure 3 is a schematic diagram of a programmable connection using the contents of
two RAM cells to control current sinking or sourcing into the summing wire. The
switches are pass transistors and the 'resistors' are transistors with gates connected to
their drains. Current is sourced or sunk if the appropriate RAM cell contains a '1' and
the input Vi is high thus closing both switches in the path. Feature elements can
therefore take on values (a,O,-b) where the values of a and b are determined by the
conductivities of the n- and p-transistors obtained during processing. A matrix with
2916 such connections allowing full interconnection of the inputs and outputs of 54
amplifiers was designed and fabricated in 2.5 µm CMOS (Figure 4). Each connection
is about 100 × 100 µm, the chip is 7 × 7 mm and contains about 75,000 transistors. When
loaded with 49 49-bit features (7×7 kernel), and presented with a 49-bit input vector,
the chip performs 49 dot products in parallel in under 1 µs. This is equivalent to 2.4
billion bit operations/sec. The flexibility of the design allows the chip to be operated in
several modes. The chip was programmed as a distributed feedback memory
(associative memory), but this did not work well because the current sinking capability
of the n-type transistors was 6 times that of the p-types. An associative memory was
implemented by using a 'grandmother cell' representation, where the memories were
stored along the input lines of amplifiers, as for feature extraction, but mutually
inhibitory connections were also made that allowed only one output to turn on. With
10 stored vectors each 40 bits long, the best match was found in 50-600ns, depending
on the data. The circuit can also be programmed to recognize sequences of vectors and
to do error correction when vectors were omitted or wrong vectors were inserted into
the sequences. The details of operation of the chip are described more fully
elsewhere [6]. This chip has been interfaced to a UNIX minicomputer and is in everyday
use as an accelerator for feature extraction in optical character recognition of handwritten numerals. The chip speeds up this time consuming calculation by a factor of
more than 1000. The use of the chip enables experiments to be done which would be
too time consuming to simulate.
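The connection of Figure 3 can be modeled behaviorally as follows (a sketch under assumed conductance values; a and b stand for the process-determined p- and n-device strengths, with b ≈ 6a as noted above for this fabrication run):

```python
def connection_current(excite_bit, inhibit_bit, v_in, a=1.0, b=6.0):
    # Current flows only when a RAM bit is set AND the input is high,
    # so each connection takes one of the values (a, 0, -b).
    if not v_in:
        return 0.0
    if excite_bit:
        return a
    if inhibit_bit:
        return -b
    return 0.0

def parallel_dot_products(input_bits, stored_features):
    # One summed current per stored feature; the chip computes all 49
    # of these sums simultaneously in under a microsecond.
    return [sum(connection_current(e, i, v)
                for (e, i), v in zip(feature, input_bits))
            for feature in stored_features]
```

The a/b asymmetry in this sketch mirrors the mismatch that kept the chip from working well as a distributed feedback memory: the n-type devices sank about six times the current the p-types sourced.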
Experience with this device has led to the design of four new chips, which are
currently being tested. These have no feedback capability and are intended exclusively
for feature extraction. The designs each incorporate new features which are being
tested separately, but all are based on a connection matrix which stores 46 vectors each
96 bits long. The chip will perform a full parallel calculation in 100 ns.
[Figure 3 graphic: circuit with VDD and VSS rails, excitatory and inhibitory RAM-controlled branches, input Vi, and an output amplifier]
Figure 3. Schematic diagram of a programmable connection. A current sourcing or
sinking connection is made if a RAM cell contains a '1' and the input Vi is
high. The currents are summed on the input wire of the amplifier.
[Figure 4 graphic: chip photograph with legend identifying pads, row decoders, connections, and amplifiers]
Figure 4. Programmable connection matrix chip. The chip contains 75,000 transistors
in 7 × 7 mm, and was fabricated using 2.5 µm design rules.
Adaptive Connection Matrix
Many problems require analog depth in the connection strengths, and this is
especially important if the chip is to be used for learning, where small adjustments are
required during training. Typical approaches which use transistors sized in powers of
two to give conductance variability take up an area equivalent to the same number of
minimum sized transistors as the dynamic range, which is expensive in area and
enables only a few connections to be put on a chip. We have designed a fully analog
connection based on a DRAM structure that can be fabricated using conventional
CMOS technology. A schematic of a connection and a connection matrix is shown in
Figure 5. The connection strength is represented by the difference in voltages stored
on two MOS capacitors. The capacitors are 33 µm on edge and lose about 1% of their
charge in five minutes at room temperature. The leakage rate can be reduced by three
orders of magnitude by cooling the capacitors to −50 °C and by five orders of
magnitude by cooling to −100 °C. The output is a current proportional to the product of
the input voltage and the connection strength. The output currents are summed on a
wire and are sent off chip to external amplifiers. The connection strengths can be
adjusted by transferring charge between the capacitors through a chain of transistors.
The connection strengths may be of either polarity and it is expected that the
connections will have about 7 bits of analog depth. A chip has been designed in
1.25 µm CMOS containing 1104 connections in an array with 46 inputs and 24 outputs.
[Figure 5 graphic: analog connection cell with input line; weight update and decay by shifting charge between two capacitors (w ∝ Q1 − Q2); output = w × input, summed and read out through external amplifiers]
Figure 5. Analog connection. The connection strength is represented by the difference
in voltages stored on two capacitors. The output is a current proportional to
the product of the input voltage and the connection strength.
Each connection is 70 × 240 µm. The design has been sent to foundry, and testing is
expected to start in April 1988. The chip has been designed to perform a network
calculation in <30ns, i.e., the chip will perform at a rate of 33 billion multiplies/sec. It
can be used simply as a fast analog convolver for feature extraction, or as a learning
engine in a gradient descent algorithm using external logic for connection strength
adjustment. Because the inputs and outputs are true analog, larger networks may be
formed by tiling chips, and layered networks may be made by cascading through
amplifiers acting as hidden units.
Digital Classifier Chip
The third design is a digital implementation of a classifier whose architecture is not
a connectionist matrix. It is nearing completion of the design stage, and will be
fabricated using 1.25 µm CMOS. It calculates the largest five V · F(i) using an all-digital pipeline of identical processors, each attached to one stored word. Each
processor is also internally pipelined to the extent that no stage contains more than two
gate delays. This is important, since the throughput of the processor is limited by the
speed of the slowest stage. Each processor calculates the Hamming distance (number
of difference bits) between an input word and its stored word, and then compares that
distance with each of the smallest 5 values previously found for that input word. An
updated list of 5 best matches is then passed to the next processor in the pipeline. At
the end of the pipeline the best 5 matches overall are output.
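A serial sketch of one pipeline stage and of the whole pipeline (function names are illustrative, not from the chip's documentation):

```python
def pipeline_stage(stored_word, tag, input_word, best_list):
    """One processor: Hamming distance of the input to its stored word,
    then insertion into the running list of the 5 best matches."""
    dist = sum(a != b for a, b in zip(stored_word, input_word))
    return sorted(best_list + [(dist, tag)])[:5]

def best_matches(input_word, memory):
    """memory: list of (stored_word, tag) pairs, one per processor.
    Returns the 5 smallest (distance, tag) pairs overall."""
    best = []
    for stored_word, tag in memory:
        best = pipeline_stage(stored_word, tag, input_word, best)
    return best
```

On the chip each stage is itself bit-serial and all 50 comparisons proceed concurrently as the input streams through; the serial loop here reproduces only the results, not the timing.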
[Figure 6 graphic: one processor stage showing the data pipeline, ring shift registers, tag register, and best-match-list pipeline, with annotations: (1) features stored in ring shift register; (2) input and feature are compared bit-serially; (3) accumulator dumps distance into comparison register at end of input word; (4) comparator inserts new match and tag into list when better than old match]
Fig. 6. Schematic of one of the 50 processors in the digital classifier chip. The
Hamming distance of the input vector to the feature vector is calculated, and
if better than one of the five best matches found so far, is inserted into the
match list together with the tag and passed onto the next processor. At the
end of the pipeline the best five matches overall are output.
The data paths on chip are one bit wide and all calculations are bit serial. This
means that the processing elements and the data paths are compact and maximizes the
number of stored words per chip. The layout of a single processor is shown in
Fig. 6. The features are stored as 128-bit words in 8 16-bit ring shift registers and
associated with each feature is a 14-bit tag or name string that is stored in a static
register. The input vector passes through the chip and is compared bit-by-bit to each
stored vector, whose shift registers are cycled in turn. The total number of bits
difference is summed in an accumulator. After a vector has passed through a processor,
the total Hamming distance is loaded into the comparison register together with the tag.
At this time, the match list for the input vector arrives at the comparator. It is an
ordered list of the 5 lowest Hamming distances found in the pipeline so far, together
with associated tag strings. The distance just calculated is compared bit-serially with
each of the values in the list in turn. If the current distance is smaller than one of the
ones in the list, the output streams of the comparator are switched, having the effect of
inserting the current match and tag into the list and deleting the previous fifth best
match. After the last processor in the pipeline, the list stream contains the best five
distances overall, together with the tags of the stored vectors that generated them. The
data stream and the list stream are loaded into 16-bit wide registers ready for output.
The design enables chips to be connected together to extend the pipeline if more than 50
stored vectors are required. The throughput is constant, irrespective of the number of
chips connected together; only the latency increases as the number of chips increases.
The chip has been designed to operate with an on-chip clock frequency of at least
100 MHz. This high speed is possible because stage sizes are very small and data paths
have been kept short. The computational efficiency is not as high as in the analog chips
because each processor only deals with one bit of stored data at a time. However, the
overall throughput is high because of the high clock speed. Assuming a clock
frequency of 100 MHz, the chip will produce a list of 5 best distances with tag strings
every 1.3 µs, with a latency of about 2.5 µs. Even if a thousand chips containing
50,000 stored vectors were pipelined together, the latency would be 2.5 ms, low
enough for most real time applications. The chip is expected to perform 5 billion bit
operations/sec.
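The quoted figures are consistent with simple arithmetic (the 2-cycle-per-word overhead below is an assumption introduced to reconcile 128-bit words with the quoted 1.3 µs):

```python
clock_hz = 100e6                   # on-chip clock
bits_per_word = 128
overhead_cycles = 2                # assumed per-word pipeline overhead
cycles_per_word = bits_per_word + overhead_cycles

time_per_list = cycles_per_word / clock_hz
print(time_per_list)               # 1.3e-06 s: one 5-best list every 1.3 us

chip_latency = 2.5e-6              # quoted latency of one 50-word chip
chips = 1000
print(chips * chip_latency)        # ≈ 2.5e-03 s for 50,000 stored words

bit_ops_per_sec = 50 * bits_per_word / time_per_list
print(round(bit_ops_per_sec / 1e9, 1))   # ~4.9, i.e. about 5 billion bit ops/sec
```

Throughput depends only on the word length and clock, which is why chaining chips leaves it constant while latency grows linearly.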
While it is important to have high clock frequencies on the chip, it is also important
to have them much lower off the chip, since frequencies above 50 MHz are hard to deal
with on circuit boards. The 16-bit wide communication paths onto and off the chip ensure
that this is not a problem here.
Conclusion
The two approaches discussed here, analog and digital, represent opposites in
computational approach. In one, a single global computation is performed for each
match, in the other many local calculations are done. Both the approaches have their
advantages and it remains to be seen which type of circuit will be more efficient in
applications, and how closely an electronic implementation of a neural network should
resemble the highly interconnected nature of a biological network.
These designs represent some of the first distributed computation chips. They are
characterized by having simple processors distributed amongst data storage. The
operation performed by the processor is tailored to the application. It is interesting to
note some of the reasons why these designs can now be made: minimum linewidths on
circuits are now small enough that enough processors can be put on one chip to make
these designs of a useful size, sophisticated design tools are now available that enable a
single person to design and simulate a complete circuit in a matter of months, and
fabrication costs are low enough that highly speculative circuits can be made without
requiring future volume production to offset prototype costs.
We expect a flurry of similar designs in the coming years, with circuits becoming
more and more optimized for particular applications. However, it should be noted that
the impressive speed gain achieved by putting an algorithm into custom silicon can only
be done once. Further gains in speed will be closely tied to mainstream technological
advances in such areas as transistor size reduction and wafer-scale integration. It
remains to be seen what influence these kinds of custom circuits will have in useful
technology since at present their functions cannot even be simulated in reasonable time.
What can be achieved with these circuits is very limited when compared with a three
dimensional, highly complex biological system, but is a vast improvement over
conventional computer architectures.
The authors gratefully acknowledge the contributions made by L.D. Jackel and
R.E. Howard.
References
1. M. Reece and P.C. Treleaven, "Parallel Architectures for Neural Computers", Neural
Computers, R. Eckmiller and C. v.d. Malsburg, eds. (Springer-Verlag, Heidelberg, 1988).
2. J.J. Hopfield, Proc. Nat. Acad. Sci. 79, 2554 (1982).
3. J.S. Denker, Physica 22D, 216 (1986).
4. R.E. Howard, D.B. Schwartz, J.S. Denker, R.W. Epworth, H.P. Graf, W.E. Hubbard,
L.D. Jackel, B.L. Straughn, and D.M. Tennant, IEEE Trans. Electron Devices ED-34,
1553 (1987).
5. H.P. Graf and P. deVegvar, "A CMOS Implementation of a Neural Network Model",
in Advanced Research in VLSI, Proceedings of the 1987 Stanford Conference, P. Losleben
(ed.) (MIT Press, 1987).
6. H.P. Graf and P. deVegvar, "A CMOS Associative Memory Chip Based on Neural
Networks", Tech. Digest, 1987 IEEE International Solid-State Circuits Conference.
Adaptive Neural Networks Using MOS Charge Storage
D. B. Schwartz 1, R. E. Howard and W. E. Hubbard
AT&T Bell Laboratories
Crawfords Corner Rd.
Holmdel, N.J. 07733
Abstract
MOS charge storage has been demonstrated as an effective method to store
the weights in VLSI implementations of neural network models by several
workers 2 . However, to achieve the full power of a VLSI implementation of
an adaptive algorithm, the learning operation must built into the circuit. We
have fabricated and tested a circuit ideal for this purpose by connecting a
pair of capacitors with a CCD-like structure, allowing for variable size weight
changes as well as a weight decay operation. A 2.5 µm CMOS version achieves
better than 10 bits of dynamic range in a 140 µm × 350 µm area. A 1.25 µm chip
based upon the same cell has 1104 weights on a 3.5 mm × 6.0 mm die and is
capable of peak learning rates of at least 2 × 10⁹ weight changes per second.
1 Adaptive Networks
Much of the recent excitement about neural network models of computation has
been driven by the prospect of new architectures for fine-grained parallel computation using analog VLSI. Adaptive systems are especially good targets for analog
VLSI because the adaptive process can compensate for the inaccuracy of individual
devices as easily as for the variability of the signal. However, silicon VLSI does not
provide us with an ideal solution for weight storage. Among the properties of an
ideal storage technology for analog VLSI adaptive systems are:
• The minimum available weight change Δw must be small. The simplest adaptive algorithms optimize the weights by minimizing the output error with a
steepest descent search in weight space [1]. Iterative improvement algorithms
such as steepest descent are based on the heuristic assumption of 'better'
weights being found in the neighborhood of 'good' ones; a heuristic that fails
when the granularity of the weights is not fine enough. In the worst case, the
resolution required just to represent a function can grow exponentially in the
dimension of the input space .
• The weights must be able to represent both positive and negative values and
the changes must be easily reversible. Frequently, the weights may cycle up
and down while the adaptive process is converging and millions of incremental
changes during a single training session is not unreasonable. If the weights
cannot easily follow all of these changes, then the learning must be done off
chip.
1 Now at GTE Laboratories, 40 Sylvan Rd., Waltham, Mass 02254 dbs@gte.com%relay.cs.net
2 For example, see the papers by Mann and Gilbert, Walker and Akers, and Murray et al. in
this proceedings
• The parallelism of the network can be exploited to the fullest only if the
mechanism controlling weight changes is simple enough to be reproduced at
each weight. Ideally, the change is determined by some easily computed combination of information local to each weight and signals global to the entire
system. This type of locality, which is as much a property of the algorithm as
of the hardware, is necessary to keep the wiring cost associated with learning
small.
• Weight decay, w_i → αw_i with α < 1, is useful although not essential. Global
decay of all the weights can be used to extend their dynamic range by rescaling
decay of all the weights can be used to extend their dynamic range by rescaling
when the average magnitude becomes too large. Decay of randomly chosen
weights can be used both to control their magnitude [2] and to help gradient
searches escape from local minima.
To implement an analog storage cell with MOS VLSI the most obvious choices
are non-volatile devices like floating gate and MNOS transistors, multiplying DAC's
with conventional digital storage, and dynamic analog storage on MOS capacitors.
Most non-volatile devices rely upon electron tunneling to change the amount of
stored charge, typically requiring a large amount of circuitry to control weight
changes. DAC's have already proven themselves in situations where 5 bits or less
of resolution [3] [4] are sufficient, but higher resolution is prohibitively expensive in
terms of area. We will show the disadvantage of MOS charge storage, its volatility,
is more than outweighed by the resolution available and ease of making weight
changes.
Representation of both positive and negative weights can be obtained by storing
the weights w_i differentially on a pair of capacitors, in which case w_i ∝ (v_i+ − v_i−).
Differential storage can be used to obtain some degree of rejection of leakage and
can guarantee that leakage will reduce the magnitude of the weights as compared
with a scheme where the weights are defined with respect to a fixed level, in which
case as a weight decays it can change signs. A constant common mode voltage also
eases the design constraints on the differential input multiplier used to read out the
weights. An elegant way to manipulate the weights is to transfer charge from one
capacitor to the other, keeping constant the total charge on the system and thus
maximizing the dynamic range available from the readout circuit.
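A minimal behavioral sketch of differential storage (illustrative units; voltages in volts, class and method names are my own):

```python
class DifferentialWeight:
    """Weight = difference of two capacitor voltages. Charge transfers
    move charge between the plates, so the common-mode voltage (total
    charge) is unchanged; decay shrinks the difference toward zero."""

    def __init__(self, v_init=2.5):
        self.v_plus = v_init
        self.v_minus = v_init

    @property
    def value(self):
        return self.v_plus - self.v_minus

    def transfer(self, dv):
        # one packet of charge moved from the minus to the plus plate
        # (negative dv moves it the other way)
        self.v_plus += dv / 2
        self.v_minus -= dv / 2

    def decay(self, alpha):
        # w' = alpha * w with alpha < 1; common mode is preserved
        mid = (self.v_plus + self.v_minus) / 2
        half = alpha * self.value / 2
        self.v_plus, self.v_minus = mid + half, mid - half
```

Because the weight is a difference, leakage that discharges both capacitors equally reduces |w| toward zero rather than silently flipping its sign, which is the property argued for above.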
2 Weight Changes
Small packets of charge can easily be transferred from one capacitor to the other by
exploiting charge injection, a phenomenon carefully avoided by designers of switched
capacitor circuits as a source of sampling error [5] [6] [7] [8] [9]. An example of a
storage cell with the simplest configuration for a charge transfer system is shown
in figure 1. A pair of MOS capacitors are connected by a string of narrow MOS
transistors, a long one to transfer charge and two minimum length ones to isolate
[Figure 1 graphic: two storage capacitors joined by strings of transistors (access transistors TA, charge transfer transistors TC/TCP/TCM, isolation transistors TP, TM, TI); (a) the simple cell, (b) the cell with decay]
Figure 1: (a) The simplest storage cell, with provisions for only a single size of increment/decrement operation and no weight decay. (b) A more sophisticated cell with
facilities for weight decay. By suitable manipulation of the clock signals, the two
charge transfer transistors can be used to obtain different sizes of weight changes.
Both circuits are initialized by turning on the access transistors TA and charging
the capacitors up to a convenient voltage, typically VDD/2.
the charge transfer transistor from the storage nodes. For the sake of discussion, we
can treat the isolation transistors as ideal switches and concentrate on the charge
transfer transistor that we here assume to be an n-channel device. To increase the
weight ( See figure 1 ), the charge transfer transistor (TC) and isolation transistor
attached to the positive storage node (TP) are turned on. When the system has
reached electrostatic equilibrium the charge transfer transistor (TC) is disconnected
from the plus storage node by turning off TP and connected to the minus storage
node by turning on TM. If the charge transfer transistor TC is slowly turned off, the
mobile charge in its channel will diffuse into the minus node, lowering its voltage.
A detailed analysis of the charge transfer mechanism has been given elsewhere [10],
but for the purpose of qualitative understanding of the circuit the inversion charge
in the charge transfer transistor's channel can be approximated by
Q_inv = C_ox (V_G − V_TE)
where VT E is the effective threshold voltage and Cox the gate to channel capacitance
of the charge transfer transistor. The effective threshold voltage is then given by

V_TE = V_T0 + γ ( √(2φ_F + V_S) − √(2φ_F) )

where V_T0 is the threshold voltage in the absence of body effect, φ_F the Fermi level,
V_S the source-to-substrate voltage, and γ the usual body-effect coefficient. An even
rougher model can be obtained by linearizing the body effect term [6],

Q_inv ≈ C_eff (V_G − V_T − η V_S)

where C_eff contains both the gate oxide capacitance and the effects of parasitic
capacitance, and η ≈ 1 + γ/(2√(2φ_F)). Within the linearized approximation, the change
in voltage on a storage node with capacitance Cstore after n transfers is
V_n = V_a + (1/η)(V_G − V_T − η V_a)(1 − exp(−αn))    (1)

with α = C_eff/C_store and where V_a is the initial voltage on the storage node. Due
to the dependence of the size of the transfer on the stored voltage, when the transfer
direction is reversed the increment size changes unless the stored voltages on the
capacitors are equal. This can be partially compensated for by using complementary
pairs of p-channel and n-channel charge transfer transistors, in effect using a string
of transmission gates to perform charge transfers. A weight decay operation can be
introduced by using the more complex string of charge transfer transistors shown
in figure lb. A weight decay is initiated by turning off the transistor in the middle
of the string (TI) and turning on all the other transistors. When the two sides of
the charge transfer string have equilibrated with their respective storage nodes, the
connections to the storage nodes (TM and TP) are turned off and the two charge
transfer transistors ( TCP and TCM ) are allowed to exchange charge by turning
on the transistor, TI, which separates them. When two equal charge packets have
been obtained TI is turned off again and the charge packets held by TCP and TCM
are injected back into the storage capacitors. The resulting change in the stored
weight is
Δv_decay = −(C_eff/C_ox)(v_+ − v_−)

which corresponds to multiplying the weight by a constant α < 1 as desired. Besides
allowing for weight decay, the more complex charge string shown in figure 1b can also
be used to obtain different size weight changes by using different clock sequences .
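The saturating transfer law of equation (1) and the multiplicative effect of weight decay can be sketched numerically. The parameter values below are illustrative assumptions, not measurements from the fabricated devices.

```python
import math

# Illustrative parameters (assumed, not taken from the chip).
ALPHA = 0.01   # alpha = C_eff / C_store, relative size of one transfer
ETA = 0.5      # eta, the linearized body-effect coefficient
VG, VT = 5.0, 1.0

def stored_voltage(va, n):
    """Equation (1): node voltage after n same-direction charge transfers."""
    return va + (1.0 / ETA) * (VG - VT - ETA * va) * (1.0 - math.exp(-ALPHA * n))

def decay_step(v_plus, v_minus, a=0.99):
    """One weight-decay cycle: scale the differential voltage by a < 1
    while conserving the total charge on the capacitor pair."""
    mean = 0.5 * (v_plus + v_minus)
    half_diff = 0.5 * a * (v_plus - v_minus)
    return mean + half_diff, mean - half_diff

# Transfers saturate: early increments are larger than late ones,
# because the increment shrinks as the stored voltage rises.
first = stored_voltage(0.0, 1) - stored_voltage(0.0, 0)
late = stored_voltage(0.0, 101) - stored_voltage(0.0, 100)
```

Repeated calls to `decay_step` reduce the stored weight exponentially toward zero, matching the behavior shown in figure 3.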
3 Experimental Evaluation
Test chips have been fabricated in both 1.25µ and 2.5µ CMOS, using the AT&T Twin Tub technology [11]. To evaluate the properties of an individual cell, especially the charge transfer mechanism, an isolated test structure consisting of five storage cells was built on one section of the 2.5µ chip. The storage cells were differentially read out by two-quadrant transconductance amplifiers whose input-output
characteristics are shown in figure 2. By using the bias current of the amplifiers as
an input, the amplifiers were used as two quadrant multipliers. Since many neural
network models call for a sigmoidal nonlinearity, no attempt was made to linearize
the operation of the multiplier. The output currents of the five multipliers were
summed by a single output wire and the voltages on each of the ten capacitors were
Adaptive Neural Networks Using MOS Charge Storage
Figure 2: A family of transfer characteristics from one of the transconductance
multipliers for several different values of stored weight. The different branches of
the curves are each separated by ten large charge transfers. No attempt was made
to linearize the input/output characteristic since many neural network models call
for non-linearities.
buffered by voltage followers to allow for detailed examination of the inner workings
of the cell.
After trading off between hold time, resolution and area we decided upon 20µ long charge transfer transistors and 2000µm² storage capacitors with 2.5µ technology, based upon the minimum channel width of 2.5µ. For a 20µ long channel and a 2.5V gate to source voltage the channel transit time τ0 is approximately 5 ns and charge transfer clock frequencies exceeding 10 MHz are possible without measurable pumping of charge into the substrate. The 2.5µ wide access transistors were 12µ long, leading to leakage rates from the individual capacitors of about 1% of the stored value in 100 s, limited by surface leakage in our unpassivated test structures.
Even with uncapped wafers, the leakage was small enough to allow all the tests
described here to be made without special provisions for environmental control of
either temperature or humidity. As mentioned earlier, the more complex set of charge transfer transistors needed to introduce weight decay can also be used to obtain several different sizes of charge transfer: a small weight change by using the two long transistors in sequence and a coarse one by treating the two long transistors and the isolation transistor separating them as a single device. Using the small weight changes, the worst case resolution was 10 bits (near ΔV = 0) and the results were in excellent agreement with the predictions of equation 1
Figure 3: The voltage on the two storage capacitors when the weight is initially
set to saturation using large increments and then reduced back towards zero using
weight decay. The granularity of the curves is an experimental artifact of the digital
voltmeter's resolution.
using the effective capacitance as a fitting parameter. In figure 3 we use large charge transfers to quickly increment the weight up to its maximum value and then reduce it back to zero with weight decays, demonstrating the expected exponential dependence of the stored voltage on the number of weight decays. Even under repeated cycling up and down through the entire differential voltage range of the cell, the total amount of charge on the cell remained constant for frequencies under 10 MHz with the exception of the expected losses due to leakage.
The long term goal of this work is to develop analog VLSI chips that are complete
'learning machines', capable of modifying their own weights when provided with input
data and some feedback based on the output of the network. However, the study
of learning algorithms is in a state of flux and few, if any, algorithms have been
optimized for VLSI implementation. Rather than cast an inappropriate algorithm
in silicon, we have designed our first chips to be used as adaptive systems with
an external controller, allowing us to develop algorithms that are appropriate for
the medium once we understand its properties. The networks are organized as
rectangular matrix multipliers with voltage inputs and current outputs with 46
inputs and 24 outputs in a 96 pin package for the 1.25µ chip. Since none of the
analog input/output lines of the chip are multiplexed, larger and more complicated
networks can be built by cascading several chips.
To the digital controller, the chip looks like a 1104 x 2 static RAM with some
extra clock inputs to drive the charge transfers. The charge transfer clock signals are
distributed globally and are connected to the individual strings of charge transfer
transistors through a pair of 2 x 2 cross bar switches controlled by two bits of static
RAM local to each cell. The use of a pair of cross bar switches is necessitated
by the facilities for weight decay; if the simpler charge transfer string shown in figure 1a were used then only a single switch would be needed. When both a
cell's RAMs are zeroed, the global charge transfer lines are not connected to the
charge transfer transistors. The global lines are connected to the individual strings
of charge transfer transistors either normally or in reverse depending upon which
RAM cell contains a one. By reversing the order of the signals on the charge
transfer lines, a weight change can also be reversed. Neglecting the dependence of
the size of the charge transfer upon stored weight, the RAMs represent a weight change vector ΔW with components Δwij ∈ {−1, 0, 1}. Once a weight change vector has been written serially to the RAMs, the weight changes along that vector are
made in parallel by manipulating the charge transfer lines. This architecture is
also a powerful way to implement programable networks of fixed weights since an
arbitrary matrix of 10 bit weights can be written to the chip in a few milliseconds
or less if an efficient decomposition of the desired weight vector into global charge
transfers is made. In view of the speed with which the chip can evaluate the output
of a network, an overhead of less than a percent for a refresh operation is acceptable
in many applications.
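The RAM-and-crossbar update scheme described above can be sketched as follows. The 46×24 array size comes from the text; the step size and pulse count are hypothetical placeholders, not properties of the silicon.

```python
import numpy as np

def apply_pulses(W, delta_sign, step=0.01, n_pulses=5):
    """Execute a weight-change vector in parallel.

    delta_sign holds the per-cell RAM setting: +1 (normal connection),
    -1 (reversed), or 0 (disconnected). Each global clock pulse moves
    one charge packet of size `step` in the selected direction, in
    every connected cell at once.
    """
    assert set(np.unique(delta_sign)).issubset({-1, 0, 1})
    return W + n_pulses * step * delta_sign

rng = np.random.default_rng(0)
W = np.zeros((46, 24))                        # weight matrix as on the chip
delta = rng.integers(-1, 2, size=W.shape)     # a weight-change vector
W2 = apply_pulses(W, delta)
```

Writing the sign matrix is the serial part; the charge transfers themselves happen in parallel, which is what makes the peak update rate so high.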
4 Conclusions
We have implemented a generic chip to facilitate studying adaptive networks by
building them in analog VLSI. By exploiting the well known properties of charge
storage and charge injection in a novel way, we have achieved a high enough level of
complexity (> 10³ weights and 10 bits of analog depth) to be interesting, in spite of the limitation to a modest 6.00 mm × 3.5 mm die size required by a multi-project
fabrication run. If the cell were optimized to represent fixed weight networks by
eliminating weight decay and bi-directional weight changes, the density could easily
be increased by a factor of two with no loss in resolution. Once a weight change
vector has been written to the RAM cells, charge transfers can be clocked at a rate of 2 MHz, which corresponds to a peak learning rate of 2 × 10⁹ updates/second,
exceeding the speeds of 'digital neurocomputers' based upon DSP chips by two
orders of magnitude.
Acknowledgements
A large group of people assisted the authors in taking this work from concept to
silicon, a few of whom we single out for mention here. The IDA design tools used
for the layouts were provided and supported by D. D. Hill and D. D. Shugard at
Murray Hill and the 1.25µ process was supported by D. Wroge and R. Ashton. The
first author wishes to acknowledge helpful discussions with H. P. Graf, S. Mackie
and G. Taylor, with special thanks to R. G. Swartz.
References
[1] Bernard Widrow and Samuel D. Stearns. Adaptive Signal Processing.
Prentice-Hall, Inc., Englewood Cliffs, N. J., 1985.
[2] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147, 1985.
[3] Jack Raffel, James Mann, Robert Berger, Antonio Soares, and Sheldon
Gilbert. A generic architecture for wafer-scale neuromorphic systems. In
IEEE First International Conference on Neural Networks. Volume III,
page 501, 1987.
[4] Joshua Alspector, Bhusan Gupta, and Robert B. Allen. Performance of a
stochastic learning microchip. In Advances in Neural Network Information
Processing Systems, 1988.
[5] William B. Wilson, Hisham Z. Massoud, Eric J. Swanson, Rhett T. George,
and Richard B. Fair. Measurement and modeling of charge feedthrough in
n-channel MOS analog switches. IEEE Journal of Solid-State Circuits,
SC-20(6):1206-1213, 1985.
[6] George Wegmann, Eric A. Vittoz, and Fouad Rahali. Charge injection in analog MOS switches. IEEE Journal of Solid-State Circuits,
SC-20(6):1091-1097, 1987.
[7] James A. Kuo, Robert W. Dutton, and Bruce A. Wooley. MOS pass
transistor turn-off transient analysis. IEEE Transactions on Electron
Devices, ED-33(10):1545-1555, 1986.
[8] James R. Kuo, Robert W. Dutton, and Bruce A. Wooley. Turn-off transients
in circular geometry MOS pass transistors. IEEE Journal Solid-State
Circuits, SC-21(5):837-844, 1986.
[9] Je-Hurn Shieh, Mahesh Patil, and Bing J. Sheu. Measurement and analysis of
charge injection in MOS analog switches. IEEE Journal of Solid State
Circuits, SC-22(2):277-281, 1987.
[10] R. E. Howard, D. B. Schwartz, and W. E. Hubbard. A programmable analog
neural network chip. IEEE Journal of Solid-State Circuits, 24, 1989.
[11] J. Argraz-Guerena, R. A. Ashton, W. J. Bertram, R. C. Melin, R. C. Sun, and
J. T. Clemens. Twin Tub III - A third generation CMOS. In Proceedings
of the International Electron Device Meeting, 1984. Citation P63-6.
Tempering Backpropagation Networks:
Not All Weights are Created Equal
Nicol N. Schraudolph
EVOTEC BioSystems GmbH
Grandweg 64
22529 Hamburg, Germany
nici@evotec.de
Terrence J. Sejnowski
Computational Neurobiology Lab
The Salk Institute for Biol. Studies
San Diego, CA 92186-5800, USA
terry@salk.edu
Abstract
Backpropagation learning algorithms typically collapse the network's
structure into a single vector of weight parameters to be optimized. We
suggest that their performance may be improved by utilizing the structural information instead of discarding it, and introduce a framework for
''tempering'' each weight accordingly.
In the tempering model, activation and error signals are treated as approximately independent random variables. The characteristic scale of weight
changes is then matched to that of the residuals, allowing structural properties such as a node's fan-in and fan-out to affect the local learning rate
and backpropagated error. The model also permits calculation of an upper
bound on the global learning rate for batch updates, which in turn leads
to different update rules for bias vs. non-bias weights.
This approach yields hitherto unparalleled performance on the family relations benchmark, a deep multi-layer network: for both batch learning
with momentum and the delta-bar-delta algorithm, convergence at the
optimal learning rate is sped up by more than an order of magnitude.
1 Introduction
Although neural networks are structured graphs, learning algorithms typically view them
as a single vector of parameters to be optimized. All information about a network's architecture is thus discarded in favor of the presumption of an isotropic weight space - the
notion that a priori all weights in the network are created equal. This serves to decouple
the learning process from network design and makes a large body of function optimization
techniques directly applicable to backpropagation learning.
But what if the discarded structural information holds valuable clues for efficient weight
optimization? Adaptive step size and second-order gradient techniques (Battiti, 1992) may
recover some of it, at considerable computational expense. Ad hoc attempts to incorporate
structural information such as the fan-in (Plaut et al., 1986) into local learning rates have become a familiar part of backpropagation lore; here we derive a more comprehensive framework - which we call tempering - and demonstrate its effectiveness.
Tempering is based on modeling the activities and error signals in a backpropagation network as independent random variables. This allows us to calculate activity- and weight-invariant upper bounds on the effect of synchronous weight updates on a node's activity.
We then derive appropriate local step size parameters by relating this maximal change in a
node's activity to the characteristic scale of its residual through a global learning rate.
Our subsequent derivation of an upper bound on the global learning rate for batch learning
suggests that the d.c. component of the error signal be given special treatment. Our experiments show that the resulting method of error shunting allows the global learning rate to
approach its predicted maximum, for highly efficient learning performance.
2 Local Learning Rates
Consider a neural network with feedforward activation given by
    xj = fj(yj),    yj = Σ_{i∈Aj} xi wij    (1)
where Aj denotes the set of anterior nodes feeding directly into node j, and fj is a nonlinear
(typically sigmoid) activation function. We imply that nodes are activated in the appropriate
sequence, and that some have their values clamped so as to represent external inputs.
With a local learning rate of ηj for node j, gradient descent in an objective function E produces the weight update

    Δwij = ηj δj xi    (2)
Linearizing fj around yj approximates the resultant change in activation xj as

    Δxj ≈ f′j(yj) Σ_{i∈Aj} xi Δwij = ηj δj f′j(yj) Σ_{i∈Aj} xi²    (3)
Our goal is to put the scale of Δxj in relation to that of the error signal δj. Specifically, when averaged over many training samples, we want the change in output activity of each node in response to each pattern limited to a certain proportion - given by the global learning rate η - of its residual. We achieve this by relating the variation of Δxj over the training set to that of the error signal:

    ⟨Δxj²⟩ ≤ η² ⟨δj²⟩    (4)
where (.) denotes averaging over training samples. Formally, this approach may be interpreted as a diagonal approximation of the inverse Fischer information matrix (Amari, 1995).
We implement (4) by deriving an upper bound for the left-hand side which is then equated
with the right-hand side. Replacing the activity-dependent slope of fj by its maximum value
    s(fj) ≡ max_u |f′j(u)|    (5)
and assuming that there are no correlations¹ between inputs xi and error δj, we obtain

    ⟨Δxj²⟩ ≤ ηj² s(fj)² ⟨δj²⟩ ξj    (6)

¹ Note that such correlations are minimized by the local weight update.
from (3), provided that

    ξj ≥ ξj* ≡ ⟨Σ_{i∈Aj} xi²⟩    (7)
We can now satisfy (4) by setting the local learning rate to
    ηj = η / (s(fj) √ξj)    (8)
There are several approaches to computing an upper bound on the total squared input power ξj*. One option would be to calculate the latter empirically during training, though this raises sampling and stability issues. For external inputs we may precompute or derive an upper bound based on prior knowledge of the training data. For inputs from other nodes in the network we assume independence and derive ξj from the range of their activation functions:

    ξj = Σ_{i∈Aj} ρ(fi)²,  where ρ(fi) ≡ max_u |fi(u)|    (9)
Note that when all nodes use the same activation function f, we obtain the well-known 1/√fan-in heuristic (Plaut et al., 1986) as a special case of (8).
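As a minimal sketch of equations (8) and (9) for tanh units (where s(f) = ρ(f) = 1), the local rate reduces to η/√(fan-in + 1) when a bias input is counted; the fan-in values below are arbitrary examples, not the benchmark's layer sizes.

```python
import math

def local_rate(eta, fan_in, slope_max=1.0, rho=1.0, bias=True):
    """Equation (8), with the bound of equation (9) standing in for xi_j.

    For tanh units (slope_max = rho = 1) this reduces to the familiar
    eta / sqrt(fan-in) heuristic, counting the bias as one extra input.
    """
    xi = (fan_in + (1 if bias else 0)) * rho ** 2   # upper bound on sum of xi^2
    return eta / (slope_max * math.sqrt(xi))

# Wider fan-in means smaller local steps for the same global rate.
rates = {fan_in: local_rate(0.04, fan_in) for fan_in in (3, 8, 99)}
```

The local rates are weight-invariant, so they can be computed once when the architecture is fixed.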
3 Error Backpropagation
In deriving local learning rates above we have tacitly used the error signal as a stand-in for
the residual proper, i.e. the distance to the target. For output nodes we can scale the error to
never exceed the residual:
(10)
Note that for the conventional quadratic error this simplifies to φj = s(fj). What about the remainder of the network? Unlike (Krogh et al., 1990), we do not wish to prescribe
definite targets (and hence residuals) for hidden nodes. Instead we shall use our bounds
and independence arguments to scale backpropagated error signals to roughly appropriate
magnitude. For this purpose we introduce an attenuation coefficient αi into the error backpropagation equation:

    δi = f′i(yi) αi Σ_{j∈Pi} wij δj    (11)
where Pi denotes the set of posterior nodes fed directly from node i. We posit that the appropriate variation for δi be no more than the weighted average of the variation of backpropagated errors:

    ⟨δi²⟩ ≤ (1/|Pi|) Σ_{j∈Pi} wij² ⟨δj²⟩    (12)
whereas, assuming independence between the δj and replacing the slope of fi by its maximum value, (11) gives us

    ⟨δi²⟩ ≤ αi² s(fi)² Σ_{j∈Pi} wij² ⟨δj²⟩    (13)
Again we equate the right-hand sides of both inequalities to satisfy (12), yielding
    αi ≡ 1 / (s(fi) √|Pi|)    (14)
Note that the incorporation of the weights into (12) is ad hoc, as we have no a priori reason
to scale a node's step size in proportion to the size of its vector of outgoing weights. We
have chosen (12) simply because it produces a weight-invariant value for the attenuation
coefficient. The scale of the backpropagated error could be controlled more rigorously, at
the expense of having to recalculate ai after each weight update.
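Equations (11) and (14) together amount to the following backward pass through one fully connected layer; the layer shapes and the choice of tanh are illustrative assumptions, not the benchmark architecture.

```python
import math
import numpy as np

def attenuated_backprop(W, delta_post, y_pre, slope_max=1.0):
    """Propagate error through weights W of shape (n_pre, n_post).

    Every pre-layer node here has fan-out |P_i| = n_post, so the
    attenuation coefficient of equation (14) is the same for all of them.
    """
    alpha = 1.0 / (slope_max * math.sqrt(W.shape[1]))   # eq. (14)
    fprime = 1.0 - np.tanh(y_pre) ** 2                  # slope of tanh
    return fprime * alpha * (W @ delta_post)            # eq. (11)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))
delta_pre = attenuated_backprop(W, rng.normal(size=4), rng.normal(size=10))
```

Since `alpha` depends only on the fan-out, it too can be fixed once per architecture rather than recomputed per update.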
4 Global Learning Rate
We now derive the appropriate global learning rate for the batch weight update
    Δwij ≡ ηj Σ_{t∈T} δj(t) xi(t)    (15)
over a non-redundant training sample T. Assuming independent and zero-mean residuals,
we then have
    ⟨Δx̄j²⟩ ≤ |T| η² ⟨δj²⟩    (16)

by virtue of (4). Under these conditions we can ensure

    ⟨Δx̄j²⟩ ≤ ⟨δj²⟩    (17)
i.e. that the variation of the batch weight update does not exceed that of the residual, by
using a global learning rate of
    η ≤ η* ≡ 1/√|T|    (18)
Even when redundancy in the training set forces us to use a lower rate, knowing the upper bound η* effectively allows an educated guess at η, saving considerable time in practice.
5 Error Shunting
It remains to deal with the assumption made above that the residuals be zero-mean, i.e. that ⟨δj⟩ = 0. Any d.c. component in the error requires a learning rate inversely proportional to the batch size - far below η*, the rate permissible for zero-mean residuals. This suggests handling the d.c. component of error signals separately. This is the proper job of the bias weight, so we update it accordingly:

    Δw0j = ηj Σ_{t∈T} δj(t)    (19)
In order to allow learning at rates close to η* for all other weights, their error signals are then centered by subtracting the mean:

    Δwij = ηj Σ_{t∈T} (δj(t) − ⟨δj⟩) xi(t)    (20)

         = ηj ( Σ_{t∈T} δj(t) xi(t) − ⟨xi⟩ Σ_{t∈T} δj(t) )    (21)
Note that both sums in (21) must be collected in batch implementations of backpropagation
anyway - the only additional statistic required is the average input activity ⟨xi⟩. Indeed
for batch update centering errors is equivalent to centering inputs, which is known to assist
learning by removing a large eigenvalue of the Hessian (LeCun et al., 1991). We expect
online implementations to perform best when both input and error signals are centered so
as to improve the stochastic approximation.
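A sketch of the shunted batch update, equations (19) to (21): the bias absorbs the d.c. component while the remaining weights see mean-centered errors, which is equivalent to centering the inputs. The data below are random placeholders, not the family relations patterns.

```python
import numpy as np

def shunted_update(X, delta, eta_j):
    """Shunted batch update for one node j.

    X:     (T, n_in) batch of inputs
    delta: (T,) error signals delta_j(t)
    Returns the bias update (d.c. component, eq. 19) and the
    centered weight update of eq. (21).
    """
    dc = delta.sum()
    d_bias = eta_j * dc                                  # eq. (19)
    d_w = eta_j * (X.T @ delta - X.mean(axis=0) * dc)    # eq. (21)
    return d_bias, d_w

rng = np.random.default_rng(1)
X = rng.normal(loc=2.0, size=(100, 5))     # inputs with a large mean
delta = rng.normal(loc=0.5, size=100)      # errors with a d.c. component
db, dw = shunted_update(X, delta, eta_j=0.1)

# Centering the inputs instead gives the same weight update.
dw_centered = 0.1 * ((X - X.mean(axis=0)).T @ delta)
```

Both sums in the update are collected by any batch implementation anyway, so the extra cost is just the running input mean.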
[Figure 1 diagram: the two local input layers (person 1 and relationship) feed through encoding layers into a central association layer and on to the output layer; effective learning rates ηeff per layer, from output down to input: 1.5η, .25η, .10η, .05η]
Figure 1: Backpropagation network for learning family relations (Hinton, 1986).
6 Experimental Setup
We tested these ideas on the family relations task (Hinton, 1986): a backpropagation network is given examples of a family member and relationship as input, and must indicate
on its output which family members fit the relational description according to an underlying family tree. Its architecture (Figure 1) consists of a central association layer of hidden
units surrounded by three encoding layers that act as informational bottlenecks, forcing the
network to make the deep structure of the data explicit.
The input is presented to the network in a canonical local encoding: for any given training
example, exactly one input in each of the two input layers is active. On account of the always
active bias input, the squared input power for tempering at these layers is thus C = 4. Since
the output uses the same local code, only one or two targets at a time will be active; we
therefore do not attenuate error signals in the immediately preceding layer. We use cross-entropy error and the logistic squashing function (1 + e^−y)^−1 at the output (giving φ = 1), but prefer the hyperbolic tangent for hidden units, with ρ(tanh) = s(tanh) = 1.
To illustrate the impact of tempering on this architecture we translate the combined effect
of local learning rate and error attenuation into an effective learning rate 2 for each layer,
shown on the right in Figure 1. We observe that effective learning rates are largest near the
output and decrease towards the input due to error attenuation. Contrary to textbook opinion
(LeCun, 1993; Haykin, 1994, page 162) we find that such unequal step sizes are in fact the
key to efficient learning here. We suspect that the logistic squashing function may owe its
popUlarity largely to the error attenuation side-effect inherent in its maximum slope of 114We expect tempering to be applicable to a variety of backpropagation learning algorithms;
here we present first results for batch learning with momentum and the delta-bar-delta
rule (Jacobs, 1988). Both algorithms were tested under three conditions: conventional,
tempered (as described in Sections 2 and 3), and tempered with error shunting. All experiments were performed with a customized simulator based on Xerion 3.1.3
For each condition the global learning rate η was empirically optimized (to single-digit precision) for fastest reliable learning performance, as measured by the sum of empirical mean
and standard deviation of epochs required to reach a given low value of the cost function.
All other parameters were held invariant across experiments; their values (shown in Table 1)
were chosen in advance so as not to bias the results.
2This is possible only for strictly layered networks, i.e. those with no shortcut (or "skip-through")
connections between topologically non-adjacent layers.
3 At the time of writing, the Xerion neural network simulator and its successor UTS are available
by anonymous file transfer from ai.toronto.edu, directory pub/xerion.
568
N. N. SCHRAUDOLPH. T. 1. SEJNOWSKI
Parameter                        Value    Parameter                          Value
training set size (= epoch)      100      zero-error radius around target    0.2
momentum parameter               0.9      acceptable error & weight cost     1.0
uniform initial weight range     ±0.3     delta-bar-delta gain increment     0.1
weight decay rate per epoch      10^-4    delta-bar-delta gain decrement     0.9

Table 1: Invariant parameter settings for our experiments.
7 Experimental Results
Table 2 lists the empirical mean and standard deviation (over ten restarts) of the number
of epochs required to learn the family relations task under each condition, and the optimal
learning rate that produced this performance. Training times for conventional backpropagation are quite long; this is typical for deep multi-layer networks. For comparison, Hinton
reports around 1,500 epochs on this problem when both learning rate and momentum have
been optimized (personal communication). Much faster convergence - though to a far
looser criterion - has recently been observed for online algorithms (O'Reilly, 1996).
Tempering, on the other hand, is seen here to speed up two batch learning methods by almost an order of magnitude. It reduces not only the average training time but also its coefficient of variation, indicating a more reliable optimization process. Note that tempering
makes simple batch learning with momentum run about twice as fast as the delta-bar-delta
algorithm. This is remarkable since delta-bar-delta uses online measurements to continually adapt the learning rate for each individual weight, whereas tempering merely prescales
it based on the network's architecture. We take this as evidence that tempering establishes
appropriate local step sizes upfront that delta-bar-delta must discover empirically.
This suggests that by using tempering to set the initial (equilibrium) learning rates for delta-bar-delta, it may be possible to reap the benefits of both prescaling and adaptive step size
control. Indeed Table 2 confirms that the respective speedups due to tempering and deltabar-delta multiply when the two approaches are combined in this fashion. Finally, the addition of error shunting increases learning speed yet further by allowing the global learning
rate to be brought close to the maximum of η* = 0.1 that we would predict from (18).
8 Discussion
In our experiments we have found tempering to dramatically improve speed and reliability
of learning. More network architectures, data sets and learning algorithms will have to be
"tempered" to explore the general applicability and limitations of this approach; we also
hope to extend it to recurrent networks and online learning. Error shunting has proven useful
in facilitating of near-maximal global learning rates for rapid optimization.
                        batch & momentum           delta-bar-delta
Condition               η        mean ± st.d.      η        mean ± st.d.
conventional            3·10^-3  2438 ± 1153       3·10^-4  696 ± 218
with tempering          1·10^-2  339 ± 95.0        3·10^-2  89.6 ± 11.8
tempering & shunting    4·10^-2  142 ± 27.1        9·10^-2  61.7 ± 8.1

Table 2: Epochs required to learn the family relations task.
Although other schemes may speed up backpropagation by comparable amounts, our approach has some unique advantages. It is computationally cheap to implement: local learning and error attenuation rates are invariant with respect to network weights and activities
and thus need to be recalculated only when the network architecture is changed.
More importantly, even advanced gradient descent methods typically retain the isotropic
weight space assumption that we improve upon; one would therefore expect them to benefit from tempering as much as delta-bar-delta did in the experiments reported here. For
instance, tempering could be used to set non-isotropic model-trust regions for conjugate and
second-order gradient descent algorithms.
Finally, by restricting ourselves to fixed learning rates and attenuation factors for now we
have arrived at a simplified method that is likely to leave room for further improvement.
Possible refinements include taking weight vector size into account when attenuating error
signals, or measuring quantities such as ⟨δ²⟩ online instead of relying on invariant upper
bounds. How such adaptive tempering schemes will compare to and interact with existing
techniques for efficient backpropagation learning remains to be explored.
Acknowledgements
We would like to thank Peter Dayan, Rich Zemel and Jenny Orr for being instrumental in
discussions that helped shape this work. Geoff Hinton not only offered invaluable comments, but is the source of both our simulator and benchmark problem. N. Schraudolph
received financial support from the McDonnell-Pew Center for Cognitive Neuroscience in
San Diego, and the Robert Bosch Stiftung GmbH.
References
Amari, S.-I. (1995). Learning and statistical inference. In Arbib, M. A., editor, The Handbook of Brain Theory and Neural Networks, pages 522-526. MIT Press, Cambridge.
Battiti, R. (1992). First- and second-order methods for learning: Between steepest descent
and Newton's method. Neural Computation, 4(2):141-166.
Haykin, S. (1994). Neural Networks: A Comprehensive Foundation. Macmillan, New York.
Hinton, G. (1986). Learning distributed representations of concepts. In Proceedings of
the Eighth Annual Conference of the Cognitive Science Society, pages 1-12, Amherst
1986. Lawrence Erlbaum, Hillsdale.
Jacobs, R. (1988). Increased rates of convergence through learning rate adaptation. Neural
Networks, 1:295-307.
Krogh, A., Thorbergsson, G., and Hertz, J. A. (1990). A cost function for internal representations. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems, volume 2, pages 733-740, Denver, CO, 1989. Morgan Kaufmann, San Mateo.
LeCun, Y. (1993). Efficient learning & second-order methods. Tutorial given at the NIPS
Conference, Denver, CO.
LeCun, Y., Kanter, I., and Solla, S. A. (1991). Second order properties of error surfaces:
Learning time and generalization. In Lippmann, R. P., Moody, J. E., and Touretzky,
D. S., editors, Advances in Neural Information Processing Systems, volume 3, pages
918-924, Denver, CO, 1990. Morgan Kaufmann, San Mateo.
O'Reilly, R. C. (1996). Biologically plausible error-driven learning using local activation
differences: The generalized recirculation algorithm. Neural Computation, 8.
Plaut, D., Nowlan, S., and Hinton, G. (1986). Experiments on learning by back propagation. Technical Report CMU-CS-86-126, Department of Computer Science, Carnegie
Mellon University, Pittsburgh, PA.
Finite State Automata that Recurrent
Cascade-Correlation Cannot Represent
Stefan C. Kremer
Department of Computing Science
University of Alberta
Edmonton, Alberta, CANADA T6H 5B5
Abstract
This paper relates the computational power of Fahlman's Recurrent
Cascade Correlation (RCC) architecture to that of finite state automata
(FSA). While some recurrent networks are FSA equivalent, RCC is not.
The paper presents a theoretical analysis of the RCC architecture in the
form of a proof describing a large class of FSA which cannot be realized
by RCC.
1 INTRODUCTION
Recurrent networks can be considered to be defined by two components: a network
architecture, and a learning rule. The former describes how a network with a given set
of weights and topology computes its output values, while the latter describes how the
weights (and possibly topology) of the network are updated to fit a specific problem. It is
possible to evaluate the computational power of a network architecture by analyzing the
types of computations a network could perform assuming appropriate connection weights
(and topology). This type of analysis provides an upper bound on what a network can be
expected to learn, since no system can learn what it cannot represent.
Many recurrent network architectures have been proven to be finite state automaton or
even Turing machine equivalent (see for example [Alon, 1991], [Goudreau, 1994],
[Kremer, 1995], and [Siegelmann, 1992]). The existence of such equivalence proofs
naturally gives confidence in the use of the given architectures.
This paper relates the computational power of Fahlman's Recurrent Cascade Correlation
architecture [Fahlman, 1991] to that of finite state automata. It is organized as follows:
Section 2 reviews the RCC architecture as proposed by Fahlman. Section 3 describes
finite state automata in general and presents some specific automata which will play an
important role in the discussions which follow. Section 4 describes previous work by other
authors evaluating RCC's computational power. Section 5 expands upon the previous
work, and presents a new class of automata which cannot be represented by RCC. Section
6 further expands the result of the previous section to identify an infinite number of other
unrealizable classes of automata. Section 7 contains some concluding remarks.
2 THE RCC ARCHITECTURE
The RCC architecture consists of three types of units: input units, hidden units and output
units. After training, a RCC network performs the following computation: First, the
activation values of the hidden units are initialized to zero. Second, the input unit
activation values are initialized based upon the input signal to the network. Third, each
hidden unit computes its new activation value. Fourth, the output units compute their new
activations. Then, steps two through four are repeated for each new input signal.
The third step of the computation, computing the activation value of a hidden unit, is
accomplished according to the formula:
a_j(t+1) = σ( Σ_{i=1..j-1} w_ij a_i(t+1) + w_jj a_j(t) )
Here, a_i(t) represents the activation value of unit i at time t, σ(·) represents a sigmoid
squashing function with finite range (usually from 0 to 1), and w_ij represents the weight
of the connection from unit i to unit j. That is, each unit computes its activation value by
multiplying the new activations of all lower-numbered units and its own previous
activation by a set of weights, summing these products, and passing the sum through a
logistic activation function. The recurrent weight w_jj from a unit to itself functions as a
sort of memory by transmitting a modulated version of the unit's old activation value.
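The update rule above can be sketched directly in code. The following Python sketch (our own illustration, not Fahlman's implementation) computes one time step for a cascade of hidden units: each hidden unit j sees the current input activations, the already-updated activations of lower-numbered hidden units, and its own previous activation through the recurrent self-weight.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def rcc_step(inputs, hidden, W):
    """One RCC time step.

    inputs -- activations of the input units at time t+1
    hidden -- hidden-unit activations at time t
    W      -- W[j] is the weight vector of hidden unit j: one weight per
              input, one per lower-numbered hidden unit, and a final
              recurrent self-weight w_jj
    Returns the hidden-unit activations at time t+1.
    """
    new = []
    for j, wj in enumerate(W):
        pre = list(inputs) + new          # inputs and lower units, at t+1
        net = sum(w * a for w, a in zip(wj[:-1], pre))
        net += wj[-1] * hidden[j]         # recurrent term w_jj * a_j(t)
        new.append(logistic(net))
    return new
```

Note that unit j's update uses the *new* activations of lower-numbered units, computed earlier in the same loop, exactly as in the cascaded formula above.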
The output units of the RCC architecture can be viewed as special cases of hidden units
which have weights of value zero for all connections originating from other output units.
This interpretation implies that any restrictions on the computational powers of general
hidden units will also apply to the output units. For this reason, we shall concern ourselves
exclusively with hidden units in the discussions which follow.
Finally, it should be noted that since this paper is about the representational power of the
RCC architecture, its associated learning rule will not be discussed here. The reader
wishing to know more about the learning rule, or requiring a more detailed description of
the operation of the RCC architecture, is referred to [Fahlman, 1991].
3 FINITE STATE AUTOMATA
A Finite State Automaton (FSA) [Hopcroft, 1979] is a formal computing machine defined
by a 5-tuple M=(Q,Σ,δ,q_0,F), where Q represents a finite set of states, Σ a finite input
alphabet, δ a state transition function mapping Q×Σ to Q, q_0 ∈ Q the initial state, and F ⊆ Q
a set of final or accepting states. FSA accept or reject strings of input symbols according
to the following computation: First, the FSA's current state is initialized to q_0. Second,
the next input symbol of the string, selected from Σ, is presented to the automaton by the
outside world. Third, the transition function, δ, is used to compute the FSA's new state
based upon the input symbol, and the FSA's previous state. Fourth, the acceptability of
the string is computed by comparing the current FSA state to the set of valid final states,
F. If the current state is a member of F then the automaton is said to accept the string of
input symbols presented so far. Steps two through four are repeated for each input symbol
presented by the outside world. Note that the steps of this computation mirror the steps
of an RCC network's computation as described above.
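The four-step computation can be written as a short loop. The sketch below (Python) takes the transition function δ as a dictionary from (state, symbol) pairs; the parity automaton used to exercise it is an assumed example, not one of the machines in Figure 1.

```python
def run_fsa(delta, q0, finals, symbols):
    """Run a finite state automaton over a string of input symbols.

    delta   -- dict mapping (state, symbol) -> next state
    q0      -- initial state
    finals  -- set of accepting states
    symbols -- iterable of input symbols
    Returns (final state, accepted?).
    """
    q = q0                            # step 1: initialize the current state
    for s in symbols:                 # step 2: next input symbol
        q = delta[(q, s)]             # step 3: apply the transition function
    return q, q in finals             # step 4: acceptability test
```

As in the RCC loop, the only persistent quantity carried between input symbols is the current state.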
It is often useful to describe specific automata by means of a transition diagram [Hopcroft,
1979]. Figure 1 depicts the transition diagrams of five FSA. In each case, the states, Q,
are depicted by circles, while the transitions defined by δ are represented as arrows from
the old state to the new state labelled with the appropriate input symbol. The arrow
labelled "Start" indicates the initial state, q_0; and final accepting states are indicated by
double circles.
We now define some terms describing particular FSA which we will require for the
following proof. The first concerns input signals which oscillate. Intuitively, the input
signal to a FSA oscillates if every p-th symbol is repeated, for p > 1. More formally, a
sequence of input symbols, s(t), s(t+1), s(t+2), ..., oscillates with a period of p if and
only if p is the minimum value such that: ∀t s(t)=s(t+p).
Our second definition concerns oscillations of a FSA's internal state, when the machine is
presented a certain sequence of input signals. Intuitively, a FSA's internal state can
oscillate in response to a given input sequence if there is some starting state for which
every subsequent ω-th state is repeated. Formally, a FSA's state can oscillate with a period
of ω in response to a sequence of input symbols, s(t), s(t+1), s(t+2), ..., if and only if
ω is the minimum value for which:

∃q_0 s.t. ∀t δ(q_0, s(t)) = δ(..., δ(δ(δ(q_0, s(t)), s(t+1)), s(t+2)), ..., s(t+ω))

The recursive nature of this formulation is based on the fact that a FSA's state depends on its
previous state, which in turn depends on the state before, etc.
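These oscillation definitions can be checked mechanically by driving an automaton with a periodic input and measuring the minimal ω at which the state sequence repeats. A sketch in Python; the two test automata, a two-state flip-flop and a three-state cycle standing in for automata a) and b) of Figure 1, are assumed reconstructions:

```python
def state_period(delta, q0, input_cycle, limit=100):
    """Minimal omega such that the state sequence eventually satisfies
    q(t) == q(t + omega), under the periodic input input_cycle."""
    q, seq = q0, []
    for t in range(limit):
        seq.append(q)
        q = delta[(q, input_cycle[t % len(input_cycle)])]
    tail = seq[limit // 2:]          # discard any initial transient
    for omega in range(1, len(tail) // 2):
        if all(tail[i] == tail[i + omega] for i in range(len(tail) - omega)):
            return omega
    return None
```

Under a constant input (p=1), the flip-flop yields ω=2 and the three-state cycle yields ω=3, matching the discussion of automata a) and b).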
We can now apply these two definitions to the FSA displayed in Figure 1. The automaton
labelled "a)" has a state which oscillates with a period of ω=2 in response to any sequence
consisting of 0s and 1s (e.g. "00000...", "11111...", "010101...", etc.). Thus, we can
say that it has a state cycle of period ω=2 (i.e. q_0q_1q_0q_1...), when its input cycles with a
period of p=1 (i.e. "0000..."). Similarly, when automaton b)'s input cycles with period
p=1 (i.e. "000000..."), its state will cycle with period ω=3 (i.e. q_0q_1q_2q_0q_1q_2...).

For automaton c), things are somewhat more complicated. When the input is the sequence
"0000...", the state sequence will either be q_0q_0q_0q_0... or q_1q_1q_1q_1... depending on the
initial state. On the other hand, when the input is the sequence "1111...", the state
sequence will alternate between q_0 and q_1. Thus, we say that automaton c) has a state cycle
of ω=2 when its input cycles with period p=1. But, this automaton can also have larger
state cycles. For example, when the input oscillates with a period p=2 (i.e.
"01010101..."), then the state of the automaton will oscillate with a period ω=4 (i.e.
q_0q_0q_1q_1q_0q_0q_1q_1...). Thus, we can also say that automaton c) has a state cycle of ω=4
when its input cycles with period p=2.

The remaining automata also have state cycles for various input cycles, but will not be
discussed in detail. The importance of the relationship between input period (p) and the
state period (ω) will become clear shortly.
4 PREVIOUS RESULTS CONCERNING THE COMPUTATIONAL
POWER OF RCC
The first investigation into the computational powers of RCC was performed by Giles et
al. [Giles, 1995]. These authors proved that the RCC architecture, regardless of
connection weights and number of hidden units, is incapable of representing any FSA
which "for the same input has an output period greater than 2" (p. 7). Using our
oscillation definitions above, we can re-express this result as: if a FSA's input oscillates
with a period of p=1 (i.e. input is constant), then its state can oscillate with a period of
at most ω=2. As already noted, Figure 1b) represents a FSA whose state oscillates with
a period of ω=3 in response to an input which oscillates with a period of p=1. Thus,
Giles et al.'s theorem proves that the automaton in Figure 1b) cannot be implemented (and
hence learned) by a RCC network.
[Transition diagrams of the five automata a) through e): states drawn as circles, transitions as arrows labelled with 0 and 1, "Start" arrows marking the initial states, and double circles marking accepting states.]

Figure 1: Finite State Automata.
Giles et al. also examined the automata depicted in Figures 1a) and 1c). However, unlike
the formal result concerning FSA b), the authors' conclusions about these two automata
were of an empirical nature. In particular, the authors noted that while automata which
oscillated with a period of 2 under constant input (i.e. Figure 1a)) were realizable, the
automaton of 1c) appeared not to be realizable by RCC. Giles et al. could not account
for this last observation by a formal proof.
5 AUTOMATA WITH CYCLES UNDER ALTERNATING INPUT
We now turn our attention to the question: why is a RCC network unable to learn the
automaton of 1c)? We answer this question by considering what would happen if 1c) were
realizable. In particular, suppose that the input units of a RCC network which implements
automaton 1c) are replaced by the hidden units of a RCC network implementing 1a). In
this situation, the hidden units of 1a) will oscillate with a period of 2 under constant input.
But if the inputs to 1c) oscillate with a period of 2, then the state of 1c) will oscillate with
a period of 4. Thus, the combined network's state would oscillate with a period of four
under constant input. Furthermore, the cascaded connectivity scheme of the RCC
architecture implies that a network constructed by treating one network's hidden units as
the input units of another, would not violate any of the connectivity constraints of RCC.
In other words, if RCC could implement the automaton of 1c), then it would also be able
to implement a network which oscillates with a period of 4 under constant input. Since
Giles et al. proved that the latter cannot be the case, it must also be the case that RCC
cannot implement the automaton of 1c).

The line of reasoning used here to prove that the FSA of Figure 1c) is unrealizable can also
be applied to many other automata. In fact, any automaton whose state oscillates with a
period of more than 2 under input which oscillates with a period 2, could be used to
construct one of the automata proven to be illegal by Giles. This implies that RCC cannot
implement any automaton whose state oscillates with a period of greater than ω=2 when
its input oscillates with a period of p=2.
6 AUTOMATA WITH CYCLES UNDER OSCILLATING INPUT
Giles et al.'s theorem can be viewed as defining a class of automata which cannot be
implemented by the RCC architecture. The proof in Section 5 adds another class of
automata which also cannot be realized. More precisely, the two proofs concern inputs
which oscillate with periods of one and two respectively. It is natural to ask whether
further proofs for state cycles can be developed when the input oscillates with a period of
greater than two. We now present the central theorem of this paper, a unified definition
of unrealizable automata:

Theorem: If the input signal to a RCC network oscillates with a period, p, then the
network can represent only those FSA whose outputs form cycles of length ω, where
p mod ω = 0 if p is even and 2p mod ω = 0 if p is odd.
To prove this theorem we will first need to prove a simpler one relating the rate of
oscillation of the input signal to one node in an RCC network to the rate of oscillation of
that node's output signal. By "the input signal to one node" we mean the weighted sum
of all activations of all connected nodes (i.e. all input nodes, and all lower-numbered
hidden nodes), but not the recurrent signal. I.e.:

A(t+1) = Σ_{i=1..j-1} w_ij a_i(t+1)
Using this definition, it is possible to rewrite the equation to compute the activation of node
j (given in Section 2) as:

a_j(t+1) = σ( A(t+1) + w_jj a_j(t) )

But if we assume that the input signal oscillates with a period of p, then every value of
A(t+1) can be replaced by one of a finite number of input signals (A_0, A_1, A_2, ..., A_{p-1}). In
other words, A(t+1) = A_{t mod p}. Using this substitution, it is possible to repeatedly expand
the addend of the previous equation to derive the formula:

a_j(t+1) = σ( A_{t mod p} + w_jj · σ( A_{(t-1) mod p} + w_jj · σ( A_{(t-2) mod p} + w_jj · ...
           σ( A_{(t-p+1) mod p} + w_jj · a_j(t-p+1) ) ... ) ) )
Finite State Automata that Recurrent Cascade-Correlation Cannot Represent
617
The unravelling of the recursive equation now allows us to examine the relationship
between a_j(t+1) and a_j(t-p+1). Specifically, we note that if w_jj > 0 or if p is even then
a_j(t+1) = f(a_j(t-p+1)) implies that f is a monotonically increasing function. Furthermore,
since σ is a function with finite range, f must also have finite range.

It is well known that for any monotonically increasing function with finite range, f, the
sequence f(x), f(f(x)), f(f(f(x))), ... is guaranteed to monotonically approach a fixed point
(where f(x)=x). This implies that the sequence a_j(t+1), a_j(t+p+1), a_j(t+2p+1), ...
must also monotonically approach a fixed point (where a_j(t+1) = a_j(t-p+1)). In other
words, the sequence does not oscillate. Since every p-th value of a_j(t) approaches a fixed
point, the sequence a_j(t), a_j(t+1), a_j(t+2), ... can have a period of at most p, and must
have a period which divides p evenly. We state this as our first lemma:
Lemma 1: If A(t) oscillates with even period, p, or if w_jj > 0, then state unit j's activation
value must oscillate with a period ω, where p mod ω = 0.
We must now consider the case where w_jj < 0 and p is odd. In this case,
a_j(t+1) = f(a_j(t-p+1)) implies that f is a monotonically decreasing function. But, in this
situation the function f²(x) = f(f(x)) must be monotonically increasing with finite range.
This implies that the sequence a_j(t+1), a_j(t+2p+1), a_j(t+4p+1), ... must monotonically
approach a fixed point (where a_j(t+1) = a_j(t-2p+1)). This in turn implies that the sequence
a_j(t), a_j(t+1), a_j(t+2), ... can have a period of at most 2p, and must have a period which
divides 2p evenly. Once again, we state this result in a lemma:

Lemma 2: If A(t) oscillates with odd period p, and if w_jj < 0, then state unit j must
oscillate with a period ω, where 2p mod ω = 0.
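The fixed-point arguments behind Lemmas 1 and 2 are easy to verify numerically. Iterating x ← σ(A + w·x) with w > 0 gives a monotone increasing map whose orbit settles to a fixed point, while with w < 0 the map is decreasing but its square is increasing, so the orbit settles to a cycle of period at most 2. The constants below are illustrative choices, not values from the paper:

```python
import math

def iterate(A, w, x0, steps=200):
    """Iterate x <- logistic(A + w * x), returning the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(1.0 / (1.0 + math.exp(-(A + w * xs[-1]))))
    return xs

# w > 0: the map is monotone increasing, so the orbit converges to a fixed point.
converging = iterate(A=0.3, w=2.0, x0=0.1)
# w < 0: the map is decreasing, but its square is increasing, so every second
# iterate converges -- the orbit has period at most 2.
oscillating = iterate(A=0.0, w=-8.0, x0=0.1)
```

With the positive self-weight the trajectory becomes constant; with the negative one it settles into a two-cycle, never anything longer, exactly as the lemmas require.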
Lemmas 1 and 2 relate the rate of oscillation of the weighted sum of input signals and
lower-numbered unit activations, A(t), to that of unit j. However, the theorem which we
wish to prove relates the rate of oscillation of only the RCC network's input signal to the
entire hidden unit activations. To prove the theorem, we use a proof by induction on the
unit number, i:

Basis: Node i=1 is connected only to the network inputs. Therefore, if the input signal
oscillates with period p, then node i can only oscillate with period ω, where p mod ω = 0 if
p is even and 2p mod ω = 0 if p is odd. (This follows from Lemmas 1 and 2.)

Assumption: If the input signal to the network oscillates with period p, then node i can
only oscillate with period ω, where p mod ω = 0 if p is even and 2p mod ω = 0 if p is odd.

Proof: If the Assumption holds for all nodes i, then Lemmas 1 and 2 imply that it must
also hold for node i+1. □
This proves the theorem:

Theorem: If the input signal to a RCC network oscillates with a period, p, then the
network can represent only those FSA whose outputs form cycles of length ω, where
p mod ω = 0 if p is even and 2p mod ω = 0 if p is odd.
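The theorem's admissibility condition reduces to a one-line predicate. The sketch below (Python) checks whether a state period ω is representable for a given input period p, and reproduces the cases discussed in this paper:

```python
def representable(p, omega):
    """Admissible RCC state period omega for input period p:
    p mod omega == 0 when p is even, 2p mod omega == 0 when p is odd."""
    if p % 2 == 0:
        return p % omega == 0
    return (2 * p) % omega == 0
```

For constant input (p=1) only ω=1 or ω=2 pass, recovering Giles et al.'s result; for p=2 a state period of 4 fails; and for p=3 a state cycle of length 9 fails, ruling out the automaton with that cycle.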
7 CONCLUSIONS
It is interesting to note that both Giles et al.'s original proof and the constructive proof by
contradiction described in Section 5 are special cases of the theorem. Specifically, Giles
et al.'s original proof concerns input cycles of length p=1. Applying the theorem of
Section 6 proves that an RCC network can only represent those FSA whose state transitions
form cycles of length ω, where 2(1) mod ω = 0, implying that state cannot oscillate with a
period of greater than 2. This is exactly what Giles et al. concluded, and proves that
(among others) the automaton of Figure 1b) cannot be implemented by RCC.
Similarly, the proof of Section 5 concerns input cycles of length p=2. Applying our
theorem proves that an RCC network can only represent those machines whose state
transitions form cycles of length ω, where (2) mod ω = 0. This again implies that state
cannot oscillate with a period greater than 2, which is exactly what was proven in Section
5. This proves that the automaton of Figure 1c) (among others) cannot be implemented
by RCC.
In addition to unifying both the results of Giles et al. and Section 5, the theorem of
Section 6 also accounts for many other FSA which are not representable by RCC. In fact,
the theorem identifies an infinite number of other classes of non-representable FSA (for
p=3, p=4, p=5, ...). Each class itself of course contains an infinite number of machines.
Careful examination of the automaton illustrated in Figure 1d) reveals that it contains a
state cycle of length 9 (q_0q_1q_2q_1q_2q_3q_2q_3q_4q_0q_1q_2q_1q_2q_3q_2q_3q_4...) in response to an input cycle
of length 3 ("001001..."). Since this is not one of the allowable input/state cycle
relationships defined by the theorem, it can be concluded that the automaton of Figure 1d)
(among others) cannot be represented by RCC.
Finally, it should be noted that it remains unknown if the classes identified by this paper's
theorem represent the complete extent of RCC's computational limitations. Consider for
example the automaton of Figure 1e). This device has no input/state cycles which violate
the theorem, thus we cannot conclude that it is unrepresentable by RCC. Of course, the
issue of whether or not this particular automaton is representable is of little interest.
However, the class of automata to which the theorem does not apply, which includes
automaton 1e), requires further investigation. Perhaps all automata in this class are
representable; perhaps there are other subclasses (not identified by the theorem) which
RCC cannot represent. This issue will be addressed in future work.
References
N. Alon, A. Dewdney, and T. Ott, Efficient simulation of finite automata by neural nets,
Journal of the Association for Computing Machinery, 38 (2) (1991) 495-514.
S. Fahlman, The recurrent cascade-correlation architecture, in: R. Lippmann, J. Moody
and D. Touretzky, Eds., Advances in Neural Information Processing Systems 3 (Morgan
Kaufmann, San Mateo, CA, 1991) 190-196.
C.L. Giles, D. Chen, G.Z. Sun, H.H. Chen, Y.C. Lee, and M.W. Goudreau,
Constructive Learning of Recurrent Neural Networks: Limitations of Recurrent Cascade
Correlation and a Simple Solution, IEEE Transactions on Neural Networks, 6 (4) (1995)
829-836.
M. Goudreau, C. Giles, S. Chakradhar, and D. Chen, First-order vs. second-order single-layer recurrent neural networks, IEEE Transactions on Neural Networks, 5 (3) (1994) 511-513.
J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages and
Computation (Addison-Wesley, Reading, MA, 1979).
S.C. Kremer, On the Computational Power of Elman-style Recurrent Networks, IEEE
Transactions on Neural Networks, 6 (4) (1995) 1000-1004.
H.T. Siegelmann and E.D. Sontag, On the Computational Power of Neural Nets, in:
Proceedings of the Fifth ACM Workshop on Computational Learning Theory, (ACM, New
York, NY, 1992) 440-449.
115 | 1,102 | Hierarchical Recurrent Neural Networks for
Long-Term Dependencies
Yoshua Bengio?
Dept. Informatique et
Recherche Operationnelle
Universite de Montreal
Montreal, Qc H3C-3J7
bengioy@iro.umontreal.ca
Salah El Hihi
Dept. Informatique et
Recherche Operationnelle
Universite de Montreal
Montreal, Qc H3C-3J7
elhihi@iro.umontreal.ca
Abstract
We have already shown that extracting long-term dependencies from sequential data is difficult, both for deterministic dynamical systems such
as recurrent networks, and probabilistic models such as hidden Markov
models (HMMs) or input/output hidden Markov models (IOHMMs). In
practice, to avoid this problem, researchers have used domain specific
a-priori knowledge to give meaning to the hidden or state variables representing past context. In this paper, we propose to use a more general
type of a-priori knowledge, namely that the temporal dependencies are
structured hierarchically. This implies that long-term dependencies are
represented by variables with a long time scale. This principle is applied
to a recurrent network which includes delays and multiple time scales. Experiments confirm the advantages of such structures. A similar approach
is proposed for HMMs and IOHMMs.
1
Introduction
Learning from examples basically amounts to identifying the relations between random
variables of interest. Several learning problems involve sequential data, in which the variables are ordered (e.g., time series). Many learning algorithms take advantage of this
sequential structure by assuming some kind of homogeneity or continuity of the model
over time, e.g., by sharing parameters for different times, as in Time-Delay Neural Networks (TDNNs) (Lang, Waibel and Hinton, 1990), recurrent neural networks (Rumelhart,
Hinton and Williams, 1986), or hidden Markov models (Rabiner and Juang, 1986). This
general a-priori assumption considerably simplifies the learning problem.
In previous papers (Bengio, Simard and Frasconi, 1994; Bengio and Frasconi, 1995a), we
have shown for recurrent networks and Markovian models that, even with this assumption,
dependencies that span longer intervals are significantly harder to learn. In all of the
systems we have considered for learning from sequential data, some form of representation
of context (or state) is required (to summarize all "useful" past information). The "hard
learning" problem is to learn to represent context, which involves performing the proper
* also, AT&T Bell Labs, Holmdel, NJ 07733
credit assignment through time. Indeed, in practice, recurrent networks (e.g., injecting
prior knowledge for grammar inference (Giles and Omlin, 1992; Frasconi et al., 1993))
and HMMs (e.g., for speech recognition (Levinson, Rabiner and Sondhi, 1983; Rabiner
and Juang, 1986)) work quite well when the representation of context (the meaning of the
state variable) is decided a-priori. The hidden variable is not any more completely hidden.
Learning becomes much easier. Unfortunately, this requires a very precise knowledge of
the appropriate state variables, which is not available in many applications.
We have seen that the successes ofTDNNs, recurrent networks and HMMs are based on a
general assumption on the sequential nature of the data. In this paper, we propose another,
simple, a-priori assumption on the sequences to be analyzed: the temporal dependencies
have a hierarchical structure. This implies that dependencies spanning long intervals are
"robust" to small local changes in the timing of events, whereas dependencies spanning
short intervals are allowed to be more sensitive to the precise timing of events. This yields
a multi-resolution representation of state information. This general idea is not new and
can be found in various approaches to learning and artificial intelligence. For example, in
convolutional neural networks, both for sequential data with TDNNs (Lang, Waibel and
Hinton, 1990), and for 2-dimensional data with MLCNNs (LeCun et al., 1989; Bengio,
LeCun and Henderson, 1994), the network is organized in layers representing features
of increasing temporal or spatial coarseness. Similarly, mostly as a tool for analyzing
and preprocessing sequential or spatial data, wavelet transforms (Daubechies, 1990) also
represent such information at mUltiple resolutions. Multi-scale representations have also
been proposed to improve reinforcement learning systems (Singh , 1992; Dayan and Hinton,
1993; Sutton, 1995) and path planning systems. However, with these algorithms, one
generally assumes that the state of the system is observed, whereas, in this paper we
concentrate on the difficulty of learning what the state variable should represent. A
related idea using a hierarchical structure was presented in (Schmidhuber, 1992).
On the HMM side, several researchers (Brugnara et al., 1992; Suaudeau, 1994) have
attempted to improve HMMs for speech recognition to better model the different types
of variables, intrinsically varying at different time scales in speech. In those papers, the
focus was on setting an a-priori representation, not on learning how to represent context.
In section 2, we attempt to draw a common conclusion from the analyses performed on
recurrent networks and HMMs to learn to represent long-term dependencies. This will
justify the proposed approach, presented in section 3. In section 4 a specific hierarchical
model is proposed for recurrent networks, using different time scales for different layers of
the network. Experiments performed with this model are described in section 4. Finally,
we discuss a similar scheme for HMMs and IOHMMs in section 5.
2
Too Many Products
In this section, we take another look at the analyses of (Bengio, Simard and Frasconi, 1994)
and (Bengio and Frasconi, 1995a), for recurrent networks and HMMs respectively. The
objective is to draw a parallel between the problems encountered with the two approaches,
in order to guide us towards some form of solution, and justify the proposals made here.
First, let us consider the deterministic dynamical systems (Bengio, Simard and Frasconi, 1994) (such as recurrent networks), which map an input sequence u_1, ..., u_T to an output sequence y_1, ..., y_T. The state or context information is represented at each time t by a variable x_t, for example the activities of all the hidden units of a recurrent network:

x_t = f_t(x_{t-1})    (1)

where u_t is the system input at time t and f_t is a differentiable function (such as tanh(W x_{t-1} + u_t)). When the sequence of inputs u_1, u_2, ..., u_T is given, we can write x_t = f_t(x_{t-1}) = f_t(f_{t-1}(... f_1(x_0)) ...). A learning criterion C_t yields gradients on outputs, and therefore on the state variables x_t. Since parameters are shared across time, learning using a gradient-based algorithm depends on the influence of the parameters W on C_t through all time steps before t:

\frac{\partial C_t}{\partial W} = \sum_{\tau} \frac{\partial C_t}{\partial x_t} \frac{\partial x_t}{\partial x_\tau} \frac{\partial x_\tau}{\partial W}    (2)
The Jacobian matrix of derivatives ∂x_t/∂x_τ can further be factored as follows:

\frac{\partial x_t}{\partial x_\tau} = \frac{\partial x_t}{\partial x_{t-1}} \frac{\partial x_{t-1}}{\partial x_{t-2}} \cdots \frac{\partial x_{\tau+1}}{\partial x_\tau}    (3)
Our earlier analysis (Bengio, Simard and Frasconi, 1994) shows that the difficulty revolves around the matrix product in equation 3. In order to reliably "store" information in the dynamics of the network, the state variable x_t must remain in regions where the norm of the Jacobian ∂f_t/∂x_t is less than 1 (i.e., near enough to a stable attractor representing the stored information). However, the above products then rapidly converge to 0 as t - τ increases. Consequently, the sum in (2) is dominated by terms corresponding to short-term dependencies (t - τ small).
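The geometric decay behind this argument is easy to check numerically. The following sketch (an illustration added here, not the paper's code) multiplies random Jacobian-like matrices whose largest singular value is held at 0.9 and tracks the norm of the running product of equation 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def contractive_jacobian(n, norm=0.9):
    """A random n x n matrix rescaled so its largest singular value equals `norm` < 1."""
    J = rng.standard_normal((n, n))
    return J * (norm / np.linalg.norm(J, 2))

n = 10
product = np.eye(n)
norms = []
for _ in range(50):
    # One more factor dx_t/dx_{t-1} in the product of equation (3).
    product = contractive_jacobian(n) @ product
    norms.append(np.linalg.norm(product, 2))

# Submultiplicativity gives ||product|| <= 0.9**k: a geometric decay, so
# gradient contributions from remote time steps all but vanish.
assert norms[-1] <= (0.9 ** 50) * 1.01
```

After 50 factors the norm is below 0.9^50 ≈ 5e-3 (and usually far smaller), which is the sense in which terms with large t - τ are drowned out in the sum of equation 2.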
Let us now consider the case of Markovian models (including HMMs and IOHMMs (Bengio and Frasconi, 1995b)). These are probabilistic models, either of an "output" sequence P(y_1 ... y_T) (HMMs) or of an output sequence given an input sequence P(y_1 ... y_T | u_1 ... u_T) (IOHMMs). Introducing a discrete state variable z_t and using Markovian assumptions of independence, this probability can be factored in terms of transition probabilities P(z_t | z_{t-1}) (or P(z_t | z_{t-1}, u_t)) and output probabilities P(y_t | z_t) (or P(y_t | z_t, u_t)). According to the model, the distribution of the state z_t at time t given the state z_τ at an earlier time τ is given by the matrix

P(z_t \mid z_\tau) = P(z_t \mid z_{t-1}) \, P(z_{t-1} \mid z_{t-2}) \cdots P(z_{\tau+1} \mid z_\tau)    (4)
where each of the factors is a matrix of transition probabilities (conditioned on inputs in the case of IOHMMs). Our earlier analysis (Bengio and Frasconi, 1995a) shows that the difficulty in representing and learning to represent context (i.e., learning what z_t should represent) revolves around equation 4. The matrices in the above equations have one eigenvalue equal to 1 (because of the normalization constraint) and the others ≤ 1. In the case in which all eigenvalues are 1, the matrices have only 1's and 0's, i.e., we obtain deterministic dynamics for IOHMMs or pure cycles for HMMs (which cannot be used to model most interesting sequences). Otherwise the above product converges to a lower rank matrix (some or most of the eigenvalues converge toward 0). Consequently, P(z_t | z_τ) becomes more and more independent of z_τ as t - τ increases. Therefore, both representing and learning context become more difficult as the span of dependencies increases or when the Markov model is more non-deterministic (transition probabilities not close to 0 or 1).
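This diffusion can be seen directly by raising a transition matrix to a power, as in the product of equation 4. In this small sketch (our illustration; the 3-state chain is arbitrary), the rows of the 30-step matrix, i.e. the distributions of z_t for each possible earlier state, become numerically indistinguishable:

```python
import numpy as np

# Rows are P(z_t | z_{t-1} = i) for a fairly non-deterministic 3-state chain
# (transition probabilities not close to 0 or 1).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

# The product of equation (4) over 30 time steps.
P30 = np.linalg.matrix_power(P, 30)

# All rows have converged to the same stationary distribution, so the
# distribution of z_t no longer depends on the state thirty steps earlier.
row_spread = P30.max(axis=0) - P30.min(axis=0)
assert row_spread.max() < 1e-9
```

The convergence rate is governed by the second-largest eigenvalue magnitude (here 0.3), so a few tens of steps suffice to erase the influence of the starting state.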
Clearly, a common trait of both analyses lies in taking too many products, too many time steps, or too many transformations to relate the state variable at time τ with the state variable at time t > τ, as in equations 3 and 4. Therefore the idea presented in the next section is centered on allowing several paths between the state at time τ and the state at time t, some with few "transformations" and some with many. At least through those with few transformations, we expect context information (forward) and credit assignment (backward) to propagate more easily over longer time spans than through "paths" involving many transformations.
3
Hierarchical Sequential Models
Inspired by the above analysis we introduce an assumption about the sequential data to
be modeled, although it will be a very simple and general a-priori on the structure of the
data. Basically, we will assume that the sequential structure of data can be described
hierarchically: long-term dependencies (e.g., between two events remote from each other
in time) do not depend on a precise time scale (Le., on the precise timing of these events).
Consequently, in order to represent a context variable taking these long-term dependencies
into account, we will be able to use a coarse time scale (or a Slowly changing state variable).
Therefore, instead of a single homogeneous state variable, we will introduce several levels
of state variables, each "working" at a different time scale. To implement such a multi-resolution representation of context in a discrete-time system, two basic approaches can be considered. Either the higher level state variables change value less often or they are constrained to change more slowly at each time step.

Figure 1: Four multi-resolution recurrent architectures used in the experiments. Small squares represent a discrete delay, and numbers near each neuron represent its time scale. The architectures B to E have respectively 2, 3, 4, and 6 time scales.

In our experiments, we have considered input and output variables both at the shortest time scale (highest frequency), but one of the potential advantages of the approach presented here is that it becomes very simple to incorporate input and output variables that operate at different time scales.
For example, in speech recognition and synthesis, the variable of interest is not only
the speech signal itself (fast) but also slower-varying variables such as prosodic (average
energy, pitch, etc ... ) and phonemic (place of articulation, phoneme duration) variables.
Another example is in the application of learning algorithms to financial and economic
forecasting and decision taking. Some of the variables of interest are given daily, others
weekly, monthly, etc ...
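The two ways of obtaining a slow higher-level state variable mentioned above can be written in a few lines. This is our own sketch (the period k and the leak rate are arbitrary illustrative choices, not values from the paper): the first variant changes value only every k steps, the second is constrained to move only a little at every step.

```python
import numpy as np

def slow_by_subsampling(inputs, k=4):
    """State variable that changes value only every k time steps."""
    state, trace = 0.0, []
    for t, u in enumerate(inputs):
        if t % k == 0:                    # update at the coarse time scale only
            state = float(np.tanh(u))
        trace.append(state)
    return trace

def slow_by_leaky_update(inputs, leak=0.9):
    """State variable constrained to change slowly at every time step."""
    state, trace = 0.0, []
    for u in inputs:
        state = leak * state + (1 - leak) * float(np.tanh(u))
        trace.append(state)
    return trace

signal = np.sin(0.9 * np.arange(40))
coarse = slow_by_subsampling(signal)
smooth = slow_by_leaky_update(signal)

# The subsampled state is piecewise constant over blocks of k steps ...
assert coarse[0] == coarse[1] == coarse[2] == coarse[3]
# ... and the leaky state never moves by more than (1 - leak) * 2 per step.
assert np.abs(np.diff(smooth)).max() <= 0.2 + 1e-9
```

Either mechanism yields a state that varies more slowly than its input, which is what lets it carry context over long spans.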
4
Hierarchical Recurrent Neural Network: Experiments
As in TDNNs (Lang, Waibel and Hinton, 1990) and reverse-TDNNs (Simard and LeCun,
1992), we will use discrete time delays and subsampling (or oversampling) in order to
implement the multiple time scales. In the time-unfolded network, paths going through
the recurrences in the slow varying units (long time scale) will carry context farther,
while paths going through faster varying units (short time scale) will respond faster to
changes in input or desired changes in output. Examples of such multi-resolution recurrent
neural networks are shown in Figure 1. Two sets of simple experiments were performed to
validate some of the ideas presented in this paper. In both cases, we compare a hierarchical
recurrent network with a single-scale fully-connected recurrent network.
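A minimal time-unfolded forward pass for such a hierarchy might look as follows. This is our own sketch, not the exact architectures of Figure 1 (the layer sizes, random weights, and the doubling of the time scale per layer are illustrative assumptions): layer s refreshes its state only every 2**s steps, so recurrence at the top carries context over proportionally longer intervals.

```python
import numpy as np

rng = np.random.default_rng(1)

def hierarchical_rnn_forward(inputs, n_layers=3, hidden=5):
    """Forward pass of a multi-scale recurrent net: layer s updates every 2**s steps."""
    dim_in = inputs.shape[1]
    w_in = [0.3 * rng.standard_normal((hidden, dim_in if s == 0 else hidden))
            for s in range(n_layers)]
    w_rec = [0.3 * rng.standard_normal((hidden, hidden)) for s in range(n_layers)]
    states = [np.zeros(hidden) for _ in range(n_layers)]
    n_updates = [0] * n_layers
    for t, u in enumerate(inputs):
        below = u                          # each layer is fed by the layer below
        for s in range(n_layers):
            if t % (2 ** s) == 0:          # coarser layers update less often
                states[s] = np.tanh(w_in[s] @ below + w_rec[s] @ states[s])
                n_updates[s] += 1
            below = states[s]
    return states, n_updates

states, n_updates = hierarchical_rnn_forward(rng.standard_normal((16, 4)))
# Over 16 steps the three layers were updated 16, 8 and 4 times respectively,
# so the top layer's recurrence spans 4-step intervals of the input.
assert n_updates == [16, 8, 4]
```

In the unfolded graph, a gradient path through the top layer crosses only a quarter as many nonlinear transformations as one through the bottom layer, which is the mechanism the experiments below exploit.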
In the first set of experiments, we want to evaluate the performance of a hierarchical
recurrent network on a problem already used for studying the difficulty in learning long-term dependencies (Bengio, Simard and Frasconi, 1994; Bengio and Frasconi, 1994). In this 2-class problem, the network has to detect a pattern at the beginning of the sequence, keeping a bit of information in "memory" (while the inputs are noisy) until the end of the sequence (supervision is only at the end of the sequence). As in (Bengio, Simard and Frasconi, 1994; Bengio and Frasconi, 1994), only the first 3 time steps contain information about the class (a 3-number pattern was randomly chosen for each class within [-1,1]^3).
The length of the sequences is varied to evaluate the effect of the span of input/output
dependencies. Uniformly distributed noisy inputs between -.1 and .1 are added to the
initial patterns as well as to the remainder of the sequence. For each sequence length, 10
trials were run with different initial weights and noise patterns, with 30 training sequences.
Experiments were performed with sequences of lengths 10, 20, 40 and 100.
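The data for this task can be generated in a few lines. The sketch below is our reconstruction from the description above (the exact sampling details of the original experiments may differ slightly): two fixed 3-number class patterns drawn from [-1,1]^3 occupy the first three steps, and uniform noise in [-.1, .1] is added over the whole sequence.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_two_sequence_problem(n_seqs=30, length=100, noise=0.1):
    """2-class task: only the first 3 (noisy) inputs carry class information."""
    patterns = rng.uniform(-1.0, 1.0, size=(2, 3))     # one 3-number pattern per class
    labels = rng.integers(0, 2, size=n_seqs)
    X = np.zeros((n_seqs, length))
    X[:, :3] = patterns[labels]                        # class-defining prefix
    X += rng.uniform(-noise, noise, size=X.shape)      # noise over the whole sequence
    return X, labels

X, y = make_two_sequence_problem()
# Past the first 3 steps the inputs are pure noise in [-.1, .1].
assert np.all(np.abs(X[:, 3:]) <= 0.1)
```

The class label must therefore be recovered from the first three inputs and held across up to 97 uninformative steps, which is exactly the long-term credit assignment problem analyzed in section 2.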
Several recurrent network architectures were compared. All were trained with the same
algorithm (back-propagation through time) to minimize the sum of squared differences
between the final output and a desired value. The simplest architecture (A) is similar to
architecture B in Figure 1 but it is not hierarchical: it has a single time scale. Like the
other networks, it has however a theoretically "sufficient" architecture, i.e., there exists a set of weights for which it classifies perfectly the training sequences. Four of the five architectures that we compared are shown in Figure 1, with an increasing number of levels in the hierarchy. The performance of these four architectures (B to E) as well as the architecture with a single time scale (A) are compared in Figure 2 (left, for the 2-sequence problem). Clearly, adding more levels to the hierarchy has significantly helped to reduce the difficulty in learning long-term dependencies.

Figure 2: Average classification error after training for the 2-sequence problem (left, classification error) and network-generated data (right, mean squared error), for varying sequence lengths and architectures. Each set of 5 consecutive bars represents the performance of 5 architectures A to E, with respectively 1, 2, 3, 4 and 6 time scales (the architectures B to E are shown in Figure 1). Error bars show the standard deviation over 10 trials.
In a second set of experiments, a hierarchical recurrent network with 4 time scales was
initialized with random (but large) weights and used to generate a data set. To generate
the inputs as well as the outputs, the network has feedback links from hidden to input
units. At the initial time step as well as at 5% of the time steps (chosen randomly),
the input was clamped with random values to introduce some further variability. It is a
regression task, and the mean squared error is shown on Figure 2. Because of the network
structure, we expect the data to contain long-term dependencies that can be modeled with
a hierarchical structure. 100 training sequences of length 10, 20,40 and 100 were generated
by this network. The same 5 network architectures as in the previous experiments were
compared (see Figure 1 for architectures B to E), with 10 training trials per network and
per sequence length. The results are summarized in Figure 2 (right). More high-level
hierarchical structure appears to have improved performance for long-term dependencies.
The fact that the simpler 1-level network does not achieve a good performance suggests
that there were some difficult long-term dependencies in the artificially generated data
set. It is interesting to compare those results with those reported in (Lin et al., 1995) which
show that using longer delays in certain recurrent connections helps learning longer-term
dependencies. In both cases we find that introducing longer time scales allows to learn
dependencies whose span is proportionally longer.
5
Hierarchical HMMs
How do we represent multiple time scales with a HMM? Some solutions have already been
proposed in the speech recognition literature, motivated by the obvious presence of different time scales in the speech phenomena. In (Brugnara et al., 1992) two Markov chains
are coupled in a "master/slave" configuration. For the "master" HMM, the observations
are slowly varying features (such as the signal energy), whereas for the "slave" HMM the
observations are the speech spectra themselves. The two chains are synchronous and operate at the same time scale, therefore the problem of diffusion of credit in HMMs would
probably also make difficult the learning of long-term dependencies. Note on the other
hand that in most applications of HMMs to speech recognition the meaning of states is
fixed a-priori rather than learned from the data (see (Bengio and Frasconi, 1995a) for a
discussion). In a more recent contribution, Nelly Suaudeau (Suaudeau, 1994) proposes a
"two-level HMM" in which the higher level HMM represents "segmental" variables (such
as phoneme duration). The two levels operate at different scales: the higher level state
variable represents the phonetic identity and models the distributions of the average energy
and the duration within each phoneme. Again, this work is not geared towards learning a
representation of context, but rather, given the traditional (phoneme-based) representation of context in speech recognition, towards building a better model of the distribution
of "slow" segmental variables such as phoneme duration and energy. Another promising
approach was recently proposed in (Saul and Jordan, 1995). Using decimation techniques
from statistical mechanics, a polynomial-time algorithm is derived for parallel Boltzmann
chains (which are similar to parallel HMMs), which can operate at different time scales.
The ideas presented here point toward an HMM or IOHMM in which the (hidden) state variable x_t is represented by the Cartesian product of several state variables x_t^s, each "working" at a different time scale: x_t = (x_t^1, x_t^2, ..., x_t^S). To take advantage of the decomposition, we propose to consider that the state distributions at the different levels
are conditionally independent (given the state at the previous time step and at the current
and previous levels). Transition probabilities are therefore factored as follows:

P(x_t \mid x_{t-1}) = \prod_{s=1}^{S} P(x_t^s \mid x_{t-1}, x_t^1, \ldots, x_t^{s-1})    (5)

To force the state variable at each level to effectively work at a given time scale, self-transition probabilities are constrained as follows (using the above independence assumptions):

P(x_t^s = i_s \mid x_{t-1}^1 = i_1, \ldots, x_{t-1}^s = i_s, \ldots, x_{t-1}^S = i_S) = P(x_t^s = i_s \mid x_{t-1}^s = i_s, \, x_t^{s-1} = i_{s-1}) = w_s

6
Conclusion
Motivated by the analysis of the problem of learning long-term dependencies in sequential data, i.e., of learning to represent context, we have proposed to use a very general
assumption on the structure of sequential data to reduce the difficulty of these learning
tasks. Following numerous previous works in artificial intelligence, we are assuming that
context can be represented with a hierarchical structure. More precisely, here, it means
that long-term dependencies are insensitive to small timing variations, i.e., they can be
represented with a coarse temporal scale. This scheme allows context information and
credit information to be respectively propagated forward and backward more easily.
Following this intuitive idea, we have proposed to use hierarchical recurrent networks for sequence processing. These networks use multiple time scales to achieve a multi-resolution
representation of context. Series of experiments on artificial data have confirmed the
advantages of imposing such structures on the network architecture. Finally we have
proposed a similar application of this concept to hidden Markov models (for density estimation) and input/output hidden Markov models (for classification and regression).
References
Bengio, Y. and Frasconi, P. (1994). Credit assignment through time: Alternatives to
backpropagation. In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in
Neural Information Processing Systems 6. Morgan Kaufmann.
Bengio, Y. and Frasconi, P. (1995a). Diffusion of context and credit information in Markovian models. Journal of Artificial Intelligence Research, 3:223-244.
Bengio, Y. and Frasconi, P. (1995b). An input/output HMM architecture. In Tesauro,
G., Touretzky, D., and Leen, T., editors, Advances in Neural Information Processmg
Systems 7, pages 427-434. MIT Press, Cambridge, MA.
Bengio, Y., LeCun, Y., and Henderson, D. (1994). Globally trained handwritten word
recognizer using spatial representation, space displacement neural networks and hidden Markov models. In Cowan, J ., Tesauro, G., and Alspector, J., editors, Advances
in Neural Information Processing Systems 6, pages 937- 944.
Hierarchical Recurrent Neural Networks for Long-term Dependencies
499
Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with
gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166.
Brugnara, F., De Mori, R., Giuliani, D., and Omologo, M. (1992). A family of parallel
hidden Markov models. In International Conference on Acoustics, Speech and Signal
Processing, pages 377-370, New York, NY, USA. IEEE.
Daubechies, I. (1990). The wavelet transform, time-frequency localization and signal
analysis. IEEE Transactions on Information Theory, 36(5):961-1005.
Dayan, P. and Hinton, G. (1993). Feudal reinforcement learning. In Hanson, S. J., Cowan,
J. D., and Giles, C. L., editors, Advances in Neural Information Processing Systems
5, San Mateo, CA. Morgan Kaufmann.
Frasconi, P., Gori, M., Maggini, M., and Soda, G. (1993). Unified integration of explicit
rules and learning by example in recurrent networks. IEEE Transactions on Knowledge and Data Engineering. (in press).
Giles, C. L. and Omlin, C. W. (1992). Inserting rules into recurrent neural networks. In
Kung, Fallside, Sorenson, and Kamm, editors, Neural Networks for Signal Processing
II, Proceedings of the 1992 IEEE workshop, pages 13-22. IEEE Press.
Lang, K. J., Waibel, A. H., and Hinton, G. E. (1990). A time-delay neural network
architecture for isolated word recognition. Neural Networks, 3:23-43.
LeCun, Y., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W., and Jackel,
L. (1989). Backpropagation applied to handwritten zip code recognition. Neural
Computation, 1:541-551.
Levinson, S., Rabiner, L., and Sondhi, M. (1983). An introduction to the application of the
theory of probabilistic functions of a Markov process to automatic speech recognition.
Bell System Technical Journal, 64(4):1035-1074.
Lin, T., Horne, B., Tino, P., and Giles, C. (1995). Learning long-term dependencies is not
as difficult with NARX recurrent neural networks. Technical Report UMIACS-TR-95-78, Institute for Advanced Computer Studies, University of Maryland.
Rabiner, L. and Juang, B. (1986). An introduction to hidden Markov models. IEEE ASSP
Magazine, pages 257-285.
Rumelhart, D., Hinton, G., and Williams, R (1986). Learning internal representations by
error propagation. In Rumelhart, D. and McClelland, J., editors, Parallel Distributed
Processing, volume 1, chapter 8, pages 318-362. MIT Press, Cambridge.
Saul, L. and Jordan, M. (1995). Boltzmann chains and hidden markov models. In Tesauro,
G., Touretzky, D., and Leen, T., editors, Advances in Neural Information Processing
Systems 7, pages 435-442. MIT Press, Cambridge, MA.
Schmidhuber, J. (1992). Learning complex, extended sequences using the principle of
history compression. Neural Computation, 4(2):234-242.
Simard, P. and LeCun, Y. (1992). Reverse TDNN: An architecture for trajectory generation. In Moody, J., Hanson, S., and Lippmann, R., editors, Advances in Neural
Information Processing Systems 4, pages 579-588, Denver, CO. Morgan Kaufmann,
San Mateo.
Singh, S. (1992). Reinforcement learning with a hierarchy of abstract models. In Proceedings of the 10th National Conference on Artificial Intelligence, pages 202-207.
MIT/ AAAI Press.
Suaudeau, N. (1994). Un modèle probabiliste pour intégrer la dimension temporelle dans
un système de reconnaissance automatique de la parole. PhD thesis, Université de
Rennes I, France.
Sutton, R. (1995). TD models: modeling the world at a mixture of time scales. In Proceedings of the 12th International Conference on Machine Learning. Morgan Kaufmann.
Optimizing Cortical Mappings
Geoffrey J. Goodhill
The Salk Institute
10010 North Torrey Pines Road
La Jolla, CA 92037, USA
Steven Finch
Human Communication Research Centre
University of Edinburgh, 2 Buccleuch Place
Edinburgh EH8 9LW, GREAT BRITAIN
Terrence J. Sejnowski
The Howard Hughes Medical Institute
The Salk Institute for Biological Studies
10010 North Torrey Pines Road, La Jolla, CA 92037, USA
&
Department of Biology, University of California San Diego
La Jolla, CA 92037, USA
Abstract
"Topographic" mappings occur frequently in the brain. A popular approach to understanding the structure of such mappings
is to map points representing input features in a space of a few
dimensions to points in a 2-dimensional space using some self-organizing algorithm. We argue that a more general approach
may be useful, where similarities between features are not constrained to be geometric distances, and the objective function for
topographic matching is chosen explicitly rather than being specified implicitly by the self-organizing algorithm. We investigate
analytically an example of this more general approach applied to
the structure of interdigitated mappings, such as the pattern of
ocular dominance columns in primary visual cortex.
1 INTRODUCTION
A prevalent feature of mappings in the brain is that they are often "topographic".
In the most straightforward case this simply means that neighbouring points on
a two-dimensional sheet (e.g. the retina) are mapped to neighbouring points in a
more central two-dimensional structure (e.g. the optic tectum). However a more
complex case, still often referred to as topographic, is the mapping from an abstract
space of features (e.g. position in the visual field, orientation, eye of origin etc) to
the cortex (e.g. layer 4 of V1). In many cortical sensory areas, the preferred sensory
stimuli of neighbouring neurons change slowly, except at discontinuous jumps,
suggestive of an optimization principle that attempts to match "similar" features
to nearby points in the cortex. In this paper, we (1) discuss what might constitute
an appropriate measure of similarity between features, (2) outline an optimization
principle for matching the similarity structure of two abstract spaces (i.e. a measure
of the degree of topography of a mapping), and (3) use these ideas to analyse the
case where two equivalent input variables are mapped onto one target structure,
such as the "ocular dominance" mapping from the right and left eyes to V1 in the
cat and monkey.
2 SIMILARITY MEASURES
A much-investigated computational approach to the study of mappings in V1 is
to consider the input features as points in a multidimensional euclidean space
[1,5,9]. The input dimensions then consist of e.g. spatial position, orientation,
ocular dominance, and so on. Some distribution of points in this space is assumed
which attempts, in some sense, to capture the statistics of these features in the visual
world. For instance, in [5], distances between points in the space are interpreted
as a decreasing function of the degree to which the corresponding features are
correlated over an ensemble of images. Some self-organizing algorithm is then
applied which produces a mapping from the high-dimensional feature space to
a two-dimensional sheet representing the cortex, such that nearby points in the
feature space map to nearby points in the two-dimensional sheet. l
However, such approaches assume that the dissimilarity structure of the input
features is well-captured by euclidean distances in a geometric space. There is
no particular reason why this should be true. For instance, such a representation
implies that the dissimilarity between features can become arbitrarily large, an
unlikely scenario. In addition, it is difficult to capture higher-order relationships in
such a representation, such as that two oriented line-segment detectors will be more
correlated if the line segments are co-linear than if they are not. We propose instead
that, for a set of features, one could construct directly from the statistics of natural
stimuli a feature matrix representing similarities or dissimilarities, without regard
to whether the resulting relationships can be conveniently captured by distances in
a euclidean feature space. There are many ways this could be done; one example is
given below. Such a similarity matrix for features can then be optimally matched
(in some sense) to a similarity matrix for positions in the output space.
A disadvantage from a computational point of view of this generalized approach is
that the self-organizing algorithms of e.g. [6,2] can no longer be applied, and possibly less efficient optimization techniques are required. However, an advantage
of this is that one may now explore the consequences of optimizing a whole range
of objective functions for quantifying the quality of the mapping, rather than having to accept those given explicitly or implicitly by the particular self-organizing
algorithm.
¹We mean this in a rather loose sense, and wish to include here the principles of mapping
nearby points in the sheet to nearby points in the feature space, mapping distant points in
the feature space to distant points in the sheet, and so on.
Figure 1: The mapping framework.
3 OPTIMIZATION PRINCIPLES
We now outline a general framework for measuring to what degree a mapping
matches the structure of one similarity matrix to that of another. It is assumed that
input and output matrices are of the same (finite) dimension, and that the mapping
is bijective. Consider an input space Vin and an output space Vout, each of which
contains N points. Let M be the mapping from points in Vin to points in Vout (see
figure 1). We use the word "space" in a general sense: either or both of Vin and
Vout may not have a geometric interpretation. Assume that for each space there is
a symmetric "similarity" function which, for any given pair of points in the space,
specifies how similar (or dissimilar) they are. Call these functions F for Vin and G
for Vout. Then we define a cost functional C as follows
C = \sum_{i=1}^{N} \sum_{j<i} F(i,j)\, G(M(i), M(j)),    (1)

where i and j label points in Vin, and M(i) and M(j) are their respective images in
Vout. The sum is over all possible pairs of points in Vin. Since M is a bijection it is
invertible, and C can equivalently be written
C = \sum_{i=1}^{N} \sum_{j<i} F(M^{-1}(i), M^{-1}(j))\, G(i,j),    (2)

where now i and j label points in Vout, and M^{-1} is the inverse map. A good (i.e.
highly topographic) mapping is one with a high value of C. However, if one of F or
G were given as a dissimilarity function (i.e. increasing with decreasing similarity)
then a good mapping would be one with a low value of C. How F and G are defined
is problem-specific.
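As a concrete illustration of equations (1) and (2), the cost of a candidate bijection can be computed directly from the two similarity matrices. The toy chain below is our own example, not from the paper:

```python
import numpy as np

def mapping_cost(F, G, M):
    """Equation (1): C = sum over pairs i<j of F(i,j) * G(M(i), M(j)).

    F -- symmetric similarity matrix of the input space (N x N)
    G -- symmetric similarity matrix of the output space (N x N)
    M -- permutation; M[i] is the image of input point i
    """
    N = len(M)
    return sum(F[i, j] * G[M[i], M[j]] for j in range(N) for i in range(j))

# Toy example: a chain of N points, similarity 1 between neighbours.
N = 6
F = np.zeros((N, N))
for i in range(N - 1):
    F[i, i + 1] = F[i + 1, i] = 1.0
G = F.copy()

identity = list(range(N))      # a topographic homeomorphism
swapped = [1, 0, 2, 3, 4, 5]   # breaks one neighbour relation
```

Since these F and G are similarities, a good map has a high C: the identity map scores 5.0 here while the swapped map scores 4.0.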
C has a number of important properties that help to justify its adoption as a
measure of the degree of topography of a mapping (for more details see [3]). For
instance, it can be shown that if a mapping that preserves ordering relationships
between two similarity matrices exists, then maximizing C will find it. Such maps
are homeomorphisms. However not all homeomorphisms have this property,
so we refer to such "perfect" maps as "topographic homeomorphisms". Several
previously defined optimization principles, such as minimum path and minimum
wiring [1], are special cases of C. It is also closely related (under the assumptions
above) to Luttrell's minimum distortion measure [7], if F is euclidean distance in a
geometric input space, and G gives the noise process in the output space.
4 INTERDIGITATED MAPPINGS
As a particular application of the principles discussed so far, we consider the case
where the similarity structure of Vin can be expressed in matrix form as

\begin{pmatrix} Q_s & Q_c \\ Q_c & Q_s \end{pmatrix}

where Q_s and Q_c are of dimension N/2 x N/2. This means that Vin consists of two
halves, each with the same internal similarity structure, and an in general different
similarity structure between the two halves. The question is how best to match
this dual similarity structure to a single similarity structure in Vout. This is of
mathematical interest since it is one of the simplest cases of a mismatch between
the similarity structures of Vin and Vout, and of biological interest since it abstractly
represents the case of input from two equivalent sets of receptors coming together
in a single cortical sheet, e.g. ocular dominance columns in primary visual cortex
(see e.g. [8, 5]). For simplicity we consider only the case of two one-dimensional
retinae mapping to a one-dimensional cortex.
The feature space approach to the problem presented in [5] says that the dissimilarities in Vin are given by squared euclidean distances between points arranged
in two parallel rows in a two-dimensional space. That is,
F(i,j) = \begin{cases} |i - j|^2 & i, j \text{ in same half of Vin} \\ |i - (j - N/2)|^2 + k^2 & i, j \text{ in different halves of Vin} \end{cases}    (3)

assuming that indices 1 ... N/2 give points in one half and indices N/2 + 1 ... N
give points in the other half. G(i, j) is given by
G(i,j) = \begin{cases} 1 & i, j \text{ neighbouring} \\ 0 & \text{otherwise} \end{cases}    (4)
It can be shown that the globally optimal mapping (i.e. minimum of C) when k > 1
is to keep the two halves of Vin entirely separate in Vout [5]. However, there is also a
local minimum for an interdigitated (or "striped") map, where the interdigitations
have width n = 2k. By varying the value of k it is thus possible to smoothly vary
the periodicity of the locally optimal striped map. Such behavior predicted the
outcome of a recent biological experiment [4]. For k < 1 the globally optimal map
is stripes of width n = 1.
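Since G in equation (4) is nonzero only for neighbouring cortical points, C reduces to the summed dissimilarity along the cortical ordering, so candidate maps can be compared numerically. A small sketch (the 0-based indexing and the particular orderings are our own illustration):

```python
def dissimilarity(i, j, N, k):
    """Equation (3), with points 0..N/2-1 in one half and N/2..N-1 in the other."""
    half = N // 2
    same = (i < half) == (j < half)
    ri, rj = i % half, j % half          # positions within a half
    d = (ri - rj) ** 2
    return d if same else d + k ** 2

def map_cost(order, N, k):
    # With G(i,j) = 1 only for neighbouring cortical points, C is the sum of
    # dissimilarities between consecutive points in the cortical ordering.
    return sum(dissimilarity(a, b, N, k) for a, b in zip(order, order[1:]))

N = 24
# "Up and down": through one half, then back through the other.
separate = list(range(N // 2)) + list(range(N - 1, N // 2 - 1, -1))
# Interdigitated with stripe width 1: a0, b0, a1, b1, ...
stripes = [p for pair in zip(range(N // 2), range(N // 2, N)) for p in pair]
```

For k < 1 the interdigitated ordering is cheaper; for k > 1 keeping the halves separate wins, matching the behaviour described above.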
However, in principle many alternative ways of measuring the similarity in Vin
are possible. One obvious idea is to assume that similarity is given directly by the
degree of correlation between points within and between the two eyes. A simple
assumption about the form of these correlations is that they are a gaussian function
of physical distance between the receptors (as in [8]). That is,
. .
F(l.,J)=
{
e- ottI? -)'12
ce-f3li-i-N/211
i, j in same half of Yin
i, j in different halves of Yin
(5)
with c < 1. We assume for ease of analysis that G is still as given in equation 4.
This directly implements an intuitive notion put forward to account for the interdigitation of the ocular dominance mapping [4]: that the cortex tries to represent
similar inputs close together, that similarity is given by the degree of correlation
between the activities of points (cells), and additionally that natural visual scenes
impose a correlational structure of the same qualitative form as equation 5. We
now calculate C analytically for various mappings (c.f. [5]), and compare the cost
of a map that keeps the two halves of Vin entirely separate in Vout to those which
interdigitate the two halves of Vin with some regular periodicity. The map of the
first type we consider will be referred to as the "up and down" map: moving from
one end of Vout to the other implies moving entirely through one half of Vin, then
back in the opposite direction through the other half. For this map, the cost Cud is
given by
C_{ud} = 2(N - 1) e^{-\alpha} + c.    (6)
For an interdigitated (striped) map where the stripes are of width n >= 2:

C_s(n) = N \left[ 2\left(1 - \frac{1}{n}\right) e^{-\alpha} + \frac{c}{n} \left( e^{-\beta f(n)} + e^{-\beta g(n)} \right) \right]    (7)

where for n even f(n) = g(n) = ((n-2)/2)^2 and for n odd f(n) = ((n-1)/2)^2, g(n) =
((n-3)/2)^2. To characterize this system we now analyze how the n for which C_s(n) has
a local maximum varies with c, \alpha, \beta, and when this local maximum is also a global
maximum. Setting dC_s(n)/dn = 0 does not yield analytically tractable expressions
(unlike [5]). However, more direct methods can be used: there is a local maximum
at n if C_s(n-1) < C_s(n) > C_s(n+1). Using equation 7 we derive conditions on c
for this to be true. For n odd, the two resulting bounds on c coincide;
that is, there are no local maxima at odd values of n. For n even, we obtain
c_2(n) < c < c_1(n) where now

c_1(n) = \frac{2 e^{-\alpha}}{n\, e^{-\beta((n-4)/2)^2} - (n-2)\, e^{-\beta((n-2)/2)^2}}

and c_2(n) = c_1(n+2). c_1(n) and c_2(n) are plotted in figure 2, from which one
can see the ranges of c for which particular n are local maxima. As \beta decreases,
maxima for larger values of n become apparent, but the range of c for which they
exist becomes rather small. It can be shown that C_ud is always the global maximum,
except when c > e^{-\alpha}, when n = 2 is globally optimal. As c decreases the optimal
stripe width gets wider, analogously to k increasing in the dissimilarities given by
equation 3. When \beta is such that there is no local maximum the only optimum is
stripes as wide as possible. This fits with the intuitive idea that if corresponding
points in the two halves of Vin (i.e. |i - j| = N/2) are sufficiently similar then it is
favorable to interdigitate the two halves in Vout; otherwise the two halves are kept
completely separate.
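Equations (6) and (7) are easy to evaluate directly, which makes these ranges simple to check numerically (the parameter values below are those of figure 2a; N is an arbitrary choice):

```python
import math

def C_ud(N, alpha, c):
    """Equation (6): cost of the "up and down" map."""
    return 2 * (N - 1) * math.exp(-alpha) + c

def C_s(N, n, alpha, beta, c):
    """Equation (7): cost of a striped map with stripe width n >= 2."""
    if n % 2 == 0:
        f = g = ((n - 2) / 2) ** 2
    else:
        f, g = ((n - 1) / 2) ** 2, ((n - 3) / 2) ** 2
    return N * (2 * (1 - 1 / n) * math.exp(-alpha)
                + (c / n) * (math.exp(-beta * f) + math.exp(-beta * g)))
```

With alpha = beta = 0.25 and c = 0.55, for example, n = 4 is a local maximum (C_s(3) < C_s(4) > C_s(5)); and with c = 0.9 > e^{-alpha}, stripes of width 2 beat the up-and-down map.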
The qualitative behavior here is similar to that for equation 3. n = 2 is a global
optimum for large c (small k), then as c decreases (k increases) n = 2 first becomes a
local optimum, then the position of the local optimum shifts to larger n. However,
an important difference is that in equation 3 the dissimilarities increase without
limit with distance, whereas in equation 5 the similarities tend to zero with distance. Thus for equation 5 the extra cost of stripes one unit wider rapidly becomes
negligible, whereas for equation 3 this extra cost keeps on increasing by ever larger
amounts. As n \to \infty, C_{ud} \approx C_s(n) for the similarities defined by equation 5 (i.e.
there is the same cost for traversing the two blocks in the same direction as in the
opposite direction), whereas for the dissimilarities defined by equation 3 there is a
quite different cost in these two cases. That F and G should tend to a bounded value
as i and j become ever more distant neighbors seems biologically more plausible
than that they should be potentially unbounded.
Figure 2: The ranges of c for which particular n are local maxima. (a) \alpha = \beta = 0.25. (b)
\alpha = 0.25, \beta = 0.1. When the c_1 (dashed) line is below the c_2 (solid) line no local maxima
exist. For each (even) value of n to the left of the crossing point, the vertical range between
the two lines gives the values of c for which that n is a local maximum. Below the solid line
and to the right of the crossing point the only maximum is stripes as wide as possible.
Issues such as those we have addressed regarding the transition from "striped" to
"blocked" solutions for combining two sets of inputs distinguished by their intraand inter-population similarity structure may be relevant to understanding the
spatial representation of functional attributes across cortex. The results suggest
the hypothesis that two variables are interdigitated in the same area rather than
being represented separately in two distinct areas if the inter-population similarity
is sufficiently high. An interesting point is that the striped solutions are often
only local optima. It is possible that in reality developmental constraints (e.g. a
chemically defined bias towards overlaying the two projections) impose a bias
towards finding a striped rather than blocked solution, even though the latter may
be the global optimum.
5 DISCUSSION
We have argued that, in order to understand the structure of mappings in the
brain, it could be useful to examine more general measures of similarity and of
topographic matching than those implied by standard feature space models. The
consequences of one particular alternative set of choices has been examined for the
case of an interdigitated map of two variables. Many alternative objective functions
for topographic matching are of course possible; this topic is reviewed in [3]. Two
issues we have not discussed are the most appropriate way to define the features
of interest, and the most appropriate measures of similarity between features (see
[10] for an interesting discussion).
A next step is to apply these methods to more complex structures in V1 than just the
ocular dominance map. By examining more of the space of possibilities than that
occupied by the current feature space models, we hope to understand more about
the optimization strategies that might be being pursued by the cortex. Feature
space models may still turn out to be more or less the right answer; however even
if this is true, our approach will at least give a deeper level of understanding why.
Acknowledgements
We thank Gary Blasdel, Peter Dayan and Paul Viola for stimulating discussions.
References
[1] Durbin, R. & Mitchison, G. (1990). A dimension reduction framework for
understanding cortical maps. Nature, 343, 644-647.
[2] Durbin, R. & Willshaw, D.J. (1987). An analogue approach to the travelling
salesman problem using an elastic net method. Nature, 326,689-691.
[3] Goodhill, G. J., Finch, S. & Sejnowski, T. J. (1995). Quantifying neighbourhood preservation in topographic mappings. Institute for Neural Computation Technical Report Series, No. INC-9505, November 1995. Available from ftp://salk.edu/pub/geoff/goodhill_finch_sejnowski_tech95.ps.Z
or http://cnl.salk.edu/~geoff.
[4] Goodhill, G.J. & Lowel, S. (1995). Theory meets experiment: correlated neural activity helps determine ocular dominance column periodicity. Trends in
Neurosciences, 18,437-439.
[5] Goodhill, G.J. & Willshaw, D.J. (1990). Application of the elastic net algorithm
to the formation of ocular dominance stripes. Network, 1, 41-59.
[6] Kohonen, T. (1982). Self-organized formation of topologically correct feature
maps. Biol. Cybern., 43, 59-69.
[7] Luttrell, S.P. (1990). Derivation of a class of training algorithms. IEEE Trans.
Neural Networks, 1,229-232.
[8] Miller, KD., Keller, J.B. & Stryker, M.P. (1989). Ocular dominance column
development: Analysis and simulation. Science, 245, 605-615.
[9] Obermayer, K, Blasdel, G.G. & Schulten, K (1992). Statistical-mechanical
analysis of self-organization and pattern formation during the development
of visual maps. Phys. Rev. A, 45, 7568-7589.
[10] Weiss, Y. & Edelman, S. (1995). Representation of similarity as a goal of early
sensory coding. Network, 6, 19-41.
KODAK IMAGELINK™ OCR
Alphanumeric Handprint Module
Alexander Shustorovich and Christopher W. Thrasher
Business Imaging Systems, Eastman Kodak Company, Rochester, NY 14653-5424
ABSTRACT
This paper describes the Kodak Imagelink™ OCR alphanumeric
handprint module. There are two neural network algorithms at its
core: the first network is trained to find individual characters in an
alphanumeric field, while the second one performs the classification.
Both networks were trained on Gabor projections of the original
pixel images, which resulted in higher recognition rates and greater
noise immunity. Compared to its purely numeric counterpart
(Shustorovich and Thrasher, 1995), this version of the system has a
significant application specific postprocessing module. The system
has been implemented in specialized parallel hardware, which allows
it to run at 80 char/sec/board. It has been installed at the Driver and
Vehicle Licensing Agency (DVLA) in the United Kingdom, and its
overall success rate exceeds 96% (character level without rejects),
which translates into 85% field rate. If approximately 20% of the
fields are rejected, the system achieves 99.8% character and 99.5%
field success rate.
1 INTRODUCTION
The system we describe below was designed to process alphanumeric fields extracted
from forms. The major assumptions were that (1) the form layout and definition allows
the system to capture the field image with a single line of characters, (2) the
characters are handprinted capital letters and numerals, with possible addition of
several special characters, and (3) the characters may occasionally touch, but generally
they do not overlap. We also assume that some additional information about the
contents of the field is available to assist in the process of disambiguation. Otherwise,
it is virtually impossible to distinguish not only between "O" and zero, but also "I"
and one, "Z" and two, "S" and five, etc.
A good example of such an application is the processing of vehicle registration forms
at the Driver and Vehicle Licensing Agency (DVLA) in the United Kingdom. The
alphanumeric field in question contains a license plate. There are 29 allowed patterns
of character combinations, from two to seven characters long. For example,
"A999AAA" is a valid license, whereas "A9A9A9" is not (here "A" stands for any
alpha character, "9" for any numeric character). In addition, every field has a
control character box on the right. This control character is computed as a remainder
of the integer division by 37 of a linear combination of the numeric values of the
characters in the main field. Ambiguous characters, namely "O", "I", and "S", are
not allowed in the role of the control character, so they are replaced here by "-", "+",
and "/" (not a very good choice, and the 37th character used is the "%"). To
make things more complicated, sometimes the control character is not available at the
moment of filing the form (at a local post office), and this lack of knowledge is
indicated by putting an asterisk instead. Later we will discuss possible ways to use this
additional information in an application specific postprocessing module.
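A hedged sketch of such a mod-37 check. The paper gives neither the linear-combination weights nor the character-to-value code, so both are invented below purely for illustration:

```python
def control_character(plate, weights=None):
    """Hypothetical mod-37 control character.

    The paper states only that the control character is the remainder mod 37
    of a linear combination of the characters' numeric values, with the
    ambiguous characters O, I, S replaced by -, +, / and % used as the 37th
    symbol. The position weights (1, 2, 3, ...) and the 0-35 numeric code
    below are NOT from the paper; they are illustrative assumptions.
    """
    values = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    control_symbols = (values.replace("O", "-")
                             .replace("I", "+")
                             .replace("S", "/")) + "%"
    if weights is None:
        weights = range(1, len(plate) + 1)   # hypothetical weights
    total = sum(w * values.index(ch) for w, ch in zip(weights, plate))
    return control_symbols[total % 37]
```

Whatever the true weights are, the scheme lets the postprocessor reject a field whose recognized control character disagrees with the recomputed one, or use the mismatch to re-rank competing classification hypotheses.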
2 SEGMENTATION AND ALTERNATIVE APPROACHES
The most challenging problem for handprint OCR is finding individual characters in a
field. A number of approaches to this problem can be found in the literature, the two
most common being (1) segmentation (Gupta et al., 1993, as an example of a recent
publication), and (2) combined segmentation and recognition (Keeler and Rumelhart,
1992).
The segmentation approach has difficulty separating touching characters, and recently
the consensus of practitioners in the field started shifting towards combined
segmentation and recognition. In this scheme, the algorithm moves a window of a
certain width along the field, and confidence values of competing classification
hypotheses are used (sometimes with a separate centered/noncentered node) to decide
if the window is positioned on top of a character. In the Saccade system (Martin et al.,
1993), for example, the neural network was trained not only to recognize characters in
the center of the moving window (and whether there is a character centered in the
window), but also to make corrective jumps (saccades) to the nearest character and,
after classification, to the next character.
Still another variation on the theme is an arrangement when the classification window
is duplicated with one- or several-pixel shifts along the field (Bengio et al., 1994).
Then the outputs of the classifiers serve as input for a postprocessing module (in this
paper, a Hidden Markov Model) used to decide which of the multitude of processing
windows actually have centered characters in them.
All these approaches have deficiencies. As we mentioned earlier, touching characters
are difficult for autonomous segmenters. The moving (and jumping) window with a
single centered/noncentered node tends to miss narrow characters and sometimes to
duplicate wide ones. The replication of a classifier together with postprocessing tends
to be quite expensive computationally.
3 POSITIONING NETWORK
To do the positioning, we decided to introduce an array of output units corresponding
to successive pixels in the middle portion of the window. These nodes signal if a
center ("heart") of a character lies at the corresponding positions. Because the
precision with which a human operator can mark the character heart is low (usually
within one or two pixels at best), the target activations of three consecutive nodes are
set to one if there is a character heart at a pixel position corresponding to the middle
node. The rest of the target activations are set to zero.
The network is then trained to produce bumps of activation indicating the character
hearts. Two buffer regions on the left and on the right of the window (pixels without
corresponding output nodes) are necessary to allow all or most of the character
centered at each of the output node positions to fit inside the window. The
replacement of a single centered/noncentered node by an array allows us to average
output activations, generated by different window shifts, while corresponding to the
same position. This additional procedure allows us to slide the window several pixels
at a time: the appropriate step is a trade-off between the processing speed and the
required level of robustness. The final procedure involves thresholding of the
activation wave and the estimation of the predicted character position as the center of
mass of the activation bubble. The resulting algorithm is very effective: touching
characters do not present significant problems, and only abnormally wide characters
sometimes fool the system into false alarms.
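A minimal sketch of that final step — threshold the wave, then take the center of mass of each activation bubble. The function name and the threshold value are our own illustration:

```python
def detect_hearts(wave, threshold=0.5):
    """Return the predicted character positions: the center of mass of
    each contiguous run of the activation wave above `threshold`."""
    hearts = []
    start = None
    # A trailing zero acts as a sentinel that closes a run ending at
    # the last pixel.
    for i, a in enumerate(list(wave) + [0.0]):
        if a > threshold and start is None:
            start = i
        elif a <= threshold and start is not None:
            seg = wave[start:i]
            hearts.append(sum(p * v for p, v in zip(range(start, i), seg)) / sum(seg))
            start = None
    return hearts
```

Because the estimate is a center of mass, it can land between pixels, which is enough precision for the classifier's shift-tolerant training.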
The system works with preprocessed images. Each field is divided into subfields of
disconnected groups of characters. These subfields are size-normalized to a height of
20 pixels. After that they are reassembled into a single field again, with 6-pixel gaps
between them. Two blank rows are added both along the top and the bottom of the
recombined field as preferred by the Gabor projection technique (Shustorovich, 1994).
In our current system, the input nodes of a sliding window are organized in a 24 x 36
array. The first, intermediary, layer of the network implements the Gabor projections.
It has 12 x 12 local receptive fields (LRFs) with fixed precomputed weights. The step
between LRFs is 6 pixels in both directions. We work with 16 Gabor basis functions
with circular Gaussian envelopes centered within each LRF; they are both sine and
cosine wavelets in four orientations and two sizes. All 16 projections from each LRF
constitute the input to a column of 20 hidden units, thus the second (first trainable)
hidden layer is organized in a three-dimensional array 3 x 5 x 20. The third hidden
layer of the network also has local receptive fields; they are three-dimensional 2 x 2 x
20 with the step 1 x 1 x 0. The units in the third hidden layer are also duplicated 20
times, thus this layer is organized in a three-dimensional array 2 x 4 x 20. The fourth
hidden layer has 60 units fully connected to the third layer. Finally, the output layer
has 12 units, also fully connected to the fourth layer.
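The layer sizes quoted above follow from simple receptive-field arithmetic; the helper below is our own check of those numbers, not code from the paper:

```python
def lrf_grid(in_size, field, step):
    """Number of positions for a local receptive field of width `field`
    stepped by `step` across `in_size` inputs along one axis."""
    return (in_size - field) // step + 1

# 24 x 36 input window, 12 x 12 Gabor LRFs stepped by 6 pixels:
second_layer = (lrf_grid(24, 12, 6), lrf_grid(36, 12, 6))  # 3 x 5 (x 20 units)
# 2 x 2 LRFs stepped by 1 over that 3 x 5 grid:
third_layer = (lrf_grid(3, 2, 1), lrf_grid(5, 2, 1))       # 2 x 4 (x 20 units)
```

The same arithmetic gives the classification network's 3 x 3 grid for its smaller 24 x 24 window.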
The network was trained using a variant of the Back-Propagation algorithm. Both
training and testing sets were drawn from the field data collected at DVLA. The
training set contained approximately 60,000 characters from 8,000 fields, and about
5,000 characters from 650 fields were used for testing. On this test set, more than
92% of all character hearts were found within 1-pixel precision, and only 0.4% were
missed by more than 4 pixels.
4 CLASSIFICATION NETWORK
The structure of the classification network resembles that of the positioning network.
The Gabor projection layer works in exactly the same way, but the window size is
smaller, only 24 x 24 pixels. We chose this size because after height normalization to
20 pixels, only occasionally are the characters wider than 24 pixels. Widening the
window complicates training: it increases the dimensionality of the input while
providing information mostly about irrelevant pieces of adjacent characters. As a
result, the second layer is organized as a 3 x 3 x 20 array of units with LRFs and
shared weights, the third is a 2 x 2 x 20 array of units with LRFs, and there are 37
output units fully connected to the 80 units in the third layer. The number of output
units in this variant of our system has been determined by the intended application. It
was necessary to recognize uppercase letters, numerals, and also five special
characters, namely plus (+), minus (-), slash (/), percent (%), and asterisk (*). Since
additional information was available for the purposes of disambiguation, we combined
"O" and zero, "I" and one, "Z" and two, "S" and five, and so the number of
output classes became 26 (alpha) + 6 (numerals 3, 4, 6, 7, 8, 9) + 5 (special characters)
= 37.
Because we did not expect any positioning module to provide precision higher than 1
or 2 pixels, the classifier network was trained and tested on five copies of all centered
characters in the database, with shifts of 0, 1, and 2 pixels, both left and right. On the
same test set mentioned in the previous section, the corresponding character
recognition rates averaged 93.0%, 95.5%, and 96.0% for characters normalized to the
KODAK IMAGELINKTM OCR Alphanumeric Handprint Module
height of 18 to 20 pixels and placed in the middle of the window with shifts of 0 and
1 pixel up and down.
5 POSTPROCESSING MODULE
The postprocessing module is a rule-based algorithm. First, it monitors the width of
each subfield and rejects it if the number of predicted character hearts is inconsistent
with the width. For example, if the positioning system cannot find a single character in
a subfield, the output of the system becomes a question mark. Second, the
postprocessing module organizes competition between predicted character hearts if
they are too close to each other. For example, it will kill a predicted center with a
lower activation value if its distance from a competitor is less than ten pixels, but it
may allow both to survive if one of the two labels is "one". It is especially sensitive to
closely positioned centers with identical labels, and will remove the weaker one for
wide characters such as "W" or "M".
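The distance-based competition can be sketched as below. The (position, activation) tuples and the function name are illustrative, and the label-dependent exceptions (the "one" label, identical labels on wide letters) are deliberately omitted:

```python
def compete(hearts, min_dist=10):
    """Scan predicted hearts left to right as (position, activation)
    pairs; whenever two centers are closer than `min_dist` pixels,
    keep only the one with the higher activation."""
    kept = []
    for pos, act in sorted(hearts):
        if kept and pos - kept[-1][0] < min_dist:
            if act > kept[-1][1]:
                kept[-1] = (pos, act)  # the stronger center wins
        else:
            kept.append((pos, act))
    return kept
```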
The rest of the postprocessing had to rely on the application knowledge. Since the
alphanumeric fields on DVLA forms contain license plates, we could use the fact that
there are exactly 29 allowed patterns of symbol combinations, and that correct strings
should match control characters from the box on the right.
Because in this application rejection of individual characters is meaningless, we
decided to keep and analyze all possible candidates for each detected position, that is,
characters with output activations above a certain threshold (currently, 0.1). Of course,
special characters are not allowed in the main field. The field as a whole is rejected if
for any one position there is not even a single candidate character. All possible
combinations of candidate characters are analyzed. A candidate string is rejected if it
does not conform to any of the allowed patterns, or if it does not match any of the
candidate control characters. All remaining candidate strings are assigned confidences.
Since a chain is no stronger than its weakest link, in the case of an asterisk (no control
character information) the string confidence equals that of its least confident character.
If there is a valid control character, then we can tolerate one low-confidence character,
and so the string confidence equals that of its character with the second lowest
individual confidence. If there are two or more candidate strings, the difference in
confidence between the best and the second best is compared to another threshold
(currently, 0.7) in order to pass the final round of rejects.
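The weakest-link confidence rule and the best-versus-runner-up margin can be sketched as follows (function names and data layouts are our own):

```python
def string_confidence(char_confs, has_control_char):
    """Confidence of a candidate string: its weakest character when the
    control character is an asterisk, otherwise its second weakest,
    since one low-confidence character can then be tolerated."""
    ranked = sorted(char_confs)
    if has_control_char and len(ranked) > 1:
        return ranked[1]
    return ranked[0]

def final_string(candidates, margin=0.7):
    """`candidates` is a list of (string, confidence) pairs.  Return the
    best string only if it beats the runner-up by at least `margin`;
    otherwise return None (the field fails the final round of rejects)."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]
    return None
```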
6 CONCLUSIONS
The Kodak ImageLink™ OCR alphanumeric handprint module described in this paper uses
one neural network to find individual characters in a field, and then a second
network performs the classification. The outputs of both networks are interpreted by a
postprocessing module that generates the final label string (Figure 1, Figure 2).
The algorithms were designed within the constraints of the planned hardware
implementation. At the same time, they provide a high level of positioning accuracy
as well as classification ability. One new feature of our approach is the use of an
array of centered/noncentered nodes to significantly improve the speed and robustness of
the positioning scheme. The overall robustness of the system is further improved by
the noise resistance provided by a layer of Gabor projection units. The positioning module
and the classification module are unified by the postprocessing module.
System-level testing was performed on the test set mentioned above. The image quality
was generally very good, but the data included some fields with touching characters.
The character-level success rate (without rejects) achieved on this test exceeded 96%,
which corresponded to an above-85% field rate. With approximately 20% of the fields
rejected, the system achieved a 99.8% character and 99.5% field success rate.
In the testing mode, the preprocessing module would separate characters if it can
reliably do so, normalize them individually, and place them with gaps of ten blank
pixels, in order to simplify the job of both the positioning and the classification
modules. When it is impossible to segment individual characters, our system is still
able to perform at the level of approximately 94% (since it has been trained on such
data). The robustness of our system is an important factor in its success. Most other
systems have substantial difficulties trying to recover from errors in segmentation.
References
Bengio, Y., Le Cun, Y., and Henderson, D. (1994) Globally Trained Handwritten Word
Recognizer Using Spatial Representation, Space Displacement Neural Networks and
Hidden Markov Models. In Cowan, J.D., Tesauro, G., and Alspector, J. (eds.),
Advances in Neural Information Processing Systems 6, pp. 937-944. San Mateo, CA:
Morgan Kaufmann Publishers.
Gupta, A., Nagendraprasad, M.V., Lin, A., Wang, P.S.P., and Ayyadurai, S. (1993) An
Integrated Architecture for Recognition of Totally Unconstrained Handwritten
Numerals. International Journal of Pattern Recognition and Artificial Intelligence 7
(4), pp. 757-773.
Keeler, J. and Rumelhart, D.E. (1992) A Self-Organizing Integrated Segmentation and
Recognition Neural Net. In Moody, J.E., Hanson, S.J., and Lippmann, R.P. (eds.),
Advances in Neural Information Processing Systems 4, pp. 496-503. San Mateo, CA:
Morgan Kaufmann Publishers.
Martin, G., Mosfeq, R., Chapman, D., and Pittman, J. (1993) Learning to See Where
and What: Training a Net to Make Saccades and Recognize Handwritten Characters.
In Hanson, S.J., Cowan, J.D., and Giles, C.L. (eds.), Advances in Neural Information
Processing Systems 5, pp. 441-447. San Mateo, CA: Morgan Kaufmann Publishers.
Shustorovich, A. (1994) A Subspace Projection Approach to Feature Extraction: the
Two-Dimensional Gabor Transform for Character Recognition. Neural Networks 7
(8), 1295-1301.
Shustorovich, A. and Thrasher, C.W. (1995) KODAK IMAGELINK™ OCR Numeric
Handprint Module: Neural Network Positioning and Classification. Proceedings of
Session 11 (Document Processing) of the industrial conference of ICANN-95, Paris,
October 9-13, 1995.
[Figure 1: An Example of a Field Processed by the System. Stages shown: original image with detected subimages; scaled subimages; character heart index waveform; detected character hearts; best-guess characters; final character string after post-processing. Outline characters indicate low confidence.]
[Figure 2: Another Example of a Field Processed by the System. The same stages as in Figure 1 are shown; the final character string after post-processing is G358AAF3.]
Cholinergic suppression of transmission may
allow combined associative memory function and
self-organization in the neocortex.
Michael E. Hasselmo and Milos Cekic
Department of Psychology and Program in Neurosciences,
Harvard University, 33 Kirkland St., Cambridge, MA 02138
hasselmo@katIa.harvard.edu
Abstract
Selective suppression of transmission at feedback synapses during
learning is proposed as a mechanism for combining associative feedback with self-organization of feed forward synapses. Experimental
data demonstrates cholinergic suppression of synaptic transmission in
layer I (feedback synapses), and a lack of suppression in layer IV (feedforward synapses). A network with this feature uses local rules to learn
mappings which are not linearly separable. During learning, sensory
stimuli and desired response are simultaneously presented as input.
Feedforward connections form self-organized representations of input,
while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input
activates the self-organized representation, and activity generates the
learned response.
1 INTRODUCTION
The synaptic connections in most models of the cortex can be defined as either associative
or self-organizing on the basis of a single feature: the relative influence of modifiable synapses on post-synaptic activity during learning (figure 1). In associative memories, postsynaptic activity during learning is determined by nonmodifiable afferent input connections, with no change in the storage due to synaptic transmission at modifiable synapses
(Anderson, 1983; McNaughton and Morris, 1987). In self-organization, post-synaptic
activity is predominantly influenced by the modifiable synapses, such that modification of
synapses influences subsequent learning (Von der Malsburg, 1973; Miller et al., 1990).
Models of cortical function must combine the capacity to form new representations and
store associations between these representations. Networks combining self-organization
and associative memory function can learn complex mapping functions with more biologically plausible learning rules (Hecht-Nielsen, 1987; Carpenter et al., 1991; Dayan et al.,
1995), but must control the influence of feedback associative connections on self-organization. Some networks use special activation dynamics which prevent feedback from
influencing activity unless it coincides with feedforward activity (Carpenter et al., 1991).
A new network alternately shuts off feedforward and feedback synaptic transmission
(Dayan et al., 1995).
[Figure 1 diagram: panels A-C, showing self-organizing synapses, associative synapses with separate afferent input, and the combined arrangement of self-organizing feedforward with associative feedback connections.]
Figure 1 - Defining characteristics of self-organization and associative memory. A. At
self-organizing synapses, post-synaptic activity during learning depends predominantly
upon transmission at the modifiable synapses. B. At synapses mediating associative memory function, post-synaptic activity during learning does not depend primarily on the modifiable synapses, but is predominantly influenced by separate afferent input. C. Self-organization and associative memory function can be combined if associative feedback
synapses are selectively suppressed during learning but not recall.
Here we present a model using selective suppression of feedback synaptic transmission
during learning to allow simultaneous self-organization and association between two
regions. Previous experiments show that the neuromodulator acetylcholine selectively
suppresses synaptic transmission within the olfactory cortex (Hasselmo and Bower, 1992;
1993) and hippocampus (Hasselmo and Schnell, 1994). If the model is valid for neocortical structures, cholinergic suppression should be stronger for feedback but not feedforward synapses. Here we review experimental data (Hasselmo and Cekic, 1996) comparing
cholinergic suppression of synaptic transmission in layers with predominantly feedforward or feedback synapses.
2. BRAIN SLICE PHYSIOLOGY
As shown in Figure 2, we utilized brain slice preparations of the rat somatosensory neocortex to investigate whether cholinergic suppression of synaptic transmission is selective
for feedback but not feedforward synaptic connections. This was possible because feedforward and feedback connections show different patterns of termination in neocortex. As
shown in Figure 2, Layer I contains primarily feedback synapses from other cortical
regions (Cauller and Connors, 1994), whereas layer IV contains primarily afferent synapses from the thalamus and feedforward synapses from more primary neocortical structures (Van Essen and Maunsell, 1983). Using previously developed techniques (Cauller
and Connors, 1994; Li and Cauller, 1995) for testing of the predominantly feedback connections in layer I, we stimulated layer I and recorded in layer I (a cut prevented spread of
activity from layers II and III). For testing the predominantly feedforward connections
terminating in layer IV, we elicited synaptic potentials by stimulating the white matter
deep to layer VI and recorded in layer IV. We tested suppression by measuring the change
in height of synaptic potentials during perfusion of the cholinergic agonist carbachol at
100 μM. Figure 3 shows that perfusion of carbachol caused much stronger suppression of
synaptic transmission in layer I as compared to layer IV (Hasselmo and Cekic, 1996), suggesting that cholinergic suppression of transmission is selective for feedback synapses and
not for feedforward synapses.
[Figure 2 diagram: panel A, the slice preparation with cortical layers I through VI marked, layer I and layer IV recording sites, and layer I and white-matter stimulation sites; panel B, feedforward and feedback connections between Region 1 and Region 2.]
Figure 2. A. Brain slice preparation of somatosensory cortex showing location of stimulation and recording electrodes for testing suppression of synaptic transmission in layer I
and in layer IV. Experiment based on procedures developed by Cauller (Cauller and Connors, 1994; Li and Cauller, 1995). B. Anatomical pattern of feedforward and feedback
connectivity within cortical structures (based on Van Essen and Maunsell, 1983).
[Figure 3 traces: synaptic potentials under control, carbachol (100 μM), and wash conditions for the feedforward pathway recorded in layer IV (top) and the feedback pathway recorded in layer I (bottom); time scale 5 ms.]
Figure 3 - Suppression of transmission in somatosensory neocortex. Top: Synaptic potentials recorded in layer IV (where feedforward and afferent synapses predominate) show
little effect of 100 μM carbachol. Bottom: Synaptic potentials recorded in layer I (where
feedback synapses predominate) show suppression in the presence of 100 μM carbachol.
3. COMPUTATIONAL MODELING
These experimental results supported the use of selective suppression in a computational
model (Hasselmo and Cekic, 1996) with self-organization in its feedforward synaptic connections and associative memory function in its feedback synaptic connections (Figs 1 and
4). The proposed network uses local, Hebb-type learning rules supported by evidence on
the physiology of long-term potentiation in the hippocampus (Gustafsson and Wigstrom,
1986). The learning rule for each set of connections in the network takes the form:
$$\Delta W_{ij}^{(x,y)} = \eta \left( a_i^{(y)} - \theta^{(y)} \right) g\!\left( a_j^{(x)} \right)$$

where $W^{(x,y)}$ designates the connections from region $x$ to region $y$, $\theta^{(y)}$ is the threshold of synaptic modification in region $y$, $\eta$ is the rate of modification, and the output function is $g(a_j^{(x)}) = \left[ \tanh\!\left( a_j^{(x)} - \mu^{(x)} \right) \right]_+$, where $[\cdot]_+$ represents the constraint to positive values only.
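A pure-Python sketch of one application of this rule; the function names and the explicit loop over a weight matrix are our own, and the rule itself is as reconstructed from the text above:

```python
import math

def g(a, mu=0.0):
    """Output function [tanh(a - mu)]+ , clipped to positive values."""
    return max(0.0, math.tanh(a - mu))

def hebbian_step(W, a_post, a_pre, eta, theta, mu_pre=0.0):
    """One application of dW_ij = eta * (a_i - theta) * g(a_j) to a
    weight matrix W[i][j] (postsynaptic index i, presynaptic index j)."""
    return [[W[i][j] + eta * (a_post[i] - theta) * g(a_pre[j], mu_pre)
             for j in range(len(a_pre))]
            for i in range(len(a_post))]
```

Synapses onto a neuron whose activity exceeds the modification threshold are strengthened in proportion to the presynaptic output; below-threshold postsynaptic activity weakens them.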
Feedforward connections ($W^{(x<y)}$) have self-organizing properties, while feedback connections ($W^{(x \geq y)}$) have associative memory properties. This difference depends entirely
upon the selective suppression of feedback synapses during learning, which is implemented in the activation rule in the form $(1 - c)$. For the entire network, the activation rule
takes the form:

$$a_i^{(y)} = A_i^{(y)} + \sum_{x=1}^{M} \sum_{k=1}^{n(x)} W_{ik}^{(x,y)}\, g\!\left(a_k^{(x)}\right) + \sum_{x=1}^{N} \sum_{k=1}^{n(x)} (1 - c)\, W_{ik}^{(x,y)}\, g\!\left(a_k^{(x)}\right) - \sum_{k=1}^{n(y)} H_{ik}^{(y)}\, g\!\left(a_k^{(y)}\right)$$
where $a_i^{(y)}$ represents the activity of each of the $n(y)$ neurons in region $y$, $a_k^{(x)}$ is the activity of each of the $n(x)$ neurons in other regions $x$, $M$ is the total number of regions providing feedforward input, $N$ is the total number of regions providing feedback input, $A_i^{(y)}$ is
the input pattern to region $y$, $H^{(y)}$ represents the inhibition between neurons in region $y$,
and $(1 - c)$ represents the suppression of synaptic transmission. During learning, $c$ takes a
value between 0 and 1. During recall, suppression is removed, $c = 0$. In this network, synapses ($W$) between regions only take positive values, reflecting the fact that long-range
connections between cortical regions consist of excitatory synapses arising from pyramidal cells. Thus, inhibition mediated by the local inhibitory interneurons within a region is
represented by a separate inhibitory connectivity matrix $H$.
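A single-step sketch of the activation rule for one region receiving one feedforward source, one feedback source, and local inhibition (all names are ours; this is a minimal illustration, not the full multi-region sum):

```python
import math

def g(a, mu=0.0):
    # Output function [tanh(a - mu)]+ from the learning-rule definition.
    return max(0.0, math.tanh(a - mu))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def region_activation(A, W_ff, a_ff, W_fb, a_fb, H, a_local, c):
    """a_i = A_i + feedforward input + (1 - c) * feedback input
    - local inhibition.  c near 1 during learning suppresses the
    feedback term; c = 0 during recall restores it."""
    ff = matvec(W_ff, [g(a) for a in a_ff])
    fb = matvec(W_fb, [g(a) for a in a_fb])
    inh = matvec(H, [g(a) for a in a_local])
    return [A[i] + ff[i] + (1 - c) * fb[i] - inh[i] for i in range(len(A))]
```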
After each step of learning, the total weight of synaptic connections is normalized pre-synaptically for each neuron $j$ in each region:

$$W_{ij}(t+1) = \frac{W_{ij}(t) + \Delta W_{ij}(t)}{\sqrt{\displaystyle\sum_{i} \left[ W_{ij}(t) + \Delta W_{ij}(t) \right]^2}} \qquad (3)$$
Synaptic weights are then normalized post-synaptically for each neuron i in each region
(replacing i with j in the sum in the denominator in equation 3). This normalization of
synaptic strength represents slower cellular mechanisms which redistribute pre and postsynaptic resources for maintaining synapses depending upon local influences.
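A sketch of the presynaptic step (the postsynaptic step is identical with the roles of i and j exchanged); the function name is our own:

```python
import math

def normalize_presynaptic(W):
    """Divide each presynaptic neuron's outgoing weights (column j of
    W[i][j]) by that column's Euclidean norm, so the total synaptic
    weight leaving each presynaptic cell stays bounded."""
    n_post, n_pre = len(W), len(W[0])
    out = [row[:] for row in W]
    for j in range(n_pre):
        norm = math.sqrt(sum(W[i][j] ** 2 for i in range(n_post)))
        if norm > 0.0:
            for i in range(n_post):
                out[i][j] = W[i][j] / norm
    return out
```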
In these simulations, both the sensory input stimuli and the desired output response to be
learned are presented as afferent input to the neurons in region 1. Most networks using
error-based learning rules consist of feedforward architectures with separate layers of
input and output units. One can imagine this network as an auto-encoder network folded
back on itself, with both input and output units in region 1, and hidden units in region 2.
As an example of its functional properties, the network presented here was trained on the
XOR problem. The XOR problem has previously been used as an example of the capability of error-based training schemes for solving problems which are not linearly separable.
The specific characteristics of the network and patterns used for this simulation are shown
in figure 4. The two logical states of each component of the XOR problem are represented
by two separate units (designated on or off in figures 4 and 5), ensuring that activation of
the network is equal for each input condition. The problem has the appearance of two
XOR problems with inverse logical states being solved simultaneously.
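Under that encoding — an on-unit and an off-unit per logical input, plus yes/no response units — the four training patterns might be laid out as follows; the exact unit ordering is our assumption, chosen to match the description of Figure 4:

```python
# Region 1 units: [in1_on, in1_off, in2_on, in2_off] plus [yes, no].
xor_patterns = [
    ([1, 0, 1, 0], [0, 1]),  # on,  on  -> no
    ([1, 0, 0, 1], [1, 0]),  # on,  off -> yes
    ([0, 1, 1, 0], [1, 0]),  # off, on  -> yes
    ([0, 1, 0, 1], [0, 1]),  # off, off -> no
]
# Every stimulus activates exactly two input units and one response
# unit, so total afferent activation is identical across conditions.
```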
As shown in figure 4, the input and desired output of the network are presented simultaneously during learning to region 1. The six neurons in region 1 project along feedforward connections to four neurons in region 2, the hidden units of the network. These four
neurons project along feedback connections to the six neurons in region 1. All connections take random initial weights. During learning, the feedforward connections undergo
self-organization which ultimately causes the hidden units to become feature detectors
responding to each of the four patterns of input to region 1. Thus, the rows of the feedforward synaptic connectivity matrix gradually take the form of the individual input patterns.
[Figure 4 diagram: the four training patterns (two stimulus on/off pairs and a yes/no response pair) presented as afferent input to the six units of Region 1, which connect reciprocally to the four units of Region 2.]
Figure 4 - Network for learning the XOR problem, with 6 units in region 1 and 4 units in
region 2. Four different patterns of afferent input are presented successively to region 1.
The input stimuli of the XOR problem are represented by the four units on the left, and the
desired output designation of XOR or not-XOR is represented by the two units on the
right. The XOR problem has four basic states: on-off and off-on on the input is categorized by yes on the output, while on-on and off-off on the input is categorized by no on the
output.
Modulation is applied during learning in the form of selective suppression of synaptic
transmission along feedback connections (this suppression need not be complete), giving
these connections associative memory function. Hebbian synaptic modification causes
these connections to link each of the feature detecting hidden units in region 2 with the
cells in region 1 activated by the pattern to which the hidden unit responds. Gradually, the
feedback synaptic connectivity matrix becomes the transpose of the feedforward connectivity matrix. (Parameters used in the simulation: $A_j^{(1)} = 0$ or $1$, $\eta = 2.0$, $\theta^{(1)} = 0.5$, $\theta^{(2)} = 0.6$, $\mu^{(1)} = 0.2$, $\mu^{(2)} = 0.5$, $c = 1.0$ and $H_{ik}^{(2)} = 0.6$. Function was similar and convergence was obtained more rapidly with $c = 0.5$. Feedback synaptic transmission prevented convergence during learning when $c = 0.367$.)
During recall, modulation of synaptic transmission is removed, and the various input stimuli of the XOR problem are presented to region 1 without the corresponding output pattern. Activity spreads along the self-organized feedforward connections to activate the
specific hidden layer unit responding to that pattern. Activity then spreads back along
feedback connections from that particular unit to activate the desired output units. The
activity in the two regions settles into a final pattern of recall. Figure 5 shows the settled
recall of the network at different stages of learning. It can be seen that the network initially may show little recall activity, or erroneous recall activity, but after several cycles of
learning, the network settles into the proper response to each of the XOR problem states.
Convergence during learning and recall have been obtained with other problems, including recognition of whether on units were on the left or right, symmetry of on units, and
number of on units. In addition, larger scale problems involving multiple feedforward and
feedback layers have been shown to converge.
[Figure 5 grids: activity of the 6 region 1 units and the 4 region 2 units for each of the four patterns (off-off/no, off-on/yes, on-off/yes, on-on/no) across successive learning steps.]
Figure 5 - Output neuronal activity in the network shown at different learning steps. The
four input patterns are shown at top. Below these are degraded patterns presented during
recall, missing the response components of the input pattern. The output of the 6 region 1
units and the 4 region 2 units are shown at each stage of learning. As learning progresses,
gradually one region 2 unit starts to respond selectively to each input pattern, and the correct output unit becomes active in response to the degraded input. Note that as learning
progresses the response to pattern 4 changes gradually from incorrect (yes) to correct (no).
References
Anderson, J.A. (1983) Cognitive and psychological computation with neural models.
IEEE Trans. Systems, Man, Cybern. SMC-13, 799-815.
Carpenter, G.A., Grossberg, S. and Reynolds, J.H. (1991) ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network.
Neural Networks 4: 565-588.
Cauller, L.J. and Connors, B.W. (1994) Synaptic physiology of horizontal afferents to
layer I in slices of rat SI neocortex. J. Neurosci. 14: 751-762.
Dayan, P., Hinton, G.E., Neal, R.M. and Zemel, R.S. (1995) The Helmholtz machine.
Neural Computation.
Gustafsson, B. and Wigstrom, H. (1988) Physiological mechanisms underlying long-term
potentiation. Trends Neurosci. 11: 156-162.
Hasselmo, M.E. (1993) Acetylcholine and learning in a cortical associative memory.
Neural Computation. 5(1}: 32-44.
Hasselmo M.E. and Bower 1.M. (1992) Cholinergic suppression specific to intrinsic not
afferent fiber synapses in rat piriform (olfactory) cortex. 1. Neurophysiol. 67: 1222-1229.
Hasselmo, M.E. and Bower, 1.M. (1993) Acetylcholine and Memory. Trends Neurosci.
26: 218-222.
Hasselmo, M.E. and Cekic, M. (1996) Suppression of synaptic transmission may allow
combination of associative feedback and self-organizing feed forward connections in the
neocortex. Behav. Brain Res. in press.
Hasselmo M.E., Anderson B.P. and Bower 1.M. (1992) Cholinergic modulation of cortical
associative memory function. 1. Neurophysiol. 67: 1230-1246.
Hasselmo M.E. and Schnell, E. (1994) Laminar selectivity of the cholinergic suppression
of synaptic transmission in rat hippocampal region CAl: Computational modeling and
brain slice physiology. 1. Neurosci. 15: 3898-3914.
Hecht-Nielsen, R (1987) Counterpropagation networks. Applied Optics 26: 4979-4984.
Li, H. and Cauller, L.l. (1995) Acetylcholine modulation of excitatory synaptic inputs
from layer I to the superficial layers of rat somatosensory neocortex in vitro. Soc. Neurosci. Abstr. 21: 68.
Linsker, R (1988) Self-organization in a perceptual network. Computer 21: 105-117.
McNaughton B.L. and Morris RG.M. (1987) Hippocampal synaptic enhancement and
information storage within a distributed memory system. Trends in Neurosci. 10:408-415.
Miller, K.D., Keller, 1.B. and Stryker, M.P. (1989) Ocular dominance column development Analysis and simulation. Science 245: 605-615.
van Essen, D.C. and Maunsell, 1.H.R. (1983) Heirarchical organization and functional
streams in the visual cortex. Trends Neurosci. 6: 370-375.
von der Malsburg, C. (1973) Self-organization of orientation sensitive cells in the striate
cortex. Kybemetik 14: 85-100.
Handwritten Word Recognition using Contextual
Hybrid Radial Basis Function Network/Hidden
Markov Models
Bernard Lemarie
La Poste/SRTP
10, Rue de l'Ile-Mabon
F-44063 Nantes Cedex France
lemarie@srtp.srt-poste.fr
Michel Gilloux
La Poste/SRTP
10, Rue de l'Ile-Mabon
F-44063 Nantes Cedex France
gilloux@srtp.srt-poste.fr
Manuel Leroux
La Poste/SRTP
10, Rue de l'Ile-Mabon
F-44063 Nantes Cedex France
leroux@srtp.srt-poste.fr
Abstract
A hybrid and contextual radial basis function network/hidden Markov
model off-line handwritten word recognition system is presented. The
task assigned to the radial basis function networks is the estimation of
emission probabilities associated to Markov states. The model is contextual because the estimation of emission probabilities takes into account
the left context of the current image segment as represented by its predecessor in the sequence. The new system does not outperform the previous system without context but acts differently.
1 INTRODUCTION
Hidden Markov models (HMMs) are now commonly used in off-line recognition of
handwritten words (Chen et al., 1994) (Gilloux et al., 1993) (Gilloux et al., 1995a). In some
of these approaches (Gilloux et al. 1993), word images are transformed into sequences of
image segments through some explicit segmentation procedure. These segments are passed
on to a module which is in charge of estimating the probability for each segment to appear
when the corresponding hidden state is some state s (state emission probabilities). Model
probabilities are generally optimized for the Maximum Likelihood Estimation (MLE) criterion.
MLE training is known to be sub-optimal with respect to discrimination ability when the
underlying model is not the true model for the data. Moreover, estimating the emission
probabilities in regions where examples are sparse is difficult and estimations may not be
accurate. To reduce the risk of over-training, image segments consisting of bitmaps are often replaced by feature vectors of reasonable length (Chen et al., 1994) or even discrete symbols (Gilloux et al., 1993).
765
Handwritten Word Recognition Using HMMlRBF Networks
In a previous paper (Gilloux et al., 1995b) we described a hybrid HMM/radial basis
function system in which emission probabilities are computed from full-fledged bitmaps
through the use of a radial basis function (RBF) neural network. This system demonstrated
better recognition rates than a previous one based on symbolic features (Gilloux et al.,
1995b). Yet, many misclassification examples showed that some of the simplifying assumptions made in HMMs were responsible for a significant part of these errors. In particular, we observed that considering each segment independently from its neighbours would
hurt the accuracy of the model. For example, figure 1 shows examples of letter a when it is
segmented in two parts. The two parts are obviously correlated.
Figure 1: Examples of segmented a.
We propose a new variant of the hybrid HMM/RBF model in which emission probabilities are estimated by taking into account the context of the current segment. The context
will be represented by the preceding image segment in the sequence.
The RBF model was chosen because it was proven to be an efficient model for recognizing isolated digits or letters (Poggio & Girosi, 1990) (Lemarie, 1993). Interestingly
enough, RBFs bear close relationships with gaussian mixtures often used to model emission probabilities in markovian contexts. Their advantage lies in the fact that they do not
directly estimate emission probabilities and thus are less prone to errors in this estimation
in sparse regions. They are also trained through the Mean Square Error (MSE) criterion
which makes them more discriminant.
The idea of using a neural net and in particular a RBF in conjunction with a HMM is not
new. In (Singer & Lippmann, 1992) it was applied to a speech recognition task. The use of
context to improve emission probabilities was proposed in (Bourlard & Morgan, 1993)
with the use of a discrete set of context events. Several neural networks are there used to
estimate various relations between states, context events and current segment. Our point is
to propose a different method without discrete context, based on an adapted decomposition
of the HMM likelihood estimation. This model is next applied to off-line handwritten word
recognition.
The organization of this paper is as follows. Section 2 is an overview of the architecture
of our HMM. Section 3 describes the justification for using RBF outputs in a contextual
hidden Markov model. Section 4 describes the radial basis function network recognizer.
Section 5 reports on an experiment in which the contextual model is applied to the recognition of handwritten words found on French bank or postal cheques.
2 OVERVIEW OF THE HIDDEN MARKOV MODEL
In an HMM model (Bahl et al., 1983), the recognition scores associated to words w are
likelihoods
L(w | i_1 ... i_n) = P(i_1 ... i_n | w) × P(w)
in which the first term in the product encodes the probability with which the model of each
word w generates some image (some sequence of image segments) i_1 ... i_n. In the HMM paradigm, this term is decomposed into a sum over all paths (i.e. sequences of hidden states) of
products of the probability of the hidden path by the probability that the path generated the
image sequence:
P(i_1 ... i_n | w) = Σ_{s_1...s_n} P(s_1 ... s_n | w) × P(i_1 ... i_n | s_1 ... s_n)
B. LEMARIE. M. GILLOUX. M. LEROUX
It is often assumed that only one path contributes significantly to this term, so that
P(i_1 ... i_n | w) ≈ max_{s_1...s_n} P(s_1 ... s_n | w) × P(i_1 ... i_n | s_1 ... s_n)
In HMMs, each sequence element is assumed to depend only on its corresponding state:
P(i_1 ... i_n | s_1 ... s_n) = ∏_{j=1}^{n} p(i_j | s_j)
Moreover, first-order Markov models assume that paths are generated by a first-order
Markov chain so that
P(s_1 ... s_n) = ∏_{j=1}^{n} p(s_j | s_{j-1})
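Under the factorized model and the one-best-path assumption above, the word score can be computed by a Viterbi maximization over transition and emission terms. A minimal log-domain sketch in Python; the toy model used in the test is illustrative only, not taken from the paper:

```python
import math

def viterbi_score(segments, states, log_trans, log_emit, log_init):
    """Best-path log-score max over s_1..s_n of P(s_1..s_n) * P(i_1..i_n | s_1..s_n).

    log_trans[(s_prev, s)] : log p(s | s_prev)
    log_emit[(s, i)]       : log p(i | s)
    log_init[s]            : log p(s_1 = s)
    """
    # best log-score of any partial path ending in each state
    scores = {s: log_init[s] + log_emit[(s, segments[0])] for s in states}
    for i in segments[1:]:
        scores = {
            s: max(scores[sp] + log_trans[(sp, s)] for sp in states)
               + log_emit[(s, i)]
            for s in states
        }
    return max(scores.values())
```

Working in the log domain avoids underflow on long segment sequences; the recursion mirrors the product of p(s_j | s_{j-1}) and p(i_j | s_j) terms above.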
We have reported in previous papers (Gilloux et al., 1993) (Gilloux et al., 1995a) on several handwriting recognition systems based on this assumption. The hidden Markov model
architecture used in all systems has been extensively presented in (Gilloux et al., 1995a).
In that model, word letters are associated to three-states models which are designed to account for the situations where a letter is realized as 0, 1 or 2 segments. Word models are the
result of assembling the corresponding letter models. This architecture is depicted on figure
2. We used here transition emission rather than state emission. However, this does not
change the previous formulas if we replace states by transitions, i.e. pairs of states.
Figure 2: Outline of the model for "laval".
One of these systems was a hybrid RBF/HMM model in which a radial basis function
network was used to estimate emission probabilities p(i_j | s_j). The RBF outputs are introduced by applying Bayes rule in the expression of P(i_1 ... i_n | s_1 ... s_n):
P(i_1 ... i_n | s_1 ... s_n) = ∏_{j=1}^{n} [ p(s_j | i_j) × p(i_j) / p(s_j) ]
Since the product of a priori image segment probabilities p(i_j) does not depend on the
word hypothesis w, we may write:
P(i_1 ... i_n | s_1 ... s_n) ∝ ∏_{j=1}^{n} p(s_j | i_j) / p(s_j)
In the above formula, terms of the form p(s_j | s_{j-1}) are transition probabilities which may
be estimated through the Baum-Welch re-estimation algorithm. Terms of the form p(s_j) are
a priori probabilities of states. Note that for Bayes rule to apply, these probabilities have
and only have to be estimated consistently with terms of the form p(s_j | i_j), since p(i_j | s_j)
is independent of the statistical distribution of states.
It has been proven elsewhere (Richard & Lippmann, 1991) that systems trained through
the MSE criterion tend to approximate Bayes probabilities in the sense that Bayes probabilities are optimal for the MSE criterion. In practice, the way in which a given system
comes close to Bayes optimum is not easily predictable due to various biases of the trained
system (initial parameters, local optimum, architecture of the net, etc.). Thus real output
scores are generally not equal to Bayes probabilities. However, there exist different procedures which act as a post-processor for outputs of a system trained with the MSE and make
them closer to Bayes probabilities (Singer & Lippmann, 1992). Provided that such a postprocessor is used, we will assume that terms p(s_j | i_j) are well estimated by the post-processed outputs of the recognition system. Then, terms p(s_j) are just the a priori probabilities of states on the set used to train the system or post-process the system outputs.
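Putting the pieces together, a fixed path can then be scored by multiplying the transition probabilities with the classifier ratios p(s_j | i_j) / p(s_j) that stand in for the generative emissions. A small sketch under these assumptions (all names and numbers below are hypothetical):

```python
def path_score(posterior_seq, priors, trans, path):
    """Score prod_j p(s_j | s_{j-1}) * p(s_j | i_j) / p(s_j) along a fixed path.

    posterior_seq[j][s] : post-processed classifier output p(s | i_j)
    priors[s]           : a priori state probability p(s)
    trans[(s_prev, s)]  : p(s | s_prev); trans[(None, s)] is p(s_1 = s)
    """
    score, prev = 1.0, None
    for post, s in zip(posterior_seq, path):
        score *= trans[(prev, s)] * post[s] / priors[s]
        prev = s
    return score
```

Note that, as stated above, the priors must be estimated consistently with the data used to train or post-process the classifier outputs.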
This hybrid handwritten word recognition system demonstrated better performances
than previous systems in which word images were represented through sequences of symbolic features instead of full-fledged bitmaps (Gilloux et al., 1995b). However, some recognition errors remained, many of which could be explained by the simplifying assumptions made in the model. In particular, the fact that emission probabilities depend only on
the state corresponding to the current bitmap appeared to be a poor choice. For example,
on figure 3 the third and fourth segment are classified as two halves of the letter i. For letters
Figure 3: An image of trente classified as mille.
segmented in two parts, the second half is naturally correlated to the first (see figure 1). Yet,
our Markov model architecture is designed so that both halves are assumed uncorrelated.
This has two effects. Two consecutive bitmaps which cannot be the two parts of a unique
letter are sometimes recognized as such, as in figure 3. Also, the emission probability of
the second part of a segmented letter is lower than if the first part has been considered for
estimating this probability. The contextual model described in the next section is designed
so as to make a different assumption on emission probabilities.
3 THE HYBRID CONTEXTUAL RBF/HMM MODEL
The exact decomposition of the emission part of word likelihoods is the following:
n
P(i_1 ... i_n | s_1 ... s_n) = P(i_1 | s_1 ... s_n) × ∏_{j=2}^{n} P(i_j | s_1 ... s_n, i_1 ... i_{j-1})
We assume now that bitmaps are conditioned by their state and the previous image in the
sequence:
P(i_1 ... i_n | s_1 ... s_n) = p(i_1 | s_1) × ∏_{j=2}^{n} p(i_j | s_j, i_{j-1})
The RBF is again introduced by applying Bayes rule in the following way:
P(i_1 ... i_n | s_1 ... s_n) = [ p(s_1 | i_1) × p(i_1) / p(s_1) ] × ∏_{j=2}^{n} [ p(s_j | i_j, i_{j-1}) × p(i_j | i_{j-1}) / p(s_j | i_{j-1}) ]
Since terms of the form p(i_j | i_{j-1}) do not contribute to the discrimination of word hypotheses, we may write:
P(i_1 ... i_n | s_1 ... s_n) ∝ [ p(s_1 | i_1) / p(s_1) ] × ∏_{j=2}^{n} [ p(s_j | i_j, i_{j-1}) / p(s_j | i_{j-1}) ]
The RBF has now to estimate not only terms of the form p(s_j | i_j, i_{j-1}) but also terms like p(s_j | i_{j-1}), which are no longer computed by mere counting. Two radial basis function networks are then used to estimate these probabilities. Their common architecture is described in the next section.
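A sketch of how a fixed path could be scored with these contextual ratios, with the two estimators stood in by plain callables; every name and value here is a hypothetical illustration, not the authors' code:

```python
def contextual_score(segments, path, first_net, pair_net, prev_net, priors, trans):
    """Score a path with the contextual decomposition:

      p(s_1 | i_1) / p(s_1)  *  prod_{j>=2} p(s_j | i_j, i_{j-1}) / p(s_j | i_{j-1})

    times the transition probabilities. first_net(i) returns p(. | i_1),
    pair_net(i, i_prev) returns p(. | i_j, i_{j-1}), and prev_net(i_prev)
    returns p(. | i_{j-1}), each as a dict over states.
    """
    s1 = path[0]
    score = trans[(None, s1)] * first_net(segments[0])[s1] / priors[s1]
    for j in range(1, len(segments)):
        s, sp = path[j], path[j - 1]
        ratio = pair_net(segments[j], segments[j - 1])[s] / prev_net(segments[j - 1])[s]
        score *= trans[(sp, s)] * ratio
    return score
```

The denominator network makes the ratio a proper stand-in for the contextual emission term; as noted later in the paper, the score is sensitive to how well the two networks are matched.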
4 THE RADIAL BASIS FUNCTION MODEL
The radial basis function model has been described in (Lemarie, 1993). RBF networks
are inspired from the theory of regularization (Poggio & Girosi, 1990). This theory studies
how multivariate real functions known on a finite set of points may be approximated at
these points in a family of parametric functions under some bias of regularity. It has been
shown that when this bias tends to select smooth functions in the sense that some linear
combination of their derivatives is minimum, there exist an analytical solution which is a
linear combination of gaussians centred on the points where the function is known (Poggio
& Girosi, 1990). It is straightforward to transpose this paradigm to the problem of learning
probability distributions given a set of examples.
In practice, the theory is not tractable since it requires one gaussian per example in the
training set. Empirical methods (Lemarie, 1993) have been developed which reduce the
number of gaussian centres. Since the theory is no longer applicable when the number of
centres is reduced, the parameters of the model (centres and covariance matrices for gaussians, weights for the linear combination) have to be trained by another method, in that case
the gradient descent method and the MSE criterion. Finally, the resulting RBF model may
be looked at like a particular neural network with three layers. The first is the input layer.
The second layer is completely connected to the input layer through connections with unit
weights. The transfer functions of cells in the second layer are gaussians applied to the
weighed distance between the corresponding centres and the weighed input to the cell. The
weights of the distance are analogous to the parameters of a diagonal covariance matrix. Finally, the last layer is completely connected to the second one through weighted connections. Cells in this layer just output the sum of their inputs.
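A minimal forward pass for such a three-layer network, with per-dimension distance weights playing the role of a diagonal inverse covariance; this is a sketch of the architecture described above, not the authors' implementation:

```python
import math

def rbf_forward(x, centres, widths, weights):
    """Three-layer RBF net: input -> gaussian units -> linear output sums.

    centres[k]    : centre of gaussian unit k (same length as x)
    widths[k]     : per-dimension weights on the squared distance
    weights[o][k] : linear weight from gaussian unit k to output o
    """
    hidden = []
    for c, w in zip(centres, widths):
        d2 = sum(wi * (xi - ci) ** 2 for xi, ci, wi in zip(x, c, w))
        hidden.append(math.exp(-d2))  # gaussian response of one hidden unit
    return [sum(wk * h for wk, h in zip(row, hidden)) for row in weights]
```

In the paper the centres, widths and output weights are trained by gradient descent under the MSE criterion once the number of centres has been reduced.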
In our experiments, inputs to the RBF are feature vectors of length 138 computed from
the bitmaps of a word segment (Lemarie, 1993). The RBF that estimates terms of the form
p(s_j | i_j, i_{j-1}) uses two such vectors as input, whereas the second RBF (terms
p(s_j | i_{j-1})) is only fed with the vector associated to i_{j-1}. These vectors are inspired from
"characteristic loci" methods (Gluksman, 1967) and encode the proportion of white pixels
from which a bitmap border can be reached without meeting any black pixel in various
directions.
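The characteristic-loci idea can be illustrated on a binary bitmap: for each white pixel, test which axis directions reach the border without crossing a black pixel, and accumulate proportions. This simplified 4-direction version only illustrates the principle; it is not the authors' 138-dimensional encoding:

```python
def loci_features(bitmap):
    """For each white pixel (0), test whether each of the 4 axis directions
    reaches the bitmap border without crossing a black pixel (1); return
    the proportion of white pixels for which each direction is free."""
    h, w = len(bitmap), len(bitmap[0])
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    counts, whites = [0, 0, 0, 0], 0
    for r in range(h):
        for c in range(w):
            if bitmap[r][c]:
                continue
            whites += 1
            for k, (dr, dc) in enumerate(dirs):
                rr, cc = r + dr, c + dc
                free = True
                while 0 <= rr < h and 0 <= cc < w:
                    if bitmap[rr][cc]:
                        free = False
                        break
                    rr, cc = rr + dr, cc + dc
                counts[k] += free
    return [n / whites for n in counts] if whites else [0.0] * 4
```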
5 EXPERIMENTS
The model has been assessed by applying it to the recognition of words appearing in legal amounts of french postal or bank cheques. The size of the vocabulary is 30 and its perplexity is only 14.3 (Bahl et aI., 1983). The training and test bases are made of images of
amount words written by unknown writers on real cheques. We used 7 191 images during
training and 2 879 different images for test. The image resolution was 300 dpi. The amounts
were manually segmented into words and an automatic procedure was used to separate the
words from the preprinted lines of the cheque form.
The training was conducted by using the results of the former hybrid system. The segmentation module was kept unchanged. There are 48 140 segments in the training set and
19577 in the test set. We assumed that the base system is almost always correct when aligning segments onto letter models. We thus used this alignment to label all the segments in
the training set and took these labels as the desired outputs for the RBF. We used a set of
63 different labels since 21 letters appear in the amount vocabulary and 3 types of segments
are possible for each letter. The outputs of the RBF are directly interpreted as Bayes probabilities without further post-processing.
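The 63-way labeling can be pictured as indexing (letter, segment type) pairs: 21 letters × 3 segment types. The letter ordering and type names below are hypothetical placeholders, used only to illustrate the scheme:

```python
LETTERS = list("abcdefghijklmnopqrstu")          # stand-in for the 21 vocabulary letters
TYPES = ["whole", "first_half", "second_half"]   # stand-in for the 3 segment types

def label_index(letter, seg_type):
    """Map a (letter, segment type) pair to one of 21 * 3 = 63 class labels."""
    return LETTERS.index(letter) * len(TYPES) + TYPES.index(seg_type)
```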
First of all, we assessed the quality of the system by evaluating its ability to recognize
the class of a segment through the value of p(s_j | i_j, i_{j-1}) and compared it with that of
the previous hybrid system. The results of this experiment are reported in table 1 for the
test set. They demonstrate the importance of the context and thus its potential interest for a
word recognition system.

Table 1: Recognition and confusion rates for segment classifiers

                       RBF system without context   RBF system with context
Recognition rate       32.6%                        41.7%
Confusion rate         67.4%                        58.3%
Mean square error      0.828                        0.739
We next compare the performance on word recognition on the database of 2878 images
of words. Results are shown in table 2.

Table 2: Recognition and confusion rates for the word recognition systems

                       RBF system without context   RBF system with context
Recognition rate       81.3%                        76.3%
Confusion rate         16.7%                        23.7%
# Confusions           536                          683

The first remark is that the system without context
presents better results than the contextual system. Some of the differences between the systems with and without context are shown below in figures 4 and 5 and may explain why the
contextual system remains at a lower level of performance. The words "huit" and "deux" of
figure 4 are well recognized by the system without context but badly identified by the contextual system respectively as "trente" and "franc". The image of the word "huit", for example, is segmented into eight segments and each of the four letters of the word is thus necessarily considered as separated in two parts. The fifth and sixth segments are thus
recognized as two halves of the letter "i" by the standard system while the contextual system avoids this decomposition of the letter "i". On the next image, the contextual system
proposes "ra" for the second and third segments mainly because of the absence of information on the relative position ofthese segments. On the other hand, figure 5 shows examples
where the contextual system outperforms the system without context. In the first case the
latter proposed the class "trois" with two halves on the letter "i" on the fifth and sixth segments. In the second case the context is clearly useful for the recognition on the first letter
of the word. Forthcoming experiments will try to combine the two systems so as to benefit
of their respective characteristics.
Figure 4: Some new confusions produced by the contextual system.
Experiments have also revealed that the contextual system remains very sensitive to the
numerical output values of the network which estimates p(s_j | i_{j-1}). Several approaches
for solving this problem are currently under investigation. First results have been obtained by trying to approximate the network which estimates p(s_j | i_{j-1}) from the network
which estimates p(s_j | i_j, i_{j-1}).
6 CONCLUSION
We have described a new application of a hybrid radial basis function/hidden Markov
model architecture to the recognition of off-line handwritten words. In this architecture, the
estimation of emission probabilities is assigned to a discriminant classifier. The estimation
of emission probabilities is enhanced by taking into account the context as represented by
the previous bitmap in the sequence to be classified. A formula has been derived introducing this context into the estimation of the word likelihood scores. The ratios of the output
values of two networks are now used to estimate the likelihood.
The reported experiments reveal that the use of context, if profitable at the segment recognition level, is not yet useful at the word recognition level. Nevertheless, the new system
acts differently from the previous system without context and future applications will try
to exploit this difference. The dynamics of the ratio of the two networks' output values are also very unstable, and some solutions to stabilize them will be thoroughly tested in forthcoming experiments.
References
Bahl, L., Jelinek, F., Mercer, R. (1983). A maximum likelihood approach to speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 5(2): 179-190.
Bahl, L.R., Brown, P.F., de Souza, P.V., Mercer, R.L. (1986). Maximum mutual information estimation of hidden Markov model parameters for speech recognition. In: Proc. of the Int.
Conf. on Acoustics, Speech, and Signal Processing (ICASSP'86): 49-52.
Bourlard, H., Morgan, N., (1993). Continuous speech recognition by connectionist statistical methods, IEEE Trans. on Neural Networks, vol. 4, no. 6, pp. 893-909, 1993.
Chen, M.-Y., Kundu, A., Zhou, J., (1994). Off-line handwritten word recognition using a
hidden Markov model type stochastic network, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 16, no. 5:481-496.
Gilloux, M., Leroux, M., Bertille, J.-M., (1993). Strategies for handwritten words recognition using hidden Markov models, Proc. of the 2nd Int. Conf. on Document Analysis and
Recognition:299-304.
Gilloux, M., Leroux, M., Bertille, J.-M., (1995a). "Strategies for Cursive Script Recognition Using Hidden Markov Models", Machine Vision & Applications, Special issue on
Handwriting recognition, R Plamondon ed., accepted for publication.
Gilloux, M., Lemarie, B., Leroux, M., (1995b). "A Hybrid Radial Basis Function Network/
Hidden Markov Model Handwritten Word Recognition System", Proc. of the 3rd Int. Conf.
on Document Analysis and Recognition:394-397.
Gluksman, H.A., (1967). Classification of mixed font alphabetics by characteristic loci, 1st
Annual IEEE Computer Conf.: 138-141.
Lemarie, B., (1993). Practical implementation of a radial basis function network for handwritten digit recognition, Proc. of the 2nd Int. Conf. on Document Analysis and Recognition:412-415.
Poggio, T., Girosi, F., (1990). Networks for approximation and learning, Proc. of the IEEE,
vol 78, no 9.
Richard, M.D., Lippmann, R.P. (1991). "Neural network classifiers estimate Bayesian a
posteriori probabilities", Neural Computation, 3: 461-483.
Singer, E., Lippmann, R.P. (1992). A speech recognizer using radial basis function networks in an HMM framework, Proc. of the Int. Conf. on Acoustics, Speech, and Signal
Processing.
Human Reading and the Curse of
Dimensionality
Gale L. Martin
MCC Austin, TX 78613 galem@mcc.com
Abstract
Whereas optical character recognition (OCR) systems learn to classify single characters; people learn to classify long character strings
in parallel, within a single fixation. This difference is surprising
because high dimensionality is associated with poor classification
learning. This paper suggests that the human reading system
avoids these problems because the number of to-be-classified images is reduced by consistent and optimal eye fixation positions,
and by character sequence regularities.
An interesting difference exists between human reading and optical character recognition (OCR) systems. The input/output dimensionality of character classification
in human reading is much greater than that for OCR systems (see Figure 1). OCR
systems classify one character at a time, while the human reading system classifies
as many as 8-13 characters per eye fixation (Rayner, 1979) and within a fixation,
character category and sequence information is extracted in parallel (Blanchard,
McConkie, Zola, and Wolverton, 1984; Reicher, 1969).
[Figure 1 contrasts the two regimes: an OCR system (low dimensionality) maps an image of a single character to a single label ("D", "O", "R"), while the human reading system (high dimensionality) maps a whole fixation image to a string of labels ("DOROTHY LI", "LIVED IN THE", "MIDST OF THE").]
Figure 1: Character classification versus character sequence classification.
This is an interesting difference because high dimensionality is associated with poor
classification learning, the so-called curse of dimensionality (Denker et al., 1987;
Geman, Bienenstock, & Doursat, 1992). OCR systems are designed to classify
single characters to minimize such problems. The fact that most people learn to read
quite well even with the high dimensional inputs and outputs, implies that variance
is somehow lowered in this domain, thereby making accurate classification learning
possible. The present paper reports on simulations of parallel character classification
which suggest that variance is lowered through regularities in eye fixation positions
and in character sequences making up valid words.
1 Training and Testing Materials
Training and testing materials were drawn from the story The Wonderful Wizard
of Oz by L. Frank Baum. Images of text lines were created from 120 pages of text
(about 160,000 characters, 33,000 total words, or 2,600 different words), which were
divided into 6 different font and case conditions of 20 pages each. Three different
fonts (variable and constant-width fonts), and two different cases (all upper-case or
mixed-case characters) were used. Text line images were normalized with respect
to height, but not width. All training and test sets contained an equal mix of the
six font/case conditions. Two generalization sets were used, for test and cross-validation, and each consisted of about 14,000 characters.
[Figure 2 shows the sentence "Dorothy lived in the midst of the great Kansas Prairies." rendered in the six font/case conditions: three fonts, each in mixed case and in all upper case.]
Figure 2: Samples of the type font and case conditions used in the simulations
2 Network Architectures
The simulations used backpropagation networks (Rumelhart, Hinton & Williams,
1986) that extended the local receptive field , shared-weight architecture used in
many character-based OCR neural networks (LeCun, et aI, 1989; Martin & Pittman,
1991) . In the previous single character-based approach, the input to the net is an
image of a single character. The output is a vector representing the category of the
character. Hidden nodes have local receptive fields that receive input from a spatially local region (e.g., a 6x6 area) in the preceding layer. Groups of hidden nodes
share their weights. Corresponding weights in each receptive field are initialized to
the same value and updated by the same value. Different hidden nodes within a
group learn to detect the same feature at different locations. A group is depicted
as hidden nodes within a single plane of a cube that corresponds to a hidden layer.
Different groups occupy different planes in the cube, and learn to detect different
features. This architecture biases learning by reducing the number of free parameters available for representing a function. The fact that these nets usually train and
generalize well in this domain, and that the local feature detectors that emerge are
similar to the oriented-edge and -line detectors found in mammalian visual cortex
(Hubel & Wiesel, 1979), suggests that the bias is at least roughly appropriate.
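The shared-weight, local-receptive-field idea can be sketched in a few lines. This is an illustrative toy (NumPy; array sizes and names are hypothetical), not the paper's implementation: one 6x6 kernel, applied at every input position, plays the role of a whole group of hidden nodes with tied weights.

```python
import numpy as np

def shared_weight_feature_map(image, kernel):
    """Slide one shared kernel over the image.

    Every hidden node in the group applies the *same* weights at a
    different location, so the group detects one feature everywhere.
    """
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(20, 20)   # one 20x20 input window
kernel = np.random.rand(6, 6)    # one shared 6x6 receptive field
fmap = shared_weight_feature_map(image, kernel)
print(fmap.shape)  # (15, 15): one response per receptive-field position
```

Stacking several such feature maps, one kernel per group, corresponds to the different "planes" of the hidden-layer cube: each plane learns to detect a different feature.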
The extension of this character network to a character-sequence network is illustrated in Figure 3, where n (number of to-be-classified characters) is equal to 4.
Each output node represents a character category (e.g., "D") in one of the n ordinal positions (e.g., "first character on the left"). The size of the input window
is expanded horizontally to cover at least the n widest characters ( "WWWW") .
When the character string is made up of relatively narrow characters, more than n
characters will appear in the input window and the network must learn to ignore
them. Increasing input/output dimensionality is accomplished by expanding the
number of hidden nodes horizontally. Network capacity is described by the depth
of each hidden layer (the number of different features detected), as well as by the
width of each hidden layer (the spatial coverage of the network) .
The network is potentially sensitive to both local and global visual information.
Local receptive fields build in a sensitivity to letter features. Shared weights make
learning transfer possible across representations of the same character at different
positions. Output nodes are globally connected to all the nodes in the second hidden
layer, but not with one another or with any word-level representations. Networks
were trained until the training set accuracy failed to improve by at least .1% over 5
epochs, or overfitting became evident from periodic testing with the generalization
test set.
[Figure 3 shows rows of output nodes, one full alphabet ("ABCDEFGHIJKLMNOPQRSTUVWXYZ") per ordinal position, above local, shared-weight receptive fields over the input window.]
Figure 3: Net architecture for parallel character sequence classification, n=4 chars.
3 Effects of Dimensionality on Training Difficulty and Generalization
Experiment 1 provides a baseline measure of the impact of dimensionality. Increases
in dimensionality result in exponential increases in the number of input and output
patterns and the number of mapping functions . As a result, training problems arise
due to limitations in network capacity or search scope. Generalization problems
arise because it becomes impractical to use training sets large enough to obtain a
good estimate of the underlying function. Four different levels of dimensionality
were used (see Figure 4), from an input window of 20x20 pixels with 1 to-be-classified character, to an 80x20 window with 4 to-be-classified characters. Input
patterns were generated by starting the window at the left edge of the text line such
that the first character was centered 10 pixels from the left of the window, and then
successively scanning across the text line at each character position . Five training
set sizes were used (about 700 samples to 50,000). Two relative network capacities
were used (15 and 18 different feature detectors per hidden layer). Forty different
[Figure 4 shows the four conditions, from low to high dimensionality: a 20x20 window with 1 character ("D"), 40x20 with 2 characters ("DO"), 60x20 with 3 characters ("DOR"), and 80x20 with 4 characters ("DORO").]
Figure 4: Four levels of input/output dimensionality used in the experiment.
networks were trained, one for each combination of dimensionality, training set size
and relative network capacity (4x5x2). Training difficulty is described by asymptotic
accuracy achieved on the training set and by amount of training required to reach
the asymptote. Generalization is reported for both the test set (used to check for
overfitting) and the cross-validation set. The results (see Figure 5) are consistent
[Figure 5 plots, for lower- and higher-capacity nets and for 1-4 to-be-classified characters: (a, b) asymptotic accuracy on the training set, (c, d) amount of training required to reach asymptote, (e, f) generalization accuracy on the test set, and (g, h) generalization accuracy on the validation set, each as a function of training set size.]
Figure 5: Impact of dimensionality on training and generalization.
with expectations. Increasing dimensionality results in increased training difficulty
and lower generalization. Since the problems associated with high dimensionality
occur in both training and test sets, and seem to be alleviated somewhat in the
high capacity nets, they are presumably due to both capacity/search limitations
and insufficient sample size.
4 Regularities in Window Positioning
One way human reading might reduce the problems associated with high dimensionality is to constrain eye fixation positions during reading; thereby reducing the
number of different input images the system must learn to classify. Eye movement
studies suggest that, although fixation positions within words do vary, there are
consistencies (Rayner, 1979). Moreover, the particular locations fixated, slightly to
the left of the middle of words, appear to be optimal. People are most efficient at
recognizing words at these locations (O'Regan & Jacobs, 1992). These fixation positions reduce variance by reducing the average variability in the positions of ordered
characters within a word. Position variability increases as a function of distance
from the fixated character. The average distance of characters within a word is
minimized when the fixation position is toward the center of a word, as compared
to when it is at the beginning or end of a word.
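The claim that a central fixation minimizes the average distance to the characters of a word can be checked with one line of arithmetic. A small illustrative sketch (the word length is hypothetical, not from the paper):

```python
def mean_char_distance(word_len, fixation):
    """Average |position - fixation| over character positions 1..word_len."""
    return sum(abs(p - fixation) for p in range(1, word_len + 1)) / word_len

# For a 7-letter word, fixating the middle (position 4) beats either end:
for p in (1, 4, 7):
    print(p, mean_char_distance(7, p))
# The middle position yields the smallest average distance.
```

The same computation for any word length shows the minimum is always at the middle position, consistent with the variance argument above.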
Experiment 2 simulated consistent and optimal positioning with an 80x20 input
window fixated on the 3rd character. Only words of 3 or more characters were
fixated (see Figure 6) . The network learned to classify the first 4 characters in the
word. This condition was compared to a consistent positioning only condition, in
which the input window was fixated on the first character of a word. Two control
conditions were also examined. They were replications of the 20x20-1 Character and
the 80x20-4 Character conditions of Experiment 1, except that in the first case, the
network was trained and tested only on the first 4 characters in each word and in
the second case, the network was trained as before but was tested with the window
fixated on the first character of the word. Four levels of training set size were used
and three replications of each training set size x window condition were run (4 x 4
x 3 = 48 networks trained and tested). All networks employed 18 different feature
detectors for each hidden layer. The results (see Figure 7) support the idea that
[Figure 6 shows the four conditions: Consistent & Optimal (80x20, 4 chars), Consistent Only (80x20, 4 chars), High Dim. Control (80x20, 4 chars), and Low Dim. Control (20x20, 1 char).]
Figure 6: Window positioning and dimensionality manipulations in Experiment 2
consistent and optimal positioning reduces variance, as indicated by reductions
in training difficulties and improved generalization. The consistent and optimal
positioning networks achieved training and generalization results superior to the
high dimensionality control condition, and equivalent to, or better than those for
the low dimensionality control. They were also slightly better than the consistent
positioning only nets.
5 Character Sequence Regularities
Since only certain character sequences are allowed in words, character sequence
regularities in words may also reduce the number of distinct images the system
must learn to classify. The system may also reduce variance by optimizing accuracy on highest frequency words. These hypotheses were tested by determining
whether or not the three consistent and optimal positioning networks trained on
the largest training set in Experiment 2, were more accurate in classifying high frequency words, as compared to low frequency words; and more accurate in classifying
words as compared to pronounceable non-words or random character strings. The
control condition used the networks trained in the low dimensional control (20x20
-1 Character) condition from Experiment 2. Human reading exhibits increased efficiency/accuracy in classifying high-frequency as compared to low-frequency words
[Figure 7 plots training difficulty (asymptotic accuracy on the training set and amount of training to reach asymptote) and generalization accuracy (test and validation sets) against training set size, for the Consistent & Optimal, Consistent Only, 20x20 control, and 80x20 control conditions.]
Figure 7: Impact of consistent & optimal window positions.
(Howes & Solomon, 1951; Solomon & Postman, 1952), and in classifying characters in words as compared to pronounceable non-words or random character strings
(Baron & Thurston, 1973; Reicher, 1969). Experiment 3 involved creating a list
of 30 4-letter words drawn from the Oz text, of which 15 occurred very frequently
in the text (e.g., SAID), and 15 occurred infrequently (e.g., PAID), and creating
a list of 30 4-letter pronounceable non-words (e.g., TOLD) and a list of 30 4-letter
random strings (e.g., SDIA). Each string was reproduced in each of the 6 font/case
conditions and labeled to create a test set. One further condition involved creating a
version of the word list in which the cases of the characters aLtErNaTeD. Psychologists used this manipulation to demonstrate that the advantages in processing words
can not simply be due to the use of word-shape feature detectors, since the word
advantage carries over to the alternating case condition, which destroys word-level
features (McClelland, 1976).
Consistent with human reading (see Figure 8), the character-sequence-based networks were most accurate on high frequency words and least accurate for low frequency words. The character-sequence-based networks also showed a progressive
decline in accuracy as the character string became less word-like. The advantage
for word-like strings can not be due to the use of word shape feature detectors
because accuracy on aLtErNaTiNg case words, where word shape is unfamiliar,
remains quite high .
[Figure 8 plots accuracy for the character-sequence-based Consistent & Optimal networks and the 20x20 single-character control condition: a word frequency effect (high vs. low frequency words) and a word superiority effect (words, pronounceable non-words, random strings, and aLtErNaTiNg-case words).]
Figure 8: Sensitivity to word frequency and character sequence regularities
The present results raise questions about the role played by high dimensionality
in determining reading disabilities and difficulties. Reading difficulties have been
associated with reduced perceptual spans (Rayner, 1986; Rayner, et al., 1989), and
with irregular eye fixation patterns (Rayner & Pollatsek, 1989). This suggests that
some reading difficulties and disorders may be related to problems in generating the
precise eye movements necessary to maintain consistent and optimal eye fixations.
More generally, these results highlight the importance of considering the role of
character classification in learning to read, particularly since content factors, such
as word frequency, appear to influence even low-level classification operations.
References
Blanchard, H., McConkie, G., Zola, D., & Wolverton, G. (1984) Time course of
visual information utilization during fixations in reading. Jour. of Exp. Psych.:
Human Perc. & Perf., 10, 75-89.
Denker, J., Schwartz, D., Wittner, B., Solla, S., Howard, R., Jackel, L., & Hopfield,
J. (1987) Large automatic learning, rule extraction and generalization, Complex
Systems, 1, 877-933.
Geman, S., Bienenstock, E., and Doursat, R. (1992) Neural networks and the
bias/variance dilemma. Neural Computation, 4, 1-58.
Howes, D. and Solomon, R. L. (1951) Visual duration threshold as a function of
word probability. Journal of Exp. Psych., 41, 401-410.
Hubel, D. & Wiesel, T. (1979) Brain mechanisms of vision. Sci. Amer., 241, 150-162.
LeCun, Y., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W., &
Jackel, L. (1990) Handwritten digit recognition with a backpropagation network.
In Adv. in Neural Inf Proc. Sys. 2, D. Touretzky (Ed) Morgan Kaufmann.
Martin, G . L. & Pittman, J. A. (1991) Recognizing hand-printed letters and digits
using backpropagation learning. Neural Computation, 3, 258-267.
McClelland, J. L. (1976) Preliminary letter identification in the perception of words
and nonwords. Jour. of Exp. Psych.: Human Perc. & Perf., 2, 80-91.
O'Regan, J. & Jacobs, A.(1992) Optimal viewing position effect in word recognition.
Jour. of Exp. Psych.: Human Perc. & Perf., 18, 185-197.
Rayner, K. (1986) Eye movements and the perceptual span in beginning and skilled
readers. Jour. of Exp. Child Psych., 41, 211-236.
Rayner, K. (1979) Eye guidance in reading. Perception, 8, 21-30.
Rayner, K., Murphy, L., Henderson, J. & Pollatsek, A. (1989) Selective attentional
dyslexia. Cognitive Neuropsych., 6, 357-378.
Rayner, K. & Pollatsek, A. (1989) The Psychology of reading. Prentice Hall
Reicher, G. (1969) Perceptual recognition as a function of meaningfulness of stimulus material. Jour. of Exp. Psych., 81, 274-280.
Rumelhart, D., Hinton, G., and Williams, R. (1986) Learning internal representations by error propagation. In D. Rumelhart and J. McClelland, (Eds) Parallel
Distributed Processing, 1. MIT Press.
Solomon, R. & Postman, L. (1952) Frequency of usage as a determinant of recognition thresholds for words. Jour. of Exp. Psych., 43, 195-210.
Sample Complexity for Learning
Recurrent Perceptron Mappings
Bhaskar Dasgupta
Department of Computer Science
University of Waterloo
Waterloo, Ontario N2L 3G 1
CANADA
Eduardo D. Sontag
Department of Mathematics
Rutgers University
New Brunswick, NJ 08903
USA
bdasgupt@daisy.uwaterloo.ca
sontag@control.rutgers.edu
Abstract
Recurrent perceptron classifiers generalize the classical perceptron
model. They take into account those correlations and dependences
among input coordinates which arise from linear digital filtering.
This paper provides tight bounds on sample complexity associated
to the fitting of such models to experimental data.
1 Introduction
One of the most popular approaches to binary pattern classification, underlying
many statistical techniques, is based on perceptrons or linear discriminants; see
for instance the classical reference (Duda and Hart, 1973). In this context, one is
interested in classifying k-dimensional input patterns
v = (v1, ..., vk)
into two disjoint classes A+ and A−. A perceptron P which classifies vectors into
A+ and A− is characterized by a vector (of "weights") c ∈ R^k, and operates as
follows. One forms the inner product
c · v = c1v1 + ... + ckvk.
If this inner product is positive, v is classified into A+, otherwise into A−.
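As a minimal illustrative sketch (not from the paper), the classification rule is just the sign of an inner product:

```python
def perceptron_classify(c, v):
    """Classify v into A+ (+1) if c.v > 0, else into A- (-1)."""
    s = sum(ci * vi for ci, vi in zip(c, v))
    return 1 if s > 0 else -1

print(perceptron_classify([1.0, -2.0], [3.0, 1.0]))  # 1   (3 - 2 = 1 > 0)
print(perceptron_classify([1.0, -2.0], [1.0, 1.0]))  # -1  (1 - 2 = -1 <= 0)
```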
In signal processing and control applications, the size k of the input vectors v is
typically very large, so the number of samples needed in order to accurately "learn"
an appropriate classifying perceptron is in principle very large. On the other hand,
in such applications the classes A+ and A− often can be separated by means of a
dynamical system of fairly small dimensionality. The existence of such a dynamical
system reflects the fact that the signals of interest exhibit context dependence and
correlations, and this prior information can help in narrowing down the search for a
classifier. Various dynamical system models for classification appear from instance
when learning finite automata and languages (Giles et al., 1990) and in signal
processing as a channel equalization problem (at least in the simplest 2-level case)
when modeling linear channels transmitting digital data from a quantized source,
e.g. (Baksho et al., 1991) and (Pulford et al., 1991).
When dealing with linear dynamical classifiers, the inner product c. v represents
a convolution by a separating vector c that is the impulse-response of a recursive
digital filter of some order n ≪ k. Equivalently, one assumes that the data can
be classified using a c that is n-recursive, meaning that there exist real numbers
r1, ..., rn so that

c_j = Σ_{i=1}^{n} r_i c_{j−i},   j = n+1, ..., k.
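A small sketch of the n-recursive condition (names are illustrative): given r_1, ..., r_n, every later coefficient is determined by the fixed linear recursion c_j = r_1 c_{j-1} + ... + r_n c_{j-n} on the previous n coefficients.

```python
def extend_n_recursive(c_init, r, k):
    """Extend the first n coefficients c_init to length k using
    c_j = r_1*c_{j-1} + ... + r_n*c_{j-n} (the n-recursive rule)."""
    n = len(r)
    assert len(c_init) == n
    c = list(c_init)
    while len(c) < k:
        c.append(sum(r[i] * c[-1 - i] for i in range(n)))
    return c

# A 1-recursive sequence with r = (2,) is just a geometric sequence:
print(extend_n_recursive([1.0], [2.0], 5))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

With n = 2 and r = (1, 1) the rule yields the Fibonacci sequence, showing how few parameters determine a long vector c.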
Seen in this context, the usual perceptrons are nothing more than the very special
subclass of "finite impulse response" systems (all poles at zero); thus it is appropriate to call the more general class "recurrent" or "IIR (infinite impulse response)"
perceptrons. Some authors, particularly Back and Tsoi (Back and Tsoi, 1991; Back
and Tsoi, 1995) have introduced these ideas in the neural network literature. There
is also related work in control theory dealing with such classifying, or more generally
quantized-output, linear systems; see (Delchamps, 1989; Koplon and Sontag, 1993).
The problem that we consider in this paper is: if one assumes that there is an
n-recursive vector c that serves to classify the data, and one knows n but not
the particular vector, how many labeled samples v(i) are needed so as to be able
to reliably estimate c? More specifically, we want to be able to guarantee that
any classifying vector consistent with the seen data will classify "correctly with
high probability" the unseen data as well. This is done by computing the VC
dimension of the related concept class and then applying well-known results from
computational learning theory.
Very roughly speaking, the main result is that
the number of samples needed is proportional to the logarithm of the length k (as
opposed to k itself, as would be the case if one did not take advantage of the recurrent
structure). Another application of our results, again by appealing to the literature
from computational learning theory, is to the case of "noisy" measurements or more
generally data not exactly classifiable in this way; for example, our estimates show
roughly that if one succeeds in classifying 95% of a data set of size log q, then with
confidence ≈ 1 one is assured that the prediction error rate will be < 90% on future
(unlabeled) samples.
Section 5 contains a result on polynomial-time learnability: for n constant, the
class of concepts introduced here is PAC learnable. Generalizations to the learning
of real-valued (as opposed to Boolean) functions are discussed in Section 6. For
reasons of space we omit many proofs; the complete paper is available by electronic
mail from the authors.
2 Definitions and Statements of Main Results
Given a set 𝒳, and a subset X of 𝒳, a dichotomy on X is a function
δ : X → {−1, 1}.
Assume given a class F of functions 𝒳 → {−1, 1}, to be called the class of classifier
functions. The subset X ⊆ 𝒳 is shattered by F if each dichotomy on X is the
restriction to X of some φ ∈ F. The Vapnik–Chervonenkis dimension vc(F) is the
supremum (possibly infinite) of the set of integers κ for which there is some subset
X ⊆ 𝒳 of cardinality κ which can be shattered by F. Due to space limitations,
we omit any discussion regarding the relevance of the VC dimension to learning
problems; the reader is referred to the excellent surveys in (Maass, 1994; Turán,
1994) regarding this issue.
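Shattering can be checked by brute force on tiny examples. The sketch below is a generic illustration (not from the paper, and exponential in the number of points): 1-D threshold classifiers x ↦ sign(x − t) shatter any single point but no two-point set, so their VC dimension is 1.

```python
from itertools import product

def shatters(classifiers, points):
    """True iff every +/-1 labeling of `points` is realized by some classifier."""
    realized = {tuple(f(x) for x in points) for f in classifiers}
    return all(d in realized for d in product((-1, 1), repeat=len(points)))

def threshold(t):
    # x -> sign(x - t), with sign(0) = -1 as in the paper's convention
    return lambda x: 1 if x > t else -1

# Enough thresholds to enumerate every behavior on the points below.
classifiers = [threshold(t) for t in (-10.0, 0.5, 1.5, 10.0)]
print(shatters(classifiers, [1.0]))       # True
print(shatters(classifiers, [1.0, 2.0]))  # False: no threshold maps 1 -> +1, 2 -> -1
```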
Pick any two integers n > 0 and q ≥ 0. A sequence
A sequence
c = (c1, ..., c_{n+q}) ∈ R^{n+q}
is said to be n-recursive if there exist real numbers r1, ..., rn so that

c_{n+j} = Σ_{i=1}^{n} r_i c_{n+j−i},   j = 1, ..., q.
(In particular, every sequence of length n is n-recursive, but the interesting cases
are those in which q ≠ 0, and in fact q ≫ n.) Given such an n-recursive sequence
c, we may consider its associated perceptron classifier. This is the map

φ_c : R^{n+q} → {−1, 1} : (x1, ..., x_{n+q}) ↦ sign( Σ_{i=1}^{n+q} c_i x_i ),
where the sign function is understood to be defined by sign(z) = −1 if z ≤ 0 and
sign(z) = 1 otherwise. (Changing the definition at zero to be +1 would not change
the results to be presented in any way.) We now introduce, for each two fixed n, q
as above, a class of functions:
F_{n,q} := { φ_c | c ∈ R^{n+q} is n-recursive }.
This is understood as a function class with respect to the input space 𝒳 = R^{n+q},
and we are interested in estimating vc(F_{n,q}).
Our main result will be as follows (all logs in base 2):
Theorem 1
max{ n, n⌊log(⌊1 + q/n⌋)⌋ }  ≤  vc(F_{n,q})  ≤  min{ n + q, 18n + 4n log(q + 1) }
Note that, in particular, when q > max{2 + n², 32}, one has the tight estimates
(n/2) log q ≤ vc(F_{n,q}) ≤ 8n log q.
The organization of the rest of the paper is as follows. In Section 3 we state an
abstract result on VC-dimension, which is then used in Section 4 to prove Theorem 1. Finally, Section 6 deals with bounds on the sample complexity needed for
identification of linear dynamical systems, that is to say, the real-valued functions
obtained when not taking "signs" when defining the maps φ_c.
3 An Abstract Result on VC Dimension
Assume that we are given two sets 𝒳 and A, to be called in this context the set of
inputs and the set of parameter values respectively. Suppose that we are also given
a function
F : A × 𝒳 → {−1, 1}.
Associated to this data is the class of functions
F := { F(λ, ·) : 𝒳 → {−1, 1} | λ ∈ A }
obtained by considering F as a function of the inputs alone, one such function for
each possible parameter value λ. Note that, given the same data one could, dually,
study the class
F* := { F(·, ξ) : A → {−1, 1} | ξ ∈ 𝒳 }
which obtains by fixing the elements of 𝒳 and thinking of the parameters as inputs.
It is well-known (and in any case, a consequence of the more general result to be
presented below) that vc(F) ≥ ⌊log(vc(F*))⌋, which provides a lower bound on
vc(F) in terms of the "dual VC dimension." A sharper estimate is possible when
A can be written as a product of n sets

A = A_1 × A_2 × · · · × A_n        (1)
and that is the topic which we develop next.
We assume from now on that a decomposition of the form in Equation (1) is given,
and will define a variation of the dual VC dimension by asking that only certain
dichotomies on A be obtained from F*. We define these dichotomies only on "rectangular" subsets of A, that is, sets of the form

L = L_1 × · · · × L_n ⊆ A

with each L_i ⊆ A_i a nonempty subset. Given any index 1 ≤ κ ≤ n, by a κ-axis
dichotomy on such a subset L we mean any function δ : L → {−1, 1} which depends
only on the κth coordinate, that is, there is some function φ : L_κ → {−1, 1} so that
δ(λ_1, ..., λ_n) = φ(λ_κ) for all (λ_1, ..., λ_n) ∈ L; an axis dichotomy is a map that
is a κ-axis dichotomy for some κ. A rectangular set L will be said to be axis-shattered if every axis dichotomy is the restriction to L of some function of the form
F(·, ξ) : A → {−1, 1}, for some ξ ∈ 𝒳.
Theorem 2  If L = L₁ × ⋯ × Lₙ ⊆ Λ can be axis-shattered and each set Lᵢ has cardinality rᵢ, then vc(F) ≥ ⌊log(r₁)⌋ + ⋯ + ⌊log(rₙ)⌋. (In the special case n = 1 one recovers the classical result vc(F) ≥ ⌊log(vc(F*))⌋.)

The proof of Theorem 2 is omitted due to space limitations.

4  Proof of Main Result
We recall the following result; it was proved, using Milnor–Warren bounds on the number of connected components of semi-algebraic sets, by Goldberg and Jerrum:

Fact 4.1 (Goldberg and Jerrum, 1995)  Assume given a function F : Λ × X → {−1, 1} and the associated class of functions F := {F(λ, ·) : X → {−1, 1} | λ ∈ Λ}. Suppose that Λ = ℝᵏ and X = ℝⁿ, and that the function F can be defined in terms of a Boolean formula involving at most s polynomial inequalities in k + n variables, each polynomial being of degree at most d. Then vc(F) ≤ 2k log(8eds). □
Using the above Fact and bounds for the standard "perceptron" model, it is not
difficult to prove the following Lemma.
Lemma 4.2  vc(F_{n,q}) ≤ min{n + q, 18n + 4n log(q + 1)}.
Next, we consider the lower bound of Theorem 1.
Lemma 4.3  vc(F_{n,q}) ≥ max{n, n⌊log(⌊1 + (q − 1)/n⌋)⌋}.
B. DASGUPTA, E. D. SONTAG
208
Proof.  As F_{n,q} contains the class of functions φ_c with c = (c₁, …, cₙ, 0, …, 0), which, in turn, being the set of signs of an n-dimensional linear space of functions, has VC dimension n, we know that vc(F_{n,q}) ≥ n. Thus we are left to prove that if q > n then vc(F_{n,q}) ≥ n⌊log(⌊1 + (q − 1)/n⌋)⌋.

The set of n-recursive sequences of length n + q includes the set of sequences of the following special form:

    c_j = Σ_{i=1}^{n} a_i λ_i^{j−1},   j = 1, …, n + q,    (2)

where a_i, λ_i ∈ ℝ for each i = 1, …, n.
Hence, to prove the lower bound, it is sufficient to study the class of functions induced by

    F : ℝⁿ × ℝ^{n+q} → {−1, 1},  (λ₁, …, λₙ, x₁, …, x_{n+q}) ↦ sign( Σ_{i=1}^{n} Σ_{j=1}^{n+q} λ_i^{j−1} x_j ).
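To make the special form (2) concrete: such a sequence is indeed n-recursive, with the recursion coefficients read off the characteristic polynomial ∏ᵢ(z − λᵢ). A small self-check (our own illustration; the variable names are ours):

```python
# For n = 2 with lambda = (2, 3): (z - 2)(z - 3) = z^2 - 5z + 6,
# so c_j = 5*c_{j-1} - 6*c_{j-2} should hold for every j > 2.
n, q = 2, 8
lam = [2, 3]           # the lambda_i of equation (2)
a = [1, -4]            # the a_i of equation (2)
c = [sum(a[i] * lam[i] ** (j - 1) for i in range(n))
     for j in range(1, n + q + 1)]
r = [5, -6]            # coefficients of the order-2 linear recurrence
assert all(c[j] == r[0] * c[j - 1] + r[1] * c[j - 2]
           for j in range(2, n + q))
print(c[:5])           # -> [-3, -10, -32, -100, -308]
```

Any choice of aᵢ and distinct λᵢ gives a length-(n+q) sequence in the class, which is what the lower-bound construction exploits.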
Let r = ⌊(q + n − 1)/n⌋ and let L₁, …, Lₙ be n disjoint sets of real numbers (if desired, integers), each of cardinality r. Let L = ⋃_{j=1}^{n} L_j. In addition, if rn < q + n − 1, then select an additional set B of (q + n − rn − 1) real numbers disjoint from L.
We will apply Theorem 2, showing that the rectangular subset L₁ × ⋯ × Lₙ can be axis-shattered. Pick any κ ∈ {1, …, n} and any φ : L_κ → {−1, 1}. Consider the (unique) interpolating polynomial

    p(λ) = Σ_{j=1}^{n+q} x_j λ^{j−1}

in λ of degree q + n − 1 such that

    p(λ) = φ(λ) if λ ∈ L_κ,    p(λ) = 0 if λ ∈ (L ∪ B) − L_κ.

Now pick ξ = (x₁, …, x_{n+q}). Observe that

    F(l₁, …, lₙ, x₁, …, x_{n+q}) = sign( Σ_{μ=1}^{n} p(l_μ) ) = φ(l_κ)

for all (l₁, …, lₙ) ∈ L₁ × ⋯ × Lₙ, since p(l) = 0 for l ∉ L_κ and p(l) = φ(l) otherwise. It follows from Theorem 2 that vc(F_{n,q}) ≥ n⌊log(r)⌋, as desired. □
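The interpolation argument can be checked numerically on a tiny instance (our own sketch, using exact rational arithmetic): take n = 2, disjoint coordinate sets L₁ and L₂, fix κ = 1 and an arbitrary φ on L₁, build the polynomial p that equals φ on L₁ and 0 on the remaining points, and verify that sign(p(l₁) + p(l₂)) realizes φ on all of L₁ × L₂.

```python
from fractions import Fraction as Fr

def lagrange(points):
    """Return an evaluator for the unique minimal-degree polynomial
    through the given (x, y) points, computed exactly with rationals."""
    def p(x):
        total = Fr(0)
        for i, (xi, yi) in enumerate(points):
            term = Fr(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fr(x - xj, xi - xj)
            total += term
        return total
    return p

L1, L2 = [1, 2], [3, 4]        # disjoint coordinate sets (r = 2 each)
phi = {1: 1, 2: -1}            # an arbitrary dichotomy on L1 (kappa = 1)
# p interpolates phi on L1 and 0 on L2 (no extra set B is needed here)
p = lagrange([(x, phi[x]) for x in L1] + [(x, 0) for x in L2])

for l1 in L1:
    for l2 in L2:
        s = 1 if p(l1) + p(l2) > 0 else -1
        assert s == phi[l1]    # the kappa-axis dichotomy is realized
print("axis dichotomy realized on L1 x L2")
```

Repeating this for every φ on L₁, and symmetrically for κ = 2, is exactly the axis-shattering required by Theorem 2.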
5  The Consistency Problem
We next briefly discuss polynomial-time learnability of recurrent perceptron mappings. As discussed in e.g. (Turan, 1994), in order to formalize this problem we need to first choose a data structure to represent the hypotheses in F_{n,q}. In addition, since we are dealing with complexity of computation involving real numbers, we must also clarify the meaning of "finding" a hypothesis, in terms of a suitable notion of polynomial-time computation. Once this is done, the problem becomes that of solving the consistency problem:

    Given a set of s ≥ s(ε, δ) inputs ξ₁, ξ₂, …, ξ_s ∈ ℝ^{n+q}, and an arbitrary dichotomy Δ : {ξ₁, ξ₂, …, ξ_s} → {−1, 1}, find a representation of a hypothesis φ_c ∈ F_{n,q} such that the restriction of φ_c to the set {ξ₁, ξ₂, …, ξ_s} is identical to the dichotomy Δ (or report that no such hypothesis exists).
The representation to be used should provide an "efficient encoding" of the values of the parameters r₁, …, rₙ, c₁, …, cₙ: given a set of inputs (x₁, …, x_{n+q}) ∈ ℝ^{n+q}, one should be able to efficiently check concept membership (that is, compute sign(Σ_{j=1}^{n+q} c_j x_j)). Regarding the precise meaning of polynomial-time computation, there are at least two models of complexity possible: the unit cost model, which deals with algebraic complexity (arithmetic and comparison operations take unit time), and the logarithmic cost model (computation in the Turing machine sense; inputs (x₁, …, x_{n+q}) are rationals, and the time involved in finding a representation of r₁, …, rₙ, c₁, …, cₙ is required to be polynomial in the number of bits L).
Theorem 3  For each fixed n > 0, the consistency problem for F_{n,q} can be solved in time polynomial in q and s in the unit cost model, and time polynomial in q, s, and L in the logarithmic cost model.

Since vc(F_{n,q}) = O(n + n log(q + 1)), it follows from here that the class F_{n,q} is learnable in time polynomial in q (and L in the log model). Due to space limitations,
we must omit the proof; it is based on the application of recent results regarding
computational complexity aspects of the first-order theory of real-closed fields.
6  Pseudo-Dimension Bounds
In this section, we obtain results on the learnability of linear systems dynamics, that
is, the class of functions obtained if one does not take the sign when defining recurrent perceptrons. The connection between VC dimension and sample complexity
is only meaningful for classes of Boolean functions; in order to obtain learnability
results applicable to real-valued functions one needs metric entropy estimates for
certain spaces of functions. These can be in turn bounded through the estimation
of Pollard's pseudo-dimension. We next briefly sketch the general framework for
learning due to Haussler (based on previous work by Vapnik, Chervonenkis , and
Pollard) and then compute a pseudo-dimension estimate for the class of interest.
The basic ingredients are two complete separable metric spaces X and Y (called respectively the sets of inputs and outputs), a class F of functions f : X → Y (called the decision rule or hypothesis space), and a function ℓ : Y × Y → [0, r] ⊂ ℝ (called the loss or cost function). The function ℓ is so that the class of functions (x, y) ↦ ℓ(f(x), y) is "permissible" in the sense of Haussler and Pollard. Now, one may introduce, for each f ∈ F, the function

    A_{f,ℓ} : X × Y × ℝ → {−1, 1} : (x, y, t) ↦ sign(ℓ(f(x), y) − t)

as well as the class A_{F,ℓ} consisting of all such A_{f,ℓ}. The pseudo-dimension of F with respect to the loss function ℓ, denoted by PD[F, ℓ], is defined as:

    PD[F, ℓ] := vc(A_{F,ℓ}).
Due to space limitations, the relationship between the pseudo-dimension and the
sample complexity of the class :F will not be discussed here; the reader is referred
to the references (Haussler, 1992; Maass, 1994) for details.
For our application we define, for any two nonnegative integers n, q, the class

    F′_{n,q} := {φ̄_c | c ∈ ℝ^{n+q} is n-recursive}

where φ̄_c : ℝ^{n+q} → ℝ : (x₁, …, x_{n+q}) ↦ Σ_{j=1}^{n+q} c_j x_j. The following Theorem can be proved using Fact 4.1.
Theorem 4  Let p be a positive integer and assume that the loss function ℓ is given by ℓ(y₁, y₂) = |y₁ − y₂|^p. Then PD[F′_{n,q}, ℓ] ≤ 18n + 4n log(p(q + 1)).
Acknowledgements
This research was supported in part by US Air Force Grant AFOSR-94-0293.
References
A.D. BACK AND A.C. TSOI, FIR and IIR synapses, a new neural network architecture for time-series modeling, Neural Computation, 3 (1991), pp. 375-385.
A .D. BACK AND A .C. TSOI, A comparison of discrete-time operator models for
nonlinear system identification, Advances in Neural Information Processing Systems
(NIPS'94), Morgan Kaufmann Publishers, 1995, to appear.
A.M . BAKSHO, S . DASGUPTA, J .S. GARNETT, AND C.R. JOHNSON, On the similarity of conditions for an open-eye channel and for signed filtered error adaptive
filter stability, Proc. IEEE Conf. Decision and Control, Brighton, UK, Dec. 1991,
IEEE Publications, 1991, pp. 1786-1787.
A. BLUMER, A. EHRENFEUCHT, D. HAUSSLER, AND M . WARMUTH, Learnability
and the Vapnik-Chervonenkis dimension, J. of the ACM, 36 (1989), pp. 929-965.
D.F. DELCHAMPS, Extracting State Information from a Quantized Output Record,
Systems and Control Letters, 13 (1989), pp. 365-372.
R .O. DUDA AND P.E. HART, Pattern Classification and Scene Analysis, Wiley,
New York, 1973.
C.E. GILES, G.Z. SUN, H .H. CHEN, Y.C. LEE, AND D . CHEN, Higher order recurrent networks and grammatical inference, Advances in Neural Information Processing Systems 2, D.S. Touretzky, ed., Morgan Kaufmann, San Mateo, CA, 1990.
P . GOLDBERG AND M. JERRUM, Bounding the Vapnik-Chervonenkis dimension of
concept classes parameterized by real numbers, Machine Learning, 18 (1995): 131-148.
D. HAUSSLER, Decision theoretic generalizations of the PAC model for neural nets
and other learning applications, Information and Computation, 100, (1992): 78-150.
R. KOPLON AND E.D. SONTAG, Linear systems with sign-observations, SIAM J.
Control and Optimization, 31(1993): 1245 - 1266.
W. MAASS, Perspectives of current research about the complexity of learning in
neural nets, in Theoretical Advances in Neural Computation and Learning, V.P.
Roychowdhury, K.Y. Siu, and A. Orlitsky, eds., Kluwer, Boston, 1994, pp. 295-336.
G.W. PULFORD, R.A . KENNEDY, AND B.D.O. ANDERSON, Neural network structure for emulating decision feedback equalizers, Proc. Int . Conf. Acoustics, Speech,
and Signal Processing, Toronto, Canada, May 1991, pp. 1517-1520.
E.D . SONTAG, Neural networks for control, in Essays on Control: Perspectives
in the Theory and its Applications (H.L. Trentelman and J .C. Willems, eds.),
Birkhauser, Boston, 1993, pp. 339-380.
GYORGY TURAN, Computational Learning Theory and Neural Networks: A Survey
of Selected Topics, in Theoretical Advances in Neural Computation and Learning,
V.P. Roychowdhury, K.Y. Siu, and A. Orlitsky, eds., Kluwer, Boston, 1994, pp.
243-293.
L.G. VALIANT, A theory of the learnable, Comm. ACM, 27, 1984, pp. 1134-1142.
V.N. VAPNIK, Estimation of Dependencies Based on Empirical Data, Springer,
Berlin, 1982.
Generalization in Reinforcement
Learning: Successful Examples Using
Sparse Coarse Coding
Richard S. Sutton
University of Massachusetts
Amherst, MA 01003 USA
rich@cs.umass.edu
Abstract
On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are
no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore
reported at last year's meeting a series of negative results in attempting
to apply dynamic programming together with function approximation
to simple control problems with continuous state spaces. In this paper,
we present positive results for all the control tasks they attempted, and
for one that is significantly larger. The most important differences are
that we used sparse-coarse-coded function approximators (CMACs)
whereas they used mostly global function approximators, and that we
learned online whereas they learned offline. Boyan and Moore and
others have suggested that the problems they encountered could be
solved by using actual outcomes ("rollouts"), as in classical Monte
Carlo methods, and as in the TD().) algorithm when). = 1. However,
in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly
in conjunction with function approximators, and that there is little
justification at present for avoiding the case of general λ.
1
Reinforcement Learning and Function Approximation
Reinforcement learning is a broad class of optimal control methods based on estimating
value functions from experience, simulation, or search (Barto, Bradtke & Singh, 1995;
Sutton, 1988; Watkins, 1989). Many of these methods, e.g., dynamic programming
and temporal-difference learning, build their estimates in part on the basis of other
estimates. This may be worrisome because, in practice, the estimates never become
exact; on large problems, parameterized function approximators such as neural networks must be used. Because the estimates are imperfect, and because they in turn
are used as the targets for other estimates, it seems possible that the ultimate result
might be very poor estimates, or even divergence. Indeed some such methods have
been shown to be unstable in theory (Baird, 1995; Gordon, 1995; Tsitsiklis & Van Roy,
1994) and in practice (Boyan & Moore, 1995). On the other hand, other methods have
been proven stable in theory (Sutton, 1988; Dayan, 1992) and very effective in practice
(Lin, 1991; Tesauro, 1992; Zhang & Dietterich, 1995; Crites & Barto, 1996). What are
the key requirements of a method or task in order to obtain good performance? The
experiments in this paper are part of narrowing the answer to this question.
The reinforcement learning methods we use are variations of the sarsa algorithm (Rummery & Niranjan, 1994; Singh & Sutton, 1996). This method is the same as the TD(λ)
algorithm (Sutton, 1988), except applied to state-action pairs instead of states, and
where the predictions are used as the basis for selecting actions. The learning agent
estimates action-values, Q""(s, a), defined as the expected future reward starting in
state s, taking action a, and thereafter following policy 71'. These are estimated for
all states and actions, and for the policy currently being followed by the agent. The
policy is chosen dependent on the current estimates in such a way that they jointly
improve, ideally approaching an optimal policy and the optimal action-values. In our
experiments, actions were selected according to what we call the ?-greedy policy. Most
of the time, the action selected when in state s was the action for which the estimate
Q(s,a) was the largest (with ties broken randomly). However, a small fraction, ?, ofthe
time, the action was instead selected randomly uniformly from the action set (which
was always discrete and finite). There are two variations of the sarsa algorithm, one
using conventional accumulate traces and one using replace traces (Singh & Sutton,
1996). This and other details of the algorithm we used are given in Figure 1.
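For concreteness, the ε-greedy selection rule just described can be sketched as follows (our own minimal illustration, not the authors' code; all names are ours):

```python
import random

def epsilon_greedy(q_values, eps, rng=random):
    """With probability eps pick a uniformly random action; otherwise
    pick a greedy action, breaking ties among maxima at random."""
    if rng.random() < eps:
        return rng.randrange(len(q_values))
    best = max(q_values)
    ties = [i for i, v in enumerate(q_values) if v == best]
    return rng.choice(ties)

print(epsilon_greedy([0.1, 0.9, 0.3], eps=0.0))  # -> 1 (pure greedy)
```

With eps = 0 this is plain greedy selection; with eps = 1 it is uniform random; the experiments in this paper sit in between (e.g. ε = 0.1).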
To apply the sarsa algorithm to tasks with a continuous state space, we combined
it with a sparse, coarse-coded function approximator known as the CMAC (Albus,
1980; Miller, Gordon & Kraft, 1990; Watkins, 1989; Lin & Kim, 1991; Dean et al.,
1992; Tham, 1994). A CMAC uses multiple overlapping tilings of the state space to
produce a feature representation for a final linear mapping where all the learning takes
place. See Figure 2. The overall effect is much like a network with fixed radial basis
functions, except that it is particularly efficient computationally (in other respects one
would expect RBF networks and similar methods (see Sutton & Whitehead, 1993) to
work just as well). It is important to note that the tilings need not be simple grids.
For example, to avoid the "curse of dimensionality," a common trick is to ignore some
dimensions in some tilings, i.e., to use hyperplanar slices instead of boxes. A second
major trick is "hashing"-a consistent random collapsing of a large set of tiles into
a much smaller set. Through hashing, memory requirements are often reduced by
large factors with little loss of performance. This is possible because high resolution is
needed in only a small fraction of the state space. Hashing frees us from the curse of
dimensionality in the sense that memory requirements need not be exponential in the
number of dimensions, but need merely match the real demands of the task.
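The tile-coding idea can be sketched in a few lines (our own simplification, not the authors' implementation): several offset grid tilings each map a continuous point to one active tile, and a cheap deterministic hash stands in for the memory-collapsing trick described above.

```python
def tiles(point, n_tilings=5, bins=5, memory=4096):
    """One active tile index per tiling for a point in [0, 1)^d.
    Tiling i is a regular grid shifted by i/(n_tilings*bins) per
    dimension; tile ids are folded into a table of size `memory`."""
    active = []
    for i in range(n_tilings):
        offset = i / (n_tilings * bins)
        coords = [int((v + offset) * bins) for v in point]  # 0..bins per dim
        code = i
        for c in coords:
            code = code * (bins + 1) + c   # mix tiling id and grid cell
        active.append(code % memory)       # deterministic "hashing"
    return active

# A linear value estimate is then just a sum of one weight per active tile:
w = [0.0] * 4096

def value(point):
    return sum(w[f] for f in tiles(point))

print(tiles((0.1, 0.3)))
```

Nearby points share most of their active tiles (that is the generalization), while the hash keeps memory fixed regardless of how many nominal tiles the tilings define.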
2
Good Convergence on Control Problems
We applied the sarsa and CMAC combination to the three continuous-state control
problems studied by Boyan and Moore (1995): 2D gridworld, puddle world, and mountain car. Whereas they used a model of the task dynamics and applied dynamic programming backups offline to a fixed set of states, we learned online, without a model,
and backed up whatever states were encountered during complete trials. Unlike Boyan
R. S. SUTION
1040
1. Initially: w_a(f) := Q₀/c, e_a(f) := 0, ∀a ∈ Actions, ∀f ∈ CMAC-tiles.
2. Start of Trial: s := random-state(); F := features(s); a := ε-greedy-policy(F).
3. Eligibility Traces: e_b(f) := λe_b(f), ∀b, ∀f;
   3a. Accumulate algorithm: e_a(f) := e_a(f) + 1, ∀f ∈ F.
   3b. Replace algorithm: e_a(f) := 1, e_b(f) := 0, ∀f ∈ F, ∀b ≠ a.
4. Environment Step: Take action a; observe resultant reward, r, and next state, s′.
5. Choose Next Action: F′ := features(s′), unless s′ is the terminal state, then F′ := ∅; a′ := ε-greedy-policy(F′).
7. Loop: a := a′; s := s′; F := F′; if s′ is the terminal state, go to 2; else go to 3.

Figure 1: The sarsa algorithm for finite-horizon (trial based) tasks. The function ε-greedy-policy(F) returns, with probability ε, a random action or, with probability 1 − ε, computes Σ_{f∈F} w_a(f) for each action a and returns the action for which the sum is largest, resolving any ties randomly. The function features(s) returns the set of CMAC tiles corresponding to the state s. The number of tiles returned is the constant c. Q₀, α, and λ are scalar parameters.
[Figure 2 diagram: two overlapping 5 × 5 grid tilings (Tiling #1 and Tiling #2), offset from one another, over a continuous two-dimensional state space with axes Dimension #1 and Dimension #2; one state is marked with a dot.]
Figure 2: CMACs involve multiple overlapping tilings of the state space. Here we show
two 5 x 5 regular tilings offset and overlaid over a continuous, two-dimensional state
space. Any state, such as that shown by the dot, is in exactly one tile of each tiling. A
state's tiles are used to represent it in the sarsa algorithm described above. The tilings
need not be regular grids such as shown here. In particular, they are often hyperplanar
slices, the number of which grows sub-exponentially with dimensionality of the space.
CMACs have been widely used in conjunction with reinforcement learning systems
(e.g., Watkins, 1989; Lin &. Kim, 1991; Dean, Basye &. Shewchuk, 1992; Tham, 1994).
and Moore, we found robust good performance on all tasks. We report here results for
the puddle world and the mountain car, the more difficult of the tasks they considered.
Training consisted of a series of trials, each starting from a randomly selected nongoal state and continuing until the goal region was reached. On each step a penalty
(negative reward) of -1 was incurred. In the puddle-world task, an additional penalty
was incurred when the state was within the "puddle" regions. The details are given in
the appendix. The 3D plots below show the estimated cost-to-goal of each state, i.e.,
maXa Q(8, a). In the puddle-world task, the CMACs consisted of 5 tilings, each 5 x 5,
as in Figure 2. In the mountain-car task we used 10 tilings, each 9 x 9.
[Figure 3 panels: the puddle world, and the learned state values after Trial 12 and Trial 100.]
Figure 3: The puddle task and the cost-to-goal function learned during one run.
[Figure 4 panels: the mountain car and goal, and the learned cost-to-goal function (one panel labeled Step 428).]
Figure 4: The mountain-car task and the cost-to-goal function learned during one run.
The engine is too weak to accelerate directly up the slope; to reach the goal, the car
must first move away from it. The first plot shows the value function learned before
the goal was reached even once.
We also experimented with a larger and more difficult task not attempted by Boyan and
Moore. The acrobot is a two-link under-actuated robot (Figure 5) roughly analogous
to a gymnast swinging on a highbar (Dejong & Spong, 1994; Spong & Vidyasagar,
1989). The first joint (corresponding to the gymnast's hands on the bar) cannot exert torque; only the second joint can.
The object is to swing the endpoint (the feet) above the bar by an amount equal to
one of the links. As in the mountain-car task, there are three actions, positive torque,
negative torque, and no torque, and reward is −1 on all steps. (See the appendix.)
[Figure 5 panels: the acrobot ("Goal: Raise tip above line") and acrobot learning curves — steps/trial on a log scale versus trials (100–500), showing a typical single run and a smoothed average of 10 runs.]
Figure 5: The Acrobot and its learning curves.
3
The Effect of A
A key question in reinforcement learning is whether it is better to learn on the basis of actual outcomes, as in Monte Carlo methods and as in TD(λ) with λ = 1, or to learn on the basis of interim estimates, as in TD(λ) with λ < 1. Theoretically, the former has asymptotic advantages when function approximators are used (Dayan, 1992; Bertsekas, 1995), but empirically the latter is thought to achieve better learning rates (Sutton, 1988). However, hitherto this question has not been put to an empirical test using function approximators. Figure 6 shows the results of such a test.
[Figure 6 plots: steps/trial (Mountain Car) and cost/trial (Puddle World) versus α for several values of λ, with both accumulate and replace traces; averaged over the first 20 trials and 30 runs (Mountain Car) and the first 40 trials and 30 runs (Puddle World).]

Figure 6: The effects of λ and α in the Mountain-Car and Puddle-World tasks.
Figure 7 summarizes this data, and that from two other systematic studies with different tasks, to present an overall picture of the effect of λ. In all cases performance is an inverted-U shaped function of λ, and performance degrades rapidly as λ approaches 1, where the worst performance is obtained. The fact that performance improves as λ is increased from 0 argues for the use of eligibility traces and against 1-step methods such as TD(0) and 1-step Q-learning. The fact that performance improves rapidly as λ is reduced below 1 argues against the use of Monte Carlo or "rollout" methods. Despite the theoretical asymptotic advantages of these methods, they appear to be inferior in practice.
Acknowledgments
The author gratefully acknowledges the assistance of Justin Boyan, Andrew Moore, Satinder
Singh, and Peter Dayan in evaluating these results.
[Figure 7 plots: four panels — Mountain Car (steps/trial), Random Walk (root mean squared error), Puddle World (cost/trial), and Cart and Pole (failures per 100,000 steps) — each showing performance versus λ for accumulate and replace traces.]
Figure 7: Performance versus λ, at best α, for four different tasks. The left panels summarize data from Figure 6. The upper right panel concerns a 21-state Markov chain, the objective being to predict, for each state, the probability of terminating in one terminal state as opposed to the other (Singh & Sutton, 1996). The lower left panel concerns the pole balancing task studied by Barto, Sutton and Anderson (1983). This is previously unpublished data from an earlier study (Sutton, 1984).
References
Albus, J. S. (1981) Brains, Behavior, and Robotics, chapter 6, pages 139-179. Byte Books.
Baird, L. C. (1995) Residual Algorithms: Reinforcement Learning with Function Approximation. Proc. ML95. Morgan Kaufman, San Francisco, CA.
Barto, A. G., Bradtke, S. J., & Singh, S. P. (1995) Real-time learning and control using
asynchronous dynamic programming. Artificial Intelligence.
Barto, A. G., Sutton, R. S., & Anderson, C. W. (1983) Neuronlike elements that can solve
difficult learning control problems. Trans. IEEE SMC, 13, 835-846.
Bertsekas, D. P. (1995) A counterexample to temporal differences learning. Neural Computation, 7, 270-279.
Boyan, J. A. & Moore, A. W. (1995) Generalization in reinforcement learning: Safely approximating the value function. NIPS-7. San Mateo, CA: Morgan Kaufmann.
Crites, R. H. & Barto, A. G. (1996) Improving elevator performance using reinforcement
learning. NIPS-8. Cambridge, MA: MIT Press.
Dayan, P. (1992) The convergence of TD(λ) for general λ. Machine Learning, 8, 341-362.
Dean, T., Basye, K. & Shewchuk, J. (1992) Reinforcement learning for planning and control. In S. Minton, Machine Learning Methods for Planning and Scheduling. Morgan Kaufmann.
Dejong, G. & Spong, M. W. (1994) Swinging up the acrobot: An example of intelligent control. In Proceedings of the American Control Conference, pages 2158-2162.
Gordon, G. (1995) Stable function approximation in dynamic programming. Proc. ML95.
Lin, L. J. (1992) Self-improving reactive agents based on reinforcement learning, planning
and teaching. Machine Learning, 8(3/4), 293-321.
Lin, C.-S. & Kim, H. (1991) CMAC-based adaptive critic self-learning control. IEEE Trans. Neural Networks, 2, 530-533.
Miller, W. T., Glanz, F. H., & Kraft, L. G. (1990) CMAC: An associative neural network
alternative to backpropagation. Proc. of the IEEE, 78, 1561-1567.
1044
R.S. SUTION
Rummery, G. A. & Niranjan, M. (1994) On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Cambridge University Engineering Dept.
Singh, S. P. & Sutton, R. S. (1996) Reinforcement learning with replacing eligibility traces.
Machine Learning.
Spong, M. W. & Vidyasagar, M. (1989) Robot Dynamics and Control. New York: Wiley.
Sutton, R. S. (1984) Temporal Credit Assignment in Reinforcement Learning. PhD thesis,
University of Massachusetts, Amherst, MA.
Sutton, R. S. (1988) Learning to predict by the methods of temporal differences. Machine
Learning, 3, 9-44.
Sutton, R. S. & Whitehead, S. D. (1993) Online learning with random representations. Proc.
ML93, pages 314-321. Morgan Kaufmann.
Tham, C. K. (1994) Modular On-Line Function Approximation for Scaling up Reinforcement
Learning. PhD thesis, Cambridge Univ., Cambridge, England.
Tesauro, G. J. (1992) Practical issues in temporal difference learning. Machine Learning,
8(3/4), 257-277.
Tsitsiklis, J. N. & Van Roy, B. (1994) Feature-based methods for large-scale dynamic programming. Technical Report LIDS-P2277, MIT, Cambridge, MA 02139.
Watkins, C. J. C. H. (1989) Learning from Delayed Rewards. PhD thesis, Cambridge Univ.
Zhang, W. & Dietterich, T. G., (1995) A reinforcement learning approach to job-shop scheduling. Proc. IJCAI95.
Appendix: Details of the Experiments
In the puddle world, there were four actions, up, down, right, and left, which moved approximately 0.05 in these directions unless the movement would cause the agent to leave the limits
of the space. A random gaussian noise with standard deviation 0.01 was also added to the
motion along both dimensions. The costs (negative rewards) on this task were -1 for each
time step plus additional penalties if either or both of the two oval "puddles" were entered.
These penalties were -400 times the distance into the puddle (distance to the nearest edge).
The puddles were 0.1 in radius and were located at center points (.1, .75) to (.45, .75) and
(.45, .4) to (.45, .8). The initial state of each trial was selected randomly uniformly from the
non-goal states. For the run in Figure 3, α = 0.5, λ = 0.9, c = 5, ε = 0.1, and Q₀ = 0. For
Figure 6, Q₀ = -20.
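As a concrete reading of the puddle-world specification above, here is a minimal Python sketch. The capsule geometry (penalty proportional to the distance inside an oval built around a line segment) and the clamping of motion at the edge of the unit square are our interpretation of the text; all names are illustrative, not from the original code.

```python
import math
import random

# The two "oval" puddles: capsules of radius 0.1 around these segments.
PUDDLES = [((0.1, 0.75), (0.45, 0.75)),
           ((0.45, 0.4), (0.45, 0.8))]
RADIUS = 0.1

def dist_to_segment(px, py, a, b):
    """Distance from point (px, py) to the segment from a to b."""
    (ax, ay), (bx, by) = a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def reward(x, y):
    """-1 per step, minus 400 times the distance into each entered puddle."""
    r = -1.0
    for a, b in PUDDLES:
        d = dist_to_segment(x, y, a, b)
        if d < RADIUS:
            r -= 400.0 * (RADIUS - d)
    return r

def step(x, y, action):
    """Actions 0-3 = up, down, right, left: a 0.05 move plus N(0, 0.01) noise,
    clamped so the agent cannot leave the unit square."""
    dx, dy = [(0.0, 0.05), (0.0, -0.05), (0.05, 0.0), (-0.05, 0.0)][action]
    x = min(1.0, max(0.0, x + dx + random.gauss(0.0, 0.01)))
    y = min(1.0, max(0.0, y + dy + random.gauss(0.0, 0.01)))
    return x, y
```

For example, a point on the axis of the first puddle is a full radius deep, so its reward is -1 - 400(0.1) = -41.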
Details of the mountain-car task are given in Singh & Sutton (1996). For the run in Figure 4,
α = 0.5, λ = 0.9, c = 10, ε = 0, and Q₀ = 0. For Figure 6, c = 5 and Q₀ = -100.
In the acrobot task, the CMACs used 48 tilings. Each of the four dimensions was divided
into 6 intervals. 12 tilings depended in the usual way on all 4 dimensions. 12 other tilings
depended only on 3 dimensions (3 tilings for each of the four sets of 3 dimensions). 12 others
depended only on two dimensions (2 tilings for each of the 6 sets of two dimensions). And
finally 12 tilings depended each on only one dimension (3 tilings for each dimension). This
resulted in a total of 12·6⁴ + 12·6³ + 12·6² + 12·6 = 18,648 tiles. The equations of motion
were:
\ddot{\theta}_1 = -d_1^{-1}\left(d_2\ddot{\theta}_2 + \phi_1\right)

\ddot{\theta}_2 = \left(m_2 l_{c2}^2 + I_2 - d_2^2/d_1\right)^{-1}\left(\tau + (d_2/d_1)\phi_1 - \phi_2\right)

d_1 = m_1 l_{c1}^2 + m_2\left(l_1^2 + l_{c2}^2 + 2 l_1 l_{c2}\cos\theta_2\right) + I_1 + I_2

d_2 = m_2\left(l_{c2}^2 + l_1 l_{c2}\cos\theta_2\right) + I_2

\phi_1 = -m_2 l_1 l_{c2}\dot{\theta}_2^2\sin\theta_2 - 2 m_2 l_1 l_{c2}\dot{\theta}_2\dot{\theta}_1\sin\theta_2 + (m_1 l_{c1} + m_2 l_1)\,g\cos(\theta_1 - \pi/2) + \phi_2

\phi_2 = m_2 l_{c2}\,g\cos(\theta_1 + \theta_2 - \pi/2)
where τ ∈ {+1, -1, 0} was the torque applied at the second joint, and Δt = 0.05 was the
time increment. Actions were chosen after every four of the state updates given by the above
equations, corresponding to 5 Hz. The angular velocities were bounded by θ̇₁ ∈ [-4π, 4π] and
θ̇₂ ∈ [-9π, 9π]. Finally, the remaining constants were m₁ = m₂ = 1 (masses of the links),
l₁ = l₂ = 1 (lengths of links), l_{c1} = l_{c2} = 0.5 (lengths to center of mass of links), I₁ = I₂ = 1
(moments of inertia of links), and g = 9.8 (gravity). The parameters were α = 0.2, λ = 0.9,
c = 48, ε = 0, Q₀ = 0. The starting state on each trial was θ₁ = θ₂ = 0.
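The appendix details above can be sanity-checked in code. Below is a hedged Python sketch of the acrobot angular accelerations using the stated constants, together with the CMAC tile count; the function name and structure are illustrative, not the original implementation.

```python
import math

# Constants stated in the appendix.
M1 = M2 = 1.0          # masses of the links
L1 = 1.0               # length of link 1
LC1 = LC2 = 0.5        # lengths to centers of mass
I1 = I2 = 1.0          # moments of inertia
G = 9.8                # gravity

# 48 CMAC tilings: 12 over all 4 dims, 12 over 3, 12 over 2, 12 over 1,
# each dimension divided into 6 intervals.
TOTAL_TILES = 12 * 6**4 + 12 * 6**3 + 12 * 6**2 + 12 * 6
assert TOTAL_TILES == 18648

def acrobot_accels(th1, th2, dth1, dth2, tau):
    """Angular accelerations from the equations of motion in the appendix."""
    d1 = M1 * LC1**2 + M2 * (L1**2 + LC2**2 + 2 * L1 * LC2 * math.cos(th2)) + I1 + I2
    d2 = M2 * (LC2**2 + L1 * LC2 * math.cos(th2)) + I2
    phi2 = M2 * LC2 * G * math.cos(th1 + th2 - math.pi / 2)
    phi1 = (-M2 * L1 * LC2 * dth2**2 * math.sin(th2)
            - 2 * M2 * L1 * LC2 * dth2 * dth1 * math.sin(th2)
            + (M1 * LC1 + M2 * L1) * G * math.cos(th1 - math.pi / 2)
            + phi2)
    ddth2 = (tau + (d2 / d1) * phi1 - phi2) / (M2 * LC2**2 + I2 - d2**2 / d1)
    ddth1 = -(d2 * ddth2 + phi1) / d1
    return ddth1, ddth2
```

In this convention the starting state θ₁ = θ₂ = 0 (hanging at rest) is an equilibrium, and a positive torque at the second joint accelerates the system away from it.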
DYNAMICS OF ANALOG NEURAL
NETWORKS WITH TIME DELAY
C.M. Marcus and R.M. Westervelt
Division of Applied Sciences and Department of Physics
Harvard University, Cambridge Massachusetts 02138
ABSTRACT
A time delay in the response of the neurons in a network can
induce sustained oscillation and chaos. We present a stability
criterion based on local stability analysis to prevent sustained
oscillation in symmetric delay networks, and show an
example of chaotic dynamics in a non-symmetric delay
network.
I. INTRODUCTION
Understanding how time delay affects the dynamics of neural networks is important for
two reasons: First, some degree of time delay is intrinsic to any physically realized
network, both in biological neural systems and in electronic artificial neural networks.
As we will show, it is not obvious what constitutes a "small" (i.e. ignorable) delay
which will not qualitatively change the network dynamics. For some network
configurations, delay much smaller than the intrinsic relaxation time of the network can
induce collective oscillatory behavior not predicted by mathematical models which ignore
delay. These oscillations may or may not be desirable; in either case, one should
understand when and how new dynamics can appear. The second reason to study time
delay is for its intentional use in parallel computation. The dynamics of neural networks
which always converge to fixed points are now fairly well understood. Several neural
network models have appeared recently which use time delay to produce dynamic
computation such as associative recall of sequences [Kleinfeld,1986; Sompolinsky and
Kanter, 1986]. It has also been suggested that time delay produces an effective noise in
the network dynamics which can yield improved recall of memories [Conwell, 1987].
Finally, to the extent that neural networks research is inspired by biological systems, the
known presence of time delays in many real neural systems suggests their usefulness
in parallel computation.
In this paper we will show how time delay in an analog neural network can produce
sustained oscillation and chaos. In section 2 we consider the case of a symmetrically
connected network. It is known [Cohen and Grossberg, 1983; Hopfield, 1984] that in the
absence of time delay a symmetric network will always converge to a fixed point
attractor. We show that adding a fixed delay to the response of each neuron will produce
sustained oscillation when the magnitude of the delay exceeds a critical value, which
depends on the neuron gain and the network connection topology. We then analyze the
all-inhibitory and symmetric ring topologies as examples. In section 3, we discuss
chaotic dynamics in asymmetric neural networks, and give an example of a small (N=3)
network which shows delay-induced chaos. The analytical results presented here are
supported by numerical simulations and experiments performed on a small electronic
neural network with controllable time delay. A detailed derivation of the stability results for
the symmetric network is given in [Marcus and Westervelt, 1989], and the electronic
circuit used is described in [Marcus and Westervelt, 1988].
II. STABILITY OF SYMMETRIC NETWORKS WITH DELAY
The dynamical system we consider describes an electronic circuit of N saturable
amplifiers ("neurons") coupled by a resistive interconnection matrix. The neurons do not
respond to an input voltage uᵢ instantaneously, but produce an output after a delay,
which we take to be the same for all neurons. The neuron input voltages evolve
according to the following equations:
\dot{u}_i(t) = -u_i(t) + \sum_{j=1}^{N} J_{ij}\, f\!\left(u_j(t-\tau)\right) \qquad (1)
The transfer function for each neuron is taken to be an identical sigmoidal function f(u)
with a maximum slope df/du = β at u = 0. The unit of time in these equations has been
scaled to the characteristic network relaxation time, thus τ can be thought of as the ratio
of delay time to relaxation time. The symmetric interconnection matrix Jᵢⱼ describes the
conductance between neurons i and j and is normalized to satisfy Σⱼ |Jᵢⱼ| = 1 for all i. This
normalization assumes that each neuron sees the same conductance at its input [Marcus
and Westervelt, 1989]. The initial conditions for this system are a set of N continuous
functions defined on the interval -τ ≤ t ≤ 0. We take each initial function to be constant
over that interval, though possibly different for different i. We find numerically that the
results do not depend on the form of the initial functions.
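A small numerical sketch of Eq. (1) helps fix ideas. The Euler integrator with a constant-history ring buffer below is our own illustration, taking f(u) = tanh(βu), which has the required maximum slope β at u = 0; it is not the circuit or the authors' code.

```python
import math

def simulate(J, beta, tau, u0, t_end, dt=0.001):
    """Euler-integrate du_i/dt = -u_i + sum_j J_ij f(u_j(t - tau)) with
    f(u) = tanh(beta*u) and constant initial history u(t) = u0 on [-tau, 0]."""
    n = len(u0)
    d = max(1, int(round(tau / dt)))
    hist = [list(u0) for _ in range(d)]   # ring buffer of the last d states
    u = list(u0)
    for s in range(int(round(t_end / dt))):
        ud = hist[s % d]                  # state d steps ago, i.e. u(t - tau)
        hist[s % d] = list(u)
        u = [u[i] + dt * (-u[i] + sum(J[i][j] * math.tanh(beta * ud[j])
                                      for j in range(n)))
             for i in range(n)]
    return u
```

For a two-neuron mutually inhibitory pair (eigenvalues ±1) at gain β = 5, a delay τ = 0.1 well below the oscillation threshold relaxes to a fixed point satisfying u₁ = -tanh(βu₂).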
Linear Stability Analysis at Low Gain
Studying the stability of the fixed point at the origin (uᵢ = 0 for all i) is useful for
understanding the source of delay-induced sustained oscillation and will lead to a low-gain
stability criterion for symmetric networks. It is important to realize, however, that for
the system (1) with a sigmoidal nonlinearity, if the origin is stable then it is the unique
attractor, which makes for rather uninteresting dynamics. Thus the origin will almost
certainly be unstable in any useful configuration. Linear stability analysis about the
origin will show that at τ = 0, as the gain β is increased, the origin always loses
stability by a type of bifurcation which only produces other fixed points, but for τ > 0
an alternate type of bifurcation of the origin can occur which produces the sustained
oscillatory modes. The stability criterion derived ensures that this alternate bifurcation (a Hopf bifurcation) does not occur.
The natural coordinate system for the linearized version of (1) is the set of N
eigenvectors of the connection matrix Jᵢⱼ, defined as xᵢ(t), i = 1, ..., N. In terms of the xᵢ(t),
the linearized system can be written
\dot{x}_i(t) = -x_i(t) + \beta\lambda_i\, x_i(t-\tau) \qquad (2)
where β is the neuron gain and λᵢ (i = 1, ..., N) are the eigenvalues of Jᵢⱼ. In general, these
eigenvalues have both real and imaginary parts; for Jᵢⱼ = Jⱼᵢ the λᵢ are purely real.
Assuming exponential time evolution of the form xᵢ(t) = xᵢ(0)e^{sᵢt}, where sᵢ is a
complex characteristic exponent, yields a set of N transcendental characteristic equations:
(sᵢ + 1)e^{sᵢτ} = βλᵢ. The condition for stability of the origin, Re(sᵢ) < 0 for all i, and the
characteristic equations can be used to specify a stability region in the complex plane of
eigenvalues, as illustrated in Fig. 1(a). When all eigenvalues of Jᵢⱼ are within the
stability region, the origin is stable. For τ = 0, the stability region is defined by
Re(λ) < 1/β, giving a half-plane stability condition familiar from ordinary differential
equations. For τ > 0, we define the border of the stability region Λ(θ) at an angle θ
from the Re(λ) axis as the radial distance from the point λ = 0 to the first point (i.e. the
smallest value of Λ(θ)) which satisfies the characteristic equation for purely imaginary
characteristic exponent sⱼ = iωⱼ. The delay-dependent value of Λ(θ) is given by
\Lambda(\theta) = \frac{1}{\beta}\sqrt{\omega^2 + 1}\,;\qquad \omega = -\tan(\omega\tau - \theta) \qquad (3)

where ω is in the range (θ - π/2) ≤ ωτ ≤ θ, modulo 2π.
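Equation (3) can be evaluated numerically. The bisection helper below (an illustrative sketch, not from the paper) solves the transcendental condition at θ = π, where Λ marks the crossing of the stability border on the negative real axis.

```python
import math

def back_door(beta, tau, tol=1e-12):
    """Solve omega = -tan(omega*tau) with omega*tau in (pi/2, pi) by bisection,
    then return Lambda = sqrt(omega**2 + 1)/beta, i.e. Eq. (3) at theta = pi."""
    lo, hi = math.pi / 2 + 1e-9, math.pi - 1e-9
    g = lambda x: x / tau + math.tan(x)     # root of g gives x = omega*tau
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    omega = lo / tau
    return math.sqrt(omega**2 + 1) / beta
```

Numerically, the result shrinks as the delay grows, approaching π/(2τβ) for small τ and 1/β for large τ.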
Figure 1. (a) Regions of Stability in the Complex Plane of Eigenvalues λ of the
Connection Matrix Jᵢⱼ, for τ = 0, 1, ∞. (b) Where the Stability Region Crosses the Real-λ
Axis in the Negative Half Plane.
Notice that for nonzero delay the stability region closes on the Re(λ) axis in the negative
half-plane. It is therefore possible for negative real eigenvalues to induce an instability
of the origin. Specifically, if the minimum eigenvalue of the symmetric matrix Jᵢⱼ is
more negative than -Λ(θ = π) then the origin is unstable. We define this "back door"
to the stability region along the real axis as Λ > 0, dropping the argument θ = π. Λ is
inversely proportional to the gain β and depends on delay as shown in Fig. 1(b). For
large and small delay, Λ can be approximated as an explicit function of delay and gain:
\Lambda \approx \frac{\pi}{2\tau\beta}, \quad \tau \ll 1 \qquad (4a)

\Lambda \approx \frac{1}{\beta}, \quad \tau \gg 1 \qquad (4b)
In the infinite-delay limit, the delay-differential system (1) is equivalent to an iterated
map or parallel-update network of the form uᵢ(t+1) = Σⱼ Jᵢⱼ f(uⱼ(t)) where t is a discrete
iteration index. In this limit, the stability region is circular, corresponding to the fixed
point stability condition for the iterated map system.
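The infinite-delay equivalence can be illustrated directly with the parallel-update map. The snippet below is our own sketch (f(u) = tanh(βu), illustrative names):

```python
import math

def parallel_update(J, beta, u, steps):
    """Iterated-map network u_i(t+1) = sum_j J_ij tanh(beta * u_j(t))."""
    n = len(u)
    for _ in range(steps):
        u = [sum(J[i][j] * math.tanh(beta * u[j]) for j in range(n))
             for i in range(n)]
    return u

# Two mutually inhibitory neurons: eigenvalues +1 and -1, with the
# lambda = -1 eigenvector along the coherent direction (1, 1).
J = [[0.0, -1.0], [-1.0, 0.0]]
```

With β below 1/|λmin| every state contracts to the origin, while at large gain a coherent start falls onto a period-2 cycle along the λmin eigenvector: the discrete-time analog of the delay-induced oscillatory mode.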
Consider the stability of the origin in a symmetrically connected delay system (1) as the
neuron gain β is increased from zero to a large value. A bifurcation of the origin will
occur when the maximum eigenvalue λmax > 0 of Jᵢⱼ becomes larger than 1/β or when
the minimum eigenvalue λmin < 0 becomes more negative than -Λ = -β⁻¹(ω² + 1)^{1/2},
where ω = -tan(ωτ), [π/2 < ωτ < π]. Which bifurcation occurs first depends on the
delay and the eigenvalues of Jᵢⱼ. The bifurcation at λmax = β⁻¹ is a pitchfork (as it is
for τ = 0) corresponding to a characteristic exponent sᵢ crossing into the positive real
half plane along the real axis. This bifurcation creates a pair of fixed points along the
eigenvector xᵢ associated with that eigenvalue. These fixed points constitute a single
memory state of the network. The bifurcation at λmin = -Λ corresponds to a Hopf
bifurcation [Marsden and McCracken, 1976], where a pair of characteristic exponents pass
into the real half plane with imaginary components ±iω where ω = -tan(ωτ), [π/2 < ωτ < π].
This bifurcation, not present at τ = 0, creates an oscillatory attractor along the
eigenvector associated with λmin.
A simple stability criterion can be constructed by requiring that the most negative
eigenvalue of the (symmetric) connection matrix not be more negative than -Λ. Because
Λ is always larger than its small-delay limit π/(2τβ), the criterion can be stated as a
limit on the size of the delay (in units of the network relaxation time):
\tau < \frac{\pi}{2\beta\,|\lambda_{\min}|} \quad\Longrightarrow\quad \text{no sustained oscillation} \qquad (5)
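The criterion can be checked numerically on the simplest possible case, a single self-inhibitory neuron (J₁₁ = -1, so λmin = -1 and the threshold is τ = π/(2β)). The Euler simulation below is our own sketch, not the authors' circuit; f(u) = tanh(βu).

```python
import math

def run(beta, tau, t_end=100.0, dt=0.001, u0=0.5):
    """Single self-inhibitory delayed neuron: du/dt = -u - tanh(beta*u(t - tau)).
    Returns the peak |u| over the final 20 time units."""
    d = max(1, int(round(tau / dt)))
    hist = [u0] * d                       # constant initial history on [-tau, 0]
    u = u0
    steps = int(round(t_end / dt))
    tail_start = steps - int(round(20.0 / dt))
    peak = 0.0
    for s in range(steps):
        ud = hist[s % d]                  # u(t - tau)
        hist[s % d] = u
        u += dt * (-u - math.tanh(beta * ud))
        if s >= tail_start:
            peak = max(peak, abs(u))
    return peak
```

With β = 5 the threshold is π/10 ≈ 0.31: a delay τ = 0.1 decays to the origin, while τ = 1 sustains a large-amplitude oscillation.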
Linear stability analysis does not prove global stability, but the criterion (5) is supported
by considerable numerical and experimental evidence [Marcus and Westervelt, 1989].
For long delays, where Λ ≈ 1/β, linear stability analysis suggests that sustained
oscillation will not exist as long as -β⁻¹ < λmin. In the infinite-delay limit, it can be
shown that this condition insures global stability in the discrete-time parallel-update
network [Marcus and Westervelt, to appear].
At large gain, Eq. (5) does not provide a useful stability criterion because the delay
required for stability tends to zero as β → ∞. The nonlinearity of the transfer function
becomes important at large gain, and stable, fixed-point-only dynamics are found at large
gain and nonzero delay, indicating that Eq. (5) is overly conservative at large gain. To
understand this, we must include the nonlinearity and consider the stability of the
oscillatory modes themselves. This is described in the next section.
Stability in the Large-Gain Limit
We now analyze the oscillatory mode at large gain for the particular case of coherent
oscillation. We find a second stability criterion which predicts a gain-independent critical
delay below which all initial conditions lead to fixed points. This result complements
the low gain result of the previous section for this class of network; experimentally and
numerically we find excellent agree in both regimes, with a cross-over at the value of
gain where fixed points appear away from the origin, p = lIAmax .
In considering only coherent oscillation, we not only assume that Jᵢⱼ is symmetric but
that its maximum and minimum eigenvalues satisfy 0 < λmax < -λmin and that the
eigenvector associated with λmin points in a coherent direction, defined to be along any
of the 2^N vectors of the form (±1, ±1, ±1, ...) in the uᵢ basis. For this case, we find that
in the limit of infinite gain, where the nonlinearity is of the form f(u) = sgn(u), multiple
fixed point attractors coexist with the oscillatory attractor and that the size of the basin
of attraction for the oscillatory mode varies with the delay [Marcus and Westervelt,
1988]. At a critical value of delay τcrit the basin of attraction for oscillation vanishes
and the oscillatory mode loses stability. In [Marcus and Westervelt, 1989] we show:
\tau_{\mathrm{crit}} = -\ln\!\left(1 + \lambda_{\max}/\lambda_{\min}\right) \qquad (6)
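Equation (6) is easy to evaluate. The helper below (illustrative naming, our own sketch) also checks it against the all-inhibitory network of the Examples section, where λmax = 1/(N-1) and λmin = -1 give τcrit = ln[(N-1)/(N-2)].

```python
import math

def tau_crit(lam_max, lam_min):
    """Eq. (6): the critical delay below which all initial states reach fixed points."""
    assert 0.0 < lam_max < -lam_min      # assumptions for coherent oscillation
    return -math.log(1.0 + lam_max / lam_min)

# All-inhibitory network: lam_max = 1/(N-1), lam_min = -1.
for N in (3, 4, 10):
    assert abs(tau_crit(1.0 / (N - 1), -1.0) - math.log((N - 1) / (N - 2))) < 1e-12
```

As stated in the text, the value diverges as |λmax/λmin| approaches 1 from below.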
For delays less than this critical value, all initial states lead to stable fixed points.
Notice that the critical delay for coherent oscillation diverges as |λmax/λmin| → 1⁻.
Experimentally and numerically we find that this prediction has more general
applicability: None of the symmetric networks investigated which satisfied
|λmax/λmin| ≈ 1 (and λmax > 0) showed sustained oscillation for τ < ~10. This
observation is a useful criterion for electronic circuit design, where single-device delays
are generally shorter than the circuit relaxation time (τ < 1), but only the case of
coherent oscillation is supported by analysis.
Examples
As a first example, we consider the fully-connected all-inhibitory network, Eq. (1) with
Jᵢⱼ = (N-1)⁻¹(δᵢⱼ - 1). This matrix has N-1 degenerate eigenvalues at +1/(N-1) and a
single eigenvalue at -1. A similar network configuration (with delays) has been studied
as a model of lateral inhibition in the eye of the horseshoe crab, Limulus [Coleman and
Renninger, 1975, 1976; Hadeler and Tomiuk, 1977; an der Heiden, 1980]. Previous analysis
of sustained oscillation in this system has assumed a coherent form for the oscillatory
solution, which reduces the problem to a single scalar delay-differential equation.
However, by constraining the solution to lie along the coherent direction, the
instability of the oscillatory mode discussed above is not seen. Because of this
assumption, fixed-point-only dynamics in the large-gain limit with finite delay are not
predicted by previous treatments, to our knowledge.
The behavior of the network at various values of gain and delay is illustrated in Fig. 2
for the particular case of N = 3. The four regions labeled A, B, C and D characterize the
behavior for all N. At low gain (β < N-1) the origin is the unique attractor for small
delay (region A) and undergoes a Hopf bifurcation to sustained coherent oscillation at
τ ≈ π(β²-1)^{-1/2} for large delay (region B). At β = N-1 fixed points away from the origin
appear. In addition to these fixed points, an oscillatory attractor exists at large gain for
τ > ln[(N-1)/(N-2)] (≈ 1/N for large N) (region C). Sustained oscillation does not
exist below this critical delay (region D).
Figure 2. Stability Diagram for the All-Inhibitory Delay Network for the Case N = 3.
See Text for a Description of A, B, C and D.
As a second example, we consider a ring of delayed neurons. We allow the symmetric
connections to be of either sign - that is, connections between neighboring pairs can be
mutually excitatory or inhibitory - but are all the same strength. The eigenvalues for the
symmetric ring of size N are λₖ = cos(2π(k+φ)/N), where k = 0, 1, 2, ..., N-1; φ = 1/2 if
the product of connection strengths around the ring is negative, φ = 0 if the product is
positive. Borrowing from the language of disordered magnetic systems, a ring which
contains an odd number of negative connections (the case φ = 1/2) is said to be
"frustrated" [Toulouse, 1977]. The large-gain stability analysis for the symmetric ring
gives a rather surprising result: Only frustrated rings with an odd number of neurons
will show sustained oscillation. For this case (N odd and an odd number of negative
connections) the critical delay is given by τcrit = -ln(1 - cos(π/N)). This agrees very
well with experimental and numerical data, as does the conclusion that rings with even N
do not show sustained oscillation [Marcus and Westervelt, 1989]. The theoretical large-gain critical delay for the all-inhibitory network and the frustrated ring of the same size
are compared in Fig. 3. Note that the critical delay for the all-inhibitory network
decreases (roughly as 1/N) for larger networks while the ring becomes less prone to
oscillation as the network size increases.
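The ring formulas can be verified numerically. This sketch (illustrative names, our own code) checks that for odd frustrated rings λmax = cos(π/N) and λmin = -1, so Eq. (6) reduces to the stated τcrit = -ln(1 - cos(π/N)), and that for even frustrated rings |λmax/λmin| = 1, consistent with the absence of sustained oscillation.

```python
import math

def ring_eigs(N, frustrated):
    """Eigenvalues cos(2*pi*(k + phi)/N) of the symmetric ring, phi = 1/2 if frustrated."""
    phi = 0.5 if frustrated else 0.0
    return [math.cos(2.0 * math.pi * (k + phi) / N) for k in range(N)]

for N in (3, 5, 7, 9, 11):                      # odd frustrated rings
    lams = ring_eigs(N, frustrated=True)
    lam_max, lam_min = max(lams), min(lams)
    assert abs(lam_min + 1.0) < 1e-12           # lambda_min = -1
    assert abs(lam_max - math.cos(math.pi / N)) < 1e-12
    tcrit = -math.log(1.0 + lam_max / lam_min)  # Eq. (6)
    assert abs(tcrit + math.log(1.0 - math.cos(math.pi / N))) < 1e-9

# Even frustrated ring: |lam_max| = |lam_min|, so Eq. (6) diverges.
lams4 = ring_eigs(4, frustrated=True)
assert abs(max(lams4) + min(lams4)) < 1e-12
```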
Figure 3. Critical Delay from Large-Gain Theory for All-Inhibitory Networks (circles)
and Frustrated Rings (squares) of size N.
III. CHAOS IN NON-SYMMETRIC DELAY NETWORKS
Allowing non-symmetric interconnections greatly expands the repertoire of neural
network dynamics and can yield new, powerful computational properties. For example,
several recent studies have shown that by using both asymmetric connections and time
delay, a neural network can accurately recall sequences of stored patterns
[Kleinfeld, 1986; Sompolinsky and Kanter, 1986]. It has also been shown that for some
parameter values, these pattern-generating networks can produce chaotic dynamics
[Riedel, et al., 1988].
Relatively little is known about the dynamics of large asymmetric networks [Amari,
1971, 1972; Kürten and Clark, 1986; Shinomoto, 1986; Sompolinsky, et al., 1988;
Gutfreund, et al., 1988]. A recent study of continuous-time networks with random
asymmetric connections shows that as N → ∞ these systems will be chaotic whenever
the origin is unstable [Sompolinsky, et al., 1988]. In discrete-state (±1) networks, with
either parallel or sequential deterministic dynamics, oscillatory modes with long periods
are also seen for fully asymmetric random connections (Jᵢⱼ and Jⱼᵢ uncorrelated), but
when Jᵢⱼ has either symmetric or antisymmetric correlations short-period attractors seem
to predominate [Gutfreund, et al., 1988]. It is not clear whether the chaotic dynamics of
large random networks will appear in small networks with non-symmetric, but non-random, connections.
Small networks with asymmetric connections have been used as models of central
pattern generators found in many biological neural systems [Cohen, et al., 1988]. These
models frequently use time delay to produce sustained rhythmic output, motivated in part
by the known presence of time delay in real central pattern generators. General theoretical
principles concerning the dynamics of asymmetric networks with delay do not exist at
present. It has been shown, however, that large system size is not necessary to produce
chaos in neural networks with delay [e.g. Babcock and Westervelt, 1987]. We find that
small systems (N ≤ 3) with certain asymmetric connections and time delay can produce
sustained chaotic oscillation. An example is shown in Fig. 4: These data were produced
using an electronic network [Marcus and Westervelt, 1988] of three neurons with
sigmoidal transfer functions f₁(u(t)) = 3.8 tanh(8u(t-τ)), f₂(u(t)) = 2 tanh(6.1u(t)),
f₃(u(t)) = 3.5 tanh(2.5u(t)), connection resistances of ±10 Ω and input capacitances of
10 nF. Fig. 4 shows the network configuration and output voltages V₁ and V₂ for
increasing delay in neuron 1. For τ < 0.64 ms a periodic attractor similar to the upper
left figure is found; for τ > 0.97 ms both periodic and chaotic attractors are found.
Figure 4. Period Doubling to Chaos as the Delay in Neuron 1 is Increased.
Chaos in the network of Fig. 4 is closely related to a well-known chaotic delay-differential equation with a noninvertible feedback term [Mackey and Glass, 1977]. The
noninvertible or "mixed" feedback necessary to produce chaos in the Mackey-Glass
equation is achieved in the neural network - which has only monotone transfer
functions - by using asymmetric connections.
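For comparison, the Mackey-Glass equation itself is easy to simulate. The sketch below uses the commonly quoted chaotic parameter set (β = 0.2, γ = 0.1, n = 10, τ = 17), which is standard in the literature rather than taken from this paper, and a plain Euler integrator of our own.

```python
def mackey_glass(beta=0.2, gamma=0.1, n=10, tau=17.0,
                 dt=0.05, t_end=300.0, x0=1.2):
    """Euler-integrate dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x."""
    d = int(round(tau / dt))
    hist = [x0] * d                 # constant initial history on [-tau, 0]
    x = x0
    out = []
    for s in range(int(round(t_end / dt))):
        xd = hist[s % d]            # x(t - tau)
        hist[s % d] = x
        x += dt * (beta * xd / (1.0 + xd**n) - gamma * x)
        out.append(x)
    return out
```

The trajectory stays bounded and positive but never settles onto a fixed point, the noninvertible production term playing the role that asymmetric connections play in the network above.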
This association between asymmetry and noninvertible feedback suggests that
asymmetric connections may be necessary to produce chaotic dynamics in neural
networks, even when time delay is present. This conjecture is further supported by
considering the two limiting cases of zero delay and infinite delay, neither of which show
chaotic dynamics for symmetric connections.
IV. CONCLUSION AND OPEN PROBLEMS
We have considered the effects of delayed response in a continuous-time neural network.
We find that when the delay of each neuron exceeds a critical value, sustained oscillatory
modes appear in a symmetric network. Stability analysis yields a design criterion for
building stable electronic neural networks, but these results can also be used to create
desired oscillatory modes in delay networks. For example, a variation of the Hebb rule
[Hebb, 1949], created by simply taking the negative of a Hebb matrix, will give
negative real eigenvalues corresponding to programmed oscillatory patterns. The storage
capacities and other properties of neural networks with dynamic attractors remain
challenging problems [see, e.g. Gutfreund and Mezard, 1988].
In analyzing the stability of delay systems, we have assumed that the delays and gains of
all neurons are identical. This is quite restrictive and is certainly not justified from a
biological viewpoint. It would be interesting to study the effects of a wide range of
delays in both symmetric and non-symmetric neural networks. It is possible, for
example, that the coherent oscillation described above will not persist when the delays
are widely distributed.
Acknowledgements
One of us (CMM) acknowledges support as an AT&T Bell Laboratories Scholar.
Research supported in part by JSEP contract N00014-84-K-0465.
References
S. Amari, 1971, Proc. IEEE, 59, 35.
S. Amari, 1972, IEEE Trans. SMC-2, 643.
U. an der Heiden, 1980, Analysis of Neural Networks, Vol. 35 of Lecture Notes in
Biomathematics (Springer, New York).
K.L. Babcock and R.M. Westervelt, 1987, Physica 28D, 305.
M.A. Cohen and S. Grossberg, 1983, IEEE Trans. SMC-13, 815.
A.H. Cohen, S. Rossignol and S. Grillner, 1988, Neural Control of Rhythmic Motion,
(Wiley, New York).
B.D. Coleman and G.H. Renninger, 1975, J. Theor. Biol. 51, 243.
B.D. Coleman and G.H. Renninger, 1976, SIAM J. Appl. Math. 31, 111.
P.R. Conwell, 1987, in Proc. of IEEE First Int. Conf. on Neural Networks, III-95.
H. Gutfreund, J.D. Reger and A.P. Young, 1988, J. Phys. A, 21, 2775.
H. Gutfreund and M. Mezard, 1988, Phys. Rev. Lett. 61, 235.
K.P. Hadeler and J. Tomiuk, 1977, Arch. Rat. Mech. Anal. 65,87.
D.O. Hebb, 1949, The Organization of Behavior (Wiley, New York).
J.J. Hopfield, 1984, Proc. Nat. Acad. Sci. USA 81, 3088.
D. Kleinfeld, 1986, Proc. Nat. Acad. Sci. USA 83, 9469.
K.E. Kürten and J.W. Clark, 1986, Phys. Lett. 114A, 413.
M.C. Mackey and L. Glass, 1977, Science 197, 287.
C.M. Marcus and R.M. Westervelt, 1988, in: Proc. IEEE Conf. on Neural Info. Proc.
Syst., Denver, CO, 1987 (American Institute of Physics, New York).
C.M. Marcus and R.M. Westervelt, 1989, Phys. Rev. A 39, 347.
J.E. Marsden and M. McCracken, The Hopf Bifurcation and its Applications (Springer-Verlag, New York).
U. Riedel, R. Kühn, and J.L. van Hemmen, 1988, Phys. Rev. A 38, 1105.
S. Shinomoto, 1986, Prog. Theor. Phys. 75, 1313.
H. Sompolinsky and I. Kanter, 1986, Phys. Rev. Lett. 57, 2861.
H. Sompolinsky, A. Crisanti and H.J. Sommers, 1988, Phys. Rev. Lett. 61, 259.
G. Toulouse, 1977, Commun. Phys. 2, 115.
Fast Learning by Bounding Likelihoods
in Sigmoid Type Belief Networks
Tommi Jaakkola
tommi@psyche.mit.edu
Lawrence K. Saul
lksaul@psyche.mit.edu
Michael I. Jordan
jordan@psyche.mit.edu
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
Sigmoid type belief networks, a class of probabilistic neural networks, provide a natural framework for compactly representing
probabilistic information in a variety of unsupervised and supervised learning problems. Often the parameters used in these networks need to be learned from examples. Unfortunately, estimating the parameters via exact probabilistic calculations (i.e., the
EM-algorithm) is intractable even for networks with fairly small
numbers of hidden units. We propose to avoid the infeasibility of
the E step by bounding likelihoods instead of computing them exactly. We introduce extended and complementary representations
for these networks and show that the estimation of the network
parameters can be made fast (reduced to quadratic optimization)
by performing the estimation in either of the alternative domains.
The complementary networks can be used for continuous density
estimation as well.
1 Introduction
The appeal of probabilistic networks for knowledge representation, inference, and
learning (Pearl, 1988) derives both from the sound Bayesian framework and from
the explicit representation of dependencies among the network variables which allows ready incorporation of prior information into the design of the network. The
Bayesian formalism permits full propagation of probabilistic information across the
network regardless of which variables in the network are instantiated. In this sense
these networks can be "inverted" probabilistically.
This inversion, however, relies heavily on the use of look-up table representations
of conditional probabilities or representations equivalent to them for modeling dependencies between the variables. For sparse dependency structures such as trees
or chains this poses no difficulty. In more realistic cases of reasonably interdependent variables the exact algorithms developed for these belief networks (Lauritzen &
Spiegelhalter, 1988) become infeasible due to the exponential growth in the size of
the conditional probability tables needed to store the exact dependencies. Therefore
the use of compact representations to model probabilistic interactions is unavoidable
in large problems. As belief network models move away from tables, however, the
representations can be harder to assess from expert knowledge and the important
role of learning is further emphasized.
Compact representations of interactions between simple units have long been emphasized in neural networks. Lacking a thorough probabilistic interpretation, however, classical feed-forward neural networks cannot be inverted in the above sense;
e.g. given the output pattern of a feed-forward neural network it is not feasible
to compute a probability distribution over the possible input patterns that would
have resulted in the observed output. On the other hand, stochastic neural networks such as Boltzman machines admit probabilistic interpretations and therefore,
at least in principle, can be inverted and used as a basis for inference and learning
in the presence of uncertainty.
Sigmoid belief networks (Neal, 1992) form a subclass of probabilistic neural networks
where the activation function has a sigmoidal form - usually the logistic function.
Neal (1992) proposed a learning algorithm for these networks which can be viewed
as an improvement of the algorithm for Boltzmann machines. Recently Hinton et al.
(1995) introduced the wake-sleep algorithm for layered bi-directional probabilistic
networks. This algorithm relies on forward sampling and has an appealing coding
theoretic motivation. The Helmholtz machine (Dayan et al., 1995), on the other
hand, can be seen as an alternative technique for these architectures that avoids
Gibbs sampling altogether. Dayan et al. also introduced the important idea of
bounding likelihoods instead of computing them exactly. Saul et al. (1995) subsequently derived rigorous mean field bounds for the likelihoods. In this paper we
introduce the idea of alternative - extended and complementary - representations
of these networks by reinterpreting the nonlinearities in the activation function. We
show that deriving likelihood bounds in the new representational domains leads to
efficient (quadratic) estimation procedures for the network parameters.
2 The probability representations
Belief networks represent the joint probability of a set of variables {S} as a product
of conditional probabilities given by
P(S_1, …, S_n) = ∏_{k=1}^{n} P(S_k | pa[k]),    (1)
where the notation pa[k], "parents of Sk", refers to all the variables that directly
influence the probability of Sk taking on a particular value (for equivalent representations, see Lauritzen et al. 1988). The fact that the joint probability can be written
in the above form implies that there are no "cycles" in the network; i.e. there exists
an ordering of the variables in the network such that no variable directly influences
any preceding variables.
In this paper we consider sigmoid belief networks where the variables S are binary
530
T. JAAKKOLA, L. K. SAUL, M. I. JORDAN
(0/1), the conditional probabilities have the form
P(S_i | pa[i]) = g((2S_i − 1) Σ_j W_ij S_j)    (2)
and the weights Wij are zero unless Sj is a parent of Si, thus preserving the feedforward directionality of the network. For notational convenience we have assumed
the existence of a bias variable whose value is clamped to one. The activation
function g(.) is chosen to be the cumulative Gaussian distribution function given by
g(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−z²/2} dz = (1/√(2π)) ∫_{0}^{∞} e^{−(z−x)²/2} dz    (3)
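Both integral forms in (3) are the standard normal cumulative distribution function, which is available in closed form through the error function; a quick numerical check of the identity (illustrative only, not part of the paper):

```python
import math

def g(x):
    # First form of eq. (3): the standard normal CDF, via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g_shifted(x, upper=12.0, n=50000):
    """Second form of eq. (3): (1/sqrt(2*pi)) * integral_0^infinity of
    exp(-(z - x)^2 / 2) dz, approximated by a midpoint rule on [0, upper]."""
    h = upper / n
    total = 0.0
    for k in range(n):
        z = (k + 0.5) * h
        total += math.exp(-0.5 * (z - x) ** 2)
    return total * h / math.sqrt(2.0 * math.pi)

for x in (-1.5, 0.0, 0.3, 2.0):
    assert abs(g(x) - g_shifted(x)) < 1e-6
print(g(0.0))  # 0.5
```

The agreement of the two forms is what licenses reading the activation as a marginalization over an auxiliary Gaussian variable, as the text does next.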
Although very similar to the standard logistic function, this activation function
derives a number of advantages from its integral representation. In particular, we
may reinterpret the integration as a marginalization and thereby obtain alternative
representations for the network. We consider two such representations.
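Concretely, eqs. (1)-(3) describe a generative model that can be sampled ancestrally: visit the units in topological order and set S_i = 1 with probability g(Σ_j W_ij S_j). A minimal sketch (the three-unit network and its weights are invented for illustration):

```python
import math
import random

def g(x):
    """Cumulative Gaussian activation, eq. (3)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# W[i][j] is the weight from unit j into unit i; only j < i is used,
# which enforces the feed-forward (acyclic) structure.
W = [[0.0, 0.0, 0.0],
     [0.9, 0.0, 0.0],
     [-0.4, 1.1, 0.0]]

def sample(rng):
    """Draw one joint configuration S by ancestral sampling."""
    S = [0, 0, 0]
    for i in range(3):
        net = sum(W[i][j] * S[j] for j in range(3))
        S[i] = 1 if rng.random() < g(net) else 0   # P(S_i = 1 | pa) = g(net)
    return tuple(S)

def joint(S):
    """Exact joint probability of a full configuration, eqs. (1)-(2)."""
    p = 1.0
    for i in range(3):
        net = sum(W[i][j] * S[j] for j in range(3))
        p *= g((2 * S[i] - 1) * net)   # covers both S_i = 0 and S_i = 1
    return p

rng = random.Random(0)
counts = {}
for _ in range(50000):
    s = sample(rng)
    counts[s] = counts.get(s, 0) + 1

# Empirical frequency of one configuration versus the product form, eq. (1).
print(counts[(1, 1, 0)] / 50000.0, joint((1, 1, 0)))
```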
We derive an extended representation by making explicit the nonlinearities in the
activation function. More precisely,
P(S_i | pa[i]) = g((2S_i − 1) Σ_j W_ij S_j) = ∫_0^∞ (1/√(2π)) e^{−½(Z_i − (2S_i − 1) Σ_j W_ij S_j)²} dZ_i    (4)
This suggests defining the extended network in terms of the new conditional probabilities P(S_i, Z_i | pa[i]). By construction then the original binary network is obtained
by marginalizing over the extra variables Z. In this sense the extended network is
(marginally) equivalent to the binary network.
We distinguish a complementary representation from the extended one by writing
the probabilities entirely in terms of continuous variables¹. Such a representation
can be obtained from the extended network by a simple transformation of variables.
The new continuous variables are defined by Z̃_i = (2S_i − 1)Z_i or, equivalently,
by Z_i = |Z̃_i| and S_i = θ(Z̃_i), where θ(·) is the step function. Performing this
transformation yields
P(Z̃_i | pa[i]) = (1/√(2π)) e^{−½(Z̃_i − Σ_j W_ij θ(Z̃_j))²}    (5)
which defines a network of conditionally Gaussian variables. The original network
in this case can be recovered by conditional marginalization over Z̃, where the conditioning variables are θ(Z̃).
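This (marginal) equivalence is easy to check by simulation: drawing each Z̃_i from a unit-variance Gaussian centred at Σ_j W_ij θ(Z̃_j) and thresholding at zero reproduces the binary network of eq. (2), since P(Z̃_i > 0) = g(Σ_j W_ij S_j). A sketch with invented weights:

```python
import math
import random

def g(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

W = [[0.0, 0.0],
     [1.2, 0.0]]            # unit 1 has unit 0 as its only parent

rng = random.Random(1)
N = 100000
count_11 = 0
for _ in range(N):
    Zt = [0.0, 0.0]
    for i in range(2):      # ancestral sampling in the complementary domain
        mean = sum(W[i][j] * (1.0 if Zt[j] > 0 else 0.0) for j in range(2))
        Zt[i] = rng.gauss(mean, 1.0)              # eq. (5)
    if Zt[0] > 0 and Zt[1] > 0:                   # S = theta(Z-tilde)
        count_11 += 1

# Binary-network probability of the same event, eq. (2):
exact = g(0.0) * g(1.2)     # P(S0 = 1) * P(S1 = 1 | S0 = 1)
print(count_11 / N, exact)
```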
Figure 1 below summarizes the relationships between the different representations.
As will become clear later, working with the alternative representations instead
of the original binary representation can lead to more flexible and efficient (leastsquares) parameter estimation.
3 The learning problem
We consider the problem of learning the parameters of the network from instantiations of variables contained in a training set. Such instantiations, however, need not
1 While the binary variables are the outputs of each unit the continuous variables pertain
to the inputs - hence the name complementary.
[Figure: the extended network over {S, Z}, the original network over {S}, and the complementary network over {Z̃}, related by marginalization and a transformation of variables.]
Figure 1: The relationship between the alternative representations.
be complete; there may be variables that have no value assignments in the training
set as well as variables that are always instantiated. The tacit division between
hidden (H) and visible (V) variables therefore depends on the particular training
example considered and is not an intrinsic property of the network.
To learn from these instantiations we adopt the principle of maximum likelihood
to estimate the weights in the network. In essence, this is a density estimation
problem where the weights are chosen so as to match the probabilistic behavior
of the network with the observed activities in the training set. Central to this
estimation is the ability to compute likelihoods (or log-likelihoods) for any (partial)
configuration of variables appearing in the training set. In other words, if we let
XV be the configuration of visible or instantiated variables 2 and XH denote the
hidden or uninstantiated variables, we need to compute marginal probabilities of
the form
P(X^V) = Σ_{X^H} P(X^H, X^V)    (6)
If the training samples are independent, then these log marginals can be added to
give the overall log-likelihood of the training set
10gP(training set)
= L:logP(XVt)
(7)
Unfortunately, computing each of these marginal probabilities involves summing
(integrating) over an exponential number of different configurations assumed by
the hidden variables in the network. This renders the sum (integration) intractable
in all but few special cases (e.g. trees and chains). It is possible, however, to instead
find a manageable lower bound on the log-likelihood and optimize the weights in
the network so as to maximize this bound.
To obtain such a lower bound we resort to Jensen's inequality:
log P(X^V) = log Σ_{X^H} P(X^H, X^V) = log Σ_{X^H} Q(X^H) [P(X^H, X^V)/Q(X^H)]
           ≥ Σ_{X^H} Q(X^H) log [P(X^H, X^V)/Q(X^H)]    (8)
Although this bound holds for all distributions Q(X) over the hidden variables, the
accuracy of the bound is determined by how closely Q approximates the posterior
distribution P(X^H | X^V) in terms of the Kullback-Leibler divergence; if the approximation is perfect the divergence is zero and the inequality is satisfied with equality.
Suitable choices for Q can make the bound both accurate and easy to compute.
The feasibility of finding such Q, however, is highly dependent on the choice of the
representation for the network.
²To postpone the issue of representation we use X to denote S, {S, Z}, or Z̃ depending
on the particular representation chosen.
4 Likelihood bounds in different representations
To complete the derivation of the likelihood bound (equation 8) we need to fix the
representation for the network. Which representation to select, however, affects the
quality and accuracy of the bound. In addition, the accompanying bound of the
chosen representation implies bounds in the other two representational domains as
they all code the same distributions over the observables. In this section we illustrate
these points by deriving bounds in the complementary and extended representations
and discuss the corresponding bounds in the original binary domain.
Now, to obtain a lower bound we need to specify the approximate posterior Q. In
the complementary representation the conditional probabilities are Gaussians and
therefore a reasonable approximation (mean field) is found by choosing the posterior
approximation from the family of factorized Gaussians:
Q(Z) = ∏_i (1/√(2π)) e^{−(Z_i − h_i)²/2}    (9)
Substituting this into equation 8 we obtain the bound
log P(S*) ≥ −½ Σ_i (h_i − Σ_j J_ij g(h_j))² − ½ Σ_{ij} J_ij² g(h_j) g(−h_j)    (10)
The means h_i for the hidden variables are adjustable parameters that can be tuned
to make the bound as tight as possible. For the instantiated variables we need
to enforce the constraints g(h_i) = S_i^* to respect the instantiation. These can be
satisfied very accurately by setting h_i = 4(2S_i^* − 1). A very convenient property
of this bound and the complementary representation in general is the quadratic
weight dependence - a property very conducive to fast learning. Finally, we note
that the complementary representation transforms the binary estimation problem
into a continuous density estimation problem.
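For a network small enough to enumerate, the bound (10) can be checked directly: optimize the hidden h_i by grid search, clamp the visible h_i to 4(2S_i^* − 1), and compare against the exact log-likelihood. A sketch (the three-unit network and its weights are invented; mean field is quite loose on so small a net, but the quadratic form is what makes it cheap to evaluate and optimize):

```python
import math

def g(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# J[i][j]: weight from unit j into unit i; unit 0 hidden, units 1 and 2 observed.
J = [[0.0, 0.0, 0.0],
     [4.0, 0.0, 0.0],
     [-2.0, -3.0, 0.0]]
S_star = {1: 1, 2: 0}                      # observed values S*

def log_p(S):
    """Exact log P(S) for a full binary configuration, eqs. (1)-(2)."""
    lp = 0.0
    for i in range(3):
        net = sum(J[i][j] * S[j] for j in range(3))
        lp += math.log(g((2 * S[i] - 1) * net))
    return lp

# Exact log P(S*) by summing over the single hidden unit.
exact = math.log(sum(math.exp(log_p((s0, 1, 0))) for s0 in (0, 1)))

def bound(h):
    """Right-hand side of eq. (10)."""
    b = 0.0
    for i in range(3):
        m = sum(J[i][j] * g(h[j]) for j in range(3))
        b -= 0.5 * (h[i] - m) ** 2
        b -= 0.5 * sum(J[i][j] ** 2 * g(h[j]) * g(-h[j]) for j in range(3))
    return b

# Visible means clamped to 4(2S* - 1); hidden mean found by grid search.
h1, h2 = 4.0 * (2 * S_star[1] - 1), 4.0 * (2 * S_star[2] - 1)
best = max(bound((h0 / 10.0, h1, h2)) for h0 in range(-50, 51))
print(exact, best)
assert best <= exact
```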
We now turn to the interpretation of the above bound in the binary domain. The
same bound can be obtained by first fixing the inputs to all the units to be the
means hi and then computing the negative total mean squared error between the
fixed inputs and the corresponding probabilistic inputs propagated from the parents.
The fact that this procedure in fact gives a lower bound on the log-likelihood would
be more difficult to justify by working with the binary representation alone.
In the extended representation the probability distribution for Zi is a truncated
Gaussian given Si and its parents. We therefore propose the partially factorized
posterior approximation:
Q(S, Z) = ∏_i Q(S_i) Q(Z_i | S_i)    (11)

where Q(Z_i | S_i) is a truncated Gaussian:

Q(Z_i | S_i) = [g((2S_i − 1)h_i) √(2π)]^{−1} e^{−½(Z_i − (2S_i − 1)h_i)²},    Z_i ≥ 0    (12)
As in the complementary domain the resulting bound depends quadratically on the
weights. Instead of writing out the bound here, however, it is more informative to
see its derivation in the binary domain.
A factorized posterior approximation (mean field) Q(S) = ∏_i q_i^{S_i} (1 − q_i)^{1−S_i} for
the binary network yields a bound

log P(S*) ≥ Σ_i { ⟨S_i log g(Σ_j J_ij S_j)⟩ + ⟨(1 − S_i) log(1 − g(Σ_j J_ij S_j))⟩ }    (13)
where the averages ⟨·⟩ are with respect to the Q distribution. These averages,
however, do not conform to analytical expressions. The tractable posterior approximation in the extended domain avoids the problem by implicitly making the
following Legendre transformation:
log g(x) = [½x² + log g(x)] − ½x² ≥ λx − G(λ) − ½x²    (14)

which holds since ½x² + log g(x) is a convex function. Inserting this back into the
relevant parts of equation 13 and performing the averages gives
log P(S*) ≥ Σ_i { [q_i λ_i − (1 − q_i) λ̄_i] Σ_j J_ij q_j − q_i G(λ_i) − (1 − q_i) G(λ̄_i) }
           − ½ Σ_i (Σ_j J_ij q_j)² − ½ Σ_{ij} J_ij² q_j (1 − q_j)    (15)
which is quadratic in the weights as expected. The mean activities q_i for the hidden
variables and the parameters λ can be optimized to make the bound tight. For the
instantiated variables we set q_i = S_i^*.
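The transformation (14) is ordinary convex duality: G(λ) = sup_x [λx − (½x² + log g(x))], with equality at λ = x + g′(x)/g(x), where g′ is the standard normal density. A grid-based numerical sketch (illustrative only):

```python
import math

def g(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def f(x):
    """The convex function f(x) = x^2/2 + log g(x) from eq. (14)."""
    return 0.5 * x * x + math.log(g(x))

xs = [i / 100.0 for i in range(-300, 301)]      # grid used for the conjugate

def G(lam):
    """Conjugate G(lambda) = sup_x [lambda*x - f(x)], approximated on the grid."""
    return max(lam * x - f(x) for x in xs)

# For x on the grid and any lambda, eq. (14) holds:
#   log g(x) >= lambda*x - G(lambda) - x^2/2.
for x in (-2.0, -0.5, 0.0, 1.0, 2.5):
    for lam in (0.1, 0.5, 1.0, 2.0):
        assert math.log(g(x)) >= lam * x - G(lam) - 0.5 * x * x

# The bound is tight at lambda* = f'(x) = x + phi(x)/g(x):
x = 0.7
phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
lam_star = x + phi / g(x)
gap = math.log(g(x)) - (lam_star * x - G(lam_star) - 0.5 * x * x)
print(gap)  # essentially zero
```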
5 Numerical experiments
To test these techniques in practice we applied the complementary network to the
problem of detecting motor failures from spectra obtained during motor operation
(see Petsche et al. 1995). We cast the problem as a continuous density estimation
problem. The training set consisted of 800 out of 1283 FFT spectra each with 319
components measured from an electric motor in a good operating condition but
under varying loads. The test set included the remaining 483 FFTs from the same
motor in a good condition in addition to three sets of 1340 FFTs each measured
when a particular fault was present. The goal was to use the likelihood of a test
FFT with respect to the estimated density to determine whether there was a fault
present in the motor.
We used a layered 6 → 20 → 319 generative model to estimate the training set
density. The resulting classification error rates on the test set are shown in figure 2
as a function of the threshold likelihood. The achieved error rates are comparable
to those of Petsche et al. (1995).
6 Conclusions
Network models that admit probabilistic formulations derive a number of advantages from probability theory. Moving away from explicit representations of dependencies, however, can make these properties harder to exploit in practice. We
showed that an efficient estimation procedure can be derived for sigmoid belief
networks, where standard methods are intractable in all but a few special cases
(e.g. trees and chains). The efficiency of our approach derived from the combination of two ideas. First, we avoided the intractability of computing likelihoods
in these networks by computing lower bounds instead. Second, we introduced
new representations for these networks and showed how the lower bounds in the
new representational domains transform the parameter estimation problem into
Figure 2: The probability of error curves for missing a fault (dashed lines) and
misclassifying a good motor (solid line) as a function of the likelihood threshold.
quadratic optimization.
Acknowledgments
The authors wish to thank Peter Dayan for helpful comments. This project was
supported in part by NSF grant CDA-9404932, by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research
Laboratories, by a grant from Siemens Corporation, and by grant N00014-94-10777 from the Office of Naval Research. Michael I. Jordan is a NSF Presidential
Young Investigator.
References
P. Dayan, G. Hinton, R. Neal, and R. Zemel (1995). The helmholtz machine . Neural
Computation 7: 889-904.
A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data
via the EM algorithm (1977). J. Roy. Statist. Soc. B 39:1-38.
G. Hinton, P. Dayan, B. Frey, and R. Neal (1995). The wake-sleep algorithm for
unsupervised neural networks. Science 268: 1158-1161.
S. L. Lauritzen and D. J. Spiegelhalter (1988). Local computations with probabilities on graphical structures and their application to expert systems. J. Roy. Statist.
Soc. B 50:154-227.
R. Neal. Connectionist learning of belief networks (1992). Artificial Intelligence 56:
71-113.
J. Pearl (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann:
San Mateo.
T. Petsche, A. Marcantonio, C. Darken, S. J. Hanson, G. M. Kuhn, I. Santoso
(1995). A neural network autoassociator for induction motor failure prediction. In
Advances in Neural Information Processing Systems 8. MIT Press.
L. K. Saul, T. Jaakkola, and M. I. Jordan (1995). Mean field theory for sigmoid
belief networks. M.I.T. Computational Cognitive Science Technical Report 9501.
Learning Model Bias
Jonathan Baxter
Department of Computer Science
Royal Holloway College, University of London
jon@dcs.rhbnc.ac.uk
Abstract
In this paper the problem of learning appropriate domain-specific
bias is addressed. It is shown that this can be achieved by learning
many related tasks from the same domain, and a theorem is given
bounding the number tasks that must be learnt. A corollary of the
theorem is that if the tasks are known to possess a common internal representation or preprocessing then the number of examples
required per task for good generalisation when learning n tasks simultaneously scales like O(a + ~), where O(a) is a bound on the
minimum number of examples requred to learn a single task, and
O( a + b) is a bound on the number of examples required to learn
each task independently. An experiment providing strong qualitative support for the theoretical results is reported.
1 Introduction
It has been argued (see [6]) that the main problem in machine learning is the biasing
of a learner's hypothesis space sufficiently well to ensure good generalisation from
a small number of examples. Once suitable biases have been found the actual
learning task is relatively trivial. Existing methods of bias generally require the
input of a human expert in the form of heuristics, hints [1], domain knowledge,
etc. Such methods are clearly limited by the accuracy and reliability of the expert's
knowledge and also by the extent to which that knowledge can be transferred to the
learner. Here I attempt to solve some of these problems by introducing a method
for automatically learning the bias.
The central idea is that in many learning problems the learner is typically embedded within an environment or domain of related learning tasks and that the
bias appropriate for a single task is likely to be appropriate for other tasks within
the same environment. A simple example is the problem of handwritten character
recognition. A preprocessing stage that identifies and removes any (small) rotations, dilations and translations of an image of a character will be advantageous for
recognising all characters. If the set of all individual character recognition problems
is viewed as an environment of learning tasks, this preprocessor represents a bias
that is appropriate to all tasks in the environment. It is likely that there are many
other currently unknown biases that are also appropriate for this environment. We
would like to be able to learn these automatically.
Bias that is appropriate for all tasks must be learnt by sampling from many tasks.
If only a single task is learnt then the bias extracted is likely to be specific to that
task. For example, if a network is constructed as in figure 1 and the output nodes
are simultaneously trained on many similar problems, then the hidden layers are
more likely to be useful in learning a novel problem of the same type than if only a
single problem is learnt. In the rest of this paper I develop a general theory of bias
learning based upon the idea of learning multiple related tasks. The theory shows
that a learner's generalisation performance can be greatly improved by learning
related tasks and that if sufficiently many tasks are learnt the learner's bias can be
extracted and used to learn novel tasks.
Other authors that have empirically investigated the idea of learning multiple related tasks include [5] and [8].
2 Learning Bias
For the sake of argument I consider learning problems that amount to minimizing
the mean squared error of a function h over some training set D. A more general
formulation based on statistical decision theory is given in [3]. Thus, it is assumed
that the learner receives a training set of (possibly noisy) input-output pairs D =
{(x_1, y_1), …, (x_m, y_m)}, drawn according to a probability distribution P on X × Y
(X being the input space and Y being the output space) and searches through its
hypothesis space ℋ for a function h : X → Y minimizing the empirical error,
E(h, D) = (1/m) Σ_{i=1}^{m} (h(x_i) − y_i)².    (1)
The true error or generalisation error of h is the expected error under P:
E(h, P) = ∫_{X×Y} (h(x) − y)² dP(x, y).    (2)
The hope of course is that an h with a small empirical error on a large enough
training set will also have a small true error, i.e. it will generalise well.
I model the environment of the learner as a pair (𝒫, Q) where 𝒫 = {P} is a set of
learning tasks and Q is a probability measure on 𝒫. The learner is now supplied
not with a single hypothesis space ℋ but with a hypothesis space family ℍ = {ℋ}.
Each ℋ ∈ ℍ represents a different bias the learner has about the environment. For
example, one ℋ may contain functions that are very smooth, whereas another ℋ
might contain more wiggly functions. Which hypothesis space is best will depend
on the kinds of functions in the environment. To determine the best ℋ ∈ ℍ for
(P, Q), we provide the learner not with a single training set D but with n such
training sets D I , ... , Dn. Each Di is generated by first sampling from 'P according
to Q to give Pi and then sampling m times from X x Y according to Pi to give
Di = {(XiI, Yil), ... , (Xim, Yim)}. The learner searches for the hypothesis space
1l E IHI with minimal empirical error on D I , ... , D n , where this is defined by
$$E^*(\mathcal{H}, D_1, \ldots, D_n) = \frac{1}{n} \sum_{i=1}^{n} \inf_{h \in \mathcal{H}} E(h, D_i). \qquad (3)$$
Learning Model Bias
171
Figure 1: Net for learning multiple tasks. Input $x_{ij}$ from training set $D_i$ is propagated forwards through the internal representation $f$ and then only through the
output network $g_i$. The error $[g_i(f(x_{ij})) - y_{ij}]^2$ is similarly backpropagated only
through the output network $g_i$ and then $f$. Weight updates are performed after all
training sets $D_1, \ldots, D_n$ have been presented.
The hypothesis space $\mathcal{H}$ with smallest empirical error is the one that is best able to
learn the $n$ data sets on average.
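A minimal sketch of what choosing $\mathcal{H} \in \mathbb{H}$ by (3) looks like in code. The family here (two linear function classes, an environment of noisy quadratics, and all parameter values) is an illustrative assumption, not part of the paper:

```python
import numpy as np

def fit_error(features, D):
    """inf over h in H of the empirical error (1), for the linear space
    spanned by the given feature functions, via least squares."""
    x, y = D
    A = np.column_stack([f(x) for f in features])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ coef - y) ** 2))

# Two candidate biases (hypothesis spaces), deliberately not nested:
FAMILIES = {
    "quadratics": [np.ones_like, lambda x: x, lambda x: x ** 2],
    "sinusoids":  [np.ones_like, lambda x: np.sin(3 * x)],
}

def select_bias(tasks):
    """Eq. (3): pick the space with the smallest average best empirical error."""
    avg = {name: np.mean([fit_error(f, D) for D in tasks])
           for name, f in FAMILIES.items()}
    return min(avg, key=avg.get)

rng = np.random.default_rng(0)
tasks = []
for _ in range(8):  # n = 8 tasks drawn from an environment of noisy quadratics
    a, b, c = rng.normal(size=3)
    x = rng.uniform(-1, 1, size=30)
    tasks.append((x, a * x ** 2 + b * x + c + rng.normal(0, 0.05, size=30)))
```

Because the environment consists of quadratics, the quadratic space attains the smaller average best-fit error over the n data sets and is selected as the learnt bias.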
There are two ways of measuring the true error of a bias learner. The first is how
well it generalises on the $n$ tasks $P_1, \ldots, P_n$ used to generate the training sets.
Assuming that in the process of minimising (3) the learner generates $n$ functions
$h_1, \ldots, h_n \in \mathcal{H}$ with minimal empirical error on their respective training sets(1), the
learner's true error is measured by:

$$E_n(h_1, \ldots, h_n, P_1, \ldots, P_n) = \frac{1}{n} \sum_{i=1}^{n} E(h_i, P_i). \qquad (4)$$
Note that in this case the learner's empirical
error is given by $E_n(h_1, \ldots, h_n, D_1, \ldots, D_n) = \frac{1}{n} \sum_{i=1}^{n} E(h_i, D_i)$. The second way
of measuring the generalisation error of a bias learner is to determine how good $\mathcal{H}$
is for learning novel tasks drawn from the environment $(\mathcal{P}, Q)$:
$$E^*(\mathcal{H}, Q) = \int_{\mathcal{P}} \inf_{h \in \mathcal{H}} E(h, P) \, dQ(P). \qquad (5)$$
A learner that has found an $\mathcal{H}$ with a small value of (5) can be said to have learnt
to learn the tasks in $\mathcal{P}$ in general. To state the bounds ensuring these two types of
generalisation a few more definitions must be introduced.
Definition 1. Let $\mathbb{H} = \{\mathcal{H}\}$ be a hypothesis space family. Let $\overline{\mathbb{H}} = \{h \in \mathcal{H} \colon \mathcal{H} \in \mathbb{H}\}$.
For any $h\colon X \to Y$, define a map $h\colon X \times Y \to [0,1]$ by $h(x, y) = (h(x) - y)^2$.
Note the abuse of notation: $h$ stands for two different functions depending on its
argument. Given a sequence of $n$ functions $\mathbf{h} = (h_1, \ldots, h_n)$, let $\mathbf{h}\colon (X \times Y)^n \to [0,1]$
be the function $(x_1, y_1, \ldots, x_n, y_n) \mapsto \frac{1}{n} \sum_{i=1}^{n} h_i(x_i, y_i)$. Let $\mathcal{H}^n$ be the set of all such
functions where the $h_i$ are all chosen from $\mathcal{H}$. Let $\mathbb{H}^n = \{\mathcal{H}^n \colon \mathcal{H} \in \mathbb{H}\}$. For each
$\mathcal{H} \in \mathbb{H}$ define $\mathcal{H}^*\colon \mathcal{P} \to [0,1]$ by $\mathcal{H}^*(P) = \inf_{h \in \mathcal{H}} E(h, P)$, and let $\mathbb{H}^* = \{\mathcal{H}^* \colon \mathcal{H} \in \mathbb{H}\}$.

(1) This assumes the infimum in (3) is attained.
J. BAXTER
172
Definition 2. Given a set of functions $\mathcal{H}$ from any space $Z$ to $[0,1]$, and any probability measure $P$ on $Z$, define the pseudo-metric $d_P$ on $\mathcal{H}$ by

$$d_P(h, h') = \int_Z |h(z) - h'(z)| \, dP(z).$$

Denote the smallest $\varepsilon$-cover of $(\mathcal{H}, d_P)$ by $\mathcal{N}(\varepsilon, \mathcal{H}, d_P)$. Define the $\varepsilon$-capacity of $\mathcal{H}$
by

$$C(\varepsilon, \mathcal{H}) = \sup_P \mathcal{N}(\varepsilon, \mathcal{H}, d_P),$$

where the supremum is over all discrete probability measures $P$ on $Z$. Definition 2 will be used to define the $\varepsilon$-capacity of spaces such as $\mathbb{H}^*$ and $\mathbb{H}^n$,
where from Definition 1 the latter is $\mathbb{H}^n = \{\mathcal{H}^n \colon \mathcal{H} \in \mathbb{H}\}$.
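For a finite class tabulated on a discrete measure, a cover in the sense of Definition 2 can be computed greedily. The sketch below is illustrative (the class of 40 random [0,1]-valued functions on a 5-point space is invented): the greedy set is $\varepsilon$-separated and, by construction, also an $\varepsilon$-cover, so its size is an upper bound on $\mathcal{N}(\varepsilon, \mathcal{H}, d_P)$:

```python
import numpy as np

def d_P(h1, h2, weights):
    """Pseudo-metric of Definition 2 for a discrete measure: E_P |h(z) - h'(z)|."""
    return float(np.sum(weights * np.abs(h1 - h2)))

def greedy_cover(functions, weights, eps):
    """Greedily build an eps-separated subset; by construction it is an eps-cover:
    any function not added lies within eps of some earlier center."""
    centers = []
    for h in functions:
        if all(d_P(h, c, weights) > eps for c in centers):
            centers.append(h)
    return centers

rng = np.random.default_rng(1)
H = [rng.random(5) for _ in range(40)]  # 40 functions h: Z -> [0,1], |Z| = 5
P = np.full(5, 0.2)                     # uniform discrete probability measure on Z
centers = greedy_cover(H, P, eps=0.25)
```

Taking the supremum of the resulting cover size over discrete measures P would give the capacity C(eps, H) for this finite class.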
The following theorem bounds the number of tasks and examples per task required
to ensure that the hypothesis space learnt by a bias learner will, with high probability, contain good solutions to novel tasks in the same environment(2).

Theorem 1. Let the $n$ training sets $D_1, \ldots, D_n$ be generated by sampling $n$ times
from the environment $\mathcal{P}$ according to $Q$ to give $P_1, \ldots, P_n$, and then sampling $m$
times from each $P_i$ to generate $D_i$. Let $\mathbb{H} = \{\mathcal{H}\}$ be a hypothesis space family and
suppose a learner chooses $\mathcal{H} \in \mathbb{H}$ minimizing (3) on $D_1, \ldots, D_n$. For all $\varepsilon > 0$ and
$0 < \delta < 1$, if $n$ and $m$ are sufficiently large, where the required values are expressed
in terms of the capacities $C(\varepsilon, \mathbb{H}^*)$ and $C(\varepsilon, \mathbb{H}^n)$ of Definition 2 (see [3] for the
explicit formulae), then with probability at least $1 - \delta$ the learnt $\mathcal{H}$ satisfies
$E^*(\mathcal{H}, Q) \le E^*(\mathcal{H}, D_1, \ldots, D_n) + \varepsilon$.
The bound on $m$ in Theorem 1 is also the number of examples required per
task to ensure generalisation of the first kind mentioned above. That is, it is
the number of examples required in each data set $D_i$ to ensure good generalisation on average across all $n$ tasks when using the hypothesis space family $\mathbb{H}$. If
we let $m(\mathbb{H}, n, \varepsilon, \delta)$ be the number of examples required per task to ensure that
$\Pr\{D_1, \ldots, D_n \colon |E_n(h_1, \ldots, h_n, D_1, \ldots, D_n) - E_n(h_1, \ldots, h_n, P_1, \ldots, P_n)| > \varepsilon\} < \delta$,
where all $h_i \in \mathcal{H}$ for some fixed $\mathcal{H} \in \mathbb{H}$, then

$$G(\mathbb{H}, n, \varepsilon, \delta) = \frac{m(\mathbb{H}, 1, \varepsilon, \delta)}{m(\mathbb{H}, n, \varepsilon, \delta)}$$
represents the advantage in learning $n$ tasks as opposed to one task (the ordinary
learning scenario). Call $G(\mathbb{H}, n, \varepsilon, \delta)$ the n-task gain of $\mathbb{H}$. Using the fact [3] that

$$C(\varepsilon, \mathbb{H}^1) \le C(\varepsilon, \mathbb{H}^n) \le C(\varepsilon, \mathbb{H}^1)^n,$$

and the formula for $m$ from Theorem 1, we have

$$1 \le G(\mathbb{H}, n, \varepsilon, \delta) \le n.$$

(2) The bounds in Theorem 1 can be improved to $O\!\left(\frac{1}{\varepsilon}\right)$ if all $\mathcal{H} \in \mathbb{H}$ are convex and the
error is the squared loss [7].
Thus, at least in the worst case analysis here, learning $n$ tasks in the same environment can result in anything from no gain at all to an $n$-fold reduction in the number
of examples required per task. In the next section a very intuitive analysis of the
conditions leading to the extreme values of $G(\mathbb{H}, n, \varepsilon, \delta)$ is given for the situation
where an internal representation is being learnt for the environment. I will also say
more about the bound on the number of tasks ($n$) in Theorem 1.
3 Learning Internal Representations with Neural Networks
In figure 1, $n$ tasks are being learnt using a common representation $f$. In this
case $\mathbb{H}^n$ is the set of all possible networks formed by choosing the weights in the
representation and output networks, and $\mathbb{H}^1$ is the same space with a single output node.
If the $n$ tasks were learnt independently (i.e. without a common representation) then
each task would use its own copy of $\mathbb{H}^1$, i.e. we wouldn't be forcing the tasks to all
use the same representation.
Let $W_R$ be the total number of weights in the representation network and $W_O$
be the number of weights in an individual output network. Suppose also that all
the nodes in each network are Lipschitz bounded(3). Then it can be shown [3] that
$\ln C(\varepsilon, \mathbb{H}^n) \le O\!\left(\left(W_O + \frac{W_R}{n}\right)\ln\frac{1}{\varepsilon}\right)$ and $\ln C(\varepsilon, \mathbb{H}^*) \le O\!\left(W_R \ln\frac{1}{\varepsilon}\right)$. Substituting
these bounds into Theorem 1 shows that to generalise well on average on $n$ tasks
using a common representation requires $m = O\!\left(\frac{1}{\varepsilon^2}\left[\left(W_O + \frac{W_R}{n}\right)\ln\frac{1}{\varepsilon} + \frac{1}{n}\ln\frac{1}{\delta}\right]\right) =
O\!\left(a + \frac{b}{n}\right)$ examples of each task. In addition, if $n \ge O\!\left(W_R \ln\frac{1}{\varepsilon}\right)$ then with high
probability the resulting representation will be good for learning novel tasks from
the same environment. Note that this bound is very large. However it results from a
worst-case analysis and so is highly likely to be beaten in practice. This is certainly
borne out by the experiment in the next section.
The learning gain satisfies

$$G(\mathbb{H}, n, \varepsilon) \approx \frac{W_O + W_R}{W_O + \frac{W_R}{n}}.$$

Thus, if $W_R \gg W_O$, $G \approx n$, while if $W_O \gg W_R$ then $G \approx 1$. This is perfectly intuitive: when $W_O \gg W_R$ the representation network is hardly doing any work, most of the power of
the network is in the output networks and hence the tasks are effectively being
learnt independently. However, if $W_R \gg W_O$ then the representation network
dominates; there is very little extra learning to be done for the individual tasks
once the representation is known, and so each example from every task is providing
full information to the representation network. Hence the gain of $n$.
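The two regimes are easy to probe numerically. The tiny sketch below evaluates the heuristic gain implied by the $O(a + b/n)$ examples-per-task scaling (with $a$ proportional to $W_O$ and $b$ to $W_R$); the weight counts are arbitrary illustrative values, not taken from the paper:

```python
def n_task_gain(w_rep, w_out, n):
    """Heuristic n-task gain G ~ (W_O + W_R) / (W_O + W_R / n), the ratio of
    the O(a + b/n) examples-per-task requirement at n = 1 to its value at n."""
    return (w_out + w_rep) / (w_out + w_rep / n)

g_rep = n_task_gain(w_rep=10_000, w_out=10, n=20)  # representation dominates
g_out = n_task_gain(w_rep=10, w_out=10_000, n=20)  # output networks dominate
```

With a dominant representation network the gain is close to n (here about 19.6 for n = 20), while with dominant output networks it barely exceeds 1, matching the intuition in the text.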
Note that once a representation has been learnt, the sampling burden for learning a
novel task will be reduced to $m = O\!\left(\frac{1}{\varepsilon^2}\left[W_O \ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right]\right)$, because only the output
network has to be learnt. If this theory applies to human learning then the fact
that we are able to learn words, faces, characters, etc. with relatively few examples
(a single example in the case of faces) indicates that our "output networks" are very
small, and, given our large ignorance concerning an appropriate representation, the
representation network for learning in these domains would have to be large, so we
would expect to see an n-task gain of nearly $n$ for learning within these domains.

(3) A node $a\colon \mathbb{R}^p \to \mathbb{R}$ is Lipschitz bounded if there exists a constant $C$ such that $|a(x) - a(x')| \le C\|x - x'\|$ for all $x, x' \in \mathbb{R}^p$. Note that this rules out threshold nodes, but sigmoid
squashing functions are okay as long as the weights are bounded.
4 Experiment: Learning Symmetric Boolean Functions
In this section the results of an experiment are reported in which a neural network
was trained to learn symmetric(4) Boolean functions. The network was the same as
the one in figure 1 except that the output networks $g_i$ had no hidden layers. The
input space $X = \{0, 1\}^{10}$ was restricted to include only those inputs with between
one and four ones. The functions in the environment of the network consisted of all
possible symmetric Boolean functions over the input space, except the trivial "constant 0" and "constant 1" functions. Training sets $D_1, \ldots, D_n$ were generated by
first choosing $n$ functions (with replacement) uniformly from the fourteen possible,
and then choosing $m$ input vectors by choosing a random number between 1 and 4
and placing that many 1's at random in the input vector. The training sets were
learnt by minimising the empirical error (3) using the backpropagation algorithm
as outlined in figure 1. Separate simulations were performed with $n$ ranging from
1 to 21 in steps of four and $m$ ranging from 1 to 171 in steps of 10. Further details
of the experimental procedure may be found in [3], chapter 4.
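The training procedure of figure 1 can be sketched in a few lines of NumPy. This is a loose reconstruction, not the original code: the network sizes, learning rate, and sigmoid heads are assumptions, trivial constant tasks are not filtered out, and plain full-batch gradient descent stands in for the backpropagation details of [3]:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def sample_input():
    """An input in {0,1}^10 with between one and four 1's."""
    x = np.zeros(10)
    x[rng.choice(10, size=rng.integers(1, 5), replace=False)] = 1.0
    return x

def random_symmetric_task():
    """A symmetric Boolean function: a lookup table on the count of 1's."""
    table = rng.integers(0, 2, size=5).astype(float)
    return lambda x, t=table: t[int(x.sum())]

n, m, hidden = 8, 60, 12
tasks = [random_symmetric_task() for _ in range(n)]
X = np.array([sample_input() for _ in range(m)])
Y = np.array([[task(x) for x in X] for task in tasks]).T   # m x n targets

W1 = rng.normal(0.0, 0.5, (10, hidden))   # shared representation f
W2 = rng.normal(0.0, 0.5, (hidden, n))    # one output column per task g_i

def loss():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - Y) ** 2))

before = loss()
lr = 1.0
for _ in range(500):                       # weight update after all training sets
    F = sigmoid(X @ W1)                    # forward through f
    out = sigmoid(F @ W2)                  # forward through each g_i
    d_out = (out - Y) * out * (1.0 - out)  # backprop through the g_i ...
    d_F = (d_out @ W2.T) * F * (1.0 - F)   # ... and then through f
    W2 -= lr * F.T @ d_out / m
    W1 -= lr * X.T @ d_F / m
after = loss()
```

Every task's error gradient flows through the single shared matrix W1, which is how each example from every task informs the common representation.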
Once the network had successfully learnt the $n$ training sets its generalization ability
was tested on all $n$ functions used to generate the training set. In this case the
generalisation error (equation (4)) could be computed exactly by calculating the
network's output (for all $n$ functions) for each of the 385 input vectors. The generalisation error as a function of $n$ and $m$ is plotted in figure 2 for two independent
sets of simulations. Both simulations support the theoretical result that the number
of examples $m$ required for good generalisation decreases with increasing $n$ (cf. Theorem 1).

Figure 2: Learning surfaces for two independent simulations.

For training sets $D_1, \ldots, D_n$ that led to a generalisation error of less than
0.01, the representation network $f$ was extracted and tested for its true error, where
this is defined as in equation (5) (the hypothesis space $\mathcal{H}$ is the set of all networks
formed by attaching any output network to the fixed representation network $f$).
Although there is insufficient space to show the representation error here (see [3]
for the details), it was found that the representation error monotonically decreased
with the number of tasks learnt, verifying the theoretical conclusions.
The representation's output for all inputs is shown in figure 3 for sample sizes
$(n, m) = (1, 131)$, $(5, 31)$ and $(13, 31)$. All outputs corresponding to inputs from
the same category (i.e. the same number of ones) are labelled with the same symbol.
The network in the $n = 1$ case generalised perfectly but the resulting representation
does not capture the symmetry in the environment and also does not distinguish
the inputs with 2, 3 and 4 "1's" (because the function learnt didn't), showing that
(4) A symmetric Boolean function is one that is invariant under interchange of its inputs,
or equivalently, one that only depends on the number of "1's" in its input (e.g. parity).
learning a single function is not sufficient to learn an appropriate representation.
By $n = 5$ the representation's behaviour has improved (the inputs with differing
numbers of 1's are now well separated, but they are still spread around a lot) and
by $n = 13$ it is perfect.

Figure 3: Plots of the output of a representation generated from the indicated $(n, m)$
sample, for $(n, m) = (1, 131)$, $(5, 31)$ and $(13, 31)$.

As well as reducing the sampling burden for the $n$ tasks in
the training set, a representation learnt on sufficiently many tasks should be good
for learning novel tasks and should greatly reduce the number of examples required
for new tasks. This too was experimentally verified although there is insufficient
space to present the results here (see [3]).
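The symmetry property that defines this experiment's environment (footnote 4) is cheap to verify exhaustively for small input sizes. The two example functions below are illustrative, not from the paper:

```python
from itertools import permutations, product

def is_symmetric(f, n):
    """Check invariance under every interchange of the n inputs, which is
    equivalent to f depending only on the number of 1's (footnote 4)."""
    return all(f(p) == f(bits)
               for bits in product((0, 1), repeat=n)
               for p in permutations(bits))

parity = lambda bits: sum(bits) % 2   # symmetric: depends only on the count of 1's
first_bit = lambda bits: bits[0]      # depends on position, hence not symmetric
```

For n = 10 inputs restricted to one through four 1's there are 2^4 such functions of the count, and dropping the two constants leaves the fourteen used above.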
5 Conclusion
I have introduced a formal model of bias learning and shown that (under mild
restrictions) a learner can sample sufficiently many times from sufficiently many
tasks to learn bias that is appropriate for the entire environment. In addition, the
number of examples required per task to learn $n$ tasks was shown
to be upper bounded by $O(a + b/n)$ for appropriate environments. See [2] for an
analysis of bias learning within an information-theoretic framework which leads to
an exact $a + b/n$-type bound.
References

[1] Y. S. Abu-Mostafa. Learning from Hints in Neural Networks. Journal of Complexity, 6:192-198, 1989.
[2] J. Baxter. A Bayesian Model of Bias Learning. Submitted to COLT 1996, 1995.
[3] J. Baxter. Learning Internal Representations. PhD thesis, Department of Mathematics and Statistics, The Flinders University of South Australia, 1995. Draft copy in Neuroprose Archive under "/pub/neuroprose/Thesis/baxter.thesis.ps.Z".
[4] J. Baxter. Learning Internal Representations. In Proceedings of the Eighth International Conference on Computational Learning Theory, Santa Cruz, California, 1995. ACM Press.
[5] R. Caruana. Learning Many Related Tasks at the Same Time with Backpropagation. In Advances in Neural Information Processing 5, 1993.
[6] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Comput., 4:1-58, 1992.
[7] W. S. Lee, P. L. Bartlett, and R. C. Williamson. Sample Complexity of Agnostic Learning with Squared Loss. In preparation, 1995.
[8] T. M. Mitchell and S. Thrun. Learning One More Thing. Technical Report CMU-CS-94-184, CMU, 1994.
Generalized Learning Vector Quantization
Atsushi Sato & Keiji Yamada
Information Technology Research Laboratories,
NEC Corporation
1-1, Miyazaki 4-chome, Miyamae-ku,
Kawasaki, Kanagawa 216, Japan
E-mail: {asato.yamada}@pat.cl.nec.co.jp
Abstract
We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated
based on the steepest descent method in order to minimize the cost
function. The cost function is determined so that the obtained
learning rule satisfies the convergence condition. We prove that
Kohonen's rule as used in LVQ does not satisfy the convergence
condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ
is superior to LVQ in recognition ability.
1 INTRODUCTION
Artificial neural network models have been applied to character recognition with
good results for small-set characters such as alphanumerics (Le Cun et al., 1989)
(Yamada et al., 1989). However, applying the models to large-set characters such
as Japanese or Chinese characters is difficult because most of the models are based
on Multi-Layer Perceptron (MLP) with the back propagation algorithm, which has
a problem in regard to local minima as well as requiring a lot of calculation.
Classification methods based on pattern matching have commonly been used for
large-set character recognition. Learning Vector Quantization (LVQ) has been studied to generate optimal reference vectors because of its simple and fast learning algorithm (Kohonen, 1989; 1995). However, one problem with LVQ is that reference
vectors diverge and thus degrade recognition ability. Much work has been done on
improving LVQ (Lee & Song, 1993) (Miyahara & Yoda, 1993) (Sato & Tsukumo,
1994), but the problem remains unsolved.
Recently, a generalization of the Simple Competitive Learning (SCL) has been under
study (Pal et al., 1993) (Gonzalez et al., 1995), and one unsupervised learning
rule has been derived based on the steepest descent method to minimize the cost
function. Pal et al. call their model "Generalized Learning Vector Quantization,"
but it is not a generalization of Kohonen's LVQ.
In this paper, we propose a new learning method for supervised learning, in which
reference vectors are updated based on the steepest descent method, to minimize
the cost function. This is a generalization of Kohonen's LVQ, so we call it "Generalized Learning Vector Quantization (GLVQ)." The cost function is determined so
that the obtained learning rule satisfies the convergence condition. We prove that
Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus
degrades recognition ability. Preliminary experiments revealed that non-linearity
in the cost function is very effective for improving recognition ability. Printed Chinese character recognition experiments were carried out, and we can show that the
recognition ability of GLVQ is very high compared with LVQ.
2 REVIEW OF LVQ
Assume that a number of reference vectors $w_k$ are placed in the input space. Usually, several reference vectors are assigned to each class. An input vector $x$ is decided
to belong to the same class to which the nearest reference vector belongs. Let $w_k(t)$
represent sequences of the $w_k$ in the discrete-time domain. Heretofore, several LVQ
algorithms have been proposed (Kohonen, 1995), but in this section, we will focus
on LVQ2.1. Starting with properly defined initial values, the reference vectors are
updated as follows by the LVQ2.1 algorithm:

$$w_i(t+1) = w_i(t) - \alpha(t)\,(x - w_i(t)), \qquad (1)$$
$$w_j(t+1) = w_j(t) + \alpha(t)\,(x - w_j(t)), \qquad (2)$$
where $0 < \alpha(t) < 1$, and $\alpha(t)$ may decrease monotonically with time. The two
reference vectors $w_i$ and $w_j$ are the nearest to $x$; $x$ and $w_j$ belong to the same
class, while $x$ and $w_i$ belong to different classes. Furthermore, $x$ must fall into
the "window," which is defined around the midplane of $w_i$ and $w_j$. That is, if the
following condition is satisfied, $w_i$ and $w_j$ are updated:

$$\min\left(\frac{d_i}{d_j}, \frac{d_j}{d_i}\right) > s, \qquad (3)$$

where $d_i = |x - w_i|$ and $d_j = |x - w_j|$. The LVQ2.1 algorithm is based on the idea
of shifting the decision boundaries toward the Bayes limits with attractive and
repulsive forces from $x$. However, no attention is given to what might happen to
the location of the $w_k$, so the reference vectors diverge in the long run. LVQ3
has been proposed to ensure that the reference vectors continue approximating the
class distributions, but it must be noted that if only one reference vector is assigned
to each class, LVQ3 is the same as LVQ2.1, and the problem of reference vector
divergence remains unsolved.
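A single LVQ2.1 update, Eqs. (1)-(3), can be sketched as follows. The implementation and the toy two-reference configuration are illustrative; Euclidean distances are assumed, and the values of α and s are those quoted later in the experiments:

```python
import numpy as np

def lvq21_step(x, label, W, W_labels, alpha=0.05, s=0.65):
    """One LVQ2.1 update: find the nearest same-class reference w_j and the
    nearest different-class reference w_i; if x falls inside the window (3),
    apply the updates (1)-(2)."""
    d = np.linalg.norm(W - x, axis=1)
    same = W_labels == label
    j = np.flatnonzero(same)[np.argmin(d[same])]
    i = np.flatnonzero(~same)[np.argmin(d[~same])]
    if min(d[i] / d[j], d[j] / d[i]) > s:   # window condition (3)
        W[i] -= alpha * (x - W[i])          # repel the incorrect reference, Eq. (1)
        W[j] += alpha * (x - W[j])          # attract the correct reference, Eq. (2)
    return W

W = np.array([[0.3, 0.5], [0.7, 0.5]])      # one reference per class
W_labels = np.array([0, 1])
x = np.array([0.52, 0.5])                   # near the midplane, so inside the window
W = lvq21_step(x, 0, W, W_labels)
```

Note that nothing in the update bounds how far W[i] can drift; repeated wrong-class presentations keep pushing it outward, which is the divergence problem discussed above.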
3 GENERALIZED LVQ
To ensure that the reference vectors continue approximating the class distributions,
we propose a new learning method based on minimizing the cost function. Let $w_1$
be the nearest reference vector that belongs to the same class as $x$, and likewise let
$w_2$ be the nearest reference vector that belongs to a different class from $x$. Let us
consider the relative distance difference $\mu(x)$ defined as follows:

$$\mu(x) = \frac{d_1 - d_2}{d_1 + d_2}, \qquad (4)$$
where $d_1$ and $d_2$ are the distances of $x$ from $w_1$ and $w_2$, respectively. $\mu(x)$ ranges
between $-1$ and $+1$, and if $\mu(x)$ is negative, $x$ is classified correctly; otherwise, $x$
is classified incorrectly. In order to improve error rates, $\mu(x)$ should decrease for
all input vectors. Thus, a criterion for learning is formulated as the minimizing of
a cost function $S$ defined by

$$S = \sum_{i=1}^{N} f(\mu(x_i)), \qquad (5)$$

where $N$ is the number of input vectors for training, and $f(\mu)$ is a monotonically
increasing function. To minimize $S$, $w_1$ and $w_2$ are updated based on the steepest
descent method with a small positive constant $\alpha$ as follows:

$$w_i \leftarrow w_i - \alpha \frac{\partial S}{\partial w_i}, \qquad i = 1, 2. \qquad (6)$$

If the squared Euclidean distance, $d_i = |x - w_i|^2$, is used, we can obtain the following:
$$\frac{\partial S}{\partial w_1} = \frac{\partial S}{\partial \mu}\frac{\partial \mu}{\partial d_1}\frac{\partial d_1}{\partial w_1} = -\frac{\partial f}{\partial \mu} \cdot \frac{4 d_2}{(d_1 + d_2)^2}\,(x - w_1), \qquad (7)$$

$$\frac{\partial S}{\partial w_2} = \frac{\partial S}{\partial \mu}\frac{\partial \mu}{\partial d_2}\frac{\partial d_2}{\partial w_2} = +\frac{\partial f}{\partial \mu} \cdot \frac{4 d_1}{(d_1 + d_2)^2}\,(x - w_2). \qquad (8)$$
Therefore, the GLVQ learning rule can be described as follows:

$$w_1 \leftarrow w_1 + \alpha \frac{\partial f}{\partial \mu} \cdot \frac{d_2}{(d_1 + d_2)^2}\,(x - w_1), \qquad (9)$$

$$w_2 \leftarrow w_2 - \alpha \frac{\partial f}{\partial \mu} \cdot \frac{d_1}{(d_1 + d_2)^2}\,(x - w_2). \qquad (10)$$
Let us discuss the meaning of $f(\mu)$. $\partial f / \partial \mu$ is a kind of gain factor for updating,
and its value depends on $x$. In other words, $\partial f / \partial \mu$ is a weight for each $x$. To
decrease the error rate, it is effective to update reference vectors mainly by input
vectors around class boundaries, so that the decision boundaries are shifted toward
the Bayes limits. Accordingly, $f(\mu)$ should be a non-linear monotonically increasing
function, and it is considered that classification ability depends on the definition
of $f(\mu)$. In this paper, $\partial f / \partial \mu = f(\mu, t)\{1 - f(\mu, t)\}$ was used in the experiments,
where $t$ is learning time and $f(\mu, t)$ is the sigmoid function $1/(1 + e^{-\mu t})$. In this
case, $\partial f / \partial \mu$ has a single peak at $\mu = 0$, and the peak width becomes narrower as $t$
increases, so the input vectors that affect learning are gradually restricted to those
around the decision boundaries.
Let us discuss the meaning of $\mu$. $w_1$ and $w_2$ are updated by attractive and repulsive
forces from $x$, respectively, as shown in Eqs. (9) and (10), and the quantities of
updating, $|\Delta w_1|$ and $|\Delta w_2|$, depend on derivatives of $\mu$. Reference vectors will
converge to the equilibrium states defined by attractive and repulsive forces, so it
is considered that the convergence property depends on the definition of $\mu$.
DISCUSSION
First, we show that the conventional LVQ algorithms can be derived based on the
framework of GLVQ. If ft = dl for dl < d2, ft = -d2 for dl > d2, and f(ft) = ft, the
cost f~nction is written as S = ~dl <d2 dl - ~dl >d2 d2 . Then, we can obtain the
followmg:
for dl < d 2
(11)
WI - WI + a(x - WI), W2 - W2
(12)
for dl > d 2
W2 - W2 - a(x - W2), WI - WI
This learning algorithm is the same as LVQ1. If $\mu = d_1 - d_2$ and $f(\mu) = \mu$ for $|\mu| < s$,
$f(\mu) = \mathrm{const}$ for $|\mu| > s$, the cost function is written as $S = \sum_{|\mu| < s} (d_1 - d_2) + C$.
Then, we can obtain the following: if $|\mu| < s$ ($x$ falls into the window),

$$w_1 \leftarrow w_1 + \alpha(x - w_1), \qquad (13)$$
$$w_2 \leftarrow w_2 - \alpha(x - w_2). \qquad (14)$$

In this case, $w_1$ and $w_2$ are updated simultaneously, and this learning algorithm
is the same as LVQ2.1. So it can be said that GLVQ is a generalized model that
includes the conventional LVQs.
Next, we discuss the convergence condition. We can obtain other learning algorithms by defining a different cost function, but it must be noted that the convergence property depends on the definition of the cost function. The main difference
between GLVQ and LVQ2.1 is the definition of $\mu$: $\mu = (d_1 - d_2)/(d_1 + d_2)$ in GLVQ,
$\mu = d_1 - d_2$ in LVQ2.1. Why do the reference vectors diverge in LVQ2.1, while they
converge in GLVQ, as shown later? In order to clarify the convergence condition,
let us consider the following learning rule:

$$w_1 \leftarrow w_1 + \alpha\,|x - w_2|^k\,(x - w_1), \qquad (15)$$
$$w_2 \leftarrow w_2 - \alpha\,|x - w_1|^k\,(x - w_2). \qquad (16)$$
Here, $|\Delta w_1|$ and $|\Delta w_2|$ are the quantities of updating by the attractive and the
repulsive forces, respectively. The ratio of these two is calculated as follows:

$$\frac{|\Delta w_1|}{|\Delta w_2|} = \frac{\alpha\,|x - w_2|^k\,|x - w_1|}{\alpha\,|x - w_1|^k\,|x - w_2|} = \frac{|x - w_2|^{k-1}}{|x - w_1|^{k-1}}. \qquad (17)$$
If the initial values of reference vectors are properly defined, most $x$'s will satisfy
$|x - w_1| < |x - w_2|$. Therefore, if $k > 1$, the attractive force is greater than the
repulsive force, and the reference vectors will converge, because the attractive forces
come from $x$'s that belong to the same class as $w_1$. In GLVQ, $k = 2$ as shown in
Eqs. (9) and (10), and the vectors will converge, while they will diverge in LVQ2.1
because $k = 0$. According to the above discussion, we can use $d_i/(d_1 + d_2)$ or just
$d_i$, instead of $d_i/(d_1 + d_2)^2$, in Eqs. (9) and (10). This correction does not affect the
convergence condition. The essential problem in LVQ2.1 results from the drawback
in Kohonen's rule with $k = 0$. In other words, the cost function used in LVQ is not
determined so that the obtained learning rule satisfies the convergence condition.
5 EXPERIMENTS

5.1 PRELIMINARY EXPERIMENTS
support the above discussion on the convergence condition. Two-dimensional input
vectors with two classes shown in Fig. 1(a) were used in the experiments. The ideal
decision boundary that minimizes the error rate is shown by the broken line. One
reference vector was assigned to each class with initial values (x, y) = (0.3,0.5) for
Class A and (x,y) = (0.7,0.5) for Class B. Figure l(b) shows the distance between
the two reference vectors during learning. The distance remains the same value for
k > 1, while it increases with time for k ~ 1; that is, the reference vectors diverge.
Figure 2 shows the experimental results from GLVQ for linearly non-separable patterns compared with LVQ2.1. The input vectors shown in Fig. 2(a) were obtained
by shifting all input vectors shown in Fig. l(a) to the right by Iy - 0.51. The ideal
Figure 1: Experimental results that support the discussion on the convergence
condition with one reference vector for each class. (a) Input vectors used in the
experiments. The broken line shows the ideal decision boundary. (b) Distance
between two reference vectors for each $k$ value during learning. The distance remains
the same value for $k > 1$, while it diverges for $k \le 1$.
decision boundary that minimizes the error rate is shown by the broken line. Two
reference vectors were assigned to each class with initial values $(x, y) = (0.3, 0.4)$
and $(0.3, 0.6)$ for Class A, and $(x, y) = (0.7, 0.4)$ and $(0.7, 0.6)$ for Class B. The gain
factor $\alpha$ was 0.004 in GLVQ and LVQ2.1, and the window parameter $s$ in LVQ2.1
was 0.8 in the experiments.
Figure 2(b) shows the number of error counts for all the input vectors during
learning. GLVQ(NL) shows results by GLVQ with a non-linear function; that is,
$\partial f / \partial \mu = f(\mu, t)\{1 - f(\mu, t)\}$. The number of error counts decreased with time to
the minimum determined by the Bayes limit. GLVQ(L) shows results by GLVQ
with a linear function; that is, $\partial f / \partial \mu = 1$. The number of error counts did not
decrease to the minimum. This indicates that non-linearity of the cost function is
very effective for improving recognition ability. Results using LVQ2.1 show that the
number of error counts decreased in the beginning, but overall increased gradually
with time. The degradation in the recognition ability results from the divergence
of the reference vectors, as we have mentioned earlier.
5.2 CHARACTER RECOGNITION EXPERIMENTS
Printed Chinese character recognition experiments were carried out to examine the
performance of GLVQ. Thirteen kinds of printed fonts with 500 classes were used
in the experiments. The total number of characters was 13,000; half of which were
used as training data, and the other half were used as test data. As input vectors,
256-dimensional orientation features were used (Hamanaka et al., 1993). Only one
reference vector was assigned to each class, and their initial values were defined by
averaging training data for each class.

Recognition results for test data are tabulated in Table 1 compared with other
methods. TM is the template matching method using mean vectors. LVQ2 is the
earlier version of LVQ2.1. The learning algorithm is the same as LVQ2.1 described
in Section 2, but $d_i$ must be less than $d_j$. The gain factor $\alpha$ was 0.05, and the window
parameter $s$ was 0.65 in the experiments. The experimental result by LVQ3 was
Figure 2: Experimental results for linearly non-separable patterns with two reference vectors for each class. (a) Input vectors used in the experiments. The broken
line shows the ideal decision boundary. (b) The number of error counts during learning. GLVQ(NL) and GLVQ(L) denote the proposed method using a non-linear
and a linear function in the cost function, respectively. This shows that non-linearity
of the cost function is very effective for improving classification ability.
Table 1: Experimental results for printed Chinese character recognition compared
with other methods.
Methods         | TM^1 | LVQ2^2 | LVQ2.1 | IVQ^3 | GLVQ
Error rates (%) | 0.23 | 0.18   | 0.11   | 0.08  | 0.05

^1 Template matching using mean vectors.
^2 The earlier version of LVQ2.1.
^3 Our previous model (Improved Vector Quantization).
the same as that by LVQ2.1, because only one reference vector was assigned to
each class. IVQ (Improved Vector Quantization) is our previous model based on
Kohonen's rule (Sato & Tsukumo, 1994).
The error rate was extremely low for GLVQ, and a recognition rate of 99.95% was
obtained. Ambiguous results can be rejected by thresholding the value of μ(x). If
input vectors with μ(x) ≥ -0.02 were rejected, a recognition rate of 100% would
be obtained, with a rejection rate of 0.08% for this experiment.
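The rejection rule can be sketched as follows, writing μ(x) = (d1 - d2)/(d1 + d2) with d1 the distance to the nearest reference vector of the input's own class and d2 the distance to the nearest reference vector of any other class; the threshold -0.02 is the one quoted above, everything else is illustrative:

```python
def mu(d1, d2):
    # misclassification measure: negative when classified correctly,
    # close to zero when the input lies near the decision boundary
    return (d1 - d2) / (d1 + d2)

def classify_with_reject(d1, d2, threshold=-0.02):
    # reject ambiguous inputs whose mu(x) is above the threshold
    if mu(d1, d2) >= threshold:
        return "reject"
    return "accept"

print(classify_with_reject(0.30, 0.90))  # clearly correct
print(classify_with_reject(0.50, 0.51))  # near the boundary
```

Raising the threshold trades a higher rejection rate for a higher recognition rate on the accepted inputs.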
6 CONCLUSION
We proposed the Generalized Learning Vector Quantization as a new learning
method. We formulated the criterion for learning as the minimizing of the cost
function, and obtained the learning rule based on the steepest descent method.
GLVQ is a generalized method that includes LVQ. We discussed the convergence
condition and showed that the convergence property depends on the definition of
the cost function. We proved that the essential problem of the divergence of the
reference vectors in LVQ2.1 results from a drawback of Kohonen's rule that does
not satisfy the convergence condition. Preliminary experiments revealed that nonlinearity in the cost function is very effective for improving recognition ability. We
carried out printed Chinese character recognition experiments and obtained a recognition rate of 99.95%. The experimental results revealed that GLVQ is superior to
the conventional LVQ algorithms.
Acknowledgements
We are indebted to Mr. Jun Tsukumo and our colleagues in the Pattern Recognition
Research Laboratory for their helpful cooperation.
References
Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and
L. D. Jackel, "Handwritten Digit Recognition with a Back-Propagation Network,"
Neural Information Processing Systems 2, pp. 396-404 (1989).
K. Yamada, H. Kami, J. Tsukumo, and T. Temma, "Handwritten Numeral Recognition by Multi-Layered Neural Network with Improved Learning Algorithm," Proc.
of the International Joint Conference on Neural Networks 89, Vol. 2, pp. 259-266
(1989).
T. Kohonen, Self-Organization and Associative Memory, 3rd ed., Springer-Verlag
(1989).
T. Kohonen, "LVQ_PAK Version 3.1 - The Learning Vector Quantization Program
Package," LVQ Programming Team of the Helsinki University of Technology, (1995).
S. W. Lee and H. H. Song, "Optimal Design of Reference Models Using Simulated
Annealing Combined with an Improved LVQ3," Proc. of the International Conference on Document Analysis and Recognition, pp. 244-249 (1993).
K. Miyahara and F. Yoda, "Printed Japanese Character Recognition Based on
Multiple Modified LVQ Neural Network," Proc. of the International Conference on
Document Analysis and Recognition, pp. 250- 253 (1993).
A. Sato and J. Tsukumo, "A Criterion for Training Reference Vectors and Improved
Vector Quantization," Proc. of the International Conference on Neural Networks,
Vol. 1, pp. 161-166 (1994).
N. R. Pal, J. C. Bezdek, and E. C.-K. Tsao, "Generalized Clustering Networks and
Kohonen's Self-organizing Scheme," IEEE Trans. on Neural Networks, Vol. 4, No. 4,
pp. 549-557 (1993).
A. I. Gonzalez, M. Graña, and A. D'Anjou, "An Analysis of the GLVQ Algorithm,"
IEEE Trans. on Neural Networks, Vol. 6, No. 4, pp. 1012-1016 (1995).
M. Hamanaka, K. Yamada, and J. Tsukumo, "On-Line Japanese Character Recognition Experiments by an Off-Line Method Based on Normalization-Cooperated
Feature Extraction," Proc. of the International Conference on Document Analysis
and Recognition, pp. 204-207 (1993).
Stock Selection via Nonlinear Multi-Factor Models
Asriel U. Levin
BZW Barclays Global Investors
Advanced Strategies and Research Group
45 Fremont Street
San Francisco CA 94105
email: asriel.levin@bglobal.com
Abstract
This paper discusses the use of multilayer feed forward neural networks for predicting a stock's excess return based on its exposure
to various technical and fundamental factors. To demonstrate the
effectiveness of the approach a hedged portfolio which consists of
equally capitalized long and short positions is constructed and its
historical returns are benchmarked against T-bill returns and the
S&P500 index.
1 Introduction
Traditional investment approaches (Elton and Gruber, 1991) assume that the return
of a security can be described by a multifactor linear model:
Ri = ai + Ui1 F1 + Ui2 F2 + ... + UiL FL + ei    (1)
where Ri denotes the return on security i, Fl are a set of factor values and Uil is
security i's exposure to factor l, ai is an intercept term (which under the CAPM
framework is assumed to be equal to the risk free rate of return (Sharpe, 1984))
and ei is a random term with mean zero which is assumed to be uncorrelated across
securities.
The factors may consist of any set of variables deemed to have explanatory power for
security returns . These could be aspects of macroeconomics, fundamental security
analysis, technical attributes or a combination of the above. The value of a factor
is the expected excess return above risk free rate of a security with unit exposure to
the factor and zero exposure to all other factors. The choice offactors can be viewed
as a proxy for the" state of the world" and their selection defines a metric imposed
on the universe of securities: Once the factors are set, the model assumption is that,
on average, two securities with similar factor loadings
manner.
(Uil)
will behave in a similar
The factor model (1) was not originally developed as a predictive model, but rather
as an explanatory model, with the returns It; and the factor values Pi assumed to
be contemporaneous. To utilize (1) in a predictive manner, each factor value must
be replaced by an estimate, resulting in the model
R̂i = ai + Ui1 F̂1 + Ui2 F̂2 + ... + UiL F̂L + ei    (2)
where R̂i is a security's future return and F̂l is an estimate of the future value
of factor l, based on currently available information. The estimation of F̂l can be
approached with varying degrees of sophistication, ranging from simple use of the
historical mean as the factor estimate (setting F̂l(t) = F̄l), to more elaborate
approaches attempting to construct a time series model for predicting the factor
values.
Factor models of the form (2) can be employed both to control risk and to enhance
return. In the first case, by capturing the major sources of correlation among
security returns, one can construct a well balanced portfolio which diversifies specific
risk away. For the latter, if one is able to predict the likely future value of a factor,
higher return can be achieved by constructing a portfolio that tilts toward "good"
factors and away from "bad" ones.
While linear factor models have proven to be very useful tools for portfolio analysis
and investment management, the assumption of linear relationship between factor
values and expected return is quite restrictive. Specifically, the use of linear models
assumes that each factor affects the return independently and hence, they ignore the
possible interaction between different factors. Furthermore, with a linear model, the
expected return of a security can grow without bound as its exposure to a factor
increases. To overcome these shortcomings of linear models, one would have to
consider more general models that allow for nonlinear relationship among factor
values, security exposures and expected returns.
Generalizing (2), while maintaining the basic premise that the state of the world can
be described by a vector of factor values and that the expected return of a security
is determined through its coordinates in this factor world, leads to the nonlinear
model:
Ri = f(Ui1, Ui2, ..., UiL, F1, F2, ..., FL) + ei    (3)

where f(·) is a nonlinear function and ei is the noise unexplained by the model, or
"security specific risk".
The prediction task for the nonlinear model (3) is substantially more complex than
in the linear case since it requires both the estimation of future factor values as
well as a determination of the unknown function j. The task can be somewhat
simplified if factor estimates are replaced with their historical means:
R̂i = f(Ui1, Ui2, ..., UiL, F̄1, F̄2, ..., F̄L) + ei    (4)
where now Uil are the security's factor exposure at the beginning of the period over
which we wish to predict.
To estimate the unknown function f(·), a family of models needs to be selected,
from which a model is to be identified. In the following we propose modeling the relationship between factor exposures and future returns using the class of multilayer
feedforward neural networks (Hertz et al., 1991). Their universal approximation
capabilities (Cybenko, 1989; Hornik et al., 1989), as well as the existence of an effective parameter tuning method (the backpropagation algorithm (Rumelhart et al.,
1986)) makes this family of models a powerful tool for the identification of nonlinear
mappings and hence a natural choice for modeling (4).
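A two-layer feedforward network of the kind used here can be sketched as follows (random toy weights and sizes; the paper's networks were trained by backpropagation on the factor data, so this is only a structural illustration):

```python
import math
import random

random.seed(0)

def two_layer_net(x, W1, b1, W2, b2):
    # hidden layer of tanh units, single linear output unit
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

n_in, n_hid = 4, 3  # four factor exposures, three hidden units (toy sizes)
W1 = [[random.gauss(0.0, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [random.gauss(0.0, 0.5) for _ in range(n_hid)]
b2 = 0.0

exposures = [0.8, -1.1, 0.2, 0.4]  # one security's factor exposures
print(two_layer_net(exposures, W1, b1, W2, b2))  # predicted excess return
```

The tanh hidden units are what give the model its bounded, interacting response to the exposures, in contrast with the unbounded additive response of (2).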
2 The stock selection problem
Our objective in this paper is to test the ability of neural network based models
of the form (4) to differentiate between attractive and unattractive stocks. Rather
than trying to predict the total return of a security, the objective is to predict its
performance relative to the market, hence eliminating the need to predict market
directions and movements.
The data set consists of monthly historical records (1989 through 1995) for the
largest 1200-1300 US companies as defined by the BARRA HiCap universe. Each
data record (≈1300 per month) consists of an input vector composed of a security's
factor exposures recorded at the beginning of the month and the corresponding
output is the security's return over the month. The factors used to build the model
include Earning/Price, Book/Price, past price performance, consensus of analyst
sentiments etc, which have been suggested in the financial literature as having
explanatory power for security returns (e.g. (Fama and French, 1992)). To minimize
risk, exposure to other unwarranted factors is controlled using a quadratic optimizer.
3 Model construction and testing
Potentially, changes in a price of a security are a function of a very large number of
forces and events, of which only a small subset can be included in the factor model
(4). All other sources of return play the role of noise whose magnitude is probably
much larger than any signal that can be explained by the factor exposures. When
this information is used to train a neural network, the network attempts to replicate
the examples it sees and hence much of what it tries to learn will be the particular
realizations of noise that appeared in the training set.
To minimize this effect, both a validation set and regularization are used in the
training. The validation set is used to monitor the performance of the model with
data on which it has not been trained on. By stopping the learning process when
validation set error starts to increase, the learning of noise is minimized. Regularization further limits the complexity of the function realized by the network and,
through the reduction of model variance, improves generalization (Levin et al.,
1994).
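The validation-based stopping rule amounts to halting training once the held-out error stops improving; a generic sketch (the patience parameter and the toy error curve are invented — the paper does not specify the exact stopping criterion):

```python
def early_stopping(train_step, val_error, max_epochs=100, patience=5):
    # stop when validation error has not improved for `patience` epochs
    best, best_epoch, since_best = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_step(epoch)        # one pass of gradient descent
        err = val_error(epoch)   # error on the held-out quarter of the data
        if err < best:
            best, best_epoch, since_best = err, epoch, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best_epoch, best

# toy error curve: decreases, then rises as the net starts fitting noise
errs = [1.0 / (e + 1) + max(0, e - 10) * 0.05 for e in range(100)]
print(early_stopping(lambda e: None, lambda e: errs[e]))
```

On this curve training halts shortly after epoch 10, where the validation error bottoms out, which is the behavior the text relies on to avoid learning the noise realizations.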
The stock selection model is built using a rolling train/test window. First, M
"two layer" feedforward networks are built for each month of data (result is rather
insensitive to the particular choice of M). Each network is trained using stochastic
gradient descent with one quarter of the monthly data (randomly selected) used as
a validation set. Regularization is done using principal component pruning (Levin
et al., 1994). Once training is completed, the models constructed over N consecutive
month of data (again, result is insensitive to particular choice of N) are combined
(thus increasing the robustness of the model (Breiman, 1994)) to predict the returns
in the following month. Thus the predicted (out of sample) return of stock i in
month k is given by
R̂i(k) = (1/(MN)) Σ_{j=1..N} Σ_{m=1..M} NN^m_{k-j}(u^k_{i1}, ..., u^k_{iL})    (5)
Figure 1: Average correlation between predicted alphas and realized returns for
linear and nonlinear models
where R̂i(k) is stock i's predicted return, NN_{k-j}(·) denotes the neural network
model built in month k - j, and u^k_{il} are stock i's factor exposures as measured at
the beginning of month k.
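The combination step is a plain average over the member networks' predictions; schematically (the "models" here are stand-in functions, not trained networks):

```python
def ensemble_predict(models, exposures):
    # average the predictions of all networks kept in the rolling window
    preds = [m(exposures) for m in models]
    return sum(preds) / len(preds)

# stand-in "networks": each is just a function of the exposure vector
models = [lambda x, s=s: s * sum(x) for s in (0.9, 1.0, 1.1)]
print(ensemble_predict(models, [0.1, 0.2]))  # averages the three outputs
```

Averaging many models trained on overlapping data is what gives the estimator its robustness, in the spirit of the Breiman (1994) bagging reference cited above.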
4 Benchmarking to linear
As a first step in evaluating the added value of the nonlinear model, its performance
was benchmarked against a generalized least squares linear model. Each model was
run over three universes: all securities in the HiCap universe, the extreme 200 stocks
(top 100, bottom 100 as defined by each model), and the extreme 100 stocks. As
a comparative performance measure we use the Sharpe ratio (Elton and Gruber,
1991). As shown in Table 4, while the performance of the two models is quite
comparable over the whole universe of stocks, the neural network based model
performs better at the extremes, resulting in a substantially larger Sharpe ratio
(and of course, when constructing a portfolio , it is the extreme alphas that have
the most impact on performance).
Portfolio \ Model  | Linear | Nonlinear
All HiCap          | 6.43   | 6.92
100 long/100 short | 4.07   | 5.49
50 long/50 short   | 3.07   | 4.23

Table 1: Ex ante Sharpe ratios: Neural network vs. linear
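For reference, the Sharpe ratio underlying Table 1 is the annualized mean excess return divided by its annualized volatility; a minimal sketch with invented monthly numbers:

```python
import math
import statistics

def sharpe_ratio(excess_returns, periods_per_year=12):
    # annualized mean excess return over annualized standard deviation
    mean = statistics.mean(excess_returns) * periods_per_year
    vol = statistics.stdev(excess_returns) * math.sqrt(periods_per_year)
    return mean / vol

monthly_excess = [0.012, 0.008, -0.003, 0.015, 0.006, 0.009]
print(sharpe_ratio(monthly_excess))  # positive for this toy series
```

Because the ratio penalizes volatility, a model that is only slightly better on average but much better at the extremes — where positions are actually taken — shows up clearly in this measure.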
While the numbers in the above table look quite impressive, it should be emphasised
that they do not represent returns of a practical strategy: turnover is huge and the
figures do not take transaction costs into account. The main purpose of the table
is to compare the information that can be captured by the different models and
specifically to show the added value of the neural network at the extremes. A
practical implementation scheme and the associated performance will be discussed
in the next section.
Finally, some insight as to the reason for the improved performance can be gained
by looking at the correlation between model predictions and realized returns for
different values of model predictions (commonly referred to as alphas). For that,
the alpha range was divided into 20 cells, 5% of observations in each, and correlations
were calculated separately for each cell. As is shown in figure 1, while both neural
network and linear model seem to have more predictive power at the extremes, the
network's correlations are substantially larger for both positive and negative alphas.
5 Portfolio construction
Given the superior predictive ability of the nonlinear model at the extremes, a
natural way of translating its predictions into an investment strategy is through the
use of a long/short construct which fully captures the model information on both
the positive as well as the negative side.
The long/short portfolio (Jacobs and Levy, 1993) is constructed by allocating equal
capital to long and short positions. By monitoring and controlling the risk characteristics on both sides, one is able to construct a portfolio that has zero correlation
with the market ((3 = 0) - a "market neutral" portfolio. By construction, the return of a market neutral portfolio is insensitive to the market up or down swings
and its only source of return is the performance spread between the long and short
positions, which in turn is a direct function of the model (5) discernment ability.
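The return of such a construct is just the spread between the equally capitalized long and short sides; as a sketch (alphas and returns invented, portfolio sizes toy-scale):

```python
def market_neutral_return(alphas_and_returns, n_side):
    # go long the n_side highest-alpha stocks, short the n_side lowest;
    # with beta = 0 by construction, the return is the long-short spread
    ranked = sorted(alphas_and_returns, key=lambda ar: ar[0])
    shorts = ranked[:n_side]
    longs = ranked[-n_side:]
    long_ret = sum(r for _, r in longs) / n_side
    short_ret = sum(r for _, r in shorts) / n_side
    return long_ret - short_ret

# (alpha, realized return) pairs for six hypothetical stocks
data = [(0.9, 0.04), (0.5, 0.02), (-0.7, -0.03),
        (0.1, 0.01), (-0.9, -0.05), (-0.2, 0.00)]
print(market_neutral_return(data, 2))  # long top 2, short bottom 2
```

Notice that the short side contributes positively when low-alpha stocks fall, which is why the construct captures the negative as well as the positive predictions.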
Specifically, the translation of the model predictions into a realistically implementable strategy is done using a quadratic optimizer. Using the model predicted
returns and incorporating volatility information about the various stocks, the optimizer is utilized to construct a portfolio with the following characteristics:
- Market neutral (equal long and short capitalization).
- Total number of assets in the portfolio <= 200.
- Average (one-sided) monthly turnover <= 15%.
- Annual active risk <= 5%.
In the following, all results are test set results (out of sample), net of estimated
transaction costs (assumed to be 1.5% round trip). The standard benchmark for
a market neutral portfolio is the return on 3 month T-bill and as can be seen
in Table 2, over the test period the market neutral portfolio has consistently and
decisively outperformed its benchmark. Furthermore, the results reported for 1995
were recorded in real-time (simulated paper portfolio).
An interesting feature of the long/short construct is its ease of transportability (Jacobs and Levy, 1993). Thus, while the base construction is insensitive to market
movement, if one wishes, full exposure to a desired market can be achieved through
the use of futures or swaps (Hull, 1993). As an example, by adding a permanent
S&P500 futures overlay in an amount equal to the invested capital, one is fully
exposed to the equity market at all time , and returns are the sum of the long/short
performance spread and the profits or losses resulting from the market price movements. This form of a long/short strategy is referred to as an "equitized" strategy
and the appropriate benchmark will be overlayed index. The relative performance
Statistics          | T-Bill | Neutral | S&P500 | Equitized
Total Return (%)    | 27.8   | 131.5   | 102.0  | 264.5
Annual total (Yr%)  | 4.6    | 16.8    | 10.4   | 27.0
Active Return (%)   | -      | 103.7   | -      | 162.5
Annual active (Yr%) | -      | 12.2    | -      | 16.6
Active risk (Yr%)   | -      | 4.8     | -      | 4.8
Max draw down (%)   | -      | 3.2     | 13.9   | 10.0
Turnover (Yr%)      | -      | 198.4   | -      | 198.4

Table 2: Comparative summary of ex ante portfolio performance (net of transaction
costs) 8/90 - 12/95
Figure 2: Cumulative portfolio value 8/90 - 12/95 (net of estimated transaction
costs)
of the equitized strategy with an S&P500 futures overlay is presented in Table 2.
Summary of the accumulated returns over the test period for the market neutral
and equitized portfolios compared to T-bill and S&P500 are given in Figure 2.
Finally, even though the performance of the model is quite good, it is very difficult
to convince an investor to put his money on a "black box". A rather simple way to
overcome this problem of neural networks is to utilize a CART tree (Breiman et al.,
1984) to explain the model's structure. While the performance of the tree on the
raw data in substantially inferior to the network's, it can serve as a very effective
tool for analyzing and interpreting the information that is driving the model.
6 Conclusion
We presented a methodology by which neural network based models can be used
for security selection and portfolio construction. In spite of the very low signal to
noise ratio of the raw data, the model was able to extract meaningful relationship
between factor exposures and expected returns. When utilized to construct hedged
portfolios, these predictions achieved persistent returns with very favorable risk
characteristics.
The model is currently being tested in real time and given its continued consistent
performance, is expected to go live soon.
References
Anderson, J. and Rosenfeld, E., editors (1988) . Neurocomputing: Foundations of
Research. MIT Press, Cambridge.
Breiman, L. (1994) . Bagging predictors. Technical Report 416, Department of
Statistics, UCB, Berkeley, CA.
Breiman, L., Friedman, J ., Olshen, R., and Stone, C. (1984). Classification and
Regression Trees. Chapman & Hall.
Cybenko, G . (1989) . Approximation by superpositions of a sigmoidal function .
Mathematics of Control, Signals, and Systems, 2:303-314.
Elton , E. and Gruber, M. (1991). Modern Portfolio Theory and Investment Analysis.
John Wiley.
Fama, E. and French, K. (1992). The cross section of expected stock returns. Journal
of Finance, 47:427- 465 .
Hertz, J., Krogh, A., and Palmer, R. (1991). Introduction to the theory of neural
computation, volume 1 of Santa Fe Institute studies in the sciences of complexity. Addison-Wesley Pub. Co.
Hornik, K. , Stinchcombe, M., and White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2:359-366.
Hull, J . (1993). Options, Futures and Other Derivative Securities. Prentice-Hall.
Jacobs , B. and Levy, K. (1993). Long/short equity investing. Journal of Portfolio
Management, pages 52-63.
Levin, A. V., Leen, T. K., and Moody, J . E. (1994) . Fast pruning using principal
components. In Cowan, J . D., Tesauro, G., and Alspector, J., editors , Advances
in Neural Information Processing Systems, volume 6. Morgan Kaufmann. To
appear.
Rumelhart, D., Hinton, G., and Williams, R. (1986) . Learning representations by
back-propagating errors. Nature, 323:533- 536. Reprinted in (Anderson and
Rosenfeld, 1988).
Sharpe, W. (1984). Factor models, CAPMs and the APT. Journal of Portfolio
Management, pages 21-25.
A New Learning Algorithm for Blind Signal Separation
S. Amari*
University of Tokyo
Bunkyo-ku, Tokyo 113, JAPAN
amari@sat.t.u-tokyo.ac.jp
A. Cichocki
Lab. for Artificial Brain Systems
FRP, RIKEN
Wako-Shi, Saitama, 351-01, JAPAN
cia@kamo.riken.go.jp
H. H. Yang
Lab. for Information Representation
FRP, RIKEN
Wako-Shi, Saitama, 351-01, JAPAN
hhy@koala.riken.go.jp
Abstract
A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed
signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing
matrix are unknown except for the number of the sources. The
Gram-Charlier expansion instead of the Edgeworth expansion is
used in evaluating the MI. The natural gradient approach is used
to minimize the MI. A novel activation function is proposed for the
on-line learning algorithm which has an equivariant property and
is easily implemented on a neural network like model. The validity
of the new learning algorithm are verified by computer simulations.
1 INTRODUCTION
The problem of blind signal separation arises in many areas such as speech recognition, data communication, sensor signal processing, and medical science. Several
neural network algorithms [3, 5, 7] have been proposed for solving this problem.
The performance of these algorithms is usually affected by the selection of the activation functions for the formal neurons in the networks. However, all activation
?Lab. for Information Representation, FRP, RIKEN, Wako-shi, Saitama, JAPAN
758
S. AMARI, A. CICHOCKI, H. H. YANG
functions attempted are monotonic and the selections of the activation functions
are ad hoc. How should the activation function be determined to minimize the MI?
Is it necessary to use monotonic activation functions for blind signal separation? In
this paper, we shall answer these questions and give an on-line learning algorithm
which uses a non-monotonic activation function selected by the independent component analysis (ICA) [7]. Moreover, we shall show a rigorous way to derive the
learning algorithm which has the equivariant property, i.e., the performance of the
algorithm is independent of the scaling parameters in the noiseless case.
2
PROBLEM
Let us consider unknown source signals s_i(t), i = 1, ..., n, which are mutually independent. It is assumed that the sources s_i(t) are stationary processes and each
source has moments of any order with a zero mean. The model for the sensor output
is
x(t) = As(t)

where A ∈ R^{n×n} is an unknown non-singular mixing matrix, s(t) = [s_1(t), ..., s_n(t)]^T and x(t) = [x_1(t), ..., x_n(t)]^T.
Without knowing the source signals and the mixing matrix, we want to recover the
original signals from the observations x(t) by the following linear transform:
y(t) = Wx(t)

where y(t) = [y^1(t), ..., y^n(t)]^T and W ∈ R^{n×n} is a de-mixing matrix.
It is impossible to obtain the original sources s_i(t) because they are not identifiable
in the statistical sense. However, except for a permutation of indices, it is possible
to obtain c_i s_i(t), where the constants c_i are indefinite nonzero scalar factors. The
source signals are identifiable in this sense. So our goal is to find the matrix W such
that [y^1, ..., y^n] coincides with a permutation of [s_1, ..., s_n] except for the scalar
factors. The solution W is the matrix which finds all independent components in
the outputs. An on-line learning algorithm for W is needed which performs the
ICA. It is possible to find such a learning algorithm which minimizes the dependency
among the outputs. The algorithm in [6] is based on the Edgeworth expansion[8] for
evaluating the marginal negentropy. Both the Gram-Charlier expansion[8] and the
Edgeworth expansion[8] can be used to approximate probability density functions.
We shall use the Gram-Charlier expansion instead of the Edgeworth expansion for
evaluating the marginal entropy. We shall explain the reason in section 3.
3
INDEPENDENCE OF SIGNALS
The mathematical framework for the ICA is formulated in [6]. The basic idea of the
ICA is to minimize the dependency among the output components. The dependency
is measured by the Kullback-Leibler divergence between the joint and the product
of the marginal distributions of the outputs:
D(W) = ∫ p(y) log [ p(y) / Π_{a=1}^n p_a(y^a) ] dy          (1)
where p_a(y^a) is the marginal probability density function (pdf). Note the Kullback-Leibler divergence has some invariant properties from the differential-geometrical
point of view [1].
It is easy to relate the Kullback-Leibler divergence D(W) to the average MI of y:
D(W) = -H(y) + Σ_{a=1}^n H(y^a)          (2)

where H(y) = -∫ p(y) log p(y) dy, and H(y^a) = -∫ p_a(y^a) log p_a(y^a) dy^a is the marginal entropy.
The minimization of the Kullback-Leibler divergence leads to an ICA algorithm for
estimating W in [6] where the Edgeworth expansion is used to evaluate the negentropy. We use the truncated Gram-Charlier expansion to evaluate the Kullback-Leibler divergence. The Edgeworth expansion has some advantages over the Gram-Charlier expansion only for some special distributions. In the case of the Gamma
distribution or the distribution of a random variable which is the sum of iid random
variables, the coefficients of the Edgeworth expansion decrease uniformly. However,
there is no such advantage for the mixed output ya in general cases.
To calculate each H(y^a) in (2), we shall apply the Gram-Charlier expansion to
approximate the pdf p_a(y^a). Since E[y] = E[WAs] = 0, we have E[y^a] = 0. To
simplify the calculations for the entropy H(y^a) to be carried out later, we assume
m_2^a = 1. We use the following truncated Gram-Charlier expansion to approximate
the pdf p_a(y^a):
p_a(y^a) ≈ α(y^a) { 1 + (κ_3^a / 3!) H_3(y^a) + (κ_4^a / 4!) H_4(y^a) }          (3)

where κ_3^a = m_3^a, κ_4^a = m_4^a - 3,
m_k^a = E[(y^a)^k] is the k-th order moment of y^a,
α(y) = (1/√(2π)) e^{-y²/2}, and H_k(y) are Chebyshev-Hermite polynomials defined by the
identity

(-1)^k d^k α(y) / dy^k = H_k(y) α(y).
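As a numeric illustration of (3) (our sketch, not part of the paper), the truncated expansion can be evaluated directly, with H_3 and H_4 written out explicitly; NumPy is assumed:

```python
import numpy as np

def gram_charlier_pdf(y, k3, k4):
    """Truncated Gram-Charlier approximation (3) of a zero-mean,
    unit-variance pdf with third cumulant k3 and fourth cumulant k4."""
    alpha = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
    H3 = y**3 - 3 * y                               # Chebyshev-Hermite H_3
    H4 = y**4 - 6 * y**2 + 3                        # Chebyshev-Hermite H_4
    return alpha * (1 + (k3 / 6) * H3 + (k4 / 24) * H4)

# With k3 = k4 = 0 the expansion collapses to the Gaussian density alpha(y).
assert abs(gram_charlier_pdf(0.0, 0.0, 0.0) - 1 / np.sqrt(2 * np.pi)) < 1e-12
```

Because the correction terms are orthogonal to H_0, they integrate to zero against α(y), so the approximation stays normalized.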
We prefer the Gram-Charlier expansion to the Edgeworth expansion because the
former clearly shows how κ_3^a and κ_4^a affect the approximation of the pdf. The last
term in (3) characterizes non-Gaussian distributions. To apply (3) to calculate
H(y^a), we need the following integrals:
-∫ α(y) H_2(y) log α(y) dy = 1,          (4)

∫ α(y) (H_2(y))² H_4(y) dy = 24.          (5)
These integrals can be obtained easily from the following results for the moments
of a Gaussian random variable N(0, 1):

∫ y^{2k+1} α(y) dy = 0,    ∫ y^{2k} α(y) dy = 1·3···(2k-1).          (6)
By using the expansion

log(1 + y) ≈ y - y²/2 + O(y³)

and taking account of the orthogonality relations of the Chebyshev-Hermite polynomials and (4)-(5), the entropy H(y^a) is expanded as

H(y^a) ≈ (1/2) log(2πe) - (κ_3^a)²/(2·3!) - (κ_4^a)²/(2·4!)
         + (5/8)(κ_3^a)² κ_4^a + (1/16)(κ_4^a)³.          (7)
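Expansion (7) is easy to evaluate numerically. The following sketch (our illustration, assuming unit variance) checks that the Gaussian case κ_3 = κ_4 = 0 attains the maximum value (1/2) log(2πe), and that small non-Gaussian cumulants reduce the approximate entropy:

```python
import numpy as np

def entropy_gc(k3, k4):
    """Marginal entropy H(y^a) from the Gram-Charlier expansion (7),
    for a zero-mean, unit-variance output with cumulants k3, k4."""
    return (0.5 * np.log(2 * np.pi * np.e)
            - k3**2 / (2 * 6) - k4**2 / (2 * 24)
            + (5.0 / 8.0) * k3**2 * k4 + k4**3 / 16.0)

# The Gaussian (k3 = k4 = 0) maximizes entropy among unit-variance densities.
assert abs(entropy_gc(0.0, 0.0) - 0.5 * np.log(2 * np.pi * np.e)) < 1e-12
assert entropy_gc(0.3, 0.0) < entropy_gc(0.0, 0.0)
```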
It is easy to calculate

-∫ α(y) log α(y) dy = (1/2) log(2πe).

From y = Wx, we have H(y) = H(x) + log|det(W)|. Applying (7) and the above
expressions to (2), we have

D(W) ≈ -H(x) - log|det(W)| + (n/2) log(2πe)
       - Σ_{a=1}^n [ (κ_3^a)²/(2·3!) + (κ_4^a)²/(2·4!)
                     - (5/8)(κ_3^a)² κ_4^a - (1/16)(κ_4^a)³ ].          (8)
4
A NEW LEARNING ALGORITHM
To obtain the gradient descent algorithm to update W recursively, we need to
calculate ∂D/∂w_k^a, where w_k^a is the (a,k) element of W in the a-th row and k-th column.
Let cof(w_k^a) be the cofactor of w_k^a in W. It is not difficult to derive the following:

∂ log|det(W)| / ∂w_k^a = cof(w_k^a) / det(W) = (W^{-T})_k^a
∂κ_3^a / ∂w_k^a = 3 E[(y^a)² x_k]
∂κ_4^a / ∂w_k^a = 4 E[(y^a)³ x_k]

where (W^{-T})_k^a denotes the (a,k) element of (W^T)^{-1}. From (8), we obtain

∂D/∂w_k^a ≈ -(W^{-T})_k^a + f(κ_3^a, κ_4^a) E[(y^a)² x_k] + g(κ_3^a, κ_4^a) E[(y^a)³ x_k]          (9)

where

f(y, z) = -(1/2) y + (15/4) y z,    g(y, z) = -(1/6) z + (5/2) y² + (3/4) z².
From (9), we obtain the gradient descent algorithm to update W recursively:
"
oD
oWk'
d~1s =
-TJ( t)-TJ(t){(W - T)k - f(K'3, K~)E[(ya)2xk]_ g(K'3, K~)E[(ya)3xk]} (10)
where TJ(t) is a learning rate function. Replacing the expectation values in (10) by
their instantaneous values, we have the stochastic gradient descent algorithm:
d~k =
TJ(t){(W-T)k' - f(K'3, K~)(ya)2xk - g(K'3, K~)(ya)3xk}.
We need to use the following adaptive algorithm to compute
dK a
dt
= -J.'(t)(K'3 -
K'3
and
K~
(11)
in (11):
(ya)3)
dK a
d/ = -J.'(t)(K~ - (ya)4
+ 3)
(12)
where 1'( t) is another learning rate function.
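A discrete-time sketch of the cumulant tracker (12), with a constant learning rate μ and a uniform test source (our illustration; the constants are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.001                 # learning rate mu(t), held constant for simplicity
k3, k4 = 0.0, 0.0          # running estimates of kappa_3^a, kappa_4^a

# Discrete-time version of (12) on a zero-mean, unit-variance uniform source.
for y in rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=100_000):
    k3 += -mu * (k3 - y**3)
    k4 += -mu * (k4 - (y**4 - 3.0))

# A symmetric uniform source has kappa_3 = 0 and kappa_4 = 9/5 - 3 = -1.2,
# so k3 should settle near 0 and k4 near -1.2.
```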
The performance of the algorithm (11) relies on the estimation of the third and
fourth order cumulants performed by the algorithm (12). Replacing the moments
of the random variables in (11) by their instantaneous values, we obtain the following
algorithm which is a direct but coarse implementation of (11):
dw_k^a/dt = η(t) { (W^{-T})_k^a - f(y^a) x_k }          (13)
where the activation function f(y) is defined by
f(y) = (3/4) y^{11} + (25/4) y^9 - (14/3) y^7 - (47/4) y^5 + (29/4) y^3.          (14)
Note the activation function f(y) is an odd function, not a monotonic function.
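Both properties of the polynomial (14) are easy to confirm numerically (our sketch, not part of the paper):

```python
import numpy as np

def f_act(y):
    """Activation function (14)."""
    return ((3/4) * y**11 + (25/4) * y**9 - (14/3) * y**7
            - (47/4) * y**5 + (29/4) * y**3)

y = np.linspace(-1.5, 1.5, 301)
vals = f_act(y)
assert np.allclose(f_act(-y), -vals)   # odd: f(-y) = -f(y)
assert np.any(np.diff(vals) < 0)       # decreasing somewhere: not monotonic
```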
The equation (13) can be written in a matrix form:

dW/dt = η(t) { W^{-T} - f(y) x^T }.          (15)

This equation can be further simplified as follows by substituting x^T W^T = y^T:

dW/dt = η(t) { I - f(y) y^T } W^{-T}          (16)

where f(y) = (f(y^1), ..., f(y^n))^T. The above equation is based on the gradient
descent algorithm (10) with the following matrix form:

dW/dt = -η(t) ∂D/∂W.          (17)

From the information geometry perspective [1], since the mixing matrix A is non-singular we had better replace the above algorithm by the following natural gradient
descent algorithm:

dW/dt = -η(t) (∂D/∂W) W^T W.          (18)

Applying the previous approximation of the gradient ∂D/∂W to (18), we obtain the
following algorithm:

dW/dt = η(t) { I - f(y) y^T } W          (19)
which has the same "equivariant" property as the algorithms developed in [4, 5].
Although the on-line learning algorithms (16) and (19) look similar to those in
[3, 7] and [5] respectively, the selection of the activation function in this paper is
rational, not ad hoc. The activation function (14) is determined by the ICA. It is
a non-monotonic activation function different from those used in [3, 5, 7].
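A single discrete-time step of update (19) can be sketched as follows (our illustration; the function names are ours, not the paper's):

```python
import numpy as np

def f_act(y):
    # activation function (14)
    return ((3/4) * y**11 + (25/4) * y**9 - (14/3) * y**7
            - (47/4) * y**5 + (29/4) * y**3)

def natural_gradient_step(W, x, eta):
    """One discrete-time update of algorithm (19):
    W <- W + eta * (I - f(y) y^T) W, with y = W x."""
    y = W @ x
    n = W.shape[0]
    return W + eta * (np.eye(n) - np.outer(f_act(y), y)) @ W

W = natural_gradient_step(np.eye(2), np.array([0.2, -0.1]), 0.01)
assert W.shape == (2, 2) and np.all(np.isfinite(W))
```

Unlike (15)-(16), no matrix inverse is required at each step, which is the practical payoff of the natural gradient form.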
There is a simple way to justify the stability of the algorithm (19). Let Vec(·)
denote an operator on a matrix which cascades the columns of the matrix from the
left to the right and forms a column vector. Note this operator has the following
property:

Vec(ABC) = (C^T ⊗ A) Vec(B).          (20)

Both the gradient descent algorithm and the natural gradient descent algorithm are
special cases of the following general gradient descent algorithm:

dVec(W)/dt = -η(t) P ∂D/∂Vec(W)          (21)

where P is a symmetric and positive definite matrix. It is trivial that (21) becomes
(17) when P = I. When P = W^T W ⊗ I, applying (20) to (21), we obtain

dVec(W)/dt = -η(t) (W^T W ⊗ I) ∂D/∂Vec(W) = -η(t) Vec( (∂D/∂W) W^T W )
and this equation implies (18). So the natural gradient descent algorithm updates
W(t) in the direction of decreasing the dependency D(W). The information geometry theory [1] explains why the natural gradient descent algorithm should be used
to minimize the MI.
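Property (20) and the column-stacking Vec(·) operator can be verified numerically (our sketch):

```python
import numpy as np

def vec(M):
    """Vec(.): stack the columns of M, left to right, into one column vector."""
    return M.reshape(-1, order="F")

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# Property (20): Vec(ABC) = (C^T kron A) Vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))
```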
Another on-line learning algorithm for blind separation using recurrent network was
proposed in [2]. For this algorithm, the activation function (14) also works well.
In practice, other activation functions such as those proposed in [2]-[6] may also be
used in (19). However, the performance of the algorithm for such functions usually
depends on the distributions of the sources. The activation function (14) works for
relatively general cases in which the pdf of each source can be approximated by the
truncated Gram-Charlier expansion.
5
SIMULATION
In order to check the validity and performance of the new on-line learning algorithm
(19), we simulate it on the computer using synthetic source signals and a random
mixing matrix. The extensive computer simulations have fully confirmed the theory
and the validity of the algorithm (19). Due to the limit of space we present here
only one illustrative example.
Example:
Assume that the following three unknown sources are mixed by a random mixing
matrix A:

[s_1(t), s_2(t), s_3(t)] = [n(t), 0.1 sin(400t) cos(30t), 0.01 sign[sin(500t + 9 cos(40t))]]
where n(t) is a noise source uniformly distributed in the range [-1, +1], and s_2(t)
and s_3(t) are two deterministic source signals. The elements of the mixing matrix
A are randomly chosen in [-1, +1]. The learning rate is exponentially decreasing
to zero as η(t) = 250 exp(-5t).
A simulation result is shown in Figure 1. The first three signals denoted by x1,
x2 and x3 represent the mixed (sensor) signals: x^1(t), x^2(t) and x^3(t). The last
three signals denoted by o1, o2 and o3 represent the output signals: y^1(t), y^2(t),
and y^3(t). By using the proposed learning algorithm, the neural network is able
to extract the deterministic signals from the observations after approximately 500
milliseconds.
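A minimal self-contained sketch in the spirit of this experiment (our illustration, not the paper's code): two bounded sub-Gaussian sources, a fixed well-conditioned mixing matrix, and update (19). For simplicity we substitute the cubic nonlinearity f(y) = y³ for the polynomial (14), since it is easier to keep stable at a fixed step size for these bounded sources; the structure of update (19) is unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, eta = 2, 100_000, 0.002

# Two independent, zero-mean, unit-variance sub-Gaussian (uniform) sources.
S = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n, T))
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # fixed, well-conditioned mixing matrix
X = A @ S

W = np.eye(n)
for t in range(T):
    y = W @ X[:, t]
    # Update (19) with the cubic nonlinearity in place of (14).
    W += eta * (np.eye(n) - np.outer(y**3, y)) @ W

P = W @ A
```

If separation succeeds, P = WA approaches a scaled permutation matrix: each row and column of |P| is dominated by a single entry.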
The performance index E_1 is defined by

E_1 = Σ_{i=1}^n ( Σ_{j=1}^n |p_ij| / max_k |p_ik| - 1 ) + Σ_{j=1}^n ( Σ_{i=1}^n |p_ij| / max_k |p_kj| - 1 )

where P = (p_ij) = WA.

6

CONCLUSION
The major contribution of this paper is the rigorous derivation of an effective blind
separation algorithm with the equivariant property, based on the minimization of the
MI of the outputs. The ICA is a general principle for designing algorithms for blind
signal separation. The main difficulties in applying this principle are to evaluate
the MI of the outputs and to find a working algorithm which decreases the MI.
Different from the work in [6], we use the Gram-Charlier expansion instead of the
Edgeworth expansion to calculate the marginal entropy in evaluating the MI. Using
the natural gradient method to minimize the MI, we have found an on-line learning
algorithm to find a de-mixing matrix. The algorithm has the equivariant property and
can be easily implemented in a neural-network-like model. Our approach provides
a rational selection of the activation function for the formal neurons in the network.
The algorithm has been simulated for separating unknown source signals mixed by
a random mixing matrix. Our theory and the validity of the new learning algorithm
are verified by the simulations.
Figure 1: The mixed and separated signals, and the performance index
Acknowledgment
We would like to thank Dr. Xiao Yan SU for the proof-reading of the manuscript.
References
[1] S.-I. Amari. Differential-Geometrical Methods in Statistics, Lecture Notes in
Statistics vol.28. Springer, 1985.
[2] S. Amari, A. Cichocki, and H. H. Yang. Recurrent neural networks for blind separation of sources. In Proceedings 1995 International Symposium on Nonlinear
Theory and Applications, volume I, pages 37-42, December 1995.
[3] A. J. Bell and T . J . Sejnowski. An information-maximisation approach to blind
separation and blind deconvolution. Neural Computation, 7:1129-1159, 1995.
[4] J.-F. Cardoso and Beate Laheld. Equivariant adaptive source separation. To
appear in IEEE Trans. on Signal Processing, 1996.
[5] A. Cichocki, R. Unbehauen, L. Moszczyński, and E. Rummert. A new on-line
adaptive learning algorithm for blind separation of source signals. In ISANN94,
pages 406-411, Taiwan, December 1994.
[6] P. Comon. Independent component analysis, a new concept? Signal Processing,
36:287-314, 1994.
[7] C. Jutten and J. Herault. Blind separation of sources, part I: An adaptive
algorithm based on neuromimetic architecture. Signal Processing, 24:1-10, 1991.
[8] A. Stuart and J. K. Ord. Kendall's Advanced Theory of Statistics. Edward
Arnold, 1994.
Classifying Facial Action
Marian Stewart Bartlett, Paul A. Viola,
Terrence J. Sejnowski, Beatrice A. Golomb
Howard Hughes Medical Institute
The Salk Institute, La Jolla, CA 92037
{marni, viola, terry, beatrice}@salk.edu
Jan Larsen
The Niels Bohr Institute
2100 Copenhagen
Denmark
jlarsen@fys.ku.dk
Joseph C. Hager
Paul Ekman
Network Information Research Corp
Salt Lake City, Utah
jchager@ibm.net
University of California San Francisco
San Francisco, CA 94143
ekmansf@itsa.ucsf.edu
Abstract
The Facial Action Coding System, (FACS), devised by Ekman and
Friesen (1978), provides an objective means for measuring the facial
muscle contractions involved in a facial expression. In this paper,
we approach automated facial expression analysis by detecting and
classifying facial actions. We generated a database of over 1100
image sequences of 24 subjects performing over 150 distinct facial
actions or action combinations. We compare three different approaches to classifying the facial actions in these images: Holistic
spatial analysis based on principal components of graylevel images;
explicit measurement of local image features such as wrinkles; and
template matching with motion flow fields. On a dataset containing six individual actions and 20 subjects, these methods had 89%,
57%, and 85% performances respectively for generalization to novel
subjects. When combined, performance improved to 92%.
1
INTRODUCTION
Measurement of facial expressions is important for research and assessment in psychiatry, neurology, and experimental psychology (Ekman, Huang, Sejnowski, & Hager,
1992), and has technological applications in consumer-friendly user interfaces, interactive video and entertainment rating. The Facial Action Coding System (FACS)
is a method for measuring facial expressions in terms of activity in the underlying
facial muscles (Ekman & Friesen, 1978). We are exploring ways to automate FACS.
Rather than classifying images into emotion categories such as happy, sad, or surprised, the goal of this work is instead to detect the muscular actions that comprise
a facial expression.
FACS was developed in order to allow researchers to measure the activity of facial
muscles from video images of faces. Ekman and Friesen defined 46 distinct action
units, each of which correspond to activity in a distinct muscle or muscle group,
and produce characteristic facial distortions which can be identified in the images.
Although there are static cues to the facial actions, dynamic information is a critical
aspect of facial action coding.
FACS is currently used as a research tool in several branches of behavioral science,
but a major limitation to this system is the time required to both train human
experts and to manually score the video tape. Automating the Facial Action Coding
System would make it more widely accessible as a research tool, and it would provide
a good foundation for human-computer interactions tools.
Why Detect Facial Actions?
Most approaches to facial expression recognition by computer have focused on classifying images into a small set of emotion categories such as happy, sad, or surprised
(Mase, 1991; Yacoob & Davis, 1994; Essa & Pentland, 1995). Real facial signals,
however, consist of thousands of distinct expressions that often differ in only subtle
ways. These differences can signify not only which emotion is occurring, but whether
two or more emotions have blended together, the intensity of the emotion(s), and
if an attempt is being made to control the expression of emotion (Hager & Ekman ,
1995).
An alternative to training a system explicitly on a large number of expression categories is to detect the facial actions that comprise the expressions. Thousands of
facial expressions can be defined in terms of this smaller set of structural components. We can verify the signal value of these expressions by reference to a large
body of behavioral data relating facial actions to emotional states which have already been scored with FACS. FACS also provides a means for obtaining reliable
training data. Other approaches to automating facial measurement have mistakenly
relied upon voluntary expressions, which tend to contain exaggerated and redundant
cues, while omitting some muscular actions altogether (Hager & Ekman, 1995).
2
IMAGE DATABASE
We have collected a database of image sequences of subjects performing specified
facial actions. The full database contains over 1100 sequences containing over 150
distinct actions, or action combinations, and 24 different subjects. The sequences
contain 6 images, beginning with a neutral expression and ending with a high intensity muscle contraction (Figure 1). For our initial investigation we used data
from 20 subjects and attempted to classify the six individual upper face actions
illustrated in Figure 2. The information that is available in the images for detecting
and discriminating these actions include distortions in the shapes and relative positions of the eyes and eyebrows, the appearance of wrinkles, bulges, and furrows,
in specific regions of the face, and motion of the brows and eyelids.
Prior to classifying the images, we manually located the eyes, and we used this
information to crop a region around the upper face and scale the images to 360 x 240.
The images were rotated so that the eyes were horizontal, and the luminance was
normalized. Accurate image registration is critical for principal components based
approaches. For the holistic analysis and flow fields, the images were further scaled
to 22 x 32 and 66 x 96, respectively. Since the muscle contractions are frequently
asymmetric about the face, we doubled the size of our data set by reflecting each
image about the vertical axis, giving a total of 800 images.
Figure 1: Example action sequences from the database.
AU1    AU2    AU4    AU5    AU6    AU7
Figure 2: Examples of the six actions used in this study. AU 1: Inner brow raiser.
2: Outer brow raiser. 4: Brow lower. 5: Upper lid raiser (widening the eyes). 6:
Cheek raiser. 7: Lid tightener (partial squint).
3
HOLISTIC SPATIAL ANALYSIS
The Eigenface (Turk & Pentland, 1991) and Holon (Cottrell & Metcalfe, 1991)
representations are holistic representations based on principal components, which
can be extracted by feed forward networks trained by back propagation. Previous
work in our lab and others has demonstrated that feed forward networks taking such
holistic representations as input can successfully classify gender from facial images
(Cottrell & Metcalfe, 1991; Golomb, Lawrence, & Sejnowski, 1991). We evaluated
the ability of a back propagation network to classify facial actions given principal
components of graylevel images as input.
The primary difference between the present approach and the work referenced above
is that we take the principal components of a set of difference images, which we
obtained by subtracting the first image in the sequence from the subsequent images
(see Figure 3). The variability in our data set is therefore due to the facial distortions
and individual differences in facial distortion, and we have removed variability due
to surface-level differences in appearance.
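The projection step can be sketched as follows (our illustration, with random data standing in for the 800 registered difference images):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 800 difference images, each 22 x 32 = 704 pixels.
D = rng.standard_normal((800, 704))
D -= D.mean(axis=0)                  # center before extracting components

# Principal components are the right singular vectors of the data matrix.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
proj = D @ Vt[:50].T                 # 50 projections per image -> network input

assert proj.shape == (800, 50)
```

The rows of `Vt` form an orthonormal basis, so each image is summarized by its 50 coordinates in that basis.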
We projected the difference images onto the first N principal components of the
dataset, and these projections comprised the input to a 3 layer neural network with
10 hidden units, and six output units, one per action (Figure 3.) The network is feed
forward and fully connected with a hyperbolic tangent transfer function, and was
trained with conjugate gradient descent. The output of the network was determined
using winner take all, and generalization to novel subjects was determined by using
the leave-one-out, or jackknife, procedure in which we trained the network on 19
subjects and reserved all of the images from one subject for testing. This process
was repeated for each of the subjects to obtain a mean generalization performance
across 20 test cases.
We obtained the best performance with 50 component projections, which gave 88.6%
correct across subjects. The benefit obtained by using principal components over
the 704-dimensional difference images themselves is not large. Feeding the difference
images directly into the network gave a performance of 84% correct.
Figure 3: Left: Example difference image. Input values of -1 are mapped to black
and 1 to white. Right: Architecture of the feed forward network.
4
FEATURE MEASUREMENT
We turned next to explicit measurement of local image features associated with
these actions. The presence of wrinkles in specific regions of the face is a salient
cue to the contraction of specific facial muscles. We measured wrinkling at the four
facial positions marked in Figure 4a, which are located in the image automatically
from the eye position information. Figure 4b shows pixel intensities along the line
segment labeled A, and two major wrinkles are evident.
We defined a wrinkle measure P as the sum of the squared derivative of the intensity
values along the segment (Figure 4c.) Figure 4d shows P values along line segment
A, for a subject performing each of the six actions. Only AU 1 produces wrinkles
in the center of the forehead. The P values remain at zero except for AU 1, for
which it increases with increases in action intensity. We also defined an eye opening
measure as the area of the visible sclera lateral to the iris. Since we were interested
in changes in these measures from baseline, we subtract the measures obtained from
the neutral image.
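A discrete version of the wrinkle measure P can be sketched as follows (our illustration; the segment values are synthetic):

```python
import numpy as np

def wrinkle_measure(intensity):
    """P: sum of squared first differences of pixel intensity along a line
    segment, a discrete version of the squared-derivative measure."""
    d = np.diff(np.asarray(intensity, dtype=float))
    return float(np.sum(d**2))

smooth = np.full(50, 128.0)            # featureless skin: P = 0
wrinkled = smooth.copy()
wrinkled[20:23] -= 40.0                # a dark furrow crossing the segment

assert wrinkle_measure(smooth) == 0.0
assert wrinkle_measure(wrinkled) > 0.0
```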
Figure 4: a) Wrinkling was measured at four image locations, A-D. b) Smoothed
pixel intensities along the line labeled A. c) Wrinkle measure. d) P measured at
image location A for one subject performing each of the six actions.
We classified the actions from these five feature measures using a 3-layer neural net
with 15 hidden units. This method performs well for some subjects but not for
Figure 5: Example flow field for a subject performing AU 7, partial closure of the
eyelids. Each flow vector is plotted as an arrow that points in the direction of
motion. Axes give image location.
others, depending on age and physiognomy. It achieves an overall generalization
performance of 57% correct.
5
OPTIC FLOW
The motion that results from facial action provides another important source of
information. The third classifier attempts to classify facial actions based only on the
pattern of facial motion. Motion is extracted from image pairs consisting of a neutral
image and an image that displays the action to be classified. An approximation to
flow is extracted by implementing the brightness constraint equation (2) where the
velocity (vx,Vy) at each image point is estimated from the spatial and temporal
gradients of the image I. The velocities can only be reliably extracted at points
of large gradient, and we therefore retain only the velocities from those locations.
One of the advantages of this simple local estimate of flow is speed. It takes 0.13
seconds on a 120 MHz Pentium to compute one flow field. A resulting flow image
is illustrated in Figure 5.
v_x ∂I(x, y, t)/∂x + v_y ∂I(x, y, t)/∂y + ∂I(x, y, t)/∂t = 0          (2)
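A minimal per-pixel estimator consistent with (2) is the normal-flow solution, which recovers only the velocity component along the brightness gradient (our sketch; the paper does not spell out its exact estimator beyond (2) and gradient thresholding):

```python
import numpy as np

def normal_flow(I0, I1, eps=1e-9):
    """Per-pixel solution of (2) along the gradient direction:
    v = -I_t * grad(I) / |grad(I)|^2. Reliable only where the gradient
    magnitude is large, so the caller should mask small-gradient pixels."""
    Iy, Ix = np.gradient(I0.astype(float))    # spatial gradients (rows, cols)
    It = I1.astype(float) - I0.astype(float)  # temporal difference
    mag2 = Ix**2 + Iy**2
    vx = -It * Ix / (mag2 + eps)
    vy = -It * Iy / (mag2 + eps)
    return vx, vy, mag2

# A horizontal ramp shifted one pixel to the right should give v_x near +1.
I0 = np.tile(np.arange(10.0), (10, 1))
I1 = np.roll(I0, 1, axis=1)
vx, vy, mag2 = normal_flow(I0, I1)
assert np.allclose(vx[2:-2, 2:-2], 1.0, atol=1e-6)
```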
We obtained weighted templates for each of the actions by taking mean flow fields
from 10 subjects. We compared a novel flow pattern f to the template f_t by the
similarity measure S (3). S is the normalized dot product of the novel flow field with
the template flow field. This template matching procedure gave 84.8% accuracy for
novel subjects. Performance was the same for the ten subjects used in the training
set as for the ten in the test set.
S(f, f_t) = (f · f_t) / (‖f‖ ‖f_t‖)          (3)
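The similarity measure can be sketched directly (our illustration; variable names are ours):

```python
import numpy as np

def flow_similarity(f, f_t, eps=1e-12):
    """S (3): normalized dot product between a novel flow field f and a
    template flow field f_t; S = 1 for proportional fields."""
    f, f_t = np.ravel(f), np.ravel(f_t)
    return float(f @ f_t / (np.linalg.norm(f) * np.linalg.norm(f_t) + eps))

template = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
assert abs(flow_similarity(2.0 * template, template) - 1.0) < 1e-9
assert flow_similarity(-template, template) < 0.0
```

Because S is invariant to overall scaling of either field, the match depends on the pattern of motion directions rather than its magnitude.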
6
COMBINED SYSTEM
Figure 6 compares performance for the three individual methods described in the
previous sections. Error bars give the standard deviation for the estimate of generalization to novel subjects. We obtained the best performance when we combined
all three sources of information into a single neural network. The classifier is a
Figure 6: Left: Combined system architecture. Right: Performance comparisons.
[Figure 7 scatter plots: Holistic v. Flow, r = 0.52; Feature v. Holistic, r = 0.26; Feature v. Flow, r = 0.00; axes span 50-100% correct.]
Figure 7: Performance correlations among the three individual classifiers. Each
data point is performance for one of the 20 subjects.
feed forward network taking 50 component projections, 5 feature measures, and 6
template matches as input (see Figure 6.)
The combined system gives a generalization performance of 92%, which is an improvement over the best individual method at 88.6%. The increase in performance
level is statistically significant by a paired t-test. While the improvement is small,
it constitutes about 30% of the difference between the best individual classifier and
perfect performance. Figure 6 also shows performance of human subjects on this
same dataset. Human non-experts can correctly classify these images with about
74% accuracy. This is a difficult classification problem that requires considerable
training for people to be able to perform well.
We can examine how the combined system benefits from multiple input sources
by looking at the correlations in performance of the three individual classifiers.
Combining estimators is most beneficial when the individual estimators make very
different patterns of errors.¹ The performances of the individual classifiers are compared in Figure 7.
The holistic and the flow field classifiers are correlated with a coefficient of 0.52. The
feature based system, however, has a more independent pattern of errors from the
two template-based methods. Although the stand-alone performance of the featurebased system is low, it contributes to the combined system because it provides
estimates that are independent from the two template-based systems. Without the
feature measures, we lose 40% of the improvement. Since we have only a small
number of features, this data does not address questions about whether templates
are better than features, but it does suggest that local features plus templates may
be superior to either one alone, since they may have independent patterns of errors.
¹Tom Dietterich, Connectionists mailing list, July 24, 1993.
7

DISCUSSION
We have evaluated the performance of three approaches to image analysis on a difficult classification problem. We obtained the best performance when information
from holistic spatial analysis, feature measurements, and optic flow fields were combined in a single system. The combined system classifies a face in less than a second
on a 120 MHz Pentium.
Our initial results are promising since the upper facial actions included in this study
represent subtle distinctions in facial appearance that require lengthy training for
humans to make reliably. Our results compare favorably with facial expression
recognition systems developed by Mase (1991), Yacoob and Davis (1994), and Padgett and Cottrell (1995), who obtained 80%, 88%, and 88% accuracy respectively for
classifying up to six full face expressions. The work presented here differs from these
systems in that we attempt to detect individual muscular actions rather than emotion categories, we use a dataset of labeled facial actions, and our dataset includes
low and medium intensity muscular actions as well as high intensity ones. Essa and
Pentland (1995) attempt to relate facial expressions to the underlying musculature
through a complex physical model of the face. Since our methods are image-based,
they are more adaptable to variations in facial structure and skin elasticity in the
subject population.
We intend to apply these techniques to the lower facial actions and to action combinations as well. A completely automated method for scoring facial actions from
images would have both commercial and research applications and would reduce
the time and expense currently required for manual scoring by trained observers.
Acknowledgments
This research was supported by Lawrence Livermore National Laboratories, Intra-University Agreement B291436, NSF Grant No. BS-9120868, and Howard Hughes
Medical Institute. We thank Claudia Hilburn for image collection.
References
Cottrell, G., & Metcalfe, J. (1991): Face, gender and emotion recognition using holons. In
Advances in Neural Information Processing Systems 3, D. Touretzky, (Ed.) San Mateo:
Morgan Kaufmann. 564 - 571.
Ekman, P., & Friesen, W. (1978): Facial Action Coding System: A Technique for the
Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press.
Ekman, P., Huang, T., Sejnowski, T., & Hager, J. (1992): Final Report to NSF of the
Planning Workshop on Facial Expression Understanding. Available from HIL-0984,
UCSF, San Francisco, CA 94143.
Essa, I., & Pentland, A. (1995). Facial expression recognition using visually extracted facial
action parameters. Proceedings of the International Workshop on Automatic Face- and
Gesture-Recognition. University of Zurich, Multimedia Laboratory.
Golomb, B., Lawrence, D., & Sejnowski, T. (1991). SEXnet: A neural network identifies
sex from human faces. In Advances in Neural Information Processing Systems 3, D.
Touretzky, (Ed.) San Mateo: Morgan Kaufmann: 572 - 577.
Hager, J., & Ekman, P., (1995). The essential behavioral science of the face and gesture
that computer scientists need to know. Proceedings of the International Workshop on
Automatic Face- and Gesture-Recognition. University of Zurich, Multimedia Laboratory.
Mase, K. (1991): Recognition of facial expression from optical flow. IEICE Transactions
E 74(10): 3474-3483.
Padgett, C., Cottrell, G., (1995). Emotion in static face images. Proceedings of the
Institute for Neural Computation Annual Research Symposium, Vol 5. La Jolla, CA.
Turk, M., & Pentland, A. (1991): Eigenfaces for Recognition. Journal of Cognitive Neuroscience 3(1): 71 - 86.
Yacoob, Y., & Davis, L. (1994): Recognizing human facial expression. University of
Maryland Center for Automation Research Technical Report No. 706.
Parallel Optimization of Motion Controllers via Policy Iteration
J. A. Coelho Jr., R. Sitaraman, and R. A. Grupen
Department of Computer Science
University of Massachusetts, Amherst, 01003
Abstract
This paper describes a policy iteration algorithm for optimizing the
performance of a harmonic function-based controller with respect
to a user-defined index. Value functions are represented as potential distributions over the problem domain, being control policies
represented as gradient fields over the same domain. All intermediate policies are intrinsically safe, i.e. collisions are not promoted
during the adaptation process. The algorithm has efficient implementation in parallel SIMD architectures. One potential application - travel distance minimization - illustrates its usefulness.
1 INTRODUCTION
Harmonic functions have been proposed as a uniform framework for the solution of several versions of the motion planning problem. Connolly and Grupen [Connolly and Grupen, 1993] have demonstrated how harmonic functions
can be used to construct smooth, complete artificial potentials with no local minima.
In addition, these potentials meet the criteria established in
[Rimon and Koditschek, 1990] for navigation functions. This implies that the gradient of harmonic functions yields smooth ("realizable") motion controllers.
By construction, harmonic function-based motion controllers will always command
the robot from any initial configuration to a goal configuration. The intermediate
configurations adopted by the robot are determined by the boundary constraints
and conductance properties set for the domain. Therefore, it is possible to tune
both factors so as to extremize user-specified performance indices (e.g. travel time
or energy) without affecting controller completeness.
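The construction referenced here, solving for a harmonic potential and commanding the robot down its gradient, can be sketched numerically. The grid size, obstacle wall, and goal placement below are illustrative choices, not the paper's experimental setup; the point is that a relaxed harmonic potential has no local minima, so greedy descent reaches the goal without touching obstacles.

```python
# Gauss-Seidel relaxation for a harmonic potential on a small 2D grid.
# Obstacle and wall cells are clamped at 1.0, the goal at 0.0 (Dirichlet
# conditions); interior free cells relax to the average of their
# 4-neighbors, i.e. to a discrete harmonic function.
N = 9
GOAL = (7, 7)
OBSTACLES = {(4, j) for j in range(2, 7)}   # a wall with gaps at the sides

def clamped(i, j):
    return ((i, j) == GOAL or (i, j) in OBSTACLES
            or i in (0, N - 1) or j in (0, N - 1))

phi = [[0.5] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        if (i, j) in OBSTACLES or i in (0, N - 1) or j in (0, N - 1):
            phi[i][j] = 1.0                 # repulsive boundary value
phi[GOAL[0]][GOAL[1]] = 0.0                 # attractive goal value

for _ in range(2000):                       # relax the free interior cells
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            if not clamped(i, j):
                phi[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                                    + phi[i][j - 1] + phi[i][j + 1])

# Steepest descent: always step to the lowest-potential 4-neighbor.
pos, path = (1, 1), [(1, 1)]
while pos != GOAL and len(path) < 100:
    i, j = pos
    pos = min([(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)],
              key=lambda c: phi[c[0]][c[1]])
    path.append(pos)
print(path[-1])
```

Because a discrete harmonic value is the average of its neighbors, every free cell has a strictly lower neighbor, so the descent can neither stall in a local minimum nor step onto an obstacle (obstacles sit at the maximum value).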
Based on this idea, Singh et al. [Singh et al., 1994] devised a policy iteration method
for combining two harmonic function-based control policies into a controller that
minimized travel time on a given environment. The two initial control policies were
derived from solutions to two distinct boundary constraints (Neumann and Dirichlet
constraints). The policy space spawned by the two control policies was parameterized by a mixing coefficient, that ultimately determined the obstacle avoidance behavior adopted by the robot. The resulting controller preserved obstacle avoidance,
ensuring safety at every iteration of the learning procedure.
This paper addresses the question of how to adjust the conductance properties associated with the problem domain $\Omega$, so as to extremize a user-specified performance index. Initially, conductance properties are homogeneous across $\Omega$, and the
resulting controller is optimal in the sense that it minimizes collision probabilities at
every step [Connolly, 1994]1. The method proposed is a policy iteration algorithm,
in which the policy space is parameterized by the set of node conductances.
2 PROBLEM CHARACTERIZATION
The problem consists in constructing a path controller $\vec{\pi}_o$ that maximizes an integral performance index $\mathcal{P}$ defined over the set of all possible paths on a lattice for a closed domain $\Omega \subset R^n$, subject to boundary constraints. The controller $\vec{\pi}_o$ is responsible for generating the sequence of configurations from an initial configuration $q_0$ on the lattice to the goal configuration $q_G$, therefore determining the performance index $\mathcal{P}$. In formal terms, the performance index $\mathcal{P}$ can be defined as follows:
Def. 1 (Performance index $\mathcal{P}$):
$$\mathcal{P}^{*}_{q_0} \;=\; \sum_{q=q_0}^{q_G} f(q), \qquad \text{for all } q \in L(\Omega),$$
where $L(\Omega)$ is a lattice over the domain $\Omega$, $q_0$ denotes an arbitrary configuration on $L(\Omega)$, $q_G$ is the goal configuration, and $f(q)$ is a function of the configuration $q$.
For example, one can define $f(q)$ to be the available joint range associated with the configuration $q$ of a manipulator; in this case, $\mathcal{P}$ would be measuring the available joint range associated with all paths generated within a given domain.
2.1 DERIVATION OF REFERENCE CONTROLLER
The derivation of $\vec{\pi}_o$ is very laborious, requiring the exploration of the set of all possible paths. Out of this set, one is primarily interested in the subset of smooth paths. We propose to solve a simpler problem, in which the derived controller $\vec{\pi}$ is a numerical approximation to the optimal controller $\vec{\pi}_o$, and (1) generates smooth paths, (2) is admissible, and (3) locally maximizes $\mathcal{P}$. To guarantee (1) and (2), it is assumed that the control actions of $\vec{\pi}$ are proportional to the gradient of a harmonic function $\phi$, represented as the voltage distribution across a resistive lattice that tessellates the domain $\Omega$. The condition (3) is achieved through incremental changes in the set $G$ of internodal conductances; such changes maximize $\mathcal{P}$ locally.
Necessary condition for optimality: Note that $\mathcal{P}^{*}_{q_0}$ defines a scalar field over $L(\Omega)$. It is assumed that there exists a well-defined neighborhood $\mathcal{N}(q)$ for node $q$;
in fact, it is assumed that every node q has two neighbors across each dimension.
Therefore, it is possible to compute the gradient over the scalar field $\mathcal{P}^{*}_{q_0}$ by locally approximating its rate of change across all dimensions. The gradient $\nabla\mathcal{P}_{q_0}$ defines
1 This is exactly the control policy derived by the TD(0) reinforcement learning method, for the particular case of an agent travelling in a grid world with absorbing obstacle and goal states, and being rewarded only for getting to the goal states (see [Connolly, 1994]).
a reference controller; in the optimal situation, the actions of the controller $\vec{\pi}$ will parallel the actions of the reference controller. One can now formulate a policy iteration algorithm for the synthesis of the reference controller:
1. Compute $\vec{\pi} = -\nabla\phi$, given conductances $G$;
2. Evaluate $\nabla\mathcal{P}_q$:
   - for each cell, compute $\mathcal{P}^{*}_{q}$;
   - for each cell, compute $\nabla\mathcal{P}_q$.
3. Change $G$ incrementally, minimizing the approximation error $\epsilon = f(\vec{\pi}, \nabla\mathcal{P}_q)$;
4. If $\epsilon$ is below a threshold $\epsilon_0$, stop. Otherwise, return to (1).
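The four-step loop above can be exercised on a deliberately tiny surrogate problem. Everything in this sketch is an illustrative stand-in, not the paper's resistive-grid model: the "controller" is a 2-D direction set by two conductance-like parameters, a fixed reference direction plays the role of the gradient field of P, and the learning rate is arbitrary. Only the structure of the loop (compute policy, evaluate the cosine error, descend, normalize, test a threshold) is the point.

```python
import math

target = (1.0, 2.0)                    # stands in for grad-P (fixed)

def controller(g):                     # step 1: policy induced by G
    return (g[0], g[1])

def cos_between(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return dot / (math.hypot(*u) * math.hypot(*v))

def error(g):                          # the error eps = -cos(pi, grad-P)
    return -cos_between(controller(g), target)

def grad(g, h=1e-5):                   # step 2/3: numerical gradient of eps
    out = []
    for k in range(len(g)):
        gp, gm = list(g), list(g)
        gp[k] += h
        gm[k] -= h
        out.append((error(gp) - error(gm)) / (2 * h))
    return out

g = [1.0, 0.5]                         # initial "conductances"
for _ in range(500):
    if error(g) < -0.9999:             # step 4's stopping test
        break
    d = grad(g)
    g = [gi - 0.1 * di for gi, di in zip(g, d)]   # step 3: descend on eps
    m = min(g)
    g = [gi / m for gi in g]           # step 4: renormalize, min conductance 1
print(g)
```

The parameters converge to the direction of the reference field (here a 1:2 ratio), at which point the cosine error is essentially minimized and the loop stops.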
On convergence, the policy iteration algorithm will have derived a control policy that maximizes $\mathcal{P}$ globally, and is capable of generating smooth paths to the goal configuration. The key step in the algorithm is step (3), or how to reduce the current approximation error by changing the conductances $G$.
3 APPROXIMATION ALGORITHM
Given a set of internodal conductances, the approximation error $\epsilon$ is defined as
$$\epsilon \;=\; \sum_{q \in L(\Omega)} -\cos(\vec{\pi}, \nabla\mathcal{P}), \qquad (1)$$
or the sum over $L(\Omega)$ of the negative cosine of the angle between the vectors $\vec{\pi}$ and $\nabla\mathcal{P}$. The approximation error $\epsilon$ is therefore a function of the set $G$ of internodal conductances.
There exist $O(nd^n)$ conductances in an $n$-dimensional grid, where $d$ is the discretization adopted for each dimension. Discrete search methods for the set of conductance values that minimizes $\epsilon$ are ruled out by the cardinality of the search space, $O(k^{nd^n})$, if $k$ is the number of distinct values each conductance can assume. We will represent conductances as real values and use gradient descent to minimize $\epsilon$, according to the approximation algorithm below:
1. Evaluate the approximation error $\epsilon$;
2. Compute the gradient $\nabla\epsilon = \left[\frac{\partial \epsilon}{\partial g_i}\right]$;
3. Update conductances, making $G = G - \alpha\nabla\epsilon$;
4. Normalize conductances, such that the minimum conductance $g_{\min} = 1$;
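Step (4) is legitimate because the grid is linear: rescaling every conductance by a common factor leaves the node voltages unchanged. A quick check on a hypothetical four-node resistive chain (the chain, its conductance values, and the scale factor are illustrative choices only):

```python
def chain_voltages(g01, g12, g23, v_left=1.0, v_right=0.0):
    """Interior node voltages of a 4-node resistive chain with the two
    end voltages clamped; Kirchhoff's current law at nodes 1 and 2 is
    solved by simple relaxation."""
    phi1 = phi2 = 0.0
    for _ in range(200):
        phi1 = (g01 * v_left + g12 * phi2) / (g01 + g12)
        phi2 = (g12 * phi1 + g23 * v_right) / (g12 + g23)
    return phi1, phi2

a = chain_voltages(2.0, 1.0, 4.0)
b = chain_voltages(14.0, 7.0, 28.0)   # every conductance scaled by 7
print(max(abs(x - y) for x, y in zip(a, b)) < 1e-9)
```

Only the conductance ratios enter the node equations, so the voltage distribution is invariant under a global rescaling; the exact solution of this chain is φ1 = 5/7, φ2 = 1/7 in both cases.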
Step (4) guarantees that every conductance $g \in G$ will be strictly positive. The
conductances in a resistive grid can be normalized without constraining the voltage distribution across it, due to the linear nature of the underlying circuit. The
complexity of the approximation algorithm is dominated by the computation of the gradient $\nabla\epsilon(G)$. Each component of the vector $\nabla\epsilon(G)$ can be expressed as
$$\frac{\partial \epsilon}{\partial g_i} \;=\; -\sum_{q \in L(\Omega)} \frac{\partial \cos(\vec{\pi}_q, \nabla\mathcal{P}_q)}{\partial g_i}. \qquad (2)$$
By assumption, $\vec{\pi}$ is itself the gradient of a harmonic function $\phi$ that describes the voltage distribution across a resistive lattice. Therefore, the calculation of $\frac{\partial \epsilon}{\partial g_i}$ involves the evaluation of $\frac{\partial \phi}{\partial g_i}$ over the whole domain $L(\Omega)$, or how the voltage $\phi_q$ is affected by changes in a certain conductance $g_i$.
For $n$-dimensional grids, $\frac{\partial \phi}{\partial g}$ is a matrix with $d^n$ rows and $O(nd^n)$ columns. We posit that the computation of every element of $\frac{\partial \phi}{\partial g}$ is unnecessary: the effects of changing
$g_i$ will be more pronounced in a certain grid neighborhood of it, and essentially
negligible for nodes beyond that neighborhood. Furthermore, this simplification
allows for breaking up the original problem into smaller, independent sub-problems
suitable to simultaneous solution in parallel architectures.
3.1 THE LOCALITY ASSUMPTION
The first simplifying assumption considered in this work establishes bounds on
the neighborhood affected by changes on conductances at node $i$; specifically, we will assume that changes in elements of $g_i$ affect only the voltage at nodes in $\mathcal{N}(i)$, where $\mathcal{N}(i)$ is the set composed of node $i$ and its direct neighbors. See
[Coelho Jr. et al., 1995] for a discussion on the validity of this assumption. In
particular, it is demonstrated that the effects of changing one conductance decay
exponentially with grid distance, for infinite 2D grids. Local changes in resistive
grids with higher dimensionality will be confined to even smaller neighborhoods.
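The locality claim can be probed numerically: perturb one conductance in a small resistive grid and watch how far the induced voltage change spreads. The 9×9 grid, the boundary conditions, and the factor-of-two bump below are illustrative choices, not the paper's setup.

```python
# Voltage response to a single-conductance perturbation in a 2D grid:
# the change is largest at the perturbed link and decays with distance.
N = 9

def solve(gmod=None):
    """Node voltages with the left column clamped at 1 V and the right
    column at 0 V (top and bottom rows insulated).  gmod maps a link
    (a pair of node tuples) to a conductance; links default to 1.0."""
    gmod = gmod or {}
    def g(a, b):
        return gmod.get((a, b), gmod.get((b, a), 1.0))
    # start from the uniform-conductance solution (linear in j)
    phi = [[1.0 - j / (N - 1) for j in range(N)] for i in range(N)]
    for _ in range(600):                      # Gauss-Seidel relaxation
        for i in range(N):
            for j in range(1, N - 1):         # clamped columns never move
                num = den = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < N:
                        gv = g((i, j), (a, b))
                        num += gv * phi[a][b]
                        den += gv
                phi[i][j] = num / den
    return phi

base = solve()
bumped = solve({((4, 4), (4, 5)): 2.0})       # double one central link
delta = [[abs(bumped[i][j] - base[i][j]) for j in range(N)] for i in range(N)]
near, far = delta[4][4], delta[4][1]          # at the link vs. 3 cells away
print(near, far)
```

The perturbation acts like a small dipole source: nearby nodes shift noticeably while nodes a few cells away barely move, which is the behavior the truncated-neighborhood approximation exploits.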
The locality assumption simplifies the calculation of $\frac{\partial \epsilon}{\partial g_i}$ to
$$\frac{\partial \cos(\vec{\pi}, \nabla\mathcal{P})}{\partial g_i} \;=\; \frac{1}{|\vec{\pi}|\,|\nabla\mathcal{P}|}\left[\frac{\partial \vec{\pi}}{\partial g_i}\cdot\nabla\mathcal{P} \;-\; \frac{\vec{\pi}\cdot\nabla\mathcal{P}}{|\vec{\pi}|^{2}}\left(\frac{\partial \vec{\pi}}{\partial g_i}\cdot\vec{\pi}\right)\right].$$
Note that in the derivation above it is assumed that changes in $G$ affect primarily the control policy $\vec{\pi}$, leaving $\nabla\mathcal{P}$ relatively unaffected, at least in a first-order approximation.
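The simplified derivative of the cosine can be checked numerically against a finite difference. The smooth dependence π(g) used here is an arbitrary illustrative stand-in (a scalar parameter in 2-D), not the grid model; the check only confirms the chain-rule expression itself.

```python
import math

gradP = (1.0, 3.0)                 # fixed vector standing in for grad-P

def pi_of(g):
    return (2.0 * g + 1.0, g * g)  # some smooth dependence on a scalar g

def dpi_of(g):
    return (2.0, 2.0 * g)          # its exact derivative

def cosine(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return dot / (math.hypot(*u) * math.hypot(*v))

g = 0.7
pi, dpi = pi_of(g), dpi_of(g)
npi, ngp = math.hypot(*pi), math.hypot(*gradP)
dot_dpi_gp = dpi[0] * gradP[0] + dpi[1] * gradP[1]
dot_pi_gp = pi[0] * gradP[0] + pi[1] * gradP[1]
dot_dpi_pi = dpi[0] * pi[0] + dpi[1] * pi[1]

# d cos(pi, gradP)/dg per the bracketed expression above
analytic = (dot_dpi_gp - (dot_pi_gp / npi ** 2) * dot_dpi_pi) / (npi * ngp)

h = 1e-6                           # central finite difference
numeric = (cosine(pi_of(g + h), gradP) - cosine(pi_of(g - h), gradP)) / (2 * h)
print(analytic, numeric)
```

The two values agree to many digits, confirming that the bracketed form is the derivative of the cosine when the reference field is held fixed.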
Given that $\vec{\pi} = -\nabla\phi$, it follows that the component $\pi_j$ at node $q$ can be approximated by the change of potential across the dimension $j$, as measured by the potential on the corresponding neighboring nodes:
$$\pi_{j}^{q} \;=\; \frac{\phi_{q^-} - \phi_{q^+}}{2\Delta^{2}}, \qquad \frac{\partial \pi_{j}^{q}}{\partial g_i} \;=\; \frac{1}{2\Delta^{2}}\left[\frac{\partial \phi_{q^-}}{\partial g_i} - \frac{\partial \phi_{q^+}}{\partial g_i}\right],$$
where $\Delta$ is the internodal distance on the lattice $L(\Omega)$.
3.2 DERIVATION OF $\partial\vec{\phi}/\partial g_i$
The derivation of $\partial\vec{\phi}/\partial g_i$ involves computing the Thevenin equivalent circuit for the
resistive lattice, when every conductance $g$ connected to node $i$ is removed. For
clarity, a 2D resistive grid was chosen to illustrate the procedure. Figure 1 depicts
the equivalence warranted by Thevenin's theorem [Chua et al., 1987] and the relevant variables for the derivation of $\partial\vec{\phi}/\partial g_i$. As shown, the equivalent circuit for the resistive grid consists of a four-port resistor, driven by four independent voltage sources. The relation between the voltage vector $\vec{\phi} = [\phi_1 \cdots \phi_4]^{T}$ and the current vector $\vec{\imath} = [i_1 \cdots i_4]^{T}$ is expressed as
$$\vec{\imath} \;=\; R\,\vec{\phi} + \vec{w}, \qquad (3)$$
where $R$ is the impedance matrix for the grid equivalent circuit and $\vec{w}$ is the vector
of open-circuit voltage sources. The grid equivalent circuit behaves exactly like the
whole resistive grid; there is no approximation error.
Figure 1: Equivalence established by Thevenin's theorem.
The derivation of the 20 parameters (the elements of $R$ and $\vec{w}$) of the equivalent circuit is detailed in [Coelho Jr. et al., 1995]; it involves a series of relaxation operations that can be efficiently implemented in SIMD architectures. The total number
of relaxations for a grid with n l nodes is exactly 6n - 12, or an average of 1/2n
relaxations per link. In the context of this paper, it is assumed that $R$ and $\vec{w}$ are
known. Our primary interest is to compute how changes in the conductances $g_k$ affect the voltage vector $\vec{\phi}$, or the matrix
$$\frac{\partial \vec{\phi}}{\partial g} \;=\; \left[\frac{\partial \phi_j}{\partial g_k}\right], \qquad j = 1, \ldots, 4, \quad k = 1, \ldots, 4.$$
The elements of $\partial\vec{\phi}/\partial g_k$ can be computed by differentiating each of the four equality relations in Equation 3 with respect to $g_k$, resulting in a system of 16 linear equations and 16 variables, the elements of $\partial\vec{\phi}/\partial g_k$. Notice that each element of $\vec{\imath}$ can be expressed as a linear function of the potentials $\vec{\phi}$, by applying Kirchhoff's laws [Chua et al., 1987].
4 APPLICATION EXAMPLE
A robot moves repeatedly toward a goal configuration. Its initial configuration is
not known in advance, and every configuration is equally likely to be the initial
configuration. The problem is to construct a motion controller that minimizes the
overall travel distance for the whole configuration space. If the configuration space
$\Omega$ is discretized into a number of cells, define the combined travel distance $D(\vec{\pi})$ as
$$D(\vec{\pi}) \;=\; \sum_{q \in L(\Omega)} d_{q,\vec{\pi}}, \qquad (4)$$
where $d_{q,\vec{\pi}}$ is the travel distance from cell $q$ to the goal configuration $q_G$, and robot displacements are determined by the controller $\vec{\pi}$. Figure 2 depicts an instance
of the travel distance minimization problem, and the paths corresponding to its
optimal solution, given the obstacle distribution and the goal configuration shown.
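The combined travel distance of Equation (4) is simply a sum of path lengths over all possible start cells. The sketch below computes it for two simple hand-written controllers on an obstacle-free 5×5 grid; both the grid and the controllers are illustrative stand-ins, not the paper's harmonic-gradient controllers.

```python
# D = sum over all start cells of the number of steps to the goal under
# a given controller (Eq. 4 with distance measured in grid steps).
N = 5
GOAL = (4, 4)

def run(controller):
    total = 0
    for si in range(N):
        for sj in range(N):
            pos, steps = (si, sj), 0
            while pos != GOAL:
                pos = controller(pos)
                steps += 1
            total += steps
    return total

def greedy(pos):          # close the larger coordinate gap first
    i, j = pos
    if abs(GOAL[0] - i) >= abs(GOAL[1] - j):
        return (i + (1 if GOAL[0] > i else -1), j)
    return (i, j + (1 if GOAL[1] > j else -1))

def column_first(pos):    # fix the column, then the row
    i, j = pos
    if j != GOAL[1]:
        return (i, j + (1 if GOAL[1] > j else -1))
    return (i + (1 if GOAL[0] > i else -1), j)

print(run(greedy), run(column_first))
```

On an empty grid every axis-aligned controller that never moves away from the goal is optimal, so both give D = 100 (the sum of Manhattan distances); with obstacles, as in the paper's example, different controllers yield different D, which is what the policy iteration minimizes.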
A resistive grid with 17 x 17 nodes was chosen to represent the control policies
generated by our algorithm. Initially, the resistive grid is homogeneous, with all
internodal resistances set to 10. Figure 3 indicates the paths the robot takes when
commanded by $\vec{\pi}_0$, the initial control policy derived from a homogeneous resistive
grid.
Figure 2: Paths for optimal solution of
the travel distance minimization problem.
Figure 3: Paths for the initial solution
of the same problem.
The conductances in the resistive grid were then adjusted over 400 steps of the policy
iteration algorithm, and Figure 4 is a plot of the overall travel distance as a function
of the number of steps. It also shows the optimal travel distance (horizontal line),
corresponding to the optimal solution depicted in Figure 2. The plot shows that
convergence is initially fast; in fact, the first 140 iterations are responsible for 90%
of the overall improvement. After 400 iterations, the travel distance is within 2.8%
of its optimal value. This residual error may be explained by the approximation
incurred in using a discrete resistive grid to represent the potential distribution.
Figure 5 shows the paths taken by the robot after convergence. The final paths are
straightened versions of the paths in Figure 3. Notice also that some of the final
paths originating on the left of the I-shaped obstacle take the robot south of the
obstacle, resembling the optimal paths depicted in Figure 2.
5 CONCLUSION
This paper presented a policy iteration algorithm for the synthesis of provably correct navigation functions that also extremize user-specified performance indices.
The algorithm proposed solves the optimal feedback control problem, in which the
final control policy optimizes the performance index over the whole domain, assuming that every state in the domain is as likely to be the initial state as any other
state.
The algorithm modifies an existing harmonic function-based path controller by incrementally changing the conductances in a resistive grid. Departing from a homogeneous grid, the algorithm transforms an optimal controller (i.e. a controller that
minimizes collision probabilities) into another optimal controller, that extremizes
locally the performance index of interest. The tradeoff may require reducing the
safety margin between the robot and obstacles, but collision avoidance is preserved
at each step of the algorithm.
Other Applications: The algorithm presented can be used (1) in the synthesis
of time-optimal velocity controllers, and (2) in the optimization of non-holonomic
path controllers. The algorithm can also be a component technology for Intelligent
Vehicle Highway Systems (IVHS), by combining (1) and (2).
Figure 4: Overall travel distance, as a
function of iteration steps.
Figure 5: Final paths, after 800 policy
iteration steps.
Performance on Parallel Architectures: The proposed algorithm is computationally demanding; however, it is suitable for implementation on parallel architectures. Its sequential implementation on a SPARC 10 workstation requires ~ 30 sec. per iteration, for the example presented. We estimate that a parallel implementation of the proposed example would require ~ 4.3 ms per iteration, or 1.7 seconds
for 400 iterations, given conservative speedups available on parallel architectures
[Coelho Jr. et al., 1995].
Acknowledgements
This work was supported in part by grants NSF CCR-9410077, IRI-9116297, IRI-9208920, and CNPq 202107/90.6.
References
[Chua et aI., 1987] Chua, L., Desoer, C., and Kuh, E. (1987). Linear and Nonlinear
Circuits. McGraw-Hill, Inc., New York, NY.
[Coelho Jr. et al., 1995] Coelho Jr., J., Sitaraman, R., and Grupen, R. (1995).
Control-oriented tuning of harmonic functions. Technical Report CMPSCI Technical Report 95-112, Dept. Computer Science, University of Massachusetts.
[Connolly, 1994] Connolly, C. I. (1994). Harmonic functions and collision probabilities. In Proc. 1994 IEEE Int. Conf. Robotics Automat., pages 3015-3019.
IEEE.
[Connolly and Grupen, 1993] Connolly, C. I. and Grupen, R. (1993). The applications of harmonic functions to robotics. Journal of Robotic Systems, 10(7):931-946.
[Rimon and Koditschek, 1990] Rimon, E. and Koditschek, D. (1990). Exact robot
navigation in geometrically complicated but topologically simple spaces. In Proc .
1990 IEEE Int. Conf. Robotics Automat., volume 3, pages 1937-1942, Cincinnati,
OH.
[Singh et aI., 1994] Singh, S., Barto, A., Grupen, R., and Connolly, C. (1994). Robust reinforcement learning in motion planning. In Advances in Neural Information Processing Systems 6, pages 655-662, San Francisco, CA. Morgan Kaufmann
Publishers.
A Framework for Non-rigid Matching
and Correspondence
Suguna Pappu, Steven Gold, and Anand Rangarajan 1
Departments of Diagnostic Radiology and Computer Science
and the Yale Neuroengineering and Neuroscience Center
Yale University New Haven, CT 06520-8285
Abstract
Matching feature point sets lies at the core of many approaches to
object recognition. We present a framework for non-rigid matching that begins with a skeleton module, affine point matching,
and then integrates multiple features to improve correspondence
and develops an object representation based on spatial regions to
model local transformations. The algorithm for feature matching
iteratively updates the transformation parameters and the correspondence solution, each in turn. The affine mapping is solved in
closed form, which permits its use for data of any dimension. The
correspondence is set via a method for two-way constraint satisfaction, called softassign, which has recently emerged from the neural
network/statistical physics realm. The complexity of the non-rigid
matching algorithm with multiple features is the same as that of
the affine point matching algorithm. Results for synthetic and real
world data are provided for point sets in 2D and 3D, and for 2D
data with multiple types of features and parts.
1 Introduction
A basic problem of object recognition is that of matching- how to associate sensory
data with the representation of a known object. This entails finding a transformation that maps the features of the object model onto the image, while establishing a
correspondence between the spatial features. However, a tractable class of transformation, e.g., affine, may not be sufficient if the object is non-rigid or has relatively
independent parts. If there is noise or occlusion, spatial information alone may
not be adequate to determine the correct correspondence. In our previous work in
spatial point matching [1], the 2D affine transformation was decomposed into its
physical component elements, which does not generalize easily to 3D, and so, only
a rigid 3D transformation was considered.

1E-mail address of authors: lastname-firstname@cs.yale.edu
We present a framework for non-rigid matching that begins with solving the basic
affine point matching problem. The algorithm iteratively updates the affine parameters and correspondence in turn, each as a function of the other. The affine
transformation is solved in closed form, which lends tremendous flexibility: the
formulation can be used in 2D or 3D. The correspondence is solved by using a
softassign [1] procedure, in which the two-way assignment constraints are solved
without penalty functions. The accuracy of the correspondence is improved by the
integration of multiple features. A method for non-rigid parameter estimation is
developed, based on the assumption of a well-articulated model with distinct regions, each of which may move in an affine fashion, or can be approximated as
such. Umeyama [3] has done work on parameterized parts using an exponential
time tree search technique, and Wakahara [4] on local affine transforms, but neither
integrates multiple features nor explicitly considers the non-rigid matching case,
while expressing a one-to-one correspondence between points.
2 Affine Point Matching
The affine point matching problem is formulated as an optimization problem for
determining the correspondence and affine transformation between feature points.
Given two sets of data points X_j ∈ R^{n−1}, n = 3, 4, …, j = 1, …, J (the image),
and Y_k ∈ R^{n−1}, k = 1, …, K (the model), find the correspondence and
associated affine transformation that best maps a subset of the image points onto a
subset of the model point set. These point sets are expressed in homogeneous
coordinates, X_j = (1, x_j), Y_k = (1, y_k). A = {a_ij} ∈ R^{n×n} is the affine
transformation matrix. Note that a_{1j} = 0 ∀j because of the homogeneous
coordinates. Define the match variable M_{jk}, where M_{jk} ∈ [0, 1]. For a given
match matrix {M_{jk}}, transformation A, and identity matrix I of dimension n, the
term Σ_{j,k} M_{jk} ‖X_j − (A + I)Y_k‖² expresses the similarity between the point
sets. The term −α Σ_{j,k} M_{jk}, with parameter α > 0, is appended to this to
encourage matches (else M_{jk} = 0 ∀j, k minimizes the function). To limit the range
of transformations, the terms of the affine matrix are regularized via a term
λ tr(AᵀA) in the objective function, with parameter λ, where tr(·) denotes the trace
of the matrix. Physically, X_j may fully match to one Y_k, partially match to
several, or may not match to any point. A similar constraint holds for Y_k. These
are expressed as the constraints in the following optimization problem:

min_{M,A}  Σ_{j,k} M_{jk} ‖X_j − (A + I)Y_k‖² + λ tr(AᵀA) − α Σ_{j,k} M_{jk}   (1)

s.t.  Σ_j M_{jk} ≤ 1 ∀k,  Σ_k M_{jk} ≤ 1 ∀j,  and M_{jk} ≥ 0.
To begin, slack variables M_{j,K+1} and M_{J+1,k} are introduced so that the
inequality constraints can be transformed into equality constraints:
Σ_{j=1}^{J+1} M_{jk} = 1 ∀k and Σ_{k=1}^{K+1} M_{jk} = 1 ∀j. M_{j,K+1} = 1
indicates that X_j does not match to any point in Y. An equivalent unconstrained
optimization problem to (1) is derived by relaxing the constraints via Lagrange
parameters μ_j, ν_k, and introducing an x log x barrier function, indexed by a
parameter β. A similar technique was used
[2] to solve the assignment problem. The energy function used is:

min_{A,M} max_{μ,ν}  Σ_{j,k} M_{jk} ‖X_j − (A + I)Y_k‖² + λ tr(AᵀA) − α Σ_{j,k} M_{jk}
  + Σ_{k=1}^{K} ν_k (Σ_{j=1}^{J+1} M_{jk} − 1) + Σ_{j=1}^{J} μ_j (Σ_{k=1}^{K+1} M_{jk} − 1)
  + (1/β) Σ_{j=1}^{J+1} Σ_{k=1}^{K+1} M_{jk} (log M_{jk} − 1)
This is to be minimized with respect to the match variables and affine parameters
while satisfying the constraints via Lagrange parameters. Using the recently
developed softassign technique, we satisfy the constraints explicitly. When A is
fixed, we have an assignment problem. Following the development in [1], the
assignment constraints are satisfied using softassign, a technique for satisfying
two-way (assignment) constraints without a penalty term, analogous to softmax,
which enforces a one-way constraint. First, the match variables are initialized:

M_{jk} = exp(−β(‖X_j − (A + I)Y_k‖² − α)).   (2)
This is followed by repeated row-column normalization of the match variables until
a stopping criterion is reached:
M_{jk} ← M_{jk} / Σ_{j′} M_{j′k},  and then  M_{jk} ← M_{jk} / Σ_{k′} M_{jk′}.   (3)
When the correspondence between the two point sets is fixed, A can be solved in
closed form, by holding M fixed in the objective function, and differentiating and
solving for A:
A = A*(M) = (Σ_{j,k} M_{jk} (X_j Y_kᵀ − Y_k Y_kᵀ)) (Σ_{j,k} M_{jk} Y_k Y_kᵀ + λI)^{−1}.   (4)
The algorithm is summarized as:
1. INITIALIZE: Variables: A = 0, M = 0. Parameters: β_initial, β_update, β_final,
   T = inner loop iterations, λ.
2. ITERATE: Do T times for a fixed value of β:
   Softassign: re-initialize M*(A) (Eq. 2) and then normalize (Eq. 3) until ΔM is small.
   Update A*(M) (Eq. 4).
3. UPDATE: While β < β_final: β ← β × β_update; return to 2.
The complexity of the algorithm is O(JK). Starting with small β_initial permits
many partial correspondences in the initial solution for M. As β increases, the
correspondence becomes more refined. For large β_final, M approaches a permutation
matrix (adjusting appropriately for the slack variables).
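To make Eqs. (2)–(4) and the annealing loop concrete, here is a minimal NumPy sketch of the point matching algorithm. This is our illustration, not the authors' code: the parameter schedule and the fixed inner normalization count are assumptions.

```python
import numpy as np

def softassign_affine(X, Y, alpha=0.5, lam=1.0,
                      beta=0.5, beta_final=20.0, beta_update=1.5, T=3):
    """Alternate softassign (Eqs. 2-3) with the closed-form affine update (Eq. 4).

    X, Y: (J, d) and (K, d) point sets. Returns the (J+1, K+1) match matrix M
    (with slack row/column) and the homogeneous affine matrix A; the fitted
    map is (A + I).
    """
    J, K = len(X), len(Y)
    Xh = np.hstack([np.ones((J, 1)), X])          # homogeneous coordinates (1, x)
    Yh = np.hstack([np.ones((K, 1)), Y])
    n = Xh.shape[1]
    A = np.zeros((n, n))
    M = np.ones((J + 1, K + 1))
    while beta < beta_final:
        for _ in range(T):
            # Eq. 2: re-initialize match variables from current distances
            diff = Xh[:, None, :] - Yh[None, :, :] @ (A + np.eye(n)).T
            d = (diff ** 2).sum(axis=-1)
            M = np.ones((J + 1, K + 1))           # slack entries: zero cost
            M[:J, :K] = np.exp(-beta * (d - alpha))
            # Eq. 3: alternating row/column normalization (softassign)
            for _ in range(30):
                M[:, :K] /= M[:, :K].sum(axis=0, keepdims=True)
                M[:J, :] /= M[:J, :].sum(axis=1, keepdims=True)
            # Eq. 4: closed-form affine update over the non-slack matches
            Mjk = M[:J, :K]
            P = np.einsum('jk,jn,km->nm', Mjk, Xh, Yh) \
                - np.einsum('jk,kn,km->nm', Mjk, Yh, Yh)
            Q = np.einsum('jk,kn,km->nm', Mjk, Yh, Yh) + lam * np.eye(n)
            A = np.linalg.solve(Q.T, P.T).T       # A = P Q^{-1}
        beta *= beta_update
    return M, A
```

For identical point sets, the annealed match matrix concentrates on the true correspondence and the recovered A stays near zero, since the fitted map is A + I.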
3 Nonrigid Feature Matching: Affine Quilts
Recognition of an object requires many different types of information working in
concert. Spatial information alone may not be sufficient for representation, especially in the presence of noise. Additionally the affine transformation is limited in
its inability to handle local variation in an object, due to the object's non-rigidity
or to the relatively independent movement of its parts, e.g., in human movement.
The optimization problem (2) easily generalizes to integrate multiple invariant features. A representation with multiple features has a spatial component indicating
the location of a feature element. At that location, there may be invariant geometric characteristics, e.g., this point belongs on a curve, or non-geometric invariant
features such as color, and texture. Let Xjr be the value of feature r associated
with point Xj. The location of point Xj is the null feature. There are R features
associated with each point Xj and Yk. Note that the match variable remains the
same. The new objective function is identical to the original objective function (1),
appended by the term Σ_{j,k,r} M_{jk} w_r (X_jr − Y_kr)². The (X_jr − Y_kr)²
quantity captures the similarity between invariant types of features, with w_r a
weighting factor for feature r. Non-invariant features are not considered. In this
way, the point matching algorithm is modified only in the re-initialization of M(A):

M_{jk} = exp(−β(‖X_j − (I + A)Y_k‖² + Σ_r w_r (X_jr − Y_kr)² − α)).

The rest of the algorithm remains unchanged.
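The feature-augmented distance used in this re-initialization can be computed in one vectorized step. A small sketch (variable names are ours):

```python
import numpy as np

def feature_augmented_distance(d_spatial, Xf, Yf, w):
    """Add weighted squared feature differences to the spatial distances.

    d_spatial: (J, K) spatial terms ||X_j - (I + A)Y_k||^2
    Xf, Yf:    (J, R) and (K, R) invariant feature values X_jr, Y_kr
    w:         (R,) feature weights w_r
    """
    feat = (((Xf[:, None, :] - Yf[None, :, :]) ** 2) * w).sum(axis=-1)
    return d_spatial + feat
```

With the binary [skin, hair, lip, eye] vectors described above and w_r = 0.2, a single mismatched feature adds 0.2 to the effective distance.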
Decomposition of spatial transformations motivates classification of the B individual
regions of an object and use of a "quilt" of local affine transformations. In the
multiple affine scenario, membership to a region is known on the well-articulated
model, but not on the image set . It is assumed that all points that are members
of one region undergo the same affine transformation. The model changes by the
addition of one subscript to the affine matrix, Ab(k) where b(k) is an operator that
indicates which transformation operates on point k. In the algorithm, during the
A(M) update, instead of a single update, B updates are done. Denote K(b) =
{k | b(k) = b}, i.e., all the points that are within region b. Then in the affine update,

A_b = A_b(M) = (Σ_{j, k∈K(b)} M_{jk} (X_j Y_kᵀ − Y_k Y_kᵀ)) (Σ_{j, k∈K(b)} M_{jk} Y_k Y_kᵀ + λ_b I)^{−1}.

However, the theoretical complexity does not change, since the B updates still only
require summing over the points.
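The per-region update can be sketched as follows (our illustration; as in the text, region labels b(k) are given only on the model side):

```python
import numpy as np

def region_affine_updates(M, Xh, Yh, b, B, lam=1.0):
    """Closed-form A_b for each region b, summing only over k in K(b).

    M: (J, K) match matrix; Xh, Yh: homogeneous point sets; b: (K,) region
    labels b(k) of the model points; B: number of regions.
    """
    n = Yh.shape[1]
    A = []
    for region in range(B):
        idx = np.flatnonzero(b == region)         # K(b) = {k | b(k) = b}
        Mb, Yb = M[:, idx], Yh[idx]
        P = np.einsum('jk,jn,km->nm', Mb, Xh, Yb) \
            - np.einsum('jk,kn,km->nm', Mb, Yb, Yb)
        Q = np.einsum('jk,kn,km->nm', Mb, Yb, Yb) + lam * np.eye(n)
        A.append(np.linalg.solve(Q.T, P.T).T)     # A_b = P_b Q_b^{-1}
    return A
```

With a correct one-to-one match between identical point sets, each P_b vanishes and every A_b is exactly zero.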
4 Experimental Results: Hand Drawn and Synthetic
The speed for matching point sets of 50 points each is around 20 seconds on an SGI
workstation with a R4400 processor. This is true for points in 2D, 3D and with
extra features. This can be improved with a tradeoff in accuracy by adopting a
looser schedule for the parameter β or by changing the stopping criterion.
In the hand drawn examples, the contours of the images are drawn, discretized and
then expressed as a set of points in the plane. In Figure (1), the contours of the
boy's face were drawn in two different positions, and a subset of the points were
extracted to make up the point sets. In each set this was approximately 250 points.
Note that even with the change in mood in the two pictures, the corresponding
parts of the face are found. However, in Figure (2) spatial information alone is
insufficient.

Figure 1: Correspondence with simple point features

Although the rotation of the head is not a true affine transformation, it
is a weak perspective projection for which the approximation is valid. Each photo
is outlined, generating approximately 225 points in each face.

Figure 2: Correspondence with multiple features

A point on a contour has associated with it a feature marker indicating the
incident textures. For a
human face, we use a binary 4-vector, with a 1 in position r if feature r is present.
Specifically, we have used a vector with elements [skin, hair, lip, eye]. For example,
a point on the line marking the mouth, which segments the lip from the skin, has a feature
vector [1,0,1,0]. Perceptual organization of the face motivates this type of feature
marking scheme. The correspondence is depicted in Figure (2) for a small subset of
matches.
Next, we demonstrate how the multiple affine works in recovering the correct correspondence and transformation. The points associated with the standing figure have
a marker indicating its part membership. There are six parts in this figure: head,
torso, each arm and each leg. The correspondence is shown in Figure (3).
For synthetic data, all 2D and 3D single part experiments used this protocol: The
model set was generated uniformly on a unit square. A random affine matrix
is generated, whose parameters, aij are chosen uniformly on a certain interval,
which is used to generate the image set. Then, P_d image points are deleted, and
Gaussian noise N(0, σ) is added. Finally, spurious points P_s are added. For
the multiple feature scenario, the elements of the feature vector are randomly
mislabelled with probability P_r to represent distortion. For these experiments,
50 model points were generated, and the a_ij are uniform on an interval of length
1.5, with σ ∈ {0.01, 0.02, …, 0.08}. Point deletions and spurious additions range
from 0% to 50% of the image points. The random feature noise associated with
non-spatial features has a probability of P_r = 0.05. The error measure we use is

e_a = c Σ_{i,j} |a_ij − â_ij|,  where c = 3 / (#parameters × interval length);

a_ij and â_ij are the correct parameter and the computed value, respectively. The
constant c normalizes the measure so that the error equals 1 in the case that the
a_ij and â_ij are chosen at random on this interval. The factor 3 in the numerator
of this formula follows since
Figure 3: Articulated Matching: Figure with six parts
E|x − y| = 1/3 when x and y are chosen randomly on the unit interval, and we want
to normalize the error. The parameters used in all experiments were: β_initial = 0.091,
β_final = 100, β_update = 1.075, and T = 4.
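The normalized error is simple to compute; a sketch under the normalization stated above:

```python
import numpy as np

def param_error(A_true, A_est, interval_length):
    """e_a = c * sum_ij |a_ij - a_est_ij|, with c = 3 / (#parameters * interval
    length), so that parameters drawn independently at random on the interval
    give e_a close to 1."""
    c = 3.0 / (np.asarray(A_true).size * interval_length)
    return c * np.abs(np.asarray(A_true) - np.asarray(A_est)).sum()
```

For a perfect estimate the error is 0, and a single parameter off by half the experiments' interval length of 1.5 gives e_a = 1.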
The model has four regions, 24 parameters. Points corresponding to part 1 were
centered at (.5, .5), and generated randomly with a diameter of 1.0. For the image
set, an affine transformation was applied with a translation diameter of .5, i.e., for
a_21 and a_31, and the remaining four parameters have a diameter of 1. Points corresponding to regions 2, 3, and 4 were centered at (−.5, .5), (−.5, −.5), (.5, −.5) with
model points and transformations generated in a similar fashion. 120 points were
generated for the model point set, divided equally among the four parts. Image
points were deleted with equal probability from each region. Spurious point were
not explicitly added, since the overlapping of parts provides implicit spurious points.
Results for the 2D and 3D (simple point) experiments are in Figure (4). Each data
point represents 500 runs for a different randomly generated affine transformation.
In all experiments, note that the error for small amounts of noise is approximately
equal to that when there is no noise. We performed similar experiments for point
sets that are 3-dimensional (12 parameters), but without any feature information.
For the experiments with features, shown in Figure (5), we used R = 4 features and
w_r = 0.2 ∀r. Each data point represents 500 runs. As expected, the inclusion of
feature information reduces the error, especially for large σ. Additionally, Figure
(5) details synthetic results for experiments with multiple affines (2D). Each data
point represents 70 runs.
5 Conclusion
We have developed an affine point matching module, robust in the presence of noise
and able to accommodate data of any dimension. The module forms the basis for
a non-rigid feature matching scheme in which multiple types of features interact to
establish correspondence. Modeling an object in terms of its spatial regions and
then using multiple affines to capture local transformations results in a tractable
method for non-rigid matching . This non-rigid matching framework arising out of
Figure 4: Synthetic Experiments: 2D and 3D. (Two panels, 2D results and 3D results; horizontal axis: standard deviation of jitter; vertical axis: error, 0–0.25. Legend: −·: P_d = 0%, P_s = 0%; +: P_d = 10%, P_s = 10%; ×: P_d = 30%, P_s = 10%; ○: P_d = 50%, P_s = 10%.)
Figure 5: Synthetic Experiments: Multiple features and parts. (Two panels, 4 features and 4 parts; horizontal axis: standard deviation of jitter; vertical axis: error. Legend: ·−: P_d = 0%, P_s = 0%; ○: P_d = 10%, P_s = 0%; ·: P_d = 30%, P_s = 10%; ×: P_d = 25%, P_s = 0%; − −: P_d = 50%, P_s = 10%; −: P_d = 40%, P_s = 0%; *: P_d = 10%, P_s = 10%.)
neural computation is widely applicable in object recognition.
Acknowledgements: Our thanks to Eric Mjolsness for many interesting discussions related to the present work .
References
[1] S. Gold, C. P. Lu, A. Rangarajan, S. Pappu, and E. Mjolsness. New algorithms for 2D and 3D point matching: Pose estimation and correspondence.
In G. Tesauro, D. Touretzky, and J. Alspector, editors, Advances in Neural
Information Processing Systems, volume 7, San Francisco, CA, 1995. Morgan
Kaufmann Publishers.
[2] J. Kosowsky and A. Yuille. The invisible hand algorithm: Solving the assignment
problem with statistical physics. Neural Networks, 7:477-490, 1994.
[3] S. Umeyama. Parameterized point pattern matching and its application to
recognition of object families. IEEE Trans. on Pattern Analysis and Machine
Intelligence, 15:136–144, 1993.
[4] T. Wakahara. Shape matching using LAT and its application to handwritten
numeral recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence,
16:618–629, 1994.
Neural Networks
Lizhong Wu and John Moody
Oregon Graduate Institute, Computer Science Dept., Portland, OR 97291-1000
Abstract
We derive a smoothing regularizer for recurrent network models by
requiring robustness in prediction performance to perturbations of
the training data. The regularizer can be viewed as a generalization of the first order Tikhonov stabilizer to dynamic models. The
closed-form expression of the regularizer covers both time-lagged
and simultaneous recurrent nets, with feedforward nets and onelayer linear nets as special cases. We have successfully tested this
regularizer in a number of case studies and found that it performs
better than standard quadratic weight decay.
1 Introduction
One technique for preventing a neural network from overfitting noisy data is to add
a regularizer to the error function being minimized. Regularizers typically smooth
the fit to noisy data. Well-established techniques include ridge regression, see (Hoerl & Kennard 1970), and more generally spline smoothing functions or Tikhonov
stabilizers that penalize the mth-order squared derivatives of the function being fit,
as in (Tikhonov & Arsenin 1977), (Eubank 1988), (Hastie & Tibshirani 1990) and
(Wahba 1990). These methods have recently been extended to networks of radial
basis functions (Girosi, Jones & Poggio 1995), and several heuristic approaches have
been developed for sigmoidal neural networks, for example, quadratic weight decay
(Plaut, Nowlan & Hinton 1986), weight elimination (Scalettar & Zee 1988), (Chauvin 1990), (Weigend, Rumelhart & Huberman 1990) and soft weight sharing (Nowlan
& Hinton 1992). 1 All previous studies on regularization have concentrated on feedforward neural networks. To our knowledge, recurrent learning with regularization
has not been reported before.
1Two additional papers related to ours, but dealing only with feedforward networks,
came to our attention or were written after our work was completed. These are (Bishop
1995) and (Leen 1995). Also, Moody & Rognvaldsson (1995) have recently proposed
several new classes of smoothing regularizers for feedforward nets.
In Section 2 of this paper, we develop a smoothing regularizer for general dynamic
models which is derived by considering perturbations of the training data. We
present a closed-form expression for our regularizer for two layer feedforward and
recurrent neural networks, with standard weight decay being a special case. In
Section 3, we evaluate our regularizer's performance on predicting the U.S. Index
of Industrial Production. The advantage of our regularizer is demonstrated by
comparing to standard weight decay in both feedforward and recurrent modeling.
Finally, we conclude our paper in Section 4.
2 Smoothing Regularization

2.1 Prediction Error for Perturbed Data Sets
Consider a training data set {P: Z(t),X(t)}, where the targets Z(t) are assumed to
be generated by an unknown dynamical system F*(I(t)) and an unobserved noise
process:
Z(t) = F*(I(t)) + ε*(t),  with I(t) = {X(s), s = 1, 2, …, t}.   (1)

Here, I(t) is the information set containing both current and past inputs X(s), and
the ε*(t) are independent random noise variables with zero mean and variance σ*².
Consider next a dynamic network model Z(t) = F(Φ, I(t)) to be trained on data set
P, where Φ represents a set of network parameters, and F(·) is a network transfer
function which is assumed to be nonlinear and dynamic. We assume that F(·) has
good approximation capabilities, such that F(Φ_P, I(t)) ≈ F*(I(t)) for learnable
parameters Φ_P.

Our goal is to derive a smoothing regularizer for a network trained on the actual
data set P that in effect optimizes the expected network performance (prediction
risk) on perturbed test data sets of the form {Q: Z̃(t), X̃(t)}. The elements of Q are
related to the elements of P via small random perturbations ε_z(t) and ε_x(t), so that
Z̃(t) = Z(t) + ε_z(t),   (2)
X̃(t) = X(t) + ε_x(t).   (3)

The ε_z(t) and ε_x(t) have zero mean and variances σ_z² and σ_x², respectively. The
training and test errors for the data sets P and Q are
D_P = (1/N) Σ_{t=1}^{N} [Z(t) − F(Φ_P, I(t))]²,   (4)

D_Q = (1/N) Σ_{t=1}^{N} [Z̃(t) − F(Φ_P, Ĩ(t))]²,   (5)
where Φ_P denotes the network parameters obtained by training on data set P, and
Ĩ(t) = {X̃(s), s = 1, 2, …, t} is the perturbed information set of Q. With this
notation, our goal is to minimize the expected value of D_Q while training on D_P.
Consider the prediction error for the perturbed data point at time t:
d(t) = [Z̃(t) − F(Φ_P, Ĩ(t))]².   (6)

With Eqn (2), we obtain

d(t) = [Z(t) + ε_z(t) − F(Φ_P, I(t)) + F(Φ_P, I(t)) − F(Φ_P, Ĩ(t))]²
     = [Z(t) − F(Φ_P, I(t))]² + [F(Φ_P, I(t)) − F(Φ_P, Ĩ(t))]² + [ε_z(t)]²
       + 2[Z(t) − F(Φ_P, I(t))][F(Φ_P, I(t)) − F(Φ_P, Ĩ(t))]
       + 2ε_z(t)[Z(t) − F(Φ_P, Ĩ(t))].   (7)
Assuming that ε_z(t) is uncorrelated with [Z(t) − F(Φ_P, Ĩ(t))] and averaging over
the exemplars of data sets P and Q, Eqn (7) becomes

D_Q = D_P + (1/N) Σ_{t=1}^{N} [F(Φ_P, I(t)) − F(Φ_P, Ĩ(t))]² + (1/N) Σ_{t=1}^{N} [ε_z(t)]²
      + (2/N) Σ_{t=1}^{N} [Z(t) − F(Φ_P, I(t))][F(Φ_P, I(t)) − F(Φ_P, Ĩ(t))].   (8)
The third term, (1/N) Σ_{t=1}^{N} [ε_z(t)]², in Eqn (8) is independent of the weights,
so it can be neglected during the learning process. The fourth term in Eqn (8) is the
cross-covariance between [Z(t) − F(Φ_P, I(t))] and [F(Φ_P, I(t)) − F(Φ_P, Ĩ(t))].
Using the inequality 2ab ≤ a² + b², we can see that minimizing the first term D_P and
the second term (1/N) Σ_{t=1}^{N} [F(Φ_P, I(t)) − F(Φ_P, Ĩ(t))]² in Eqn (8) during
training will automatically decrease the effect of the cross-covariance term.
Therefore, we exclude the cross-covariance term from the training criterion.
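Spelling out the bound on the cross term (our expansion of the inequality quoted above, which holds for all real a and b):

```latex
2\,[Z(t) - F(\Phi_P, I(t))]\,[F(\Phi_P, I(t)) - F(\Phi_P, \tilde{I}(t))]
\;\le\;
[Z(t) - F(\Phi_P, I(t))]^2 + [F(\Phi_P, I(t)) - F(\Phi_P, \tilde{I}(t))]^2
```

so the cross-covariance term is dominated by the first and second terms of the decomposition, and driving those down during training bounds it as well.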
The above analysis shows that the expected test error DQ can be minimized by
minimizing the objective function D:
D = (1/N) Σ_{t=1}^{N} [Z(t) − F(Φ, I(t))]² + (1/N) Σ_{t=1}^{N} [F(Φ_P, I(t)) − F(Φ_P, Ĩ(t))]².   (9)
In Eqn (9), the second term is the time average of the squared disturbance
‖Z̃(t) − Z(t)‖² of the trained network output due to the input perturbation
‖Ĩ(t) − I(t)‖². Minimizing this term demands that small changes in the input
variables yield correspondingly small changes in the output. This is the standard
smoothness prior, namely that if nothing else is known about the function to be
approximated, a good option is to assume a high degree of smoothness. Without
knowing the correct functional form of the dynamical system F* or using such prior
assumptions, the data-fitting problem is ill-posed. In (Wu & Moody 1996), we have
shown that the second term in Eqn (9) is a dynamic generalization of the first-order
Tikhonov stabilizer.
2.2 Form of the Proposed Smoothing Regularizer
Consider a general, two-layer, nonlinear, dynamic network with recurrent
connections on the internal layer,2 as described by

Y(t) = f(W Y(t − τ) + V X(t)),   Z(t) = U Y(t),   (10)
where X(t), Y(t) and Z(t) are respectively the network input vector, the hidden
output vector and the network output; Φ = {U, V, W} comprises the output, input and
recurrent connections of the network; f(·) is the vector-valued nonlinear transfer
function of the hidden units; and τ is a time delay in the feedback connections of
the hidden layer which is pre-defined by the user and is not changed during learning.
τ can be zero, a fraction, or an integer, but we are interested in the cases with a
small τ.3
2Our derivation can easily be extended to other network structures.
3When the time delay τ exceeds some critical value, a recurrent network becomes
unstable and lies in oscillatory modes. See, for example, (Marcus & Westervelt 1989).
When τ = 1, our model is a recurrent network as described by (Elman 1990) and
(Rumelhart, Hinton & Williams 1986) (see Figure 17 on page 355). When τ is equal
to some fraction smaller than one, the network evolves 1/τ times within each input
time interval. When τ decreases and approaches zero, our model is the same as the
network studied by (Pineda 1989), and the earlier, widely-studied additive networks.
In (Pineda 1989), τ was referred to as the network relaxation time scale. (Werbos 1992)
distinguished the recurrent networks with zero τ and non-zero τ by calling them
simultaneous recurrent networks and time-lagged recurrent networks, respectively.
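For concreteness, here is a minimal forward pass of the τ = 1 time-lagged model of Eq. (10), with tanh hidden units as in the experiments below (shapes and the zero initial state are our choices):

```python
import numpy as np

def recurrent_forward(X, U, V, W):
    """Two-layer recurrent net (Eq. 10) with tau = 1:
    Y(t) = f(W Y(t-1) + V X(t)),  Z(t) = U Y(t),  f = tanh."""
    Y = np.zeros(W.shape[0])       # hidden state, initialized at rest
    Z = []
    for x in X:                    # X: sequence of input vectors
        Y = np.tanh(W @ Y + V @ x)
        Z.append(U @ Y)            # linear output unit(s)
    return np.array(Z)
```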
We have found that minimizing the second term of Eqn (9) can be achieved by
smoothing the output response to an input perturbation at every time step. This
yields (see Wu & Moody 1996):

‖Z̃(t) − Z(t)‖² ≤ ρ_τ²(Φ_P) ‖X̃(t) − X(t)‖²  for t = 1, 2, …, N.   (11)

We call ρ_τ²(Φ_P) the output sensitivity of the trained network Φ_P to an input
perturbation. ρ_τ²(Φ_P) is determined by the network parameters only and is
independent of the time variable t.
We obtain our new regularizer by training directly on the expected prediction error
for perturbed data sets Q. Based on the analysis leading to Eqns (9) and (11), the
training criterion thus becomes
D = (1/N) Σ_{t=1}^{N} [Z(t) − F(Φ, I(t))]² + λ ρ_τ²(Φ).   (12)
The coefficient λ in Eqn (12) is a regularization parameter that measures the degree
of input perturbation ‖Ĩ(t) − I(t)‖². The algebraic form for ρ_τ(Φ), as derived in
(Wu & Moody 1996), is:

ρ_τ(Φ) = (γ‖U‖‖V‖ / (1 − γ‖W‖)) {1 − exp[(γ‖W‖ − 1)/τ]}   (13)

for time-lagged recurrent networks (τ > 0). Here, ‖·‖ denotes the Euclidean matrix
norm. The factor γ depends upon the maximal value of the first derivatives of the
activation functions of the hidden units and is given by:

γ = max_{t,j} |f′(o_j(t))|,   (14)

where j is the index of hidden units and o_j(t) is the input to the j-th unit. In
general, γ ≤ 1.4 To insure stability and that the effects of small input
perturbations are damped out, it is required (see Wu & Moody 1996) that

γ‖W‖ < 1.   (15)
The regularizer Eqn (13) can be deduced for the simultaneous recurrent networks in
the limit τ → 0:

ρ(Φ) ≡ ρ_0(Φ) = γ‖U‖‖V‖ / (1 − γ‖W‖).   (16)

If the network is feedforward, W = 0 and τ = 0, and Eqns (13) and (16) become

ρ(Φ) = γ‖U‖‖V‖.   (17)

Moreover, if there is no hidden layer and the inputs are directly connected to the
outputs via U, the network is an ordinary linear model, and we obtain

ρ(Φ) = ‖U‖,   (18)

4For instance, f′(x) = [1 − f(x)]f(x) if f(x) = 1/(1 + e^{−x}). Then γ = max|f′(x)| = 1/4.
which is standard quadratic weight decay (Plaut et al. 1986) as is used in ridge
regression (Hoerl & Kennard 1970).
The regularizer (Eqn (17) for feedforward networks and Eqn (13) for recurrent
networks) was obtained by requiring smoothness of the network output under
perturbations of the data. We therefore refer to it as a smoothing regularizer.
Several approaches can be applied to estimate the regularization parameter λ, as in
(Eubank 1988), (Hastie & Tibshirani 1990) and (Wahba 1990). We will not discuss
this subject in this paper.
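The closed-form regularizer is cheap to evaluate from the weight matrices. A sketch of Eqs. (13), (16) and (17) as we read them (we use the Frobenius norm for ‖·‖; treat the details as our assumptions, not the authors' code):

```python
import numpy as np

def smoothing_rho(U, V, W, gamma=1.0, tau=1.0):
    """Smoothing regularizer rho_tau(Phi) of Eq. (13); tau = 0 gives the
    simultaneous-network limit (Eq. 16); W = 0, tau = 0 gives Eq. (17)."""
    u = np.linalg.norm(U)
    v = np.linalg.norm(V)
    w = np.linalg.norm(W)
    assert gamma * w < 1.0, "stability condition gamma * ||W|| < 1 (Eq. 15)"
    rho0 = gamma * u * v / (1.0 - gamma * w)
    if tau == 0.0:
        return rho0
    return rho0 * (1.0 - np.exp((gamma * w - 1.0) / tau))
```

The penalty added to the mean squared error in Eq. (12) is then λ · smoothing_rho(U, V, W, gamma, tau)².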
In the next section, we evaluate the new regularizer for the task of predicting the
U.S. Index of Industrial Production. Additional empirical tests can be found in
(Wu & Moody 1996).
3 Predicting the U.S. Index of Industrial Production
The Index of Industrial Production (IP) is one of the key measures of economic
activity. It is computed and published monthly. Our task is to predict the one-month rate of change of the index from January 1980 to December 1989 for models
trained from January 1950 to December 1979. The exogenous inputs we have used
include 8 time series such as the index of leading indicators, housing starts, the
money supply M2, the S&P 500 Index. These 8 series are also recorded monthly.
In previous studies by (Moody, Levin & Rehfuss 1993), with the same defined
training and test data sets, the normalized prediction errors of the one month rate
of change were 0.81 with the neuz neural network simulator, and 0.75 with the
proj neural network simulator.
We have simulated feedforward and recurrent neural network models. Both models
consist of two layers. There are 9 input units in the recurrent model, which receive the 8 exogenous series and the previous month IP index change. We set the
time-delayed length in the recurrent connections T = 1. The feedforward model is
constructed with 36 input units, which receive 4 time-delayed versions of each input
series. The time-delay lengths are 1, 3, 6 and 12, respectively. The activation functions of hidden units in both feedforward and recurrent models are tanh functions.
The number of hidden units varies from 2 to 6. Each model has one linear output
unit.
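The 36-unit input layer (9 series × the 4 delays 1, 3, 6 and 12) can be built with a small helper. This is a sketch with our own variable names; `series` is assumed to hold the 8 exogenous series plus the previous-month IP change as its columns.

```python
import numpy as np

def lagged_design(series, delays=(1, 3, 6, 12)):
    """Build the 36-column feedforward input matrix: for each of the 9 monthly
    series (columns of `series`), include its value 1, 3, 6 and 12 months
    back.  Row t of the output corresponds to month t + max(delays)."""
    T, k = series.shape
    m = max(delays)
    blocks = [series[m - d: T - d] for d in delays]   # each (T - m, k)
    return np.hstack(blocks)                          # (T - m, k * len(delays))
```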
We have divided the data from January 1950 to December 1979 into four nonoverlapping sub-sets. One sub-set consists of 70% of the original data and each of
the other three subsets consists of 10% of the original data. The larger sub-set is
used as training data and the three smaller sub-sets are used as validation data.
These three validation data sets are respectively used for determination of early
stopped training, selecting the regularization parameter and selecting the number
of hidden units.
We have formed 10 random training-validation partitions. For each training-validation partition, three networks with different initial weight parameters are
trained. Therefore, our prediction committee is formed by 30 networks.
The committee error is the average of the errors of all committee members. All
networks in the committee are trained simultaneously and stopped at the same
time based on the committee error of a validation set. The value of the regularization parameter and the number of hidden units are determined by minimizing the
committee error on separate validation sets.
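The protocol above can be sketched as follows. The interface is hypothetical: `train_step` stands in for one epoch of simultaneous training of all 30 members, and `committee_val_error` for the average member error on the early-stopping validation set.

```python
def committee_early_stopping(train_step, committee_val_error,
                             max_epochs=500, patience=20):
    """Advance all committee members together one epoch at a time and stop
    when the committee (average) validation error has not improved for
    `patience` epochs; return the best epoch and its error."""
    best_err, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_step(epoch)
        err = committee_val_error()
        if err < best_err:
            best_err, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break
    return best_epoch, best_err
```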
A Smoothing Regularizer for Recurrent Neural Networks

Table 1: Normalized prediction errors for the one-month rate of return on the U.S.
Index of Industrial Production (Jan. 1980 - Dec. 1989). Each result is based on 30
networks.

Model         Regularizer    Mean ± Std      Median   Max      Min      Committee
Recurrent     Smoothing      0.646 ± 0.008   0.647    0.657    0.632    0.639
Networks      Weight Decay   0.734 ± 0.018   0.737    0.767    0.704    0.734
Feedforward   Smoothing      0.700 ± 0.023   0.707    0.729    0.654    0.693
Networks      Weight Decay   0.745 ± 0.043   0.748    0.805    0.676    0.731

Table 1 compares the out-of-sample performance of recurrent networks and feedforward
networks trained with our smoothing regularizer to that of networks trained
with standard weight decay. The results are based on 30 networks. As shown, the
smoothing regularizer again outperforms standard weight decay with 95% confidence (in a t-distribution hypothesis test) in both cases of recurrent networks and feedforward networks. We also list the median, maximal and minimal prediction errors
over 30 predictors. The last column gives the committee results, which are based on
the simple average of 30 network predictions. We see that the median, maximal and
minimal values and the committee results obtained with the smoothing regularizer
are all smaller than those obtained with standard weight decay, in both recurrent
and feedforward network models.
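The 95% confidence claim can be reproduced in outline with a two-sample t statistic over the 30 per-network errors. This is a sketch: the paper does not state which t-test variant was used, and we assume the reported Std is the standard deviation across the 30 networks.

```python
import numpy as np

def t_statistic(a, b):
    """Welch two-sample t statistic comparing two sets of per-network errors
    (e.g. 30 smoothing-regularizer errors vs. 30 weight-decay errors)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)
```

Under these assumptions, plugging the recurrent-network row of Table 1 into this formula gives |t| on the order of 25, far beyond the two-sided 5% critical value of roughly 2.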
4 Concluding Remarks
Regularization in learning can prevent a network from overtraining. Several techniques have been developed in recent years, but all these are specialized for feedforward networks. To the best of our knowledge, a regularizer for a recurrent network has
not been reported previously.
We have developed a smoothing regularizer for recurrent neural networks that captures the dependencies of input, output, and feedback weight values on each other.
The regularizer covers both simultaneous and time-lagged recurrent networks, with
feedforward networks and single layer, linear networks as special cases. Our smoothing regularizer for linear networks has the same form as standard weight decay. The
regularizer developed depends on only the network parameters, and can easily be
used. A more detailed description of this work appears in (Wu & Moody 1996).
References
Bishop, C. (1995), 'Training with noise is equivalent to Tikhonov regularization',
Neural Computation 7(1), 108-116.
Chauvin, Y. (1990), Dynamic behavior of constrained back-propagation networks,
in D. Touretzky, ed., 'Advances in Neural Information Processing Systems 2',
Morgan Kaufmann Publishers, San Francisco, CA, pp. 642-649.
Elman, J. (1990), 'Finding structure in time', Cognitive Science 14, 179-211.
Eubank, R. L. (1988), Spline Smoothing and Nonparametric Regression, Marcel
Dekker, Inc.
Girosi, F., Jones, M. & Poggio, T. (1995), 'Regularization theory and neural networks architectures', Neural Computation 7, 219-269.
Hastie, T. J. & Tibshirani, R. J. (1990), Generalized Additive Models, Vol. 43 of
Monographs on Statistics and Applied Probability, Chapman and Hall.
Hoerl, A. & Kennard, R. (1970), 'Ridge regression: biased estimation for nonorthogonal problems', Technometrics 12, 55-67.
Leen, T. (1995), 'From data distributions to regularization in invariant learning',
Neural Computation 7(5), 974-981.
Marcus, C. & Westervelt, R. (1989), Dynamics of analog neural networks with
time delay, in D. Touretzky, ed., 'Advances in Neural Information Processing
Systems 1', Morgan Kaufmann Publishers, San Francisco, CA.
Moody, J. & Rognvaldsson, T. (1995), Smoothing regularizers for feed-forward neural networks, Oregon Graduate Institute Computer Science Dept. Technical
Report, submitted for publication, 1995.
Moody, J., Levin, U. & Rehfuss, S. (1993), 'Predicting the U.S. index of industrial production', In proceedings of the 1993 Parallel Applications in Statistics
and Economics Conference, Zeist, The Netherlands. Special issue of Neural
Network World 3(6), 791-794.
Nowlan, S. & Hinton, G. (1992), 'Simplifying neural networks by soft weight-sharing', Neural Computation 4(4), 473-493.
Pineda, F. (1989), 'Recurrent backpropagation and the dynamical approach to
adaptive neural computation', Neural Computation 1(2), 161-172.
Plaut, D., Nowlan, S. & Hinton, G. (1986), Experiments on learning by back propagation, Technical Report CMU-CS-86-126, Carnegie-Mellon University.
Rumelhart, D., Hinton, G. & Williams, R. (1986), Learning internal representations by error propagation, in D. Rumelhart & J. McClelland, eds, 'Parallel
Distributed Processing: Exploration in the microstructure of cognition', MIT
Press, Cambridge, MA, chapter 8, pp. 319-362.
Scalettar, R. & Zee, A. (1988), Emergence of grandmother memory in feed forward
networks: learning with noise and forgetfulness, in D. Waltz & J. Feldman,
eds, 'Connectionist Models and Their Implications: Readings from Cognitive
Science', Ablex Pub. Corp.
Tikhonov, A. N. & Arsenin, V. I. (1977), Solutions of Ill-posed Problems, Winston;
New York: distributed solely by Halsted Press. Scripta series in mathematics.
Translation editor, Fritz John.
Wahba, G. (1990), Spline models for observational data, CBMS-NSF Regional Conference Series in Applied Mathematics.
Weigend, A., Rumelhart, D. & Huberman, B. (1990), Back-propagation, weight-elimination and time series prediction, in T. Sejnowski, G. Hinton & D. Touretzky, eds, 'Proceedings of the connectionist models summer school', Morgan
Kaufmann Publishers, San Mateo, CA, pp. 105-116.
Werbos, P. (1992), Neurocontrol and supervised learning: An overview and evaluation, in D. White & D. Sofge, eds, 'Handbook of Intelligent Control', Van
Nostrand Reinhold, New York.
Wu, L. & Moody, J. (1996), 'A smoothing regularizer for feedforward and recurrent
neural networks', Neural Computation 8(3), 463-491.
CRICKET WIND DETECTION
John P. Miller
Neurobiology Group, University of California,
Berkeley, California 94720, U.S.A.
A great deal of interest has recently been focused on theories concerning
parallel distributed processing in central nervous systems. In particular,
many researchers have become very interested in the structure and function
of "computational maps" in sensory systems. As defined in a recent review
(Knudsen et al, 1987), a "map" is an array of nerve cells, within which there
is a systematic variation in the "tuning" of neighboring cells for a particular
parameter. For example, the projection from retina to visual cortex is a relatively simple topographic map; each cortical hypercolumn itself contains a
more complex "computational" map of preferred line orientation representing the angle of tilt of a simple line stimulus.
The overall goal of the research in my lab is to determine how a relatively
complex mapped sensory system extracts and encodes information from external stimuli. The preparation we study is the cercal sensory system of
the cricket, Acheta domesticus. Crickets (and many other insects) have two
antenna-like appendages at the rear of their abdomen, covered with hundreds
of "filiform" hairs, resembling bristles on a bottle brush. Deflection of these
filiform hairs by wind currents activates mechanosensory receptors, which
project into the terminal abdominal ganglion to form a topographic representation (or "map") of "wind space". Primary sensory interneurons having
dendritic branches within this afferent map of wind space are selectively
activated by wind stimuli with "relevant" parameters, and generate action
potentials at frequencies that depend upon the value of those parameters.
The "relevant" parameters are thought to be the direction, velocity, and
acceleration of wind currents directed at the animal (Shimozawa & Kanou,
1984a & b). There are only ten pairs of these interneurons which carry the
system's output to higher centers. All ten of these output units are identified, and all can be monitored individually with intracellular electrodes or
simultaneously with extracellular electrodes. The following specific questions are currently being addressed: What are the response properties of the
sensory receptors, and what are the I/O properties of the receptor layer as
a whole? What are the response properties of all the units in the output
layer? Is all of the direction, velocity and acceleration information that is
extracted at the receptor layer also available at the output layer? How is
that information encoded? Are any higher order "features" also encoded?
What is the overall threshold, sensitivity and dynamic range of the system
as a whole for detecting features of wind stimuli?
Michael Landolfa is studying the sensory neurons which serve as the inputs
to the cercal system. The sensory cell layer consists of about 1000 afferent
neurons, each of which innervates a single mechanosensory hair on the cerci.
The input/output relationships of single sensory neurons were characterized
by recording from an afferent axon while presenting appropriate stimuli to
the sensory hairs. The primary results were as follows: 1) Afferents are directionally sensitive. Graphs of afferent response amplitude versus wind direction are approximately sinusoidal, with distinct preferred and anti-preferred
directions. 2) Afferents are velocity sensitive. Each afferent encodes wind
velocity over a range of approximately 1.5 log units. 3) Different afferents
have different velocity thresholds. The overlap of these different sensitivity
curves insures that the system as a whole can encode wind velocities that
span several log units. 4) The nature of the afferent response to deflection
of its sensory hair indicates that the parameter transduced by the afferent is
not hair displacement, but change in hair displacement. Thus, a significant
portion of the processing which occurs within the cercal sensory system is
accomplished at the level of the sensory afferents.
This information about the direction and velocity of wind stimuli is encoded
by the relative firing rates of at least 10 pairs of identified sensory interneurons. A full analysis of the input/output properties of this system requires
that the activity of these output neurons be monitored simultaneously. Shai
Gozani has implemented a computer-based system capable of extracting the
firing patterns of individual neurons from multi-unit recordings. For these
experiments, extracellular electrodes were arrayed along the abdominal nerve
cord in each preparation. Wind stimuli of varying directions, velocities and
frequencies were presented to the animals. The responses of the cells were
analyzed by spike descrimination software based on an algorithm originally
developed by Roberts and Hartline (1975). The algorithm employs multiple
linear filters, and is capable of descriminating spikes that were coincident in
time. The number of spikes that could be descriminated was roughly equal
to the number of independent electrodes. These programs are very powerful, and may be of much more general utility for researchers working on
other invertebrate and vertebrate preparations. Using these programs and
protocols, we have characterized the output of the cercal sensory system
in terms of the simultaneous activity patterns of several pairs of identified
interneurons.
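The spike-discrimination step can be illustrated with a drastically simplified, single-channel stand-in for the multi-channel linear-filter algorithm of Roberts and Hartline (1975). This toy version (our construction, not the published method) slides one normalized template over the record and keeps locally maximal correlation peaks; the real algorithm uses multiple filters across electrodes and can separate coincident spikes.

```python
import numpy as np

def detect_spikes(record, template, threshold=0.9):
    """Slide a mean-subtracted, normalized template along the record and keep
    locally-maximal correlation peaks above `threshold` as putative spikes."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    n = len(t)
    scores = np.empty(len(record) - n + 1)
    for i in range(len(scores)):
        w = record[i:i + n]
        wc = w - w.mean()
        scores[i] = np.dot(t, wc) / (np.linalg.norm(wc) + 1e-12)
    hits = np.where(scores > threshold)[0]
    # suppress non-maximal hits within one template length of a larger peak
    return [int(i) for i in hits
            if scores[i] == scores[max(0, i - n): i + n].max()]
```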
The results of these multi-unit recording studies, as well as studies using single intracellular electrodes, have yielded information about the directional
tuning and velocity sensitivity of the first order sensory interneurons. Tuning curves representing interneuron response amplitude versus wind direction are approximately sinusoidal, as was the case for the sensory afferents.
Sensitivity curves representing interneuron response amplitude versus wind
velocity are sigmoidal, with "operating ranges" of about 1.5 log units. The
interneurons are segregated into several distinct classes having different but
overlapping operating ranges, such that the direction and velocity of any
wind stimulus can be uniquely represented as the ratio of activity in the different interneurons. Thus, the overlap of the different direction and velocity
sensitivity curves in approximately 20 interneurons insures that the system
as a whole can encode the characteristics of wind stimuli having directions
that span 360 degrees and velocities that span at least 4 orders of magnitude.
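As a toy illustration of such a population code (entirely our construction; the functional forms and parameters are not fitted to the data described above), one can multiply a sinusoidal direction factor by a sigmoidal velocity factor:

```python
import numpy as np

def interneuron_rate(wind_dir_deg, wind_speed, pref_dir_deg, v_half,
                     r_max=100.0):
    """Toy rate = (sinusoidal directional tuning) x (sigmoidal velocity
    sensitivity).  The direction factor peaks at the preferred direction and
    vanishes at the anti-preferred one; the velocity factor is 1/2 at v_half."""
    direction = 0.5 * (1.0 + np.cos(np.radians(wind_dir_deg - pref_dir_deg)))
    velocity = 1.0 / (1.0 + (v_half / np.maximum(wind_speed, 1e-12)) ** 2)
    return r_max * direction * velocity
```

A bank of such units with staggered preferred directions and velocity half-points tiles 360 degrees and several log units of velocity, so the stimulus can in principle be read off from ratios of responses, as the text describes.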
We are particularly interested in the mechanisms underlying directional sensitivity in some of the first-order sensory interneurons. Identified interneurons with different morphologies have very different directional sensitivities.
The excitatory receptive fields of the different interneurons have been shown
to be directly related to the position of their dendrites within the topographic
map of wind space formed by the filiform afferents discussed above (Bacon
& Murphey, 1984; Jacobs & Miller,1985; Jacobs, Miller & Murphey, 1986).
The precise shapes of the directional tuning curves have been shown to be
dependent upon two additional factors. First, local inhibitory interneurons
can have a strong influence over a cell's response by shunting excitatory inputs from particular directions, and by reducing spontaneous activity during
stimuli from a cell's "null" direction. Second, the "electroanatomy" of a neuron's dendritic branches determines the relative weighting of synaptic inputs
onto its different arborizations.
Some specific aims of our continuing research are as follows: 1) to characterize the distribution of all synaptic inputs onto several different types of
identified interneurons, 2) to measure the functional properties of individual
dendrites of these cell types, 3) to locate the spike initiating zones of the
cells, and 4) to synthesize a quantitative explanation of signal processing by
each cell. Steps 1,2 & 3 are being accomplished through electrophysiological
experiments. Step 4 is being accomplished by developing a compartmental
model for each cell type and testing the model through further physiological experiments. These computer modeling studies are being carried out by
Rocky Nevin and John Tromp. For these models, the structure of each interneuron's dendritic branches are of particular functional importance, since
the flow of bioelectrical currents through these branches determine how signals received from "input" cells are "integrated" and transformed into meaningful output which is transmitted to higher centers.
We are now at a point where we can begin to understand the operation of
the system as a whole in terms of the structure, function and synaptic connectivity of the individual neurons. The proposed studies will also lay the
technical and theoretical ground work for future studies into the nature of
signal "decoding" and higher-order processing in this preparation, mechanisms underlying the development, self-organization and regulative plasticity
of units within this computational map, and perhaps information processing
in more complex mapped sensory systems.
REFERENCES
Bacon, J.P. and Murphey, R.K. (1984) Receptive fields of cricket (Acheta domesticus) are determined by their dendritic structure. J. Physiol. (Lond) 352:601
Jacobs, G.A. and Miller, J.P. (1985) Functional properties of individual neuronal branches isolated in situ by laser photoinactivation. Science, 228: 344-346
Jacobs, G.A., Miller, J.P. and Murphey, R.K. (1986) Cellular mechanisms
underlying directional sensitivity of an identified sensory interneuron.
J. Neurosci. 6(8): 2298-2311
Knudsen, E.I., du Lac, S. and Esterly, S.D. (1987) Computational maps in the brain. Annual Review of Neuroscience 10: 41-66
Roberts, W.M. and Hartline, D.K. (1975) Separation of multi-unit nerve impulse trains by a multi-channel linear filter algorithm. Brain Res. 94: 141-149.
Shimozawa, T. and Kanou, M. (1984a) Varieties of filiform hairs: range fractionation by sensory afferents and cercal interneurons of a cricket. J. Comp. Physiol. A. 155: 485-493
Shimozawa, T. and Kanou, M. (1984b) The aerodynamics and sensory physiology of range fractionation in the cercal filiform sensilla of the cricket Gryllus bimaculatus. J. Comp. Physiol. A. 155: 495-505
Code
Michael DeWeese
Sloan Center, Salk Institute
La Jolla, CA 92037
deweese@salk.edu
Abstract
Recent experiments show that the neural codes at work in a wide
range of creatures share some common features. At first sight, these
observations seem unrelated. However, we show that these features
arise naturally in a linear filtered threshold crossing (LFTC) model
when we set the threshold to maximize the transmitted information.
This maximization process requires neural adaptation to not only
the DC signal level, as in conventional light and dark adaptation,
but also to the statistical structure of the signal and noise distributions. We also present a new approach for calculating the mutual
information between a neuron's output spike train and any aspect
of its input signal which does not require reconstruction of the input signal. This formulation is valid provided the correlations in
the spike train are small, and we provide a procedure for checking
this assumption. This paper is based on joint work (DeWeese [1],
1995). Preliminary results from the LFTC model appeared in a
previous proceedings (DeWeese [2], 1995), and the conclusions we
reached at that time have been reaffirmed by further analysis of the
model.
1 Introduction
Most sensory receptor cells produce analog voltages and currents which are smoothly
related to analog signals in the outside world. Before being transmitted to the brain,
however, these signals are encoded in sequences of identical pulses called action
potentials or spikes. We would like to know if there is a universal principle at work
in the choice of these coding strategies. The existence of such a potentially powerful
theoretical tool in biology is an appealing notion, but it may not turn out to be
useful. Perhaps the function of biological systems is best seen as a complicated
compromise among constraints imposed by the properties of biological materials,
the need to build the system according to a simple set of development rules, and
282
M. DEWEESE
the fact that current systems must arise from their ancestors by evolution through
random change and selection. In this view, biology is history, and the search for
principles (except for evolution itself) is likely to be futile. Obviously, we hope that
this view is wrong, and that at least some of biology is understandable in terms of the
same sort of universal principles that have emerged in the physics of the inanimate
world.
Adrian noticed in the 1920's that every peripheral neuron he checked produced discrete, identical pulses no matter what input he administered (Adrian, 1928). From
the work of Hodgkin and Huxley we know that these pulses are stable non-linear
waves which emerge from the non-linear dynamics describing the electrical properties of the nerve cell membrane. These dynamics in turn derive from the molecular
dynamics of specific ion channels in the cell membrane. By analogy with other nonlinear wave problems, we thus understand that these signals have propagated over a
long distance - e.g. ~ one meter from touch receptors in a finger to their targets
in the spinal cord - so that every spike has the same shape. This is an important
observation since it implies that all information carried by a spike train is encoded
in the arrival times of the spikes. Since a creature's brain is connected to all of its
sensory systems by such axons, all the creature knows about the outside world must
be encoded in spike arrival times.
Until recently, neural codes have been studied primarily by measuring changes in the
rate of spike production by different input signals. Recently it has become possible
to characterize the codes in information-theoretic terms, and this has led to the
discovery of some potentially universal features of the code (Bialek, 1996) (or see
(Bialek, 1993) for a brief summary). They are:
1. Very high information rates. The record so far is 300 bits per second in a
cricket mechanical sensor.
2. High coding efficiency. In cricket and frog vibration sensors, the information
rate is within a factor of 2 of the entropy per unit time of the spike train.
3. Linear decoding. Despite evident non-linearities of the nervous system, spike
trains can be decoded by simple linear filters. Thus we can write an estimate
of the analog input signal s(t) as s_est(t) = \sum_i K_1(t - t_i), with K_1 chosen to
minimize the mean-squared errors \langle \chi^2 \rangle in the estimate. Adding non-linear
K_2(t - t_i, t - t_j) terms does not significantly reduce \chi^2.
4. Moderate signal-to-noise ratios (SNR). The SNR in these experiments was
defined as the ratio of power spectra of the input signal to the noise referred
back to the input; the power spectrum of the noise was approximated by \chi^2
defined above. All these examples of high information transmission rates
have SNR of order unity over a broad bandwidth, rather than high SNR in
a narrow band.
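Item 3 can be made concrete in discrete time: represent the spike train as a binary vector and fit the kernel K_1 by least squares. This is an assumed implementation; the cited experiments obtain the optimal filter differently (e.g., in the frequency domain), and the circular edge handling here is a simplification.

```python
import numpy as np

def spike_design(spikes, width):
    """Columns are lagged copies of the binary spike train (circular edges),
    one column per lag in [-width, width] (acausal window)."""
    return np.column_stack([np.roll(spikes, lag)
                            for lag in range(-width, width + 1)])

def fit_decoding_filter(spikes, stimulus, width=20):
    """Least-squares kernel K1 for s_est(t) = sum_i K1(t - t_i): regress the
    stimulus on lagged copies of the spike train."""
    K, *_ = np.linalg.lstsq(spike_design(spikes, width), stimulus, rcond=None)
    return K

def decode(spikes, K, width=20):
    """Linear reconstruction of the stimulus from the spike train."""
    return spike_design(spikes, width) @ K
```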
We will try to tie all of these observations together by elevating the first to a principle:
The neural code is chosen to maximize information transmission where information
is quantified following Shannon. We apply this principle in the context of a simple
model neuron which converts analog signals into spike trains. Before we consider
a specific model, we will present a procedure for expanding the information rate of
any point process encoding of an analog signal about the limit where the spikes are
uncorrelated. We will briefly discuss how this can be used to measure information
rates in real neurons.
Optimization Principles for the Neural Code
This work will also appear in Network.
2  Information Theory
In the 1940's, Shannon proposed a quantitative definition for "information" (Shannon, 1949). He argued first that the average amount of information gained by
observing some event z is the entropy of the distribution from which z is chosen,
and then showed that this is the only definition consistent with several plausible
requirements. This definition implies that the amount of information one signal can
provide about some other signal is the difference between the entropy of the first
signal's a priori distribution and the entropy of its conditional distribution. The
average of this quantity is called the mutual (or transmitted) information. Thus,
we can write the amount of information that the spike train, $\{t_i\}$, tells us about the
time dependent signal, s(t), as

    $I = \left\langle \int \prod \frac{\mathcal{D}t_i}{N!} \, P[\{t_i\}|s(\cdot)] \, \log_2 \frac{P[\{t_i\}|s(\cdot)]}{P[\{t_i\}]} \right\rangle_s ,$    (1)

where $\int\prod\mathcal{D}t_i$ is shorthand for integration over all arrival times $\{t_i\}$ from 0 to T
and summation over the total number of spikes, N (we have divided the integration
measure by N! to prevent over-counting due to equivalent permutations of the spikes,
rather than absorb this factor into the probability distribution as we did in (DeWeese
[1], 1995)). $\langle\cdots\rangle_s = \int \mathcal{D}s\, P[s(\cdot)] \cdots$ denotes integration over the space of functions
s(t) weighted by the signal's a priori distribution, $P[\{t_i\}|s(\cdot)]$ is the probability
distribution for the spike train when the signal is fixed and $P[\{t_i\}]$ is the spike
train's average distribution.
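A discrete toy version of Eq. (1), with a two-valued signal and a two-valued response standing in for the spike train: the mutual information computed as "prior entropy minus average conditional entropy" agrees with the averaged log-ratio form used above. All probabilities are made up for illustration.

```python
import numpy as np

# Prior over two signal values, and the response distribution given each.
P_s = np.array([0.5, 0.5])
P_r_given_s = np.array([[0.9, 0.1],    # response distribution, signal 0
                        [0.2, 0.8]])   # response distribution, signal 1
P_r = P_s @ P_r_given_s                # marginal (average) response distribution

def H(p):
    """Shannon entropy in bits."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Entropy difference form: H(prior) minus average conditional entropy.
I_entropy = H(P_r) - np.sum(P_s * [H(row) for row in P_r_given_s])
# Averaged log-ratio form, as in Eq. (1).
I_kl = sum(P_s[i] * P_r_given_s[i, j] * np.log2(P_r_given_s[i, j] / P_r[j])
           for i in range(2) for j in range(2))
assert abs(I_entropy - I_kl) < 1e-12
```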
3  Arbitrary Point Process Encoding of an Analog Signal
In order to derive a useful expression for the information given by Eq. (1), we need
an explicit representation for the conditional distribution of the spike train. If we
choose to represent each spike as a Dirac delta function, then the spike train can be
defined as

    $p(t) = \sum_{i=1}^{N} \delta(t - t_i).$    (2)
This is the output spike train for our cell, so it must be a functional of both the
input signal, s(t), and all the noise sources in the cell which we will lump together
and call $\eta(t)$. Choosing to represent the spikes as delta functions allows us to think
of p(t) as the probability of finding a spike at time t when both the signal and noise
are specified. In other words, if the noise were not present, p would be the cell's
firing rate, singular though it is. This implies that in the presence of noise the cell's
observed firing rate, r(t), is the noise average of p(t):

    $r(t) = \int \mathcal{D}\eta\, P[\eta(\cdot)|s(\cdot)]\, p(t) = \langle p(t)\rangle_\eta .$    (3)
Notice that by averaging over the conditional distribution for the noise rather than its
a priori distribution as we did in (DeWeese [1], 1995), we ensure that this expression
is still valid if the noise is signal dependent, as is the case in many real neurons.
For any particular realization of the noise, the spike train is completely specified
which means that the distribution for the spike train when both the signal and
noise are fixed is a modulated Poisson process with a singular firing rate, p(t). We
emphasize that this is true even though we have assumed nothing about the encoding
of the signal in the spike train when the noise is not fixed. One might then assume
that the conditional distribution for the spike train for fixed signal would be the
noise average of the familiar formula for a modulated Poisson process:

    $P[\{t_i\}|s(\cdot)] = \left\langle \exp\!\left(-\int_0^T dt\, p(t)\right) \prod_{i=1}^{N} p(t_i) \right\rangle_\eta .$    (4)
However, this is only approximately true due to subtleties arising from the singular
nature of p(t). One can derive the correct expression (DeWeese [1], 1995) by carefully taking the continuum limit of an approximation to this distribution defined for
discrete time. The result is the same sum of noise averages over products of p's
produced by expanding the exponential in Eq. (4) in powers of $\int dt\, p(t)$, except that
all terms containing more than one factor of p(t) at equal times are not present.
The exact answer is:

    $P[\{t_i\}|s(\cdot)] = \left\langle \exp\!\left(-\int_0^T dt\, p(t)\right) \prod_{i=1}^{N} p(t_i) \right\rangle_\eta^{-} ,$    (5)
where the superscripted minus sign reminds us to remove all terms containing
products of coincident p's after expanding everything in the noise average in powers
of p.
4  Expanding About the Poisson Limit
An exact solution for the mutual information between the input signal and spike
train would be hopeless for all but a few coding schemes. However, the success
of linear decoding coupled with the high information rates seen in the experiments
suggests to us that the spikes might be transmitting roughly independent information
(see (DeWeese [1], 1995) or (Bialek, 1993) for a more fleshed out argument on this
point). If this is the case, then the spike train should approximate a Poisson process.
We can explicitly show this relationship by performing a cluster expansion on the
right hand side of Eq. (5):

    $P[\{t_i\}|s(\cdot)] = \exp\!\left(-\int_0^T dt\, r(t)\right) \prod_{i=1}^{N} r(t_i) \left[ 1 + \sum_{m=2}^{\infty} C_\eta(m) \right],$    (6)

where we have defined $\Delta p(t) \equiv p(t) - \langle p(t)\rangle_\eta = p(t) - r(t)$ and introduced $C_\eta(m)$,
which collects all terms containing m factors of $\Delta p$. For example,

    $C_\eta(2) \equiv \frac{1}{2} \sum_{i \neq j} \frac{\langle \Delta p_i \Delta p_j \rangle_\eta}{r_i r_j} - \int dt' \sum_{i=1}^{N} \frac{\langle \Delta p(t') \Delta p_i \rangle_\eta}{r_i} + \frac{1}{2} \int dt'\, dt''\, \langle \Delta p(t') \Delta p(t'') \rangle_\eta .$    (7)

Clearly, if the correlations between spikes are small in the noise distribution, then
the $C_\eta$'s will be small, and the spike train will nearly approximate a modulated
Poisson process when the signal is fixed.
Performing the cluster expansion on the signal average of Eq. (5) yields a similar
expression for the average distribution for the spike train:

    $P[\{t_i\}] = e^{-\bar{r}T}\, \bar{r}^{\,N} \left[ 1 + \sum_{m=2}^{\infty} C_{\eta,s}(m) \right],$    (8)

where T is the total duration of the spike train, $\bar{r}$ is the average firing rate, and
$C_{\eta,s}(m)$ is identical to $C_\eta(m)$ with these substitutions: $r(t) \to \bar{r}$, $\Delta p(t) \to \delta p(t) \equiv p(t) - \bar{r}$, and $\langle\cdots\rangle_\eta \to \langle\langle\cdots\rangle_\eta\rangle_s$. In this case, the distribution for a homogeneous
Poisson process appears in front of the square brackets, and inside we have 1 +
corrections due to correlations in the average spike train.

5  The Transmitted Information
Inserting these expressions for $P[\{t_i\}|s(\cdot)]$ and $P[\{t_i\}]$ (taken to all orders in $\Delta p$ and
$\delta p$, respectively) into Eq. (1), and expanding to second non-vanishing order in $\bar{r}\tau_c$,
results in a useful expression for the information (DeWeese [1], 1995):
(9)
where we have suppressed the explicit time notation in the correction term inside the
double integral. If the signal and noise are stationary then we can replace the $\int_0^T dt$
in front of each of these terms by T, illustrating that the information does indeed
grow linearly with the duration of the spike train.
The leading term, which is exact if there are no correlations between the spikes,
depends only on the firing rate, and is never negative. The first correction is positive
when the correlations between pairs of spikes are being used to encode the signal,
and negative when individual spikes carry redundant information. This correction
term is cumbersome but we present it here because it is experimentally accessible,
as we now describe.
This formula can be used to measure information rates in real neurons without
having to assume any method of reconstructing the signal from the spike train. In
the experimental context, averages over the (conditional) noise distribution become
repeated trials with the same input signal, and averages over the signal are accomplished by summing over all trials. r(t), for example, is the histogram of the spike
trains resulting from the same input signal, while $\bar{r}(t)$ is the histogram of all spike
trains resulting from all input signals. If the signal and noise are stationary, then
$\bar{r}$ will not be time dependent. $\langle p(t)p(t')\rangle_\eta$ is in general a 2-dimensional histogram
which is signal dependent: It is equal to the number of spike trains resulting from
some specific input signal which simultaneously contain a spike in the time bins
containing t and t'. If the noise is stationary, then this is a function of only t - t',
and it reduces to a 1-dimensional histogram.
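The histogram recipe described in this paragraph can be sketched as follows; the simulated trials, rates, and bin sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "repeated trials with the same input signal": one row per trial,
# one column per time bin, spikes drawn with a time-varying probability.
n_trials, n_bins, dt = 200, 100, 0.005
rate_true = 20.0 * (1 + np.sin(np.linspace(0, 2 * np.pi, n_bins)))   # in Hz
trials = (rng.random((n_trials, n_bins)) < rate_true * dt).astype(float)

# r(t): the histogram of spike times across repeated trials (the PSTH).
r_t = trials.mean(axis=0) / dt

# <p(t)p(t')>_eta: for each pair of bins, the fraction of trials containing
# a spike in both bins -- the 2-dimensional histogram described above.
coincidence = (trials.T @ trials) / n_trials / dt**2

# Sanity check: since a bin holds at most one spike, the diagonal of the
# coincidence histogram reduces to the firing-rate histogram.
assert np.allclose(np.diag(coincidence) * dt, r_t)
```

Pooling the same counts over trials with *different* signals would give the average rate and the signal-averaged correlations that enter the average distribution of the spike train.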
In order to measure the full amount of information contained in the spike train, it
is crucial to bin the data in small enough time bins to resolve all of the structure in
r(t), $\langle p(t)p(t')\rangle_\eta$, and so on. We have assumed nothing about the noise or signal;
in fact, they can even be correlated so that the noise averages are signal dependent
without changing the experimental procedure. The experimenter can also choose
to fix only some aspects of the sensory data during the noise averaging step, thus
measuring the mutual information between the spike train and only these aspects of
the input. The only assumption we have made up to this point is that the spikes
are roughly uncorrelated which can be checked by comparing the leading term to
the first correction, just as we do for the model we discuss in the next section.
6  The Linear Filtered Threshold Crossing Model
As we reported in a previous proceedings (DeWeese [2], 1995) (and see (DeWeese
[1], 1995) for details), the leading term in Eq. (9) can be calculated exactly in the
case of a linear filtered threshold crossing (LFTC) model when the signal and noise
are drawn from independent Gaussian distributions. Unlike the Integrate and Fire
(IF) model, the LFTC model does not have a "renewal process" which resets the
value of the filtered signal to zero each time the threshold is reached. Stevens and
Zador have developed an alternative formulation for the information transmission
which is better suited for studying the IF model under some circumstances (Stevens,
1995), and they give a nice discussion on the way in which these two formulations
compliment each other.
For the LFTC model, the leading term is a function of only three variables: 1) the
threshold height; 2) the ratio of the variances of the filtered signal and the filtered
noise, $\langle s^2(t)\rangle_s / \langle \eta^2(t)\rangle_\eta$, which we refer to as the SNR; and 3) the ratio of correlation
times of the filtered signal and the filtered noise, $\tau_s/\tau_\eta$, where $\tau_s^2 \equiv \langle s^2(t)\rangle_s / \langle \dot{s}^2(t)\rangle_s$,
and similarly for the noise. In the equations in this last sentence, and in what follows,
we absorb the linear filter into our definitions for the power spectra of the signal and
noise. Near the Poisson limit, the linear filter can only affect the information rate
through its generally weak influence on the ratios of variances and correlation times
of the signal and noise, so we focus on the threshold to understand adaptation in
our model cell.
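A minimal simulation of an LFTC-style encoder, under our own simplifying assumptions (moving-average filters, unit-variance Gaussian inputs): spikes are emitted at upward threshold crossings with no reset, and raising the threshold lowers the spike count.

```python
import numpy as np

rng = np.random.default_rng(2)

def smooth(x, width):
    """Crude low-pass filter: moving average of the given width."""
    return np.convolve(x, np.ones(width) / width, mode="same")

# Filtered Gaussian signal and somewhat more broadband filtered noise,
# summed into a voltage-like variable.
T = 20000
signal = smooth(rng.normal(size=T), 80)
noise = smooth(rng.normal(size=T), 40)
v = signal + noise

def spike_times(v, theta):
    """Spikes at upward threshold crossings; no reset, as in the LFTC model."""
    above = v >= theta
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Raising the threshold reduces the spike count; far above the signal's
# range the cell falls silent.
lo, hi = spike_times(v, 0.0), spike_times(v, 0.5)
assert len(lo) > len(hi) and len(spike_times(v, 10.0)) == 0
```

Sweeping the threshold in such a simulation and estimating the transmitted information at each setting is the numerical analogue of the optimization discussed in this section.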
When the ratio of correlation times of the signal and noise is moderate, we find a
maximum for the information rate near the Poisson limit - the leading term is roughly 10 times
the first correction. For the interesting and physically relevant case where the noise
is slightly more broadband than the signal as seen through the cell's prefiltering,
we find that the maximum information rate is achieved with a threshold setting
which does not correspond to the maximum average firing rate illustrating that this
optimum is non-trivial. Provided the SNR is about one or less, linear decoding does
well - a lower bound on the information rate based on optimal linear reconstruction
of the signal is within a factor of two of the total available information in the spike
train. As SNR grows unbounded, this lower bound asymptotes to a constant. In
addition, the required timing resolution for extracting the information from the spike
train is quite modest - discretizing the spike train into bins which are half as wide
as the correlation time of the signal degrades the information rate by less than 10%.
However, at maximum information transmission, the information per spike is low, $R_{\max}/\bar{r} \approx 0.7$ bits/spike, much lower than the 3 bits/spike seen in the cricket. This low
information rate drives the efficiency down to 1/3 of the experimental values despite
the model's robustness to timing jitter. Aside from the low information rate, the
optimized model captures all the experimental features we set out to explain .
7  Concluding Remarks
We have derived a useful expression for the transmitted information which can be
used to measure information rates in real neurons provided the correlations between
spikes are shorter range than the average inter-spike interval. We have described
a method for checking this hypothesis experimentally. The four seemingly unrelated features that were common to several experiments on a variety of neurons
are actually the natural consequences of maximizing the transmitted information.
Specifically, they are all due to the relation between $\bar{r}$ and $\tau_c$ that is imposed by
the optimization. We reiterate our previous prediction (DeWeese [2], 1995; Bialek,
1993): Optimizing the code requires that the threshold adapt not only to cancel
DC offsets, but it must adapt to the statistical structure of the signal and noise.
Experimental hints at adaptation to statistical structure have recently been seen in
the fly visual system (de Ruyter van Steveninck, 1994) and in the salamander retina
(Warland, 1995).
8  References
M. DeWeese 1995 Optimization Principles for the Neural Code (Dissertation, Princeton University)
M. DeWeese and W. Bialek 1995 Information flow in sensory neurons Il Nuovo
Cimento 17D 733-738
E. D. Adrian 1928 The Basis of Sensation (New York: W. W. Norton)
F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek 1996 Neural
Coding (Boston: MIT Press)
W. Bialek, M. DeWeese, F. Rieke, and D. Warland 1993 Bits and Brains: Information Flow in the Nervous System Physica A 200 581-593
C. E. Shannon 1949 Communication in the presence of noise, Proc. I. R. E. 37
10-21
C. Stevens and A. Zador 1996 Information Flow Through a Spiking Neuron in M.
Hasselmo ed Advances in Neural Information Processing Systems, Vol 8 (Boston:
MIT Press) (this volume)
R. R. de Ruyter van Steveninck, W. Bialek, M. Potters, R. H. Carlson 1994 Statistical
adaptation and optimal estimation in movement computation by the blowfly visual
system, in IEEE International Conference On Systems, Man, and Cybernetics pp
302-307
D. Warland, M. Berry, S. Smirnakis, and M. Meister 1995 personal communication
Optimal Asset Allocation using Adaptive Dynamic Programming
Ralph Neuneier*
Siemens AG, Corporate Research and Development
Otto-Hahn-Ring 6, D-81730 Munchen, Germany
Abstract
In recent years, the interest of investors has shifted to computerized asset allocation (portfolio management) to exploit the growing
dynamics of the capital markets. In this paper, asset allocation is
formalized as a Markovian Decision Problem which can be optimized by applying dynamic programming or reinforcement learning
based algorithms. Using an artificial exchange rate, the asset allocation strategy optimized with reinforcement learning (Q-Learning)
is shown to be equivalent to a policy computed by dynamic programming. The approach is then tested on the task to invest liquid
capital in the German stock market. Here, neural networks are
used as value function approximators. The resulting asset allocation strategy is superior to a heuristic benchmark policy. This is
a further example which demonstrates the applicability of neural
network based reinforcement learning to a problem setting with a
high dimensional state space.
1  Introduction
Billions of dollars are daily pushed through the international capital markets while
brokers shift their investments to more promising assets. Therefore, there is a great
interest in achieving a deeper understanding of the capital markets and in developing
efficient tools for exploiting the dynamics of the markets.
* Ralph.Neuneier@zfe.siemens.de, http://www.siemens.de/zfe.Jlll/homepage.html
Asset allocation (portfolio management) is the investment of liquid capital to various
trading opportunities like stocks, futures, foreign exchanges and others. A portfolio
is constructed with the aim of achieving a maximal expected return for a given
risk level and time horizon. To compose an optimal portfolio, the investor has
to solve a difficult optimization problem consisting of two phases (Brealy, 1991).
First, the expected yields are estimated simultaneously with a certainty measure.
Second, based on these estimates, a portfolio is constructed obeying the risk level
the investor is willing to accept (mean-variance techniques). The problem is further
complicated if transaction costs must be considered and if the investor wants to
revise the decision at every time step. In recent years, neural networks (NN) have
been successfully used for the first task. Typically, a NN delivers the expected
future values of a time series based on data of the past. Furthermore, a confidence
measure which expresses the certainty of the prediction is provided.
In the following, the modeling phase and the search for an optimal portfolio are
combined and embedded in the framework of Markovian Decision Problems, MDP.
That theory formalizes control problems within stochastic environments (Bertsekas,
1987, Elton, 1971). If the discrete state space is small and if an accurate model of
the system is available, MDP can be solved by conventional Dynamic Programming,
DP. On the other extreme, reinforcement learning methods, e.g. Q-Learning, QL,
can be applied to problems with large state spaces and with no appropriate model
available (Singh, 1994).
2  Portfolio Management is a Markovian Decision Problem
The following simplifications do not restrict the generalization of the proposed methods with respect to real applications but will help to clarify the relationship between
MDP and portfolio optimization.
? There is only one possible asset for a Deutsch-Mark based investor, say a
foreign currency called Dollar, US-$.
? The investor is small and does not influence the market by her/his trading.
? The investor has no risk aversion and always invests the total amount.
? The investor may trade at each time step for an infinite time horizon.
MDP provide a model for multi-stage decision making problems in stochastic environments. MDP can be described by a finite state set $S = \{1, \ldots, n\}$, a finite set
$U(i)$ of admissible control actions for every state $i \in S$, a set of transition probabilities $p_{ij}(u)$ which describe the dynamics of the system, and a return function^1
$r(i,j,u(i))$, with $i,j \in S$, $u(i) \in U(i)$. Furthermore, there is a stationary policy
$\pi(i)$, which delivers for every state an admissible action $u(i)$. One can compute the
value-function $V_i^\pi$ of a given state and policy,

    $V_i^\pi = E\left[ \sum_{t=0}^{\infty} \gamma^t R(i_t, \pi(i_t)) \right],$    (1)
1 In the MDP-literature, the return often depends only on the current state i , but the
theory extends to the case of r = r(i,j,u(i)) (see Singh, 1994).
where E indicates the expected value, $\gamma$ is the discount factor with $0 \le \gamma < 1$, and
where R are the expected returns, $R(i, u(i)) = E_j[r(i, j, u(i))]$. The aim is now to find a
policy $\pi^*$ with the optimal value-function $V_i^* = \max_\pi V_i^\pi$ for all states.
In the context discussed here, a state vector consists of elements which describe the
financial time series, and of elements which quantify the current value of the investment. For the simple example above, the state vector is the triple of the exchange
rate, Xt, the wealth of the portfolio, Ct, expressed in the basis currency (here DM),
and a binary variable b, representing the fact that currently the investment is in
DM or US-$.
Note, that out of the variables which form the state vector, the exchange rate is
actually independent of the portfolio decisions, but the wealth and the returns are
not. Therefore, asset allocation is a control problem and may not be reduced to pure
prediction. 2 This problem has the attractive feature that, because the investments
do not influence the exchange rate, we do not need to invest real money during the
training phase of QL until we are convinced that our strategy works.
3  Dynamic Programming: Off-line and Adaptive
The optimal value function V* is the unique solution of the well-known Bellman
equation (Bertsekas, 1987). According to that equation one has to maximize the
expected return for the next step and follow an optimal policy thereafter in order
to achieve global optimal behavior (Bertsekas, 1987). An optimal policy can be
easily derived from V* by choosing a $\pi(i)$ which satisfies the Bellman equation. For
nonlinear systems and non-quadratic cost functions, V* is typically found by using an
iterative algorithm, value iteration, which converges asymptotically to V*. Value
iteration applies repeatedly the operator T for all states i,

    $(TV)(i) = \max_{u(i) \in U(i)} \left[ R(i, u(i)) + \gamma \sum_j p_{ij}(u(i))\, V(j) \right].$    (2)

Value iteration assumes that the expected return function $R(i, u(i))$ and the transition probabilities $p_{ij}$ (i.e. the model) are known. Q-Learning (QL) is a
reinforcement-learning method that does not require a model of the system but
optimizes the policy by sampling state-action pairs and returns while interacting
with the system (Barto, 1989). Let's assume that the investor executes action u(i)
at state i, and that the system moves to a new state j. Let $r(i, j, u(i))$ denote the
actual return. QL then uses the update equation

    $Q(i, u(i)) \leftarrow (1 - \eta)\, Q(i, u(i)) + \eta \left( r(i, j, u(i)) + \gamma \max_{u(j)} Q(j, u(j)) \right),$
    $Q(k, v) \leftarrow Q(k, v), \quad \text{for all } k \neq i \text{ and } v \neq u(i),$    (3)

where $\eta$ is the learning rate and $Q(i, u(i))$ are the tabulated Q-values. One can
prove, that this relaxation algorithm converges (under some conditions) to the optimal Q-values (Singh, 1994).
2To be more precise, the problem only becomes a mUlti-stage decision problem if the
transaction costs are included in the problem.
The selection of the action u( i) should be guided by the trade-off between exploration and exploitation. In the beginning, the actions are typically chosen randomly
(exploration) and in the course of training, actions with larger Q-values are chosen with increasingly higher probability (exploitation). The implementation in the
following experiments is based on the Boltzmann distribution using the actual Q-values and a slowly decreasing temperature parameter (see Barto, 1989).
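Both procedures can be sketched on a small made-up MDP (all transition probabilities and returns below are invented for illustration); as in the paper's first experiment, value iteration with the model and model-free Q-Learning with Boltzmann exploration arrive at the same policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up 2-state, 2-action MDP: P[i, u, j] are transition probabilities,
# R[i, u] the expected returns. Action 1 is designed to be optimal everywhere.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.1, 0.9]]])
R = np.array([[0.0, 1.0],
              [0.0, 2.0]])
gamma = 0.9

# --- Value iteration, Eq. (2): needs the model P and R. ---
V = np.zeros(2)
for _ in range(1000):
    V = np.max(R + gamma * P @ V, axis=1)          # apply the operator T
policy_dp = np.argmax(R + gamma * P @ V, axis=1)

# --- Q-Learning, Eq. (3): model-free, learns from sampled transitions,
# --- selecting actions via a Boltzmann distribution over the Q-values.
Q = np.zeros((2, 2))
eta, temp, i = 0.1, 1.0, 0
for _ in range(30000):
    z = (Q[i] - Q[i].max()) / temp
    probs = np.exp(z) / np.exp(z).sum()            # Boltzmann exploration
    u = rng.choice(2, p=probs)
    j = rng.choice(2, p=P[i, u])                   # environment samples j
    Q[i, u] = (1 - eta) * Q[i, u] + eta * (R[i, u] + gamma * Q[j].max())
    i = j
    temp = max(0.1, temp * 0.9995)                 # slowly cool down

policy_ql = np.argmax(Q, axis=1)
assert (policy_dp == policy_ql).all()              # same strategy found
```

Subtracting `Q[i].max()` before exponentiating keeps the Boltzmann weights numerically stable once the Q-values grow large relative to the temperature.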
4  Experiment I: Artificial Exchange Rate
In this section we use an exchange-rate model to demonstrate how DP and Q-Learning can be used to optimize asset allocation.
The artificial exchange rate Xt is in the range between 1 and 2 representing the
value of 1 US-$ in DM. The transition probabilities Pij of the exchange rate are
chosen to simulate a situation where the Xt follows an increasing trend, but with
higher values of Xt, a drop to very low values becomes more and more probable.
A realization of the time series is plotted in the upper part of fig. 2. The random
state variable Ct depends on the investor's decisions Ut, and is further influenced by
Xt, Xt+1 and Ct-1. A complete state vector consists of the current exchange rate Xt
and the capital Ct, which is always calculated in the basis currency (DM). Its sign
represents the actual currency, i. e., Ct = -1.2 stands for an investment in US-$
worth of 1.2 DM, and Ct = 1.2 for a capital of 1.2 DM. Ct and Xt are discretized in 10
bins each. The transaction costs $\xi = 0.1 + |c/100|$ are a combination of fixed (0.1)
and variable costs ($|c/100|$). Transactions only apply, if the currency is changed
from DM to US-$. The immediate return rt(Xt, Ct, Xt+1, Ut) is computed as in table
1. If the decision has been made to change the portfolio into DM or to keep the
actual portfolio in DM, Ut = DM, then the return is always zero. If the decision
has been made to change the portfolio into US-$ or to keep the actual portfolio in
US-$, Ut = US-$, then the return is equal to the relative change of the exchange
rate weighted with Ct. That return is reduced by the transaction costs $\xi$ if the
investor has to change into US-$.
Table 1: The immediate return function.

                    Ut = DM    Ut = US-$
    Ct in DM           0       relative change of Xt weighted with Ct, minus costs
    Ct in US-$         0       relative change of Xt weighted with Ct
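Table 1's rule can be written as a small function; the explicit "relative change weighted with Ct" formula below is our reading of the text (using |c_t| so that the sign convention for US-$ holdings works out), not taken verbatim from the paper.

```python
def immediate_return(x_t, x_next, c_t, u_t):
    """Immediate return following Table 1. Interpretation (illustrative):
    the gain is |c_t| * (x_{t+1} - x_t) / x_t, and the costs (0.1 fixed
    plus |c|/100 variable) apply only when switching from DM into US-$."""
    if u_t == "DM":
        return 0.0
    gain = abs(c_t) * (x_next - x_t) / x_t
    if c_t > 0:                      # capital currently held in DM
        gain -= 0.1 + abs(c_t) / 100.0
    return gain

# Staying in DM earns nothing; riding a rising rate while already in US-$
# earns the relative change; switching into US-$ additionally pays costs.
assert immediate_return(1.0, 1.1, 1.0, "DM") == 0.0
assert abs(immediate_return(1.0, 1.1, -1.0, "US-$") - 0.1) < 1e-9
assert immediate_return(1.0, 1.1, 1.0, "US-$") < 0.1
```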
The success of the strategies was tested on a realization (2000 data points) of the
exchange rate. The initial investment is 1 DM, at each time step the algorithm has
to decide to either change the currency or remain in the present currency.
As a reinforcement learning method, QL has to interact with the environment to
learn optimal behavior. Thus, a second set of 2000 data was used to learn the Qvalues. The training phase is divided into epochs. Each epoch consists of as many
trials as data exist in the training set. At every trial the algorithm looks at Xt,
chooses randomly a portfolio value Ct and selects a decision. Then the immediate
return and the new state is evaluated to apply eq. 3. The Q-values were initialized
with zero, the learning rate $\eta$ was 0.1. Convergence was achieved after 4 epochs.
Figure 1: The optimal decisions (left) and value function (right).
Figure 2: The exchange rate (top), the capital and the decisions (bottom).
To evaluate the solution QL has found, the DP-algorithm from eq. 2 was implemented using the given transition probabilities. The convergence of DP was very
fast. Only 5 iterations were needed until the average difference between successive
value functions was lower than 0.01. That means 500 updates in comparison to
8000 updates with QL.
The solutions were identical with respect to the resulting policy which is plotted in
fig. 1, left. It can clearly be seen, that there is a difference between the policy of
a DM-based and a US-$-based portfolio. If one has already changed the capital to
US-$, then it is advisable to keep the portfolio in US-$ until the risk gets too high,
i. e. Xt E {1.8, 1.9}. On the other hand, if Ct is still in DM, the risk barrier moves
to lower values depending on the volume of the portfolio. The reason is that the
potential gain by an increasing exchange rate has to cover the fixed and variable
transaction costs. For very low values of Ct, it is forbidden to change even at low Xt
because the fixed transaction costs will be higher than any gain. Figure 2 plots the
exchange rate Xt, the accumulated capital Ct for 100 days, and the decisions Ut.
Let us look at a few interesting decisions. At the beginning, t = 0, the portfolio was
changed immediately to US-$ and kept there for 13 steps until a drop to low rates
Xt became very probable. During the time steps 35-45, the exchange rate oscillated
at higher values. The policy insisted on the DM portfolio, because the
risk was too high. In contrast, looking at the time steps 24 to 28, the policy first
switched back to DM, then there was a small decrease of Xt which was sufficient to
let the investor change again. The following increase justified that decision. The
success of the resulting strategy can be easily recognized by the continuous increase
of the portfolio. Note that the ups and downs of the portfolio curve get higher
in magnitude at the end, because the investor has no risk aversion and the whole
capital is always traded.
5
Experiment II: German Stock Index DAX
In this section the approach is tested on a real world task: assume that an investor
wishes to invest her/his capital into a block of stocks which behaves like the German
stock index DAX. We based the benchmark strategy (short: MLP) on an NN model
which was build to predict the daily changes of the DAX (for details, see Dichtl,
1995). If the prediction of the next day DAX difference is positive then the capital
is invested into DAX otherwise in DM. The input vector of the NN model was
carefully optimized for optimal prediction. We used these inputs (the DAX itself
and 11 other influencing market variables) as the market description part of the
state vector for QL. In order to store the value functions two NNs, one for each
action, with 8 nonlinear hidden neurons and one linear output are used.
The data is split into a training (from 2. Jan. 1986 to 31. Dec. 1992) and a test set
(from 2. Jan. 1993 to 18. Oct. 1995). The return function is defined in the same
way as in section 4 using 0.4% as proportional costs and 0.001 units as fixed costs,
which are realistic for financial institutions. The training proceeds as outlined in
the previous section with η = 0.001 for 1000 epochs.
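A minimal sketch of a transaction-cost-adjusted one-step return of the kind described above, using the stated 0.4% proportional and 0.001-unit fixed costs. The exact functional form of the paper's return function is not shown in this section, so the shape of `trade_return` below is an assumption for illustration only.

```python
def trade_return(capital, x_ratio, changed, prop_cost=0.004, fixed_cost=0.001):
    # One-step return: `x_ratio` is the relative market move over the step,
    # `changed` indicates whether the position was switched, which incurs
    # the proportional and fixed transaction costs described above.
    new_capital = capital * x_ratio
    if changed:
        new_capital = new_capital * (1.0 - prop_cost) - fixed_cost
    return new_capital - capital
```

This makes visible why frequent position changes are penalized: every switch pays both cost terms.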
In fig. 3 the development of a reinvested capital is plotted for the optimized (upper
line) and the MLP strategy (middle line). The DAX itself is also plotted but with
a scaling factor to fit it into the figure (lower line). The resulting policy by QL
clearly beats the benchmark strategy because the extra return amounts to 80% at
the end of the training period and to 25% at the end of the test phase. A closer
look at some statistics can explain the success. The QL policy proposes almost as
often as the MLP policy to invest in DAX, but the number of changes from DM
to DAX and vice versa is much lower (see table 2). Furthermore, it seems that the QL
strategy keeps the capital out of the market if there is no significant trend to follow
and the market shows too much volatility (see fig. 3 with straight horizontal lines
of the capital development curve indicating no investments). An extensive analysis
of the resulting strategy will be the topic of future research.
In a further experiment the NNs which store the Q-values are initialized to imitate
the MLP strategy. In some runs the number of necessary epochs was reduced by
a factor of 10. But often the QL algorithm took longer to converge, because the
initialization ignores the input elements which describe the investor's capital and
therefore led to a bad starting point in the weight space.
Figure 3: The development of a reinvested capital on the training (left) and test set
(right). The lines from top to bottom: QL-strategy, MLP-strategy, scaled DAX.
Table 2: Some statistics of the policies.

                          DAX investments           position changes
Data            Days      MLP Policy  QL-Policy     MLP Policy  QL-Policy
Training set    1825      1020        1005          904         284
Test set         729       434         395          344         115

6
Conclusions and Future Work
In this paper, the task of asset allocation/portfolio management was approached
by reinforcement learning algorithms. QL was successfully utilized in combination
with NNs as value function approximators in a high dimensional state space.
Future work has to address the possibility of several alternative investment opportunities and to clarify the connection to the classical mean-variance approach of
professional brokers. The benchmark strategy in the real world experiment is in
fact a neuro-fuzzy model which allows the extraction of useful rules after learning.
It will be interesting to use that network architecture to approximate the value
function in order to achieve a deeper insight into the resulting optimized strategy.
References
Barto, A. G., Sutton, R. S. and Watkins, C. J. C. H. (1989), Learning and Sequential Decision
Making, COINS TR 89-95.
Bertsekas, D. P. (1987), Dynamic Programming, NY: Wiley.
Singh, S. P. (1993), Learning to Solve Markovian Decision Processes, CMPSCI TR 93-77.
Neuneier, R. (1995), Optimal Strategies with Density-Estimating Neural Networks, ICANN
95, Paris.
Brealey, R. A., Myers, S. C. (1991), Principles of Corporate Finance, McGraw-Hill.
Watkins, C. J., Dayan, P. (1992), Technical Note: Q-Learning, Machine Learning 8, 3/4.
Elton, E. J., Gruber, M. J. (1971), Dynamic Programming Applications in Finance, The
Journal of Finance, 26/2.
Dichtl, H. (1995), Die Prognose des DAX mit Neuro-Fuzzy, master's thesis, English abstract
in preparation.
Recursive Estimation of Dynamic
Modular RBF Networks
Visakan Kadirkamanathan
Automatic Control & Systems Eng. Dept.
University of Sheffield, Sheffield Sl 4DU, UK
visakan@acse.sheffield.ac.uk
Maha Kadirkamanathan
Dragon Systems UK
Cheltenham GL52 4RW, UK
maha@dragon.co.uk
Abstract
In this paper, recursive estimation algorithms for dynamic modular
networks are developed. The models are based on Gaussian RBF
networks and the gating network is considered in two stages: At
first, it is simply a time-varying scalar and in the second, it is
based on the state, as in the mixture of local experts scheme. The
resulting algorithm uses Kalman filter estimation for the model
estimation and the gating probability estimation. Both, 'hard' and
'soft' competition based estimation schemes are developed where in
the former, the most probable network is adapted and in the latter
all networks are adapted by appropriate weighting of the data.
1
INTRODUCTION
The problem of learning multiple modes in a complex nonlinear system is increasingly being studied by various researchers [2, 3, 4, 5, 6]. The use of a mixture of
local experts [5, 6], and a conditional mixture density network [3] have been developed to model various modes of a system. The development has mainly been on
model estimation from a given set of block data, with the model likelihood dependent on the input to the networks. A recursive algorithm for this static case is the
approximate iterative procedure based on the block estimation schemes [6].
In this paper, we consider dynamic systems - developing a recursive algorithm is
difficult since mode transitions have to be detected on-line whereas in the block
scheme, search procedures allow optimal detection. Block estimation schemes for
general architectures have been described in [2, 4]. However, unlike in those schemes,
the algorithm developed here uses relationships based on Bayes law and Kalman
filters and attempts to describe the dynamic system explicitly, The modelling is
carried out by radial basis function (RBF) networks for their property that by preselecting the centres and widths, the problem can be reduced to a linear estimation.
V. KADIRKAMANATHAN, M. KADIRKAMANATHAN
2
DYNAMIC MODULAR RBF NETWORK
The dynamic modular RBF network consists of a number of models (or experts)
to represent each nonlinear mode in a dynamical system. The models are based
on the RBF networks with Gaussian function, where the RBF centre and width
parameters are chosen a priori and the unknown parameters are only the linear
coefficients w. The functional form of the RBF network can be expressed as,
f(x; w) = Σ_{k=1}^{K} w_k g_k(x) = w^T g                              (1)

where w = [..., w_k, ...]^T ∈ R^K is the linear weight vector and
g = [..., g_k(x), ...]^T ∈ R^K are the radial basis functions, where

g_k(x) = exp{ -0.5 r^{-2} ||x - m_k||^2 }                             (2)

m_k ∈ R^M are the RBF centres or means and r the width. The RBF networks
are used for their property that, having chosen appropriate RBF centre and width
parameters m_k, r, only the linear weights w need to be estimated, for which fast,
efficient and optimal algorithms exist.
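Equations (1)-(2) translate directly into code; `rbf_output` below evaluates the network for pre-selected centres and width (the function and variable names are ours, not the paper's).

```python
import math

def rbf_output(x, w, centres, r):
    # Gaussian RBF network of eqs. (1)-(2): f(x; w) = sum_k w_k g_k(x),
    # with g_k(x) = exp(-0.5 r^-2 ||x - m_k||^2).
    g = [math.exp(-0.5 * sum((xi - mi) ** 2 for xi, mi in zip(x, m)) / r ** 2)
         for m in centres]
    return sum(wk * gk for wk, gk in zip(w, g)), g
```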
Each model has an associated probability score of being the current underlying
model for the given observation. In the first stage of the development, this probability is not determined from a parametrised gating network as in the mixture of
local experts [5] and the mixture density network [3], but is determined on-line as it
varies with time. In dynamic systems, time information must be taken into account
whereas the mixture of local experts use only the state information which is not
sufficient in general, unless the states contain the necessary information. In the
second stage, the probability is extended to represent both the time and state information explicitly using the expressions from the mixture of local experts. Recently,
time and state information have been combined in developing models for dynamic
systems such as the mixture of controllers [4] and the Input - Output HMM [2].
However, the scheme developed here is more explicit and is not as general as the
above schemes and is recursive as opposed to block estimation.
3
RECURSIVE ESTIMATION
The problem of recursive estimation with RBF networks have been studied previously [7, 8] and the algorithms developed here is a continuation of that process. Let
the set of input - output observations from which the model is to be estimated be,
Z_N, where

Z_N = { z_n | n = 1, ..., N }                                         (3)

includes all observations up to the Nth data, and

z_n = { (x_n, y_n) | x_n ∈ R^M, y_n ∈ R }                             (4)

is the nth data. The underlying system generating the observations is assumed to be multi-modal
(with known H modes), with each observation satisfying the nonlinear relation

y = f_h(x) + η                                                        (5)
underlying nonlinear function for the hth mode which generated the observation.
Under assumptions of zero mean Gaussian noise and that the model can approximate the underlying function arbitrarily closely, the probability distribution,
P(Zn IW h,M n
= Mh, 2 n - 1) = ( 271") _1 Ro-t exp
2
{1
-11
-"2Ro
Yn
-!h (Xn; W h)12} (6)
Recursive Estimation of Dynamic Modular RBF Networks
241
is Gaussian. This is the likelihood of the observation z_n for the model M_h, which
in our case is the GRBF network, given model parameters w^h and that the nth
observation was generated by M_h. R_0 is the variance of the noise η. In general,
however, the model generating the nth observation is unknown, and the likelihood
of the nth observation is expanded to include γ_n^h, the indicator variable, as in [6],

p(z_n, γ_n | W, M, Z_{n-1}) = Π_{h=1}^{H} [ p(z_n | w^h, M_n = M_h, Z_{n-1}) p(M_n = M_h | x_n, Z_{n-1}) ]^{γ_n^h}     (7)
Bayes law can be applied to the on-line or recursive parameter estimation,

p(W | Z_n, M) = p(z_n | W, M, Z_{n-1}) p(W | Z_{n-1}, M) / p(z_n | Z_{n-1}, M)     (8)

and the above equation is applied recursively for n = 1, ..., N.
The term p(z_n | Z_{n-1}, M) is the evidence. If the underlying system is unimodal, this will
result in the optimal Kalman estimator. If we assign the prior probability distribution for the model parameters p(w^h | M_h) to be Gaussian with mean w_0 and
(positive definite) covariance matrix P_0 ∈ R^{K×K}, then combining the likelihood
and the prior gives the posterior probability distribution, which at time n is
p(w^h | Z_n, M_h) and is also Gaussian:

p(w^h | Z_n, M_h) = (2π)^{-K/2} |P_n|^{-1/2} exp{ -(1/2) (w^h - w_n^h)^T P_n^{-1} (w^h - w_n^h) }     (9)
In the multimodal case also, the estimation for the individual model parameters
decouples naturally, with the only modification being that the likelihood used for
parameter estimation is now based on weighted data, given by

p(z_n | w^h, M_h, Z_{n-1}) = (2π)^{-1/2} (R_0 (γ_n^h)^{-1})^{-1/2} exp{ -(1/2) R_0^{-1} γ_n^h |y_n - f_h(x_n; w^h)|^2 }     (10)
The Bayes law relation (8) applies to each model. Hence, the only modification
in the Kalman filter algorithm is that the noise variance for each model is set to
R_0 / γ_n^h, and the resulting equations can be found in [7]. It increases the apparent
uncertainty in the measurement output according to how likely the model is to be
the true underlying mode, by increasing the noise variance term of the Kalman filter
algorithm. Note that the term p(M_n = M_h | x_n, Z_{n-1}) is a time-varying scalar and
does not influence the parameter estimation process.
The evidence term can also be determined directly from the Kalman filter,

p(z_n | M_h, Z_{n-1}) = (2π)^{-1/2} (R_n^h)^{-1/2} exp{ -(1/2) (e_n^h)^2 / R_n^h }     (11)

where e_n^h is the prediction error and R_n^h is the innovation variance, with

e_n^h = y_n - (w_{n-1}^h)^T g_n                                                        (12)
R_n^h = R_0 (γ_n^h)^{-1} + g_n^T P_{n-1}^h g_n                                         (13)
This is also the likelihood of the nth observation given the model M and the past
observations Zn-l. The above equation shows that the evidence term used in
Bayesian model selection [9] is computed recursively, but for the specific priors R_0,
P_0. On-line Bayesian model selection can be carried out by choosing many different
priors, effectively sampling the prior space, to determine the best model to fit the
given data, as discussed in [7].
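A single weighted Kalman update for one model's linear weights can be sketched as follows. The prediction error and innovation variance follow eqs. (12)-(13); the gain and covariance updates below are the standard Kalman forms, which the paper defers to [7], so their exact presentation there may differ.

```python
def kalman_step(w, P, g, y, R0, gamma=1.0):
    # One weighted Kalman update for the linear RBF weights w with
    # covariance P, basis activations g, and target y. The noise variance
    # is inflated to R0/gamma as in the data-weighting scheme above.
    n = len(g)
    e = y - sum(wi * gi for wi, gi in zip(w, g))                  # eq. (12)
    Pg = [sum(P[i][j] * g[j] for j in range(n)) for i in range(n)]
    R = R0 / gamma + sum(gi * pgi for gi, pgi in zip(g, Pg))      # eq. (13)
    K = [pgi / R for pgi in Pg]                                   # standard Kalman gain
    w_new = [wi + ki * e for wi, ki in zip(w, K)]
    P_new = [[P[i][j] - K[i] * Pg[j] for j in range(n)] for i in range(n)]
    return w_new, P_new, e, R
```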
4
RECURSIVE MODEL SELECTION
Bayes law can be invoked to perform recursive or on-line model selection and this
has been used in the derivation of the multiple model algorithm [1] . The multiple
model algorithm has been used for the recursive identification of dynamical nonlinear systems [7]. Applying Bayes law gives the following relation:
p(M_h | Z_n) = p(z_n | M_h, Z_{n-1}) p(M_h | Z_{n-1}) / p(z_n | Z_{n-1})     (14)

which can be computed recursively for n = 1, ..., N. p(z_n | M_h, Z_{n-1}) is the likelihood given in (11) and p(M_h | Z_n) is the posterior probability of model M_h being the
underlying model for the nth data given the observations Z_n. The term p(z_n | Z_{n-1})
is the normalising term, given by

p(z_n | Z_{n-1}) = Σ_{h=1}^{H} p(z_n | M_h, Z_{n-1}) p(M_h | Z_{n-1})        (15)
The initial prior probabilities for models are assigned to be equal to 1/ H. The
equations (11), (14) combined with the Kalman filter estimation equations are known
as the multiple model algorithm [1].
Amongst all the networks that are attempting to identify the underlying system,
the identified model is the one with the highest posterior probability p(M_h | Z_n) at
each time n, i.e.,

M* = arg max_h p(M_h | Z_n)                                          (16)

and hence can vary from time to time. This is preferred over averaging all the
H models, as the likelihood is multimodal and hence modal estimates are sought.
Predictions are based on this most probable model.
Since the system is dynamical, if the underlying model for the dynamics is known,
it can be used to predict the estimates at the next time instant based on the current
estimates, prior to observing the next data. Here, a first order Markov assumption
is made for the mode transitions. Given that at the time instant n - 1 the given
mode is j, it is predicted that the probability of the mode at time instant n being
h is the transition probability p_hj. With H modes, Σ_h p_hj = 1. The predicted
probability of the mode being h at time n is therefore given by

p_{n|n-1}(M_h | Z_{n-1}) = Σ_{j=1}^{H} p_hj p(M_j | Z_{n-1})         (17)
This can be viewed as the prediction stage of the model selection algorithm. The
predicted output of the system is obtained from the output of the model that has
the highest predicted probability.
Given the observation Zn, the correction is achieved through the multiple model
algorithm of (14) with the following modification:
p(M_h | Z_n) = p(z_n | M_h, Z_{n-1}) p_{n|n-1}(M_h | Z_{n-1}) / p(z_n | Z_{n-1})     (18)

where a modification to the prior has been made. Note that this probability is a
time-varying scalar value and does not depend on the states.
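The prediction-correction recursion for the model probabilities can be sketched as one function: eq. (17) mixes the posteriors through the Markov transition matrix, eq. (11) gives each model's evidence from its Kalman innovation, and eq. (18) normalizes. The argument names are ours.

```python
import math

def model_posteriors(priors, transitions, errors, innov_vars):
    # priors[j] = p(M_j | Z_{n-1}); transitions[h][j] = p_hj;
    # errors[h], innov_vars[h] are each model's Kalman prediction error
    # and innovation variance for the new observation.
    H = len(priors)
    pred = [sum(transitions[h][j] * priors[j] for j in range(H))
            for h in range(H)]                                          # eq. (17)
    like = [math.exp(-0.5 * errors[h] ** 2 / innov_vars[h]) /
            math.sqrt(2.0 * math.pi * innov_vars[h]) for h in range(H)]  # eq. (11)
    joint = [like[h] * pred[h] for h in range(H)]
    Z = sum(joint)
    return [jh / Z for jh in joint]                                     # eq. (18)
```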
243
Recursive Estimation of Dynamic Modular RBF Networks
5
HARD AND SOFT COMPETITION
The development of the estimation and model selection algorithms has thus far
assumed that the indicator variable γ_n^h is known. Since γ_n^h is unknown, an
expected value must be used in the algorithm, given by

β_n^h = p(z_n | M_n = M_h, Z_{n-1}) p_{n|n-1}(M_n = M_h | Z_{n-1}) / p(z_n | Z_{n-1})     (19)
Two possible methodologies can be used for choosing the values for γ_n^h. In the first
scheme,

γ_n^h = 1 if β_n^h > β_n^j for all j ≠ h, and 0 otherwise             (20)

This results in 'hard' competition, where only the model with the highest predicted
probability undergoes adaptation using the Kalman filter algorithm, while all other
models are prevented from adapting. Alternatively, the expected value can be used
in the algorithm,

γ_n^h = β_n^h                                                         (21)
which results in 'soft' competition and all models are allowed to undergo adaptation
with appropriate data weighting as outlined in section 3. This scheme is slightly
different from that presented in [7]. Since the posterior probabilities of each mode
effectively indicate which mode is dominant at each time n, changes can then be
used as means of detecting mode transitions.
6
EXPERIMENTAL RESULTS
The problem chosen for the experiment is learning the inverse robot kinematics used
in [3]. This is a two-link rigid arm manipulator for which, given joint arm angles
(θ_1, θ_2), the end effector position in cartesian co-ordinates is given by

x_1 = L_1 cos(θ_1) - L_2 cos(θ_1 + θ_2)
x_2 = L_1 sin(θ_1) - L_2 sin(θ_1 + θ_2)                              (22)

L_1 = 0.8, L_2 = 0.2 being the arm lengths. The inverse kinematics learning problem
requires the identification of the underlying mapping from (x_1, x_2) to (θ_1, θ_2),
which is bi-modal. Since the algorithm is developed for the identification of dynamical systems, the data are generated with the joint angles excited sinusoidally
with differing frequencies within the intervals [0.3, 1.2] x [π/2, 3π/2]. The first 1000
observations are used for training and the next 1000 observations are used for testing, with adaptation turned off. The models use 28 RBFs chosen with fixed
parameters, the centres being uniformly placed on a 7 x 4 grid.
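The data generation just described can be sketched as follows, using the forward kinematics of eq. (22) as printed above. The particular excitation frequencies `f1`, `f2` are assumptions, since the paper only specifies that the frequencies differ.

```python
import math

def arm_position(theta1, theta2, L1=0.8, L2=0.2):
    # Forward kinematics of the two-link arm, eq. (22) as printed above.
    x1 = L1 * math.cos(theta1) - L2 * math.cos(theta1 + theta2)
    x2 = L1 * math.sin(theta1) - L2 * math.sin(theta1 + theta2)
    return x1, x2

def make_data(n, f1=0.05, f2=0.08):
    # Sinusoidal joint-angle excitation within the stated intervals
    # [0.3, 1.2] x [pi/2, 3pi/2].
    data = []
    for t in range(n):
        th1 = 0.75 + 0.45 * math.sin(2.0 * math.pi * f1 * t)
        th2 = math.pi + 0.5 * math.pi * math.sin(2.0 * math.pi * f2 * t)
        data.append((arm_position(th1, th2), (th1, th2)))
    return data
```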
Figure 1: Learning inverse kinematics ('hard' competition): Model probabilities.
Figure 1 shows the model probabilities during training and shows the switching
taking place between the two modes.
Figure 2: End effector position errors (test data) ('hard' competition): (a) Model
1 prediction, (b) Model 2 prediction.
Figure 2 shows the end effector position errors on the test data by both models 1 and
2 separately under the 'hard' competition scheme. The figure indicates the errors
achieved by the best model used in the prediction - both models predicting in the
centre of the input space where the function is multi-modal. This demonstrates
the successful operation of the algorithm in the two RBF networks capturing some
elements of the two underlying modes of the relationship. The best results on this
learning task are: the RMSE on test data for this problem by the Mixture Density
Network is 0.0053, and by a single network is 0.0578 [3].

Table 1: Learning Inverse Kinematics: Results

                    RMSE (Train)   RMSE (Test)
Hard Competition    0.0213         0.0084
Soft Competition    0.0442         0.0212

Note however that the
algorithm here did not use state information and used only the time dependency.
7
PARAMETRISED GATING NETWORKS
The model parameters were determined explicitly based on the time information in
the dynamical system . If the gating model probabilities are expressed as a function
of the states, similar to [6],
p(M_h | x_n, Z_{n-1}) = exp{ (a^h)^T g } / Σ_{j=1}^{H} exp{ (a^j)^T g } = a_n^h      (23)

where a^h are the gating network parameters. Note that the gating network shares
the same basis functions as the expert models.
This extension to the gating networks does not affect the model parameter estimation procedure. The likelihood in (7) decomposes into a part for model parameter
estimation involving the output prediction error, and a part for gating parameter estimation involving the indicator variable γ_n. The second part can be approximated
to a Gaussian of the form

p(γ_n | x_n, a^h, Z_{n-1}) ≈ (2π)^{-1/2} (R_g0 (γ_n^h)^{-1})^{-1/2} exp{ -(1/2) R_g0^{-1} γ_n^h |γ_n^h - a_n^h|^2 }     (24)
This approximation allows the extended Kalman filter algorithm to be used for
gating network parameter estimation. The model selection equations of section 4
can be applied without any modification with the new gating probabilities. The
choice of the indicator variable γ_n^h can be made as before, resulting in either hard
or soft competition. The necessary expressions in (21) are obtained through the
Kalman filter estimates and the evidence values, for both the model and gating
parameters. Note that this is different from the estimates used in [6] in the sense
that marginalisation over the model and gating parameters has been done here.
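The gating of eq. (23) is a softmax over the shared basis activations; a sketch follows, with the usual max-subtraction for numerical stability (an implementation detail not discussed in the paper).

```python
import math

def gating_probs(A, g):
    # Softmax gating of eq. (23): row A[h] holds the gating parameters a^h,
    # and g holds the shared basis-function activations.
    scores = [sum(ah_k * gk for ah_k, gk in zip(ah, g)) for ah in A]
    m = max(scores)                 # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    Z = sum(exps)
    return [e / Z for e in exps]
```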
8
CONCLUSIONS
Recursive estimation algorithms for dynamic modular RBF networks have been developed . The models are based on Gaussian RBF networks and the gating is simply
a time-varying scalar. The resulting algorithm uses Kalman filter estimation for
the model parameters and the multiple model algorithm for the gating probability.
Both 'hard' and 'soft' competition-based estimation schemes are developed, where
in the former, the most probable network is adapted and in the latter all networks
are adapted by appropriate weighting of the data. Experimental results are given
that demonstrate the capture of the switching in the dynamical system by the modular RBF networks. Extending the method to include the gating probability to be
a function of the state are then outlined briefly. Work is currently in progress to
experimentally demonstrate the operation of this extension.
References
[1] Bar-Shalom, Y . and Fortmann, T. E. Tracking and data association, Academic
Press, New York, 1988.
[2] Bengio, Y. and Frasconi, P . "An input output HMM architecture", In
G . Tesauro, D. S. Touretzky and T . K. Leen (eds.) Advances in Neural Information Processing Systems 7, Morgan Kaufmann, CA: San Mateo, 1995.
[3] Bishop, C. M. "Mixture density networks", Report NCRG/4288, Computer
Science Dept., Aston University, UK, 1994.
[4] Cacciatore, C. W. and Nowlan, S. J. "Mixtures of controllers for jump linear
and nonlinear plants", In J. Cowan, G. Tesauro, and J. Alspector (eds.) Advances in Neural Information Processing Systems 6, Morgan Kaufmann, CA:
San Mateo, 1994.
[5] Jacobs, R. A., Jordan, M. I., Nowlan, S. J . and Hinton, G. E.
mixtures of local experts", Neural Computation, 9: 79-87, 1991.
"Adaptive
[6] Jordan, M. I. and Jacobs, R. A. "Hierarchical mixtures of experts and the EM
algorithm" , Neural Computation, 6: 181-214, 1994.
[7] Kadirkamanathan, V. "Recursive nonlinear identification using multiple model
algorithm", In Proceedings of the IEEE Workshop on Neural Networks for
Signal Processing V, 171-180, 1995.
[8] Kadirkamanathan, V. "A statistical inference based growth criterion for the
RBF network", In Proceedings of the IEEE Workshop on Neural Networks for
Signal Processing IV, 12-21, 1994.
[9] MacKay, D. J. C. "Bayesian interpolation", Neural Computation, 4: 415-447, 1992.
Silicon Models for Auditory Scene Analysis
John Lazzaro and John Wawrzynek
CS Division
UC Berkeley
Berkeley, CA 94720-1776
lazzaro@cs.berkeley.edu, johnv@cs.berkeley.edu
Abstract
We are developing special-purpose, low-power analog-to-digital
converters for speech and music applications, that feature analog
circuit models of biological audition to process the audio signal
before conversion. This paper describes our most recent converter
design, and a working system that uses several copies of the chip to
compute multiple representations of sound from an analog input.
This multi-representation system demonstrates the plausibility of
inexpensively implementing an auditory scene analysis approach to
sound processing.
1. INTRODUCTION
The visual system computes multiple representations of the retinal image, such as
motion, orientation, and stereopsis, as an early step in scene analysis. Likewise,
the auditory brainstem computes secondary representations of sound, emphasizing
properties such as binaural disparity, periodicity, and temporal onsets. Recent
research in auditory scene analysis involves using computational models of these
auditory brainstem representations in engineering applications.
Computation is a major limitation in auditory scene analysis research: the complete auditory processing system described in (Brown and Cooke, 1994) operates at
approximately 4000 times real time, running under UNIX on a Sun SPARCstation
1. Standard approaches to hardware acceleration for signal processing algorithms
could be used to ease this computational burden in a research environment; a variety
of parallel, fixed-point hardware products would work well on these algorithms.
However, hardware solutions appropriate for a research environment may not be
well suited for accelerating algorithms in cost-sensitive, battery-operated consumer
products. Possible product applications of auditory algorithms include robust
pitch-tracking systems for musical instrument applications, and small-vocabulary,
speaker-independent wordspotting systems for control applications.
In these applications, the input takes an analog form: a voltage signal from a
microphone or a guitar pickup. Low-power analog circuits that compute auditory
representations have been implemented and characterized by several research groups
- these working research prototypes include several generation of cochlear models
(Lyon and Mead, 1988), periodicity models, and binaural models. These circuits
could be used to compute auditory representations directly on the analog signal, in
real-time, using these low-power, area-efficient analog circuits.
Using analog computation successfully in a system presents many practical difficulties; the density and power advantages of the analog approach are often lost in the process of system integration. One successful IC architecture that uses analog computation in a system is the special-purpose analog-to-digital converter, which includes
process of system integration. One successful Ie architecture that uses analog computation in a system is the special-purpose analog to digital converter, that includes
analog, non-linear pre-processing before or during data conversion. For example,
converters that include logarithmic waveform compression before digitization are
commercially viable components.
Using this component type as a model, we have been developing special-purpose,
low-power analog-to-digital converters for speech and audio applications; this paper
describes our most recent converter design, and a working system that uses several
copies of the chip to compute multiple representations of sound.
2. CONVERTER DESIGN
Figure 1 shows an architectural block diagram of our current converter design. The
35,000 transistor chip was fabricated in the 2pm, n-well process of Orbit Semiconductor, broke red through MOSIS; the circuit is fully functional. Below is a summary
of the general architectural features ofthis chip; unless otherwise referenced, circuit
details are similar to the converter design described in (Lazzaro et al., 1994).
- An analog audio signal serves as input to the chip; dynamic range is 40 dB to
60 dB (1-10 mV to 1 V peak, dependent on measurement criteria).
- This signal is processed by analog circuits that model cochlear processing (Lyon
and Mead, 1988) and sensory transduction; the audio signal is transformed into
119 wavelet-filtered, half-wave rectified, non-linearly compressed audio signals. The
cycle-by-cycle waveform of each signal is preserved; no temporal smoothing is performed.
- Two additional analog processing blocks follow this initial cochlear processing,
a temporal autocorrelation processor and a temporal adaptation processor. Each
block transforms the input array into a new representation of equal size; alternatively, the block can be programmed to pass its input vector to its output without
alteration.
- The output circuits of the final processing block are pulse generators, which code
the signal as a pattern of fixed-width, fixed-height spikes. All the information in
the representation is contained in the onset times of the pulses.
- The activity on this array is sent off-chip via an asynchronous parallel bus. The
converter chip acts as a sender on the bus; a digital host processor is the receiver.
The converter initiates a transaction on the bus to communicate the onset of a pulse
in the array; the data value on the bus is a number indicating which unit in the
array pulsed. The time of transaction initiation carries essential information. This
coding method is also known as the address-event representation.
- Many converters can be used in the same system, sharing the same asynchronous
output bus (Lazzaro and Wawrzynek, 1995) . No extra components are needed to
implement bus sharing; the converter bus design includes extra signals and logic that
implements multi-chip bus arbitration. This feature is a major difference between
this design and (Lazzaro et at., 1994).
- The converter includes a digitally-controllable parameter storage and generation
system; 25 tunable parameters control the behavior of the analog processing blocks.
Programmability supports the creation of multi-converter systems that use a single
chip design: each chip receives the same analog signal, but processes the signal in
different ways, as determined by the parameter values for each chip.
- Non-volatile analog storage elements are used to store the parameters; parameters
are changeable via Fowler-Nordheim tunneling, using a 5 V control input bus. Many
converters can share the same control bus. Parameter values can be sensed by activating a control mode, which sends parameter information on the converter output
bus. Apart from two high-voltage power supply pins, and a trimming input pin
for tunneling pulse width, all control voltages used in this converter are generated
on-chip.
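Decoding such an address-event stream in software is straightforward: each bus transaction is a (timestamp, channel address) pair, and all of the information lives in the pulse onset times. A minimal sketch (the tuple layout and function name are illustrative, not the actual bus protocol):

```python
from collections import defaultdict

def decode_address_events(events):
    """events: iterable of (timestamp, channel) bus transactions in
    arrival order. Returns {channel: list of pulse onset times}."""
    spikes = defaultdict(list)
    for t, ch in events:
        spikes[ch].append(t)
    return dict(spikes)

stream = [(0, 3), (5, 7), (9, 3), (12, 3), (20, 7)]
spikes = decode_address_events(stream)
# channel 3 pulsed at t = 0, 9, 12; channel 7 at t = 5, 20
```

Because the bus carries only channel numbers, and arrival time carries the rest, the decoder is nothing more than grouping by address.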
Figure 1. Block diagram of the converter chip. Most of the 40 pins of the chip are
dedicated to the data output and control input buses, and to the control signals for
coordinating bus sharing in multi-converter systems.
3. SYSTEM DESIGN
Figure 2 shows a block diagram of a system that uses three copies of the converter
chip to compute multiple representations of sound; the system acts as a real-time
audio input device to a Sun workstation. An analog audio input connects to each
converter; this input can be from a pre-amplified microphone, for spontaneous input,
or from the analog audio signal of the workstation, for controlled experiments.
The asynchronous output buses from the three chips are connected together, to
produce a single output address space for the system; no external components are
needed for output bus sharing and arbitration. The onset time of a transaction
carries essential information on this bus; additional logic on this board adds a 16-bit timestamp to each bus transaction, coding the onset time with 20 ps resolution.
The control input buses for the three chips are also connected together to produce
a single input address space, using external logic for address decoding. We use a
commercial interface board to link the workstation with these system buses.
4. SYSTEM PERFORMANCE
We designed a software environment, Aer, to support real-time, low-latency data
visualization of the multi-converter system. Using Aer, we can easily experiment
with different converter tunings. Figure 3 shows a screen from Aer, showing data
from the three converters as a function of time; the input sound for this screen is
a short 800 Hz tone burst, followed by a sinusoid sweep from 300 Hz to 3 kHz.
The top ("Spectral Shape") and bottom ("Onset") representations are raw data
from converters 1 and 3, as marked on Figure 2, tuned for different responses. The
output channel number is plotted vertically; each dot represents a pulse.
The top representation codes for periodicity-based spectral shape; for this representation, the temporal autocorrelation block (see Figure 1) is activated, and the
temporal adaptation block is inactivated. Spectral frequency is mapped logarithmically on the vertical dimension, from 300 Hz to 4 Khz; the activity in each
channel is the periodic waveform present at that frequency. The difference between
a periodicity-based spectral method and a resonant spectral method can be seen
in the response to the 800 Hz sinusoid onset: the periodicity representation shows
activity only around the 800 Hz channels, whereas a spectral representation would
show broadband transient activity at tone onset.
Figure 2. Block diagram of the multi-converter system.
Figure 3. Data from the multi-converter system, in response to an 800-Hz pure tone, followed by a sinusoidal sweep from 300 Hz to 3 kHz. (Panels, top to bottom: Spectral Shape, log scale, 300 Hz to 4 kHz; Summary Auto Corr., linear scale, 0 to 12.5 ms; Onset, log scale, 300 Hz to 4 kHz; time axis 0 to 200 ms.)
Figure 4. Data from the multi-converter system, in response to the word "five" followed by the word "nine".
The bottom representation codes for temporal onsets; for this representation, the
temporal adaptation block is activated, and the temporal autocorrelation block is
inactivated. The spectral filtering of the representation reflects the silicon cochlea
tuning: a low-pass response with a sharp cutoff and a small resonant peak at the
best frequency of the filter. The black, wideband lines at the start of the 800 Hz
tone and the sinusoid sweep illustrate the temporal adaptation.
The middle ("Summary Auto Corr.") representation is a summary autocorrelogram, useful for pitch processing and voiced/unvoiced decisions in speech recognition. This representation is not raw data from a converter; software post-processing
is performed on the converter output to produce the final result. The frequency response of converter 2 is set as in the bottom representation; the temporal adaptation
response, however, is set to a 100 millisecond time constant. The converter output
pulse rates are set so that the cycle-by-cycle waveform information for each output
channel is preserved in the output.
To complete the representation, a set of running autocorrelation functions x(t)x(t - τ)
is computed for τ = k · 105 µs, k = 1 ... 120, for each of the 119 output channels.
These autocorrelation functions are summed over all output channels to produce
the final representation; τ is plotted linearly on the vertical axis.
The correlation multiplication can be efficiently implemented by integer subtraction
and comparison of pulse timestamps; the summation over channels is simply the
merging of lists of bus transactions. The middle representation in Figure 3 shows
the qualitative characteristics of the summary autocorrelogram: a repetitive band
structure in response to periodic sounds.
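The timestamp arithmetic described above can be sketched directly: for pulse outputs, the product x(t)x(t - τ) is nonzero only when two pulses are separated by lag τ, so each correlation reduces to counting timestamp differences, and the summary step is just pooling over channels. A toy version (timestamps assumed already quantized to whole lag steps, which is a simplification of the chip's finer timestamping):

```python
def summary_autocorrelogram(channels, max_lag):
    """channels: list of per-channel pulse timestamp lists (integers,
    in lag-step units). Returns counts for lags 1..max_lag: the number
    of pulse pairs at each lag, summed over channels (the summary step)."""
    counts = [0] * (max_lag + 1)
    for ts in channels:
        seen = set(ts)
        for t in ts:
            for k in range(1, max_lag + 1):
                if t - k in seen:      # integer subtraction + comparison
                    counts[k] += 1
    return counts[1:]

# a periodic pulse train (period 4) gives bands at lags 4, 8, ...
acg = summary_autocorrelogram([[0, 4, 8, 12]], 8)
```

A periodic input concentrates counts at multiples of its period, which is exactly the repetitive band structure visible in the middle panel of Figure 3.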
Figure 4 shows the output response of the multi-converter system to
telephone-bandwidth-limited speech; the phonetic boundaries of the two words,
"five" and "nine", are marked by arrows. The vowel formant information is shown
most clearly by the strong peaks in the spectral shape representation; the wideband
information in the "f" of five is easily seen in the onset representation. The summary
autocorrelation representation shows a clear texture break between vowels and the
voiced "n" and "v" sounds.
Acknowledgements
Thanks to Richard Lyon and Peter Cariani for summary autocorrelogram discussions. Funded by the Office of Naval Research (URI-N00014-92-J-1672).
References
Brown, G.J. and Cooke, M. (1994). Computational auditory scene analysis. Computer Speech and Language, 8:4, pp. 297-336.
Lazzaro, J. P. and Wawrzynek, J. (1995). A multi-sender asynchronous extension
to the address-event protocol. In Dally, W. J., Poulton, J. W., Ishii, A. T. (eds),
16th Conference on Advanced Research in VLSI, pp. 158-169.
Lazzaro, J. P., Wawrzynek, J., and Kramer, A (1994). Systems technologies for
silicon auditory models. IEEE Micro, 14:3. 7-15.
Lyon, R. F., and Mead, C. (1988). An analog electronic cochlea. IEEE Trans.
Acoust., Speech, Signal Processing vol. 36, pp. 1119-1134.
Memory-based Stochastic Optimization
Andrew W. Moore and Jeff Schneider
School of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213
Abstract
In this paper we introduce new algorithms for optimizing noisy
plants in which each experiment is very expensive. The algorithms
build a global non-linear model of the expected output at the same
time as using Bayesian linear regression analysis of locally weighted
polynomial models. The local model answers queries about confidence, noise, gradient and Hessians, and use them to make automated decisions similar to those made by a practitioner of Response
Surface Methodology. The global and local models are combined
naturally as a locally weighted regression . We examine the question of whether the global model can really help optimization, and
we extend it to the case of time-varying functions. We compare
the new algorithms with a highly tuned higher-order stochastic optimization algorithm on randomly-generated functions and a simulated manufacturing task. We note significant improvements in
total regret , time to converge, and final solution quality.
1 INTRODUCTION
In a stochastic optimization problem, noisy samples are taken from a plant. A
sample consists of a chosen control u (a vector of real numbers) and a noisy observed
response y. y is drawn from a distribution with mean and variance that depend on
u. y is assumed to be independent of previous experiments. Informally, the goal is
to quickly find a control u that maximizes the expected output E[y | u]. This is different
from conventional numerical optimization because the samples can be very noisy,
there is no gradient information , and we usually wish to avoid ever performing badly
(relative to our start state) even during optimization. Finally and importantly:
each experiment is very expensive and there is ample computational time
(often many minutes) for deciding on the next experiment. The following questions
are both interesting and important: how should this computational time best be
used, and how can the data best be used?
Stochastic optimization is of real industrial importance, and indeed one of our
reasons for investigating it is an association with a U.S . manufacturing company
that has many new examples of stochastic optimization problems every year.
The discrete version of this problem, in which u is chosen from a discrete set,
is the well known k-armed bandit problem. Reinforcement learning researchers
have recently applied bandit-like algorithms to efficiently optimize several discrete problems [Kaelbling, 1990, Greiner and Jurisica, 1992, Gratch et al., 1993,
Maron and Moore, 1993]. This paper considers extensions to the continuous case
in which u is a vector of reals. We anticipate useful applications here too. Continuity implies a formidable number of arms (uncountably infinite) but permits us to
assume smoothness of E[y | u] as a function of u.
The most popular current techniques are:
- Response Surface Methods (RSM). Current RSM practice is described in
the classic reference [Box and Draper, 1987]. Optimization proceeds by cautious
steepest ascent hill-climbing. A region of interest (ROI) is established at a starting point and experiments are made at positions within the region that can best
be used to identify the function properties with low-order polynomial regression.
A large portion of the RSM literature concerns experimental design-the decision
of where to take data points in order to acquire the lowest variance estimate of
the local polynomial coefficients in a fixed number of experiments. When the
gradient is estimated with sufficient confidence, the ROI is moved accordingly.
Regression of a quadratic locates optima within the ROI and also diagnoses ridge
systems and saddle points.
The strength of RSM is that it is careful not to change operating conditions based
on inadequate evidence, but moves once the data justifies. A weakness of RSM
is that human judgment is needed: it is not an algorithm, but a manufacturing
methodology .
- Stochastic Approximation methods. The algorithm of [Robbins and Monro,
1951] does root finding without the use of derivative estimates. Through the use of
successively smaller steps convergence is proven under broad assumptions about
noise. Kiefer-Wolfowitz (KW) [Kushner and Clark, 1978] is a related algorithm
for optimization problems. From an initial point it estimates the gradient by
performing an experiment in each direction along each dimension of the input
space. Based on the estimate, it moves its experiment center and repeats. Again,
use of decreasing step sizes leads to a proof of convergence to a local optimum.
The strength of KW is its aggressive exploration, its simplicity, and that it comes
with convergence guarantees. However, it has more of a danger of attempting
wild experiments in the presence of noise, and effectively discards the data it
collects after each gradient estimate is made. In practice, higher order versions
of KW are available in which convergence is accelerated by replacing the fixed
step size schedule with an adaptive one [Kushner and Clark, 1978]. Later we
compare the performance of our algorithms to such a higher-order KW.
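A bare-bones KW loop looks like the following; the gain and probe schedules here are illustrative and untuned, not the adaptive higher-order variant used later in the comparisons:

```python
import random

def kw_optimize(f, u0, steps=200, a0=0.5, c0=0.2):
    """Kiefer-Wolfowitz stochastic optimization (maximization).
    At each iteration, probe +/- c_n along every dimension of the
    control u to estimate the gradient of the noisy plant f, then
    step up it; the shrinking schedules a_n, c_n drive convergence."""
    u = list(u0)
    for n in range(1, steps + 1):
        a_n = a0 / n           # step-size schedule
        c_n = c0 / n ** 0.25   # probe-size schedule
        grad = []
        for i in range(len(u)):
            up, dn = list(u), list(u)
            up[i] += c_n
            dn[i] -= c_n
            grad.append((f(up) - f(dn)) / (2.0 * c_n))
        u = [ui + a_n * g for ui, g in zip(u, grad)]
    return u

def plant(u):
    """Noisy quadratic plant with optimum at u = (0.7, 0.3)."""
    return 1.0 - (u[0] - 0.7) ** 2 - (u[1] - 0.3) ** 2 + random.gauss(0.0, 0.05)

random.seed(0)
u_final = kw_optimize(plant, [0.2, 0.8])
```

Note how each gradient estimate costs 2d plant evaluations and is then thrown away, the inefficiency the memory-based methods below are designed to avoid.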
2 MEMORY-BASED OPTIMIZATION
Neither KW nor RSM uses old data. After a gradient has been identified the control
u is moved up the gradient and the data that produced the gradient estimate is
discarded. Does this lead to inefficiencies in operation? This paper investigates one
way of using old data: build a global non-linear plant model with it.
We use locally weighted regression to model the system [Cleveland and Delvin, 1988,
Atkeson, 1989, Moore, 1992]. We have adapted the methods to return posterior
distributions for their coefficients and noise (and thus, indirectly, their predictions)
based on very broad priors, following the Bayesian methods for global linear regression described in [DeGroot, 1970].
We estimate the coefficients β = (β_1, ..., β_m) of a local polynomial model in which
the data was generated by the polynomial and corrupted with gaussian noise of
variance σ^2, which we also estimate. Our prior assumption will be that β is distributed according to a multivariate gaussian of mean 0 and covariance matrix Σ.
Our prior on σ is that 1/σ^2 has a gamma distribution with parameters α and β.
Assume we have observed n pieces of data. The jth polynomial term for the ith
data point is x_ij and the output response of the ith data point is y_i. Assume
further that we wish to estimate the model local to the query point x_q, in which a
data point at distance d_i from the query point has weight w_i = exp(-d_i^2 / K).
K, the kernel width, is a fixed parameter that determines the degree of localness in
the local regression. Let W = Diag(w_1, w_2, ..., w_n).

The marginal posterior distribution of β is a t distribution with mean

    β̂ = (Σ^{-1} + X^T W^2 X)^{-1} (X^T W^2 y),    (1)

covariance

    (2β + (y^T - β̂^T X^T) W^2 y) (Σ^{-1} + X^T W^2 X)^{-1} / (2α + sum_{i=1}^n w_i^2),

and 2α + sum_{i=1}^n w_i^2 degrees of freedom.
We assume a wide, weak prior: Σ = Diag(20^2, 20^2, ..., 20^2), α = 0.8, β = 0.001,
meaning the prior assumes each regression coefficient independently lies with high
probability in the range -20 to 20, and the noise lies in the range 0.01 to 0.5.
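The posterior quantities of Equation 1 come down to a few lines of weighted linear algebra. A sketch in numpy, assuming the standard normal-gamma conjugate form and a pre-built matrix X of polynomial terms (function and variable names are ours, not from the paper):

```python
import numpy as np

def bayes_lwr_posterior(X, y, w, Sigma, alpha, beta):
    """Posterior for Bayesian locally weighted polynomial regression.
    X: (n, m) polynomial-term matrix, y: (n,) outputs,
    w: (n,) kernel weights w_i = exp(-d_i^2 / K).
    Returns the mean, covariance and degrees of freedom of the
    marginal t-distributed posterior over the coefficients."""
    W2 = np.diag(w ** 2)
    A = np.linalg.inv(np.linalg.inv(Sigma) + X.T @ W2 @ X)
    beta_hat = A @ (X.T @ W2 @ y)
    dof = 2.0 * alpha + np.sum(w ** 2)
    scale = (2.0 * beta + (y - X @ beta_hat) @ W2 @ y) / dof
    return beta_hat, scale * A, dof

# toy local model around query point x_q = 0 with the priors above
rng = np.random.default_rng(0)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])   # terms 1, x
y = 0.5 + 1.5 * X[:, 1] + rng.normal(0.0, 0.1, n)          # true beta = (0.5, 1.5)
w = np.exp(-X[:, 1] ** 2 / 0.5)                            # kernel width K = 0.5
Sigma = np.diag([20.0 ** 2, 20.0 ** 2])
b_hat, cov, dof = bayes_lwr_posterior(X, y, w, Sigma, alpha=0.8, beta=0.001)
```

With the wide prior, Σ^{-1} is tiny and the posterior mean is essentially weighted least squares; with fewer points than coefficients, the prior keeps the inverse well defined, which is the under-determined case the text highlights.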
Briefly, we note the following reasons that Bayesian locally weighted polynomial
regression is particularly suited to this application:
- We can directly obtain meaningful confidence estimates of the joint pdf of the
regressed coefficients and predictions. Indirectly, we can compute the probability
distribution of the steepest gradient, the location of local optima and the principal
components of the local Hessian.
- The Bayesian approach allows meaningful regressions even with fewer data points
than regression coefficients-the posterior distribution reveals enormous lack of
confidence in some aspects of such a model but other useful aspects can still be
predicted with confidence. This is crucial in high dimensions, where it may be
more effective to head in a known positive gradient without waiting for all the
experiments that would be needed for a precise estimate of steepest gradient.
- Other pros and cons of locally weighted regression in the context of control can
be found in [Moore et al., 1995].
Given the ability to derive a plant model from data, how should it best be used?
The true optimal answer, which requires solving an infinite-dimensional Markov
decision process, is intractable. We have developed four approximate algorithms
that use the learned model, described briefly below.
- AutoRSM. Fully automates the (normally manual) RSM procedure and incorporates weighted data from the model, not only from the current design. It uses
online experimental design to pick ROI design points to maximize information
about local gradients and optima. Space does not permit description of the linear
algebraic formulations of these questions.
- PMAX. This is a greedy, simpler approach that uses the global non-linear model
from the data to jump immediately to the model optimum. This is similar to the
technique described in [Botros, 1994], with two extensions. First, the Bayesian
Figure 1: Three examples
of 2-d functions used in optimization experiments
priors enable useful decisions before the regression becomes full-rank. Second,
local quadratic models permit second-order convergence near an optimum.
- IEMAX. Applies Kaelbling's IE algorithm [Kaelbling, 1990] in the continuous
case using Bayesian confidence intervals.
    u_chosen = argmax_u f̂_opt(u)    (2)

where f̂_opt(u) is the top of the 95th percentile confidence interval. The intuition here
is that we are encouraged to explore more aggressively than PMAX, but will not
explore areas that are confidently below the best known optimum.
- COMAX. In a real plant we would never want to apply PMAX or IEMAX. Experiments must be cautious for reasons of safety, quality control, and managerial
peace of mind. COMAX extends IEMAX thus:
    u_chosen = argmax_{u ∈ SAFE} f̂_opt(u);    u ∈ SAFE ⇔ f̂_pess(u) > disaster threshold    (3)
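Over a finite grid of candidate controls, the IEMAX and COMAX selection rules are one-liners once the model supplies confidence bounds. A toy sketch (the bounds function stands in for the Bayesian model and is purely illustrative):

```python
def iemax_choice(candidates, bounds):
    """IEMAX: pick the candidate with the highest optimistic bound
    (top of the confidence interval)."""
    return max(candidates, key=lambda u: bounds(u)[0])

def comax_choice(candidates, bounds, disaster_threshold):
    """COMAX: IEMAX restricted to the SAFE set, i.e. candidates whose
    pessimistic bound stays above the disaster threshold."""
    safe = [u for u in candidates if bounds(u)[1] > disaster_threshold]
    return max(safe, key=lambda u: bounds(u)[0]) if safe else None

def bounds(u):
    """Stand-in for the model: (optimistic, pessimistic) bounds, with
    uncertainty growing away from a previously sampled point u = 0.5."""
    mean = 1.0 - (u - 0.6) ** 2
    sd = 0.05 + 0.3 * abs(u - 0.5)
    return mean + 2.0 * sd, mean - 2.0 * sd

grid = [i / 10 for i in range(11)]
u_ie = iemax_choice(grid, bounds)                          # explores boldly
u_co = comax_choice(grid, bounds, disaster_threshold=0.6)  # stays in SAFE
```

On this toy model IEMAX chases the widest upper bound far from the data, while COMAX settles for the best candidate whose pessimistic bound clears the threshold, the trade-off the text describes.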
Analysis of these algorithms is problematic unless we are prepared to make strong
assumptions about the form of E[y | u]. To examine the general case we rely on
Monte Carlo simulations, which we now describe.
The experiments used randomly generated nonlinear unimodal (but not necessarily
convex) d-dimensional functions from [0, 1]^d → [0, 1]. Figure 1 shows three example
2-d functions. Gaussian noise (σ = 0.1) is added to the functions. This is large
noise, and means several function evaluations would be needed to achieve a reliable
gradient estimate for a system using even a large step size such as 0.2.
The following optimization algorithms were tested on a sample of such functions.
Vary-KW: The best performing KW algorithm we could find; it varied step size and
adapted gradient estimation steps to avoid undue regret at optima.
Fixed-KW: A version of KW that keeps its gradient-detecting step size fixed. This
risks causing extra regret at a true optimum, but has less chance of becoming
delayed by a non-optimum.
Auto-RSM: The best performing version thereof.
Passive-RSM: Auto-RSM continues to identify the precise location of the optimum
once it has arrived at it; Passive-RSM instead stops experimenting when it is
confident (greater than 99%) that it knows the location of the optimum to two
significant places.
Linear RSM: A linear instead of quadratic model, thus restricted to steepest ascent.
CRSM: Auto-RSM with conservative parameters, more typical of those recommended
in the RSM literature.
PMAX, IEMAX and COMAX: As described above.
Figures 2a and 2b show the first sixty experiments taken by AutoRSM and KW
respectively on their journeys to the goal.
Figure 2a: The path taken (start at
(0.8,0.2)) by AutoRSM optimizing the
given function with added noise of standard deviation 0.1 at each experiment.
Figure 2b: The path taken (start at
(0.8, 0.2)) by KW. KW's path looks deceptively bad, but remember it is continually
buffeted by considerable noise.
Figure 3: Comparing nine stochastic optimization algorithms by four criteria: (a) Regret,
(b) Disasters, (c) Speed to converge, (d) Quality at convergence. The partial order depicted
shows which results are significant at the 99% level (using blocked pairwise comparisons).
The outputs of the random functions range between 0-1 over the input domain. The
numbers in the boxes are means over fifty 5-d functions. (a) Regret is defined as the mean
y_opt - y_i: the cost incurred during the optimization compared with performance if we
had known the optimum location and used it from the beginning. With the exception of
IEMAX, model-based methods perform significantly better than KW, with reduced advantage for cautious and linear methods. (b) The %-age of steps which tried experiments
with more than 0.1 units worse performance than at the search start. This matters to a
risk-averse manager. AutoRSM has fewer than 1% disasters, but COMAX and the model-free methods do better still. PMAX's aggressive exploration costs it. (c) The number of
steps until we reach within 0.05 units of optimal. PMAX's aggressiveness wins. (d) The
quality of the "final" solution between steps 50 and 60 of the optimization.
Results for 50 trials of each optimization algorithm on five-dimensional randomly
generated functions are depicted in Figure 3. Many other experiments were performed in other dimensionalities and for modified versions of the algorithm, but
space does not permit detailed discussion here.
Finally we performed experiments with the simulated power-plant process in Figure 4. The catalyst controller adjusts the flow rate of the catalyst to achieve the
goal chemical A content. Its actions also affect chemical B content. The temperature controller adjusts the reaction chamber temperature to achieve the goal
chemical B content . The chemical contents are also affected by the flow rate which
is determined externally by demand for the product.
The task is to find the optimal values for the six controller parameters that minimize the total squared deviation from desired values of chemical A and chemical
B contents. The feedback loops from sensors to controllers have significant delay.
The controller gains on product demand are feedforward terms since there is significant delay in the effects of demand on the process. Finally, the performance of
the system may also depend on variations over time in the composition of the input
chemicals, which cannot be directly sensed.
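The objective being minimized can be sketched directly (the plant dynamics that map the six controller parameters to the sensed traces are not reproduced here; the traces would come from the simulator):

```python
def plant_cost(a_trace, b_trace, a_goal, b_goal):
    """Total squared deviation of chemical A and B contents from goal.

    a_trace, b_trace : sensed chemical A / B contents over the run.
    This is the quantity the six controller parameters are tuned to
    minimize; how the parameters produce the traces is plant-specific.
    """
    return sum((a - a_goal) ** 2 + (b - b_goal) ** 2
               for a, b in zip(a_trace, b_trace))
```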
Memory-based Stochastic Optimization
Figure 4: A simulated chemical process. Raw input chemicals and a catalyst supply feed a reaction chamber whose pumps are governed by demand for the product; sensors measure the chemical A and chemical B content of the product output. The catalyst controller combines a base input rate (base term), a sensor A gain (feedback term), and a product demand gain (feedforward term); the temperature controller likewise combines a base temperature, a sensor B gain, and a product demand gain. These six controller parameters are optimized to minimize the squared deviation from the goal chemical A and B content.
The total summed regrets of the optimization methods on 200 simulated steps were:
StayAtStart   FixedKW   AutoRSM   PMAX   COMAX
  10.86         2.82      1.32    3.30    4.50
In this case AutoRSM is best, considerably beating the best KW algorithm we could
find. In contrast, PMAX and COMAX did poorly: in this plant wild experiments
are very costly to PMAX, and COMAX is too cautious. StayAtStart is the regret
that would be incurred if all 200 steps were taken at the initial parameter setting.
3  UNOBSERVED DISTURBANCES
An apparent danger of learning a model is that if the environment changes, the out-of-date model will mean poor performance and very slow adaptation. The model-free methods, which use only recent data, will react more nimbly. A simple but
unsatisfactory answer to this is to use a model that implicitly (e.g. a neural net) or
explicitly (e.g. local weighted regression of the fifty most recent points) forgets. An
interesting possibility is to learn a model in a way that automatically determines
whether a disturbance has occurred, and if so, how far back to forget.
The following "adaptive forgetting" (AF) algorithm was added to the AutoRSM
algorithm: At each step, use all the previous data to generate 99% confidence
intervals on the output value at the current step. If the observed output is outside
the intervals, assume that a large change in the system has occurred and forget all
previous data. This algorithm is good for recognizing jumps in the plant's operating
characteristics and allows AutoRSM to respond to them quickly, but is not suitable
for detecting and handling process drift.
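A minimal sketch of this adaptive-forgetting check (here a plain normal interval on past prediction residuals stands in for the Bayesian posterior interval; z = 2.58 gives a two-sided 99% band):

```python
import statistics

def adaptive_forgetting_step(residuals, y_pred, y_obs, z=2.58):
    """One step of the adaptive-forgetting (AF) check.

    residuals : past (observed - predicted) errors of the model
    y_pred    : model prediction at the current query point
    y_obs     : observed plant output at that point
    If the new observation falls outside the z-sigma band, assume the
    plant has jumped and forget all previous data (keeping only the new
    residual); otherwise keep accumulating data.
    """
    r = y_obs - y_pred
    if len(residuals) >= 2 and abs(r) > z * statistics.stdev(residuals):
        return [r]            # disturbance detected: forget everything
    residuals.append(r)       # no disturbance: keep the old data
    return residuals
```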
We tested our algorithm's performance on the simulated plant for 450 steps. Operation began as before, but at step 150 there was an unobserved change in the
composition of the raw input chemicals. The total regrets of the optimization
methods were:
StayAtStart   FixedKW   AutoRSM   PMAX   AutoRSM/AF
  11.90         5.31      8.37    9.23      2.75
AutoRSM and PMAX do poorly because all their decisions after step 150 are based
partially on the invalid data collected before then. The AF addition to AutoRSM
solves the problem while beating the best KW by a factor of 2. Furthermore,
AutoRSM/AF gets 1.76 on the invariant task, thus demonstrating that it can be
used safely in cases where it is not known if the process is time varying.
A. W. MOORE, J. SCHNEIDER

4  DISCUSSION
Botros' thesis [Botros, 1994] discusses an algorithm similar to PMAX based on
local linear regression. [Salganicoff and Ungar, 1995] uses a decision tree to learn
a model. They use Gittins indices to suggest experiments: we believe that the
memory-based methods can benefit from them too. They, however, do not use
gradient information, and so require many experiments to search a 2D space.
IEMAX performed badly in these experiments, but optimism-guided exploration may
prove important in algorithms which check for potentially superior local optima.
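The trade-off between PMAX's aggressiveness, IEMAX's optimism and COMAX's caution can be caricatured with a posterior mean mu(x) and standard deviation sigma(x). These one-line scoring rules only illustrate that trade-off; they are not the paper's exact criteria:

```python
def next_experiment(candidates, mu, sigma, rule, z=2.0):
    """Pick the next query point by greedily optimizing a model criterion.

    mu, sigma : callables giving the learned model's posterior mean and
    standard deviation at a candidate input (assumed available).
    """
    scores = {
        "pmax":  lambda x: mu(x),                  # jump to the predicted optimum
        "iemax": lambda x: mu(x) + z * sigma(x),   # optimistic upper bound
        "comax": lambda x: mu(x) - z * sigma(x),   # cautious lower bound
    }
    return max(candidates, key=scores[rule])
```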
A possible extension is self-tuning optimization. Part way through an optimization, to estimate the best optimization parameters for an algorithm, we can run Monte Carlo simulations on sample functions drawn from the posterior global model given the current data.
This paper has examined the question of how much learning a Bayesian memory-based model can accelerate the convergence of stochastic optimization. We have proposed four algorithms for doing this, one based on an autonomous version of RSM;
the other three upon greedily jumping to optima of three criteria dependent on
predicted output and uncertainty. Empirically the model-based methods provide
significant gains over a highly tuned higher order model-free method.
References
[Atkeson, 1989] C. G. Atkeson. Using Local Models to Control Movement. In Proceedings of Neural Information Processing Systems Conference, November 1989.
[Botros, 1994] S. M. Botros. Model-Based Techniques in Motor Learning and Task Optimization. PhD Thesis, MIT Dept. of Brain and Cognitive Sciences, February 1994.
[Box and Draper, 1987] G. E. P. Box and N. R. Draper. Empirical Model-Building and Response Surfaces. Wiley, 1987.
[Cleveland and Devlin, 1988] W. S. Cleveland and S. J. Devlin. Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting. Journal of the American Statistical Association, 83(403):596-610, September 1988.
[DeGroot, 1970] M. H. DeGroot. Optimal Statistical Decisions. McGraw-Hill, 1970.
[Gratch et al., 1993] J. Gratch, S. Chien, and G. DeJong. Learning Search Control Knowledge for Deep Space Network Scheduling. In Proceedings of the 10th International Conference on Machine Learning. Morgan Kaufmann, June 1993.
[Greiner and Jurisica, 1992] R. Greiner and I. Jurisica. A statistical approach to solving the EBL utility problem. In Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI-92). MIT Press, 1992.
[Kaelbling, 1990] L. P. Kaelbling. Learning in Embedded Systems. PhD Thesis; Technical Report No. TR-90-04, Stanford University, Department of Computer Science, June 1990.
[Kushner and Clark, 1978] H. Kushner and D. Clark. Stochastic Approximation Methods for Constrained and Unconstrained Systems. Springer-Verlag, 1978.
[Maron and Moore, 1993] O. Maron and A. Moore. Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation. In Advances in Neural Information Processing Systems 6. Morgan Kaufmann, December 1993.
[Moore et al., 1995] A. W. Moore, C. G. Atkeson, and S. Schaal. Memory-based Learning for Control. Technical Report CMU-RI-TR-95-18, CMU Robotics Institute (Submitted for Publication), 1995.
[Moore, 1992] A. W. Moore. Fast, Robust Adaptive Control by Learning only Forward Models. In J. E. Moody, S. J. Hanson, and R. P. Lippman, editors, Advances in Neural Information Processing Systems 4. Morgan Kaufmann, April 1992.
[Robbins and Monro, 1951] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400-407, 1951.
[Salganicoff and Ungar, 1995] M. Salganicoff and L. H. Ungar. Active Exploration and Learning in Real-Valued Spaces using Multi-Armed Bandit Allocation Indices. In Proceedings of the 12th International Conference on Machine Learning. Morgan Kaufmann, 1995.
Parallel analog VLSI architectures for
computation of heading direction and
time-to-contact
Giacomo Indiveri
giacomo@klab.caltech.edu
Jorg Kramer
kramer@klab.caltech.edu
Christof Koch
koch@klab.caltech.edu
Division of Biology
California Institute of Technology
Pasadena, CA 91125
Abstract
We describe two parallel analog VLSI architectures that integrate
optical flow data obtained from arrays of elementary velocity sensors to estimate heading direction and time-to-contact. For heading
direction computation, we performed simulations to evaluate the
most important qualitative properties of the optical flow field and
determine the best functional operators for the implementation of
the architecture. For time-to-contact we exploited the divergence
theorem to integrate data from all velocity sensors present in the
architecture and average out possible errors.
1  Introduction
We have designed analog VLSI velocity sensors invariant to absolute illuminance
and stimulus contrast over large ranges that are able to achieve satisfactory performance in a wide variety of cases; yet such sensors, due to the intrinsic nature of
analog processing, lack a high degree of precision in their output values. To exploit
their properties at a system level, we developed parallel image processing architectures for applications that rely mostly on the qualitative properties of the optical
flow, rather than on the precise values of the velocity vectors. Specifically, we designed two parallel architectures that employ arrays of elementary motion sensors
for the computation of heading direction and time-to-contact. The application domain that we took into consideration for the implementation of such architectures,
is the promising one of vehicle navigation. Having defined the types of images to be
analyzed and the types of processing to perform, we were able to use a priori infor-
VLSI Architectures for Computation of Heading Direction and Time-to-contact
721
mation to integrate selectively the sparse data obtained from the velocity sensors
and determine the qualitative properties of the optical flow field of interest.
2  The elementary velocity sensors
A velocity sensing element that can be integrated into relatively dense arrays to
estimate in parallel optical flow fields has been successfully built [Kramer et al.,
1995]. Unlike most previous implementations of analog VLSI motion sensors, it
unambiguously encodes 1-D velocity over considerable velocity, contrast, and illuminance ranges, while being reasonably compact. It implements an algorithm
that measures the time of travel of a stimulus feature (here a rapid temporal change in intensity) between two fixed locations on the chip. In a first stage, rapid
dark-to-bright irradiance changes or temporal ON edges are converted into short
current pulses. Each current pulse then gives rise to a sharp voltage spike and a
logarithmically-decaying voltage signal at each edge detector location. The sharp
spike from one location is used to sample the analog voltage of the slowly-decaying
signal from an adjacent location. The sampled output voltage encodes the relative
time delay of the two signals, and therefore velocity, for the direction of motion
where the onset of the slowly-decaying pulse precedes the sampling spike. In the
other direction, a lower voltage is sampled. Each direction thus requires a separate
output stage.
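The time-of-travel scheme can be sketched numerically. The decay law and the constants below are assumptions standing in for the actual circuit; what matters is that sampling a logarithmically-decaying pulse after the travel time dt = spacing / v yields a voltage that grows with log-velocity, as in Figure 1:

```python
import math

SPACING_MM = 0.3   # 300 um detector spacing, as on the chip
V0 = 1.0           # pulse amplitude at edge onset (assumed)
K = 0.1            # decay gain of the log-decaying signal (assumed)
TAU_S = 0.01       # decay time constant in seconds (assumed)

def sampled_voltage(velocity_mm_per_s):
    """Voltage sampled at the second detector when its spike fires.

    The feature takes dt = spacing / v to travel between the two edge
    detectors; by then the first detector's signal has decayed from V0
    by K * log(1 + dt / TAU_S), so faster motion means a shorter dt and
    a higher sampled voltage, roughly linear in log(v).
    """
    dt = SPACING_MM / velocity_mm_per_s
    return V0 - K * math.log(1.0 + dt / TAU_S)
```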
Figure 1: Output voltage of a motion sensing element for the preferred direction
of motion of a sharp high-contrast ON edge versus image velocity under incandescent room illumination. Each data point represents the average of 5 successive
measurements.
As implemented with a 2 µm CMOS process, the size of an elementary bi-directional
motion element (including 30 transistors and 8 capacitances) is 0.045 mm^2. Fig. 1
shows that the experimental data confirms the predicted logarithmic encoding of
velocity by the analog output voltage. The data was taken by imaging a moving
high-contrast ON edge onto the chip under incandescent room illumination. The
calibration of the image velocity in the focal plane is set by the 300 µm spacing of
adjacent photoreceptors on the chip.
3  Heading direction computation
To simplify the computational complexity of the problem of heading direction detection, we restricted our analysis to pure translational motion, taking advantage of the
fact that for vehicle navigation it is possible to eliminate the rotational component
of motion using lateral accelerometer measurements from the vehicle. Furthermore,
to analyze the computational properties of the optical flow for typical vehicle navigation scenes, we performed software simulations on sequences of images obtained
from a camera with a 64 x 64 pixel silicon retina placed on a moving truck (courtesy
of B. Mathur at Rockwell Corporation). The optical flow fields have been computed
Figure 2: The sum of the horizontal components of the optical flow field is plotted
on the bottom of the figure. The presence of more than one zero-crossing is due to
different types of noise in the optical flow computation (e.g. quantization errors in
software simulations or device mismatch in analog VLSI circuits). The coordinate of
the heading direction is computed as the abscissa of the zero-crossing with maximum
steepness and closest to the abscissa of the previously selected unit.
by implementing an algorithm based on the image brightness constancy equation
[Verri et al., 1992] [Barron et al., 1994]. For the application domain considered and
the types of optical flow fields obtained from the simulations, it is reasonable to
assume that the direction of heading changes smoothly in time. Furthermore, being
interested in determining, and possibly controlling, the heading direction mainly
along the horizontal axis, we can greatly reduce the complexity of the problem by
considering one-dimensional arrays of velocity sensors. In such a case, if we assign
positive values to vectors pointing in one direction and negative values to vectors
pointing in the opposite direction, the heading direction location will correspond to
the point closest to the zero-crossing. Under these assumptions, the computation
of the horizontal coordinate of the heading direction has been carried out using the
following functional operators: thresholding on the horizontal components of the
optical flow vectors; spatial smoothing on the resulting values; detection and evaluation of the steepness of the zero-crossings present in the array and finally selection
of the zero-crossing with maximum steepness. The zero-crossing with maximum
steepness is selected only if its position is in the neighborhood of the previously
selected zero-crossing. This helps to eliminate errors due to noise and device mismatch and assures that the computed heading direction location will shift smoothly
in time. Fig. 2 shows a result of the software simulations, on an image of a road
with a shadow on the left side.
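The operator chain just described can be sketched on a 1-D array (the 0.05 threshold, the 3-tap smoothing and the neighborhood size are illustrative stand-ins, not the circuit's actual parameters):

```python
def heading_zero_crossing(vx, prev=None, window=2):
    """Sketch of the heading-direction operator chain on a 1-D array.

    vx   : horizontal optical-flow components along the sensor array
    prev : index of the previously selected zero-crossing, if any
    """
    # 1. Threshold: suppress small, noisy flow components.
    t = [v if abs(v) > 0.05 else 0.0 for v in vx]
    # 2. Spatial smoothing: 3-tap moving average (diffuser-network stage).
    s = [sum(t[max(0, i - 1):i + 2]) / len(t[max(0, i - 1):i + 2])
         for i in range(len(t))]
    # 3. Zero-crossings with their steepness |s[i+1] - s[i]|.
    zc = [(i, abs(s[i + 1] - s[i])) for i in range(len(s) - 1)
          if s[i] < 0 <= s[i + 1] or s[i] > 0 >= s[i + 1]]
    # 4. Prefer crossings near the previous estimate (fall back if none).
    if prev is not None:
        near = [c for c in zc if abs(c[0] - prev) <= window]
        zc = near or zc
    # 5. Select the steepest remaining crossing.
    return max(zc, key=lambda c: c[1])[0] if zc else None
```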
All of the operators used in the algorithm have been implemented with analog
circuits (see Fig. 3 for a block diagram of the architecture). Specifically, we have
Figure 3: Block diagram of the architecture for detecting heading direction: the
first layer of the architecture computes the velocity of the stimulus; the second
layer converts the voltage output of the velocity sensors into a positive/negative
current performing a threshold operation; the third layer performs a linear smoothing operation on the positive and negative halfs of the input current; the fourth
layer detects zero-crossings by comparing the intensity of positive currents from one
pixel with negative currents from the neighboring pixel; the top layer implements a
winner-take-all network with distributed excitation, which selects the zero-crossing
with maximum steepness.
designed test chips in which the thresholding function has been implemented using
a transconductance amplifier whose current represents the output signal [Mead,
1989], spatial smoothing has been obtained using a circuit that separates positive
currents and negative currents into two distinct paths and feeds them into two
layers of current-mode diffuser networks [Boahen and Andreou, 1992], the zero-crossing detection and evaluation of its steepness has been implemented using a
newly designed circuit block based on a modification of the simple current-correlator
[Delbriick, 1991], and the selection of the zero-crossing with maximum steepness
closest to the previously selected unit has been implemented using a winner-take-all circuit with distributed excitation [Morris et al., 1995]. The schematics of the
former three circuits, which implement the top three layers of the diagram of Fig. 3,
are shown in Fig. 4.
Fig. 5 shows the output of a test chip in which all blocks up to the diffuser network
(without the zero-crossing detection stages) were implemented. The velocity sensor
layout was modified to maximize the number of units in the 1-D array. Each velocity sensor measures 60 µm x 802 µm. On a (2.2 mm)^2 size chip we were able to fit 23
units. The shown results have been obtained by imaging on the chip expanding or
contracting stimuli using black and white edges wrapped around a rotating drum
and reflected by an adjacent tilted mirror. The point of contact between drum
and mirror corresponding to the simulated heading direction has been imaged approximately onto the 15th unit of the array. As shown, the test chip considered
does not achieve 100% correct performance due to errors that arise mainly from the
presence of parasitic capacitors in the modified part of the velocity sensor circuits;
nonetheless, at least from a qualitative point of view, the data confirms the results
obtained from software simulations and demonstrates the validity of the approach
considered.
Figure 4: Circuit schematics of the smoothing, zero-detection and winner-take-all
blocks respectively.
" ..................-r-~...,...................-........-.-.................-r-.,.......,.--.-,
"
.B
.0,
.0'
.0.
'O?.~~'~~~~7~.~.~'.~'~,~I2~"~'~.~'.~"~'~"~.~,,~~~'~
' 22
lk1I P9ton
(a)
"!-,~,~,~,~.~'~'''''7-'~,~,."""~'~,,~,,~,,~,"".~,.""",,:-',~,.,. "~'O~"~22'
"""""''''''
(b)
Figure 5: Zero crossings computed as difference between smoothed positive currents
and smoothed negative currents: (a) for expanding stimuli; (b) for contracting
stimuli. The "zero" axis is shifted due to a systematic offset of 80 nA.
4  Time-to-contact
The time-to-contact can be computed by exploiting qualitative properties of the
optical flow field such as expansion or contraction [Poggio et al., 1991]. The divergence theorem, or Gauss theorem, as applied to a plane, shows that the integral
over a surface patch of the divergence of a vector field is equal to the line integral
along the patch boundary of the component of the field normal to the boundary.
Since a camera approaching a rigid object sees a linear velocity field, where the
velocity vectors are proportional to their distance from the focus-of-expansion, the
divergence is constant over the image plane. By integrating the radial component
of the optical flow field along the circumference of a circle, the time-to-contact can
thus be estimated, independently of the position of the focus-of-expansion.
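Concretely, assuming pure translation so that the radial flow grows linearly with the distance $r$ from the focus-of-expansion, $v(r) = r/T$ with $T$ the time-to-contact, the divergence is the constant $\nabla\cdot\vec{v} = 2/T$, and Gauss's theorem on a circle of radius $R$ gives

$$\oint_{r=R} \vec{v}\cdot\hat{n}\,ds \;=\; \iint_{r\le R} \nabla\cdot\vec{v}\;dA \;=\; \frac{2\pi R^{2}}{T}.$$

Sampling the radial component at $N$ points spaced $2\pi R/N$ apart approximates the boundary integral by $\sum_{k=1}^{N} v_k\,(2\pi R/N)$, which rearranges to $T \approx N R / \sum_k v_k$, i.e. Eq. (1) below.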
We implemented this algorithm with an analog integrated circuit, where an array
of twelve motion sensing elements is arranged on a circle, such that each element
measures velocity radially. According to the Gauss theorem, the time-to-contact is
then approximated by
T = \frac{N R}{\sum_{k=1}^{N} v_k}     (1)

where N denotes the number of elements, R the radius of the circle, and v_k the radial
velocity components at the locations of the elements. For each element, temporal
aliasing is prevented by comparing the output voltages of the two directions of
motion and setting the lower one, corresponding to the null direction, to zero. The
output voltages are then used to control subthreshold transistor currents. Since
these voltages are logarithmically dependent on velocity, the transistor currents are
proportional to the measured velocities. The sum of the velocity components is
thus calculated by aggregating the currents from all elements on two lines, one for
outward motion and one for inward motion, and taking the difference of the total
currents. The resulting bi-directional output current is an inverse function of the
signed time-to-contact.
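As a sketch (not the chip's analog current-summing circuit, but the same arithmetic), Eq. (1) reads:

```python
def time_to_contact(radial_velocities, radius):
    """Time-to-contact T = N * R / sum(v_k), as in Eq. (1).

    radial_velocities : the N radial flow samples v_k measured on a
    circle of the given radius; positive = outward (expansion, i.e.
    approach), negative = inward (contraction).  The sign of T encodes
    expansion vs. contraction, as in the chip's bi-directional output.
    """
    n = len(radial_velocities)
    total = sum(radial_velocities)
    if total == 0:
        return float("inf")  # no net expansion: contact is never reached
    return n * radius / total

# Twelve sensors on a 0.5 mm circle, each seeing 2 mm/s of outward flow,
# give T = 12 * 0.5 / 24 = 0.25 s.
```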
Figure 6: Output current of the time-to-contact sensor as a function of simulated
time-to-contact under incandescent room illumination. The theoretical fit predicts
an inverse relationship.
The circuit has been implemented on a chip with a size of (2.2 mm)^2 using 2 µm
technology. The photo diodes of the motion sensing elements are arranged on two
concentric circles with radii of 400 µm and 600 µm respectively. In order to simulate
an approaching or withdrawing object, a high-contrast spiral stimulus was printed
onto a rotating disk. Its image was projected onto the chip with a microscope lens
under incandescent room illumination. The focus-of-expansion was approximately
centered with respect to the photo diode circles. The averaged output current is
shown as a function of simulated time-to-contact with a theoretical fit in Fig. 6.
The expected inverse relationship is qualitatively observed and the sign (expansion
or contraction) is robustly encoded. However, the deviation of the output current
from its average can be substantial: Since the output voltage of each motion sensing
element decays slowly due to leak currents and since the spiral stimulus causes a
serial update of the velocity values along the array, a step change in the output
current is observed upon each update, followed by a slow decay. The effect is
aggravated, if the individual motion sensing elements measure significantly differing
velocities. This is generally the case, because the focus-of-expansion is usually not
centered with respect to the sensor and because of inaccuracies in the velocity
measurements due to circuit offsets, noise, and the aperture problem [Verri et al.,
1992]. The integrative property of the algorithm is thus highly desirable, and more
robust data would be obtained from an array with more elements and stimuli with
higher edge densities.
5  Conclusions
We have developed parallel architectures for motion analysis that bypass the problem of low precision in analog VLSI technology by exploiting qualitative properties
of the optical flow. The correct functionality of the devices built has, at least from a
qualitative point of view, confirmed the validity of the approach followed and
induced us to continue this line of research. We are now in the process of designing more accurate circuits that implement the operators used in the architectures
proposed.
Acknowledgments
This work was supported by grants from ONR, ERe and Daimler-Benz AG. The
velocity sensor was developed in collaboration with R. Sarpeshkar. The chips were
fabricated through the MOSIS VLSI Fabrication Service.
References
[Barron et al., 1994] J. L. Barron, D. J. Fleet, and S. S. Beauchemin. Performance of optical flow techniques. International Journal of Computer Vision, 12(1):43-77, 1994.
[Boahen and Andreou, 1992] K. A. Boahen and A. G. Andreou. A contrast sensitive silicon retina with reciprocal synapses. In NIPS91 Proceedings. IEEE, 1992.
[Delbriick, 1991] T. Delbriick. "Bump" circuits for computing similarity and dissimilarity of analog voltages. In Proc. IJCNN, pages 1-475-479, June 1991.
[Kramer et al., 1995] J. Kramer, R. Sarpeshkar, and C. Koch. An analog VLSI velocity sensor. In Proc. Int. Symp. Circuits and Systems ISCAS '95, pages 413-416, Seattle, WA, May 1995.
[Mead, 1989] C. A. Mead. Analog VLSI and Neural Systems. Addison-Wesley, Reading, 1989.
[Morris et al., 1995] T. G. Morris, D. M. Wilson, and S. P. DeWeerth. Analog VLSI circuits for manufacturing inspection. In Conference for Advanced Research in VLSI, Chapel Hill, North Carolina, March 1995.
[Poggio et al., 1991] T. Poggio, A. Verri, and V. Torre. Green theorems and qualitative properties of the optical flow. Technical report, MIT, 1991. Internal Lab. Memo 1289.
[Verri et al., 1992] A. Verri, M. Straforini, and V. Torre. Computational aspects of motion perception in natural and artificial vision systems. Phil. Trans. R. Soc. Lond. B, 337:429-443, 1992.
PART VI
SPEECH AND SIGNAL PROCESSING
Recurrent Neural Networks for Missing or
Asynchronous Data
Yoshua Bengio
Dept. Informatique et
Recherche Operationnelle
Universite de Montreal
Montreal, Qc H3C-3J7
bengioy@iro.umontreal.ca

Francois Gingras
Dept. Informatique et
Recherche Operationnelle
Universite de Montreal
Montreal, Qc H3C-3J7
gingras@iro.umontreal.ca
Abstract
In this paper we propose recurrent neural networks with feedback into the input
units for handling two types of data analysis problems. On the one hand, this
scheme can be used for static data when some of the input variables are missing.
On the other hand, it can also be used for sequential data, when some of the
input variables are missing or are available at different frequencies. Unlike in the
case of probabilistic models (e.g. Gaussian) of the missing variables, the network
does not attempt to model the distribution of the missing variables given the
observed variables. Instead it is a more "discriminant" approach that fills in the
missing variables for the sole purpose of minimizing a learning criterion (e.g., to
minimize an output error).
1
Introduction
Learning from examples implies discovering certain relations between variables of interest. The
most general form of learning requires to essentially capture the joint distribution between these
variables. However, for many specific problems, we are only interested in predicting the value
of certain variables when the others (or some of the others) are given. A distinction is therefore
made between input variables and output variables. Such a task requires less information (and
less parameters, in the case of a parameterized model) than that of estimating the full joint
distribution. For example in the case of classification problems, a traditional statistical approach
is based on estimating the conditional distribution of the inputs for each class as well as the
class prior probabilities (thus yielding the full joint distribution of inputs and classes). A more
discriminant approach concentrates on estimating the class boundaries (and therefore requires
less parameters), as for example with a feedforward neural network trained to estimate the output
class probabilities given the observed variables.
However, for many learning problems, only some of the input variables are given for each particular training case, and the missing variables differ from case to case. The simplest way to deal
with this missing data problem consists in replacing the missing values by their unconditional
mean. It can be used with "discriminant" training algorithms such as those used with feedforward neural networks. However, in some problems, one can obtain better results by taking
advantage of the dependencies between the input variables. A simple idea therefore consists
-also, AT&T Bell Labs, Holmdel, NJ 07733
Y. BENGIO, F. GINGRAS
396
Figure 1: Architectures of the recurrent networks in the experiments. On the left a 90-3-4
architecture for static data with missing values, on the right a 6-3-2-1 architecture with multiple
time-scales for asynchronous sequential data. Small squares represent a unit delay. The number
of units in each layer is inside the rectangles. The time scale at which each layer operates is on
the right of each rectangle.
in replacing the missing input variables by their conditional expected value, when the observed
input variables are given. An even better scheme is to compute the expected output given the
observed inputs, e.g. with a mixture of Gaussian. Unfortunately, this amounts to estimating the
full joint distribution of all the variables. For example, with ni inputs, capturing the possible
effect of each observed variable on each missing variable would require O(ni^2) parameters (at least
one parameter to capture some co-occurrence statistic on each pair of input variables) . Many
related approaches have been proposed to deal with missing inputs using a Gaussian (or Gaussian
mixture) model (Ahmad and Tresp, 1993; Tresp, Ahmad and Neuneier, 1994; Ghahramani and
Jordan , 1994). In the experiments presented here, the proposed recurrent network is compared
with a Gaussian mixture model trained with EM to handle missing values (Ghahramani and
Jordan , 1994).
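The simplest baseline discussed above, replacing each missing value by its unconditional mean, can be sketched in a few lines of NumPy (the `np.nan` missing-value convention here is an illustrative choice, not the paper's):

```python
import numpy as np

def mean_impute(X):
    """Replace each np.nan entry by the unconditional mean of its variable,
    estimated from the observed entries of that column."""
    X = np.asarray(X, dtype=float).copy()
    col_mean = np.nanmean(X, axis=0)          # per-variable mean over observed entries
    rows, cols = np.where(np.isnan(X))        # positions of the missing entries
    X[rows, cols] = col_mean[cols]
    return X
```

This discards the dependencies between input variables, which is exactly the shortcoming that motivates the conditional-expectation and recurrent-network approaches described next.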
The approach proposed in section 2 is more economical than the traditional Gaussian-based
approaches for two reasons . Firstly, we take advantage of hidden units in a recurrent network,
which might be less numerous than the inputs. The number of parameters depends on the
product of the number of hidden units and the number of inputs. The hidden units only need to
capture the dependencies between input variables which have some dependencies, and which are
useful to reducing the output error. The second advantage is indeed that training is based on
optimizing the desired criterion (e.g., reducing an output error), rather than predIcting as well
as possible the values of the missmg inputs. The recurrent network is allowed to relax for a few
iterations (typically as few as 4 or 5) in order to fill-in some values for the missing inputs and
produce an output. In section 3 we present experimental results with this approach , comparing
the results with those obtained with a feedforward network.
In section 4 we propose an extension of this scheme to sequential data. In this case, the network
is not relaxing: inputs keep changing with time and the network maps an input sequence (with
possibly missing values) to an output sequence. The main advantage of this extension is that
It allows to deal with sequential data in which the variables occur at different frequencies. This
type of problem is frequent for example with economic or financial data. An experiment with
asynchronous data is presented in section 5.
2
Relaxing Recurrent Network for Missing Inputs
Networks with feedback such as those proposed in (Almeida, 1987; Pineda, 1989) can be applied
to learning a static input/output mapping when some of the inputs are missing. In both cases,
however, one has to wait for the network to relax either to a fixed point (assuming it does find
one) or to a "stable distribution" (in the case of the Boltzmann machine). In the case of fixed-point recurrent networks, the training algorithm assumes that a fixed point has been reached.
The gradient with respect to the weights is then computed in order to move the fixed point
to a more desirable position. The approach we have preferred here avoids such an assumption.
Instead it uses a more explicit optimization of the whole behavior of the network as it unfolds
in time, fills-in the missing inputs and produces an output. The network is trained to minimize
some function of its output by back-propagation through time.
Computation of Outputs Given Observed Inputs

Given:  input vector u = [u_1, u_2, ..., u_ni]
Result: output vector y = [y_1, y_2, ..., y_no]

1. Initialize for t = 0:
   For i = 1 ... nu, x_{0,i} <- 0
   For i = 1 ... ni, if u_i is missing then x_{0,I(i)} <- E(i),
   else x_{0,I(i)} <- u_i.

2. Loop over time:
   For t = 1 to T
     For i = 1 ... nu
       If i = I(k) is an input unit and u_k is not missing then
         x_{t,i} <- u_k
       Else
         x_{t,i} <- (1 - gamma) x_{t-1,i} + gamma f(sum_{l in S_i} w_l x_{t-d_l,p_l})
       where S_i is a set of links into unit i, each from a unit p_l,
       with weight w_l and a discrete delay d_l
       (but terms for which t - d_l < 0 were not considered).

3. Collect outputs by averaging at the end of the sequence:
   y_i <- sum_t v_t x_{t,O(i)}

Back-Propagation

The back-propagation computation requires an extra set of variables x' and w',
which will contain respectively dE/dx and dE/dw after this computation.

Given:  output gradient vector dE/dy
Result: input gradient dE/du and parameter gradient dE/dw

1. Initialize unit gradients using the outside gradient:
   Initialize x'_{t,i} <- 0 for all t and i.
   For i = 1 ... no, initialize x'_{t,O(i)} <- v_t dE/dy_i

2. Backward loop over time:
   For t = T to 1
     For i = nu ... 1
       If i = I(k) is an input unit and u_k is not missing then
         no backward propagation
       Else
         x'_{t-1,i} <- x'_{t-1,i} + (1 - gamma) x'_{t,i}
         For l in S_i
           If t - d_l > 0
             x'_{t-d_l,p_l} <- x'_{t-d_l,p_l}
                               + gamma w_l f'(sum_{l' in S_i} w_{l'} x_{t-d_{l'},p_{l'}}) x'_{t,i}
             w'_l <- w'_l + gamma f'(sum_{l' in S_i} w_{l'} x_{t-d_{l'},p_{l'}}) x_{t-d_l,p_l} x'_{t,i}

3. Collect input gradients:
   For i = 1 ... ni,
     If u_i is missing, then dE/du_i <- 0
     Else dE/du_i <- sum_t x'_{t,I(i)}
The observed inputs are clamped for the whole duration of the sequence. The missing units
corresponding to missing inputs are initialized to their unconditional expectation and their value
is then updated using the feedback links for the rest of the sequence (just as if they were hidden
units). To help stability of the network and prevent it from finding periodic solutions (in which
the outputs have a correct output only periodically), output supervision is given for several time
steps. A fixed vector v, with Vt > 0 and I':t Vt = 1 specifies a weighing scheme that distributes
the responsibility for producing the correct output among different time steps. Its purpose is to
encourage the network to develop stable dynamics which gradually converge toward the correct
output (thus the weights v_t were chosen to gradually increase with t).
The neuron transfer function was a hyperbolic tangent in our experiments. The inertial term
weighted by gamma (in step 2 of the forward propagation algorithm) was used to help the
network find stable solutions. The parameter gamma was fixed by hand. In the experiments described
below, a value of 0.7 was used, but nearby values yielded similar results.
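A toy NumPy sketch of such a relaxing forward pass follows. It simplifies the algorithm above: all links have delay 1 (a single feedback matrix W), the transfer function is a tanh, and every non-input unit is read out as an output; the weights and the unconditional means are illustrative inputs, not the paper's:

```python
import numpy as np

def relax_forward(u, W, x_mean, gamma=0.7, T=5):
    """Relaxing forward pass: observed inputs stay clamped for the whole
    sequence, missing inputs (np.nan) start at their unconditional mean
    E(i) and are updated through the feedback weights like hidden units;
    outputs are a v_t-weighted average over the T relaxation steps."""
    n = W.shape[0]                       # total number of units
    ni = len(u)                          # input units come first, by convention here
    missing = np.isnan(u)
    x = np.zeros(n)
    x[:ni] = np.where(missing, x_mean, u)
    v = np.arange(1.0, T + 1.0)
    v /= v.sum()                         # weights v_t > 0 increasing with t, summing to 1
    y = np.zeros(n - ni)                 # every non-input unit read out (simplification)
    for t in range(T):
        x_new = (1.0 - gamma) * x + gamma * np.tanh(W @ x)   # delay-1 links only
        x_new[:ni] = np.where(missing, x_new[:ni], u)        # re-clamp observed inputs
        x = x_new
        y += v[t] * x[ni:]
    return y
```

With tanh units and v summing to one, the averaged outputs stay in (-1, 1), matching the bounded targets used for classification.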
This module can therefore be combined within a hybrid system composed of several modules by
propagating gradient through the combined system (as in (Bottou and Gallinari, 1991)). For
example, as in Figure 2, there might be another module taking as input the recurrent network's
output. In this case the recurrent network can be seen as a feature extractor that accepts
data with missing values in input and computes a set of features that are never missing. In
another example of hybrid system the non-missing values in input of the recurrent network are
computed by another, upstream module (such as the preprocessing normalization used in our
experiments), and the recurrent network would provide gradients to this upstream module (for
example to better tune its normalization parameters) .
3
Experiments with Static Data
A network with three layers (inputs, hidden, outputs) was trained to classify data with missing values from the audiology database. This database was made public thanks to Jergen and
Quinlan, was used by (Bareiss and Porter, 1987), and was obtained from the UCI Repository of
machine learning databases (ftp.ics.uci.edu: pub/machine-learning-databases). The original database has 226 patterns, with 69 attributes, and 24 classes. Unfortunately, most of the
classes have only 1 exemplar. Hence we decided to cluster the classes into four groups. To do
so, the average pattern for each of the 24 classes was computed, and the K-Means clustering
algorithm was then applied on those 24 prototypical class "patterns", to yield the 4 "superclasses" used in our experiments. The multi-valued input symbolic attributes (with more than
2 possible values) were coded with a "one-out-of-n" scheme, using n inputs (all zeros except
the one corresponding to the attribute value). Note that a missing value was represented with a
special numeric value recognized by the neural network module. The inputs which were constant
over the training set were then removed . The remaining 90 inputs were finally standardized
(by computing mean and standard deviation) and transformed by a saturating non-linearity (a
scaled hyperbolic tangent). The output class is coded with a "one-out-of-4" scheme, and the
recognized class is the one for which the corresponding output has the largest value.
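The "one-out-of-n" coding with a special missing marker might look like the helper below; `np.nan` stands in for the paper's special numeric value, which is not specified, so treat it as an assumption:

```python
import numpy as np

def one_out_of_n(value, categories):
    """'One-out-of-n' coding for a multi-valued symbolic attribute: all
    zeros except the slot of the observed value; a missing attribute
    (None) is coded with a special marker in every slot (np.nan here)."""
    if value is None:
        return np.full(len(categories), np.nan)
    code = np.zeros(len(categories))
    code[categories.index(value)] = 1.0
    return code
```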
The architecture of the network is depicted in Figure 1 (left) . The length of each relaxing sequence
in the experiments was 5. Higher values would not bring any measurable improvements, whereas
for shorter sequences performance would degrade. The number of hidden units was varied, with
the best generalization performance obtained using 3 hidden units.
The recurrent network was compared with feedforward networks as well as with a mixture of
Gaussians. For the feedforward networks, the missing input values were replaced by their unconditional expected value. They were trained to minimize the same criterion as the recurrent
networksl i.e., the sum of squared differences between network output and desired output. Several feedtorward neural networks with varying numbers of hidden units were trained. The best
generalization was obtained with 15 hidden units. Experiments were also performed with no
hidden units and two hidden layers (see Table 1) . We found that the recurrent network not only
generalized better but also learned much faster (although each pattern required 5 times more
work because of the relaxation), as depicted in Figure 3.
The recurrent network was also compared with an approach based on a Gaussian and Gaussian
mixture model of the data. We used the algorithm described in (Ghahramani and Jordan,
1994) for supervised learning from incomplete data with the EM algorithm. The whole joint
input/output distribution is modeled using a mixture model with Gaussians (for the inputs) and
multinomial (outputs) components:
P(X = x, C = c) = sum_j P(w_j) mu_{jc} (2 pi)^{-n/2} |Sigma_j|^{-1/2} exp{ -1/2 (x - mu_j)' Sigma_j^{-1} (x - mu_j) }

where x is the input vector, c the output class, and P(w_j) the prior probability of component j of
the mixture. The mu_{jc} are the multinomial parameters; mu_j and Sigma_j are the Gaussian mean vector
Figure 2: Example of hybrid modular system, using the recurrent network (middle) to extract
features from patterns which may have missing values. It can be combined with upstream
modules (e.g., a normalizing preprocessor, right) and downstream modules (e.g., a static classifier,
left) . Dotted arrows show the backward flow of gradients.
Figure 3: Evolution of training and test error for the recurrent network and for the best of
the feedforward networks (90-15-4): average classification error w.r.t. training epoch, (with 1
standard deviation error bars, computed over 10 trials).
and covariance matrix for component j. Maximum likelihood training is applied as explained
in (Ghahramani and Jordan, 1994), taking missing values into account (as additional missing
variables of the EM algorithm).
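For a single Gaussian component, the conditional expectation of the missing coordinates given the observed ones has a closed form, which is the quantity this kind of model fills in; a sketch of the standard formula (not code from the cited work):

```python
import numpy as np

def conditional_mean(x, mu, Sigma):
    """E[x_missing | x_observed] under a single Gaussian N(mu, Sigma):
    mu_m + Sigma_mo Sigma_oo^{-1} (x_o - mu_o), with np.nan marking the
    missing coordinates of x."""
    m = np.isnan(x)                               # missing coordinates
    o = ~m                                        # observed coordinates
    Soo_inv = np.linalg.inv(Sigma[np.ix_(o, o)])
    return mu[m] + Sigma[np.ix_(m, o)] @ Soo_inv @ (x[o] - mu[o])
```

With ni inputs this requires the full covariance, which is the O(ni^2)-parameter cost the text contrasts with the recurrent network.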
For each architecture in Table 1, 10 training trials were run with a different subset of 200
training and 26 test patterns (and different initial weights for the neural networks) . The recurrent
network was clearly superior to the other architectures, probably for the reasons discussed in the
conclusion. In addition, we have shown graphically the rate of convergence during training of the
best feedforward network (90-15-4) as well as the best recurrent network (90-3-4), in Figure 3.
Clearly, the recurrent network not only performs better at the end of training but also learns
much faster .
4
Recurrent Network for Asynchronous Sequential Data
An important problem with many sequential data analysis problems such as those encountered
in financial data sets is that different variables are known at different frequencies, at different
times (phase), or are sometimes missing. For example, some variables are given daily, weekly,
monthly, quarterly, or yearly. Furthermore, some variables may not even be given for some of
the periods or the precise timing may change (for example the date at which a company reports
financial performance may vary).
Therefore, we propose to extend the algorithm presented above for static data with missing
values to the general case of sequential data with missing values or asynchronous variables. For
time steps at which a low-frequency variable is not given, a missing value is assumed in input.
Again, the feedback links from the hidden and output units to the input units allow the network
Table 1: Comparative performances of recurrent network, feedforward network, and Gaussian
mixture density model on audiology data. The average percentage of classification error is shown
after training, for both training and test sets, and the standard deviation in parenthesis, for 10
trials.
                             Training set error   Test set error
90-3-4 Recurrent net         0.3 (0.6)            2.?(?)
90-6-4 Recurrent net         0 (0)                3.8 (4)
90-25-4 Feedforward net      0.8 (0.4)            15 (7.3)
90-15-4 Feedforward net      1 (0.96)             13.8 (7)
90-10-6-4 Feedforward net    16f5.3  298.9  64.9
90-6-4 Feedforward net       18.5 (?)             27 (10)
90-2-4 Feedforward net       22 (1)               33 (8)
90-4 Feedforward net         35 (1.6)             38 (9.3)
1 Gaussian                   36 (1.5)             38 (9.2)
4 Gaussians Mixture          36 (2.1)
8 Gaussians Mixture          38 (9.3)
to "complete" the missing data. The main differences with the static case are that the inputs
and outputs vary with t (we use Ut and Yt at each time step instead of U and y). The training
algorithm is otherwise the same.
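Preparing such asynchronous inputs amounts to masking a low-frequency variable at every time step where it is not observed; a small helper sketch (the `np.nan` convention and the period/phase parameters are illustrative):

```python
import numpy as np

def asynchronize(seq, period, phase=0):
    """Return a copy of a 1-D sequence in which the variable is observed
    only every `period` time steps (offset by `phase`); all other steps
    are marked missing with np.nan."""
    out = np.asarray(seq, dtype=float).copy()
    t = np.arange(len(out))
    out[(t - phase) % period != 0] = np.nan
    return out
```

Applying this with period 5 to half of the input variables reproduces the masking scheme used in the experiment of section 5, where those variables are hidden 4 out of every 5 time steps.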
5
Experiments with Asynchronous Data
To evaluate the algorithm, we have used a recurrent network with random weights, and feedback
links on the input units to generate artificial data. The generating network has 6 input, 3
hidden, and 1 output units. The hidden layer is connected to the input layer (1 delay). The hidden
layer receives inputs with delays 0 and 1 from the input layer and with delay 1 from itself. The
output layer receives inputs from the hidden layer. At the initial time step as well as at 5% of
the time steps (chosen randomly), the input units were clamped with random values to introduce
some further variability. The missing values were then completed by the recurrent network. To
generate asynchronous data, half of the inputs were then hidden with missing values 4 out of every
5 time steps. 100 training sequences and 50 test sequences were generated. The learning problem
is therefore a sequence regression problem with missing and asynchronous input variables.
Preliminary comparative experiments show a clear advantage to completing the missing values
(due to the different frequencies of the input variables) with the recurrent network, as shown
in Figure 4. The recognition recurrent network is shown on the right of Figure 1. It has multiple
time scales (implemented with subsampling and oversampling, as in TDNNs (Lang, Waibel and
Hinton, 1990) and reverse-TDNNs (Simard and LeCun, 1992)), to facilitate the learning of such
asynchronous data. The static network is a time-delay neural network with 6 input, 8 hidden,
and 1 output unit, and connections with delays 0,2, and 4 from the input to hidden and hidden to
output units. The "missing values" for slow-varying variables were replaced by the last observed
value in the sequence. Experiments with 4 and 16 hidden units yielded similar results.
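The multiple-time-scale mechanism (subsampling, as in TDNNs, and oversampling, as in reverse-TDNNs) can be sketched as a slow layer that only recomputes every few steps; the two-layer structure, tanh units, and period below are illustrative assumptions, not the exact 6-3-2-1 architecture:

```python
import numpy as np

def multiscale_step(t, x_fast, x_slow, W_in_fast, W_fast_slow, u_t, period=5):
    """One time step of a two-time-scale sketch: the fast layer updates
    every step from the current inputs, while the slow layer is subsampled,
    recomputing only every `period` steps and holding its value otherwise."""
    x_fast = np.tanh(W_in_fast @ u_t + x_fast)   # fast layer sees inputs each step
    if t % period == 0:                          # slow layer at a coarser time scale
        x_slow = np.tanh(W_fast_slow @ x_fast)
    return x_fast, x_slow
```

Holding the slow layer between its updates is what lets slowly-varying (e.g. weekly or monthly) information persist across the intervening fast steps.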
6
Conclusion
When there are dependencies between input variables, and the output prediction can be improved by taking them into account, we have seen that a recurrent network with input feedback
can perform significantly better than a simpler approach that replaces missing values by their
unconditional expectation. According to us, this explains the significant improvement brought
by using the recurrent network instead of a feedforward network in the experiments.
On the other hand, the large number of input variables (n; = 90, in the experiments) most likely
explains the poor performance of the mixture of Gaussian model in comparison to both the static
networks and the recurrent network. The Gaussian model requires estimating O(ni^2) parameters
and inverting large covariance matrices.
The approach to handling missing values presented here can also be extended to sequential data
with missing or asynchronous variables. As our experiments suggest, for such problems, using
recurrence and multiple time scales yields better performance than static or time-delay networks
for which the missing values are filled using a heuristic.
Figure 4: Test set mean squared error on the asynchronous data. Top: static network with time
delays. Bottom: recurrent network with feedback to input values to complete missing data.
References
Ahmad, S. and Tresp, V. (1993). Some solutions to the missing feature problem in vision. In
Hanson, S. J., Cowan, J. D., and Giles, C. L., editors, Advances in Neural Information
Processing Systems 5, San Mateo, CA. Morgan Kaufmann Publishers.
Almeida, L. (1987). A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In Caudill, M. and Butler, C., editors, IEEE International Conference
on Neural Networks, volume 2, pages 609-618, San Diego 1987. IEEE, New York.
Bareiss, E. and Porter, B. (1987). Protos: An exemplar-based learning apprentice. In Proceedings
of the 4th International Workshop on Machine Learning, pages 12-23, Irvine, CA. Morgan
Kaufmann.
Bottou, L. and Gallinari, P. (1991). A framework for the cooperation of learning algorithms. In
Lippman, R. P., Moody, R., and Touretzky, D. S., editors, Advances in Neural Information
Processing Systems 3, pages 781-788, Denver, CO.
Ghahramani, Z. and Jordan, M. I. (1994). Supervised learning from incomplete data via an
EM approach. In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural
Information Processing Systems 6, San Mateo, CA. Morgan Kaufmann.
Lang, K. J., Waibel, A. H., and Hinton, G. E. (1990). A time-delay neural network architecture
for isolated word recognition. Neural Networks, 3:23-43.
Pineda, F. (1989). Recurrent back-propagation and the dynamical approach to adaptive neural
computation. Neural Computation, 1:161-172.
Simard, P. and LeCun, Y. (1992). Reverse TDNN: An architecture for trajectory generation. In
Moody, J., Hanson, S., and Lippmann, R., editors, Advances in Neural Information Processing
Systems 4, pages 579-588, Denver, CO. Morgan Kaufmann, San Mateo.
Tresp, V., Ahmad, S., and Neuneier, R. (1994). Training neural networks with deficient data.
In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural Information
Processing Systems 6, pages 128-135. Morgan Kaufmann Publishers, San Mateo, CA.
A Realizable Learning Task which
Exhibits Overfitting
Siegfried Bos
Laboratory for Information Representation, RIKEN,
Hirosawa 2-1, Wako-shi, Saitama, 351-01, Japan
email: boes@zoo.riken.go.jp
Abstract
In this paper we examine a perceptron learning task. The task is
realizable since it is provided by another perceptron with identical architecture. Both perceptrons have nonlinear sigmoid output
functions. The gain of the output function determines the level of
nonlinearity of the learning task. It is observed that a high level
of nonlinearity leads to overfitting. We give an explanation for this
rather surprising observation and develop a method to avoid the
overfitting. This method has two possible interpretations, one is
learning with noise, the other cross-validated early stopping.
1
Learning Rules from Examples
The property which makes feedforward neural nets interesting for many practical
applications is their ability to approximate functions, which are given only by examples. Feed-forward networks with at least one hidden layer of nonlinear units
are able to approximate any continuous function on an N-dimensional hypercube
arbitrarily well. While the existence of neural function approximators is already
established, there is still a lack of knowledge about their practical realizations. Also
major problems, which complicate a good realization, like overfitting, need a better
understanding.
In this work we study overfitting in a one-layer perceptron model. The model allows a good theoretical description while it already exhibits behavior qualitatively similar to that of the multilayer perceptron.
A one-layer perceptron has N input units and one output unit. Between input
and output it has one layer of adjustable weights Wi, (i = 1, ... ,N). The output z
is a possibly nonlinear function of the weighted sum of inputs Xi, i.e.
z = g(h) ,   with   h = (1/√N) Σ_{i=1}^{N} w_i x_i .      (1)
The quality of the function approximation is measured by the difference between the correct output z* and the net's output z, averaged over all possible inputs. In the supervised learning scheme one trains the network using a set of examples x^μ (μ = 1, …, P), for which the correct output is known. The learning task is to minimize a certain cost function, which measures the difference between the correct output z*^μ and the net's output z^μ, averaged over all examples.
Using the mean squared error as a suitable measure for the difference between the outputs, we can define the training error E_T and the generalization error E_G as

E_T = (1/2P) Σ_{μ=1}^{P} (z*^μ − z^μ)² ,   E_G = (1/2) ⟨(z* − z)²⟩_x .      (2)

The development of both errors as a function of the number P of trained examples is given by the learning curves. Training is conventionally done by gradient descent.
For theoretical purposes it is very useful to study learning tasks, which are provided by a second network, the so-called teacher network. This concept allows a
more transparent definition of the difficulty of the learning task. Also the monitoring of the training process becomes clearer, since it is always possible to compare
the student network and the teacher network directly.
Suitable quantities for such a comparison are, in the perceptron case, the following
order parameters,
r := (W*·W) / (‖W*‖ ‖W‖) ,    q := ‖W‖ = √( Σ_{i=1}^{N} (W_i)² ) .      (3)
Both have a very transparent interpretation, r is the normalized overlap between
the weight vectors of teacher and student, and q is the norm of the student's weight
vector. These order parameters can also be used in multilayer learning, but their
number increases with the number of all possible permutations between the hidden
units of teacher and student.
2 The Learning Task
Here we concentrate on the case in which a student perceptron has to learn a mapping provided by another perceptron. We choose identical networks for teacher and student. Both have the same sigmoid output function, i.e. g*(h) = g(h) = tanh(γh). Identical network architectures of teacher and student make the task realizable: in principle the student is able to learn the task provided by the teacher exactly. Unrealizable tasks cannot be learnt exactly; a finite error always remains.
If we use uniformly distributed random inputs x and weights W, the weighted sum h in (1) can be assumed to be Gaussian distributed. Then we can express the generalization error (2) by the order parameters (3),
E_G = ∫Dz₁ ∫Dz₂ (1/2) { tanh[γz₁] − tanh[ q( r z₁ + √(1−r²) z₂ ) ] }² ,      (4)

with the Gaussian measure

∫Dz := ∫_{−∞}^{+∞} dz/√(2π) exp(−z²/2) .      (5)
From equation (4) we can see how the student learns the gain γ of the teacher's output function: it adjusts the norm q of its weights. The gain γ plays an important role since it allows tuning the function tanh(γh) between a linear function (γ ≪ 1) and a highly nonlinear function (γ ≫ 1). Now we want to determine the learning curves of this task.
3 Emergence of Overfitting

3.1 Explicit Expression for the Weights
Below the storage capacity of the perceptron, i.e. α = 1, the minimum of the training error E_T is zero. A zero training error implies that every example has been learnt exactly, thus

h^μ = h*^μ ,   μ = 1, …, P .      (6)
The weights with minimal norm that fulfill this condition are given by the Pseudoinverse (see Hertz et al. 1991),
W_i = Σ_{μ,ν=1}^{P} h*^μ (C⁻¹)_{μν} x_i^ν ,      (7)

where C denotes the correlation matrix of the input patterns.
Note that the weights are completely independent of the output function g(h) = g*(h). They are the same as in the simplest realizable case, linear perceptron learns linear perceptron.
3.2 Statistical Mechanics
The calculation of the order parameters can be done by a method from statistical mechanics which applies the commonly used replica approach. For details about the replica approach see Hertz et al. (1991). The solution of the continuous perceptron problem can be found in Bös et al. (1993). Since the results of the statistical mechanics calculations are exact only in the thermodynamic limit, i.e. N → ∞, the variable α is the more natural measure. It is defined as the fraction of the number of patterns P over the system size N, i.e. α := P/N. In the thermodynamic limit N and P are infinite, but α is still finite. Normally, reasonable system sizes, such as N ≈ 100, are already well described by this theory.
Usually one concentrates on the zero temperature limit, because this implies that the training error E_T attains its absolute minimum for every number of presented examples P. The corresponding order parameters for the case 'linear perceptron learns linear perceptron' are
q = γ√α ,    r = √α .      (8)
The zero temperature limit can also be called exhaustive training, since the student
net is trained until the absolute minimum of ET is reached.
For small α and high gains γ, i.e. high levels of nonlinearity, exhaustive training leads to overfitting. That means the generalization error E_G(α) is not, as it should be, monotonically decreasing with α. One reason for overfitting is that the training follows the examples too closely. The critical gain γ_c, which determines whether the generalization error E_G(α) is an increasing or decreasing function for small values of α, can be determined by a linear approximation. For small α, both order parameters (3) are small, and the student's tanh-function in (4) can be approximated by a linear function. This simplifies equation (4) to the following expression,
E_G(α) = E_G(0) − (γα/2) [ 2H(γ) − γ ] ,   with   H(γ) := ∫Dz tanh(γz) z .      (9)
Since the function H(γ) has an upper bound, √(2/π), the critical gain is reached when γ_c = 2H(γ_c). The numerical solution gives γ_c = 1.3371. If γ is higher, the slope of E_G(α) is positive for small α. In the following considerations we will always use the gain γ = 5, since this is an intermediate level of nonlinearity.
[Figure 1 plot: learning curves E(α) versus P/N for gains γ = 0.5, 1.0, 2.0, 5.0, 10.0, 100.0.]
Figure 1: Learning curves E(α) for the problem, tanh-perceptron learns tanh-perceptron, for different values of the gain γ. Even in this realizable case, exhaustive training can lead to overfitting, if the gain γ is high enough.
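The non-monotonic learning curve can be checked directly from equation (4): inserting the exhaustive-training order parameters q = γ√α, r = √α of (8) and evaluating the double Gaussian integral numerically (a sketch, not from the paper; the grid size and integration limits are arbitrary choices) shows that for γ = 5 the error first rises above E_G(0) ≈ 0.42 and falls again near α = 1.

```python
import math

def E_G(alpha, gamma=5.0, n=200, lim=6.0):
    """Generalization error, eq. (4), with the exhaustive-training
    order parameters q = gamma*sqrt(alpha), r = sqrt(alpha) from eq. (8)."""
    q = gamma * math.sqrt(alpha)
    r = math.sqrt(alpha)
    s = math.sqrt(max(0.0, 1.0 - r * r))
    dz = 2.0 * lim / n
    total = 0.0
    for i in range(n):
        z1 = -lim + (i + 0.5) * dz
        w1 = math.exp(-0.5 * z1 * z1)
        for j in range(n):
            z2 = -lim + (j + 0.5) * dz
            w2 = math.exp(-0.5 * z2 * z2)
            d = math.tanh(gamma * z1) - math.tanh(q * (r * z1 + s * z2))
            total += w1 * w2 * 0.5 * d * d
    return total * dz * dz / (2.0 * math.pi)

print(round(E_G(0.0), 2))    # ≈ 0.42, the E_G(0,0) value quoted for gamma = 5
print(E_G(0.05) > E_G(0.0))  # overfitting: the curve rises for small alpha
print(E_G(0.95) < E_G(0.0))  # ... and falls again close to alpha = 1
```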
3.3 How to Understand the Emergence of Overfitting
Here the evaluation of the generalization error as a function of the order parameters r and q is helpful. Fig. 2 shows the function E_G(r, q) for r between 0 and 1 and q between 0 and 1.2γ.

The exhaustive training in realizable cases always follows the line q(r) = γr, independent of the actual output function. That means training is guided only by the training error and not by the generalization error. If the gain γ is higher than γ_c, the contour line E_G = E_G(0, 0) starts with a lower slope than q(r) = γr, which results in overfitting.
4 How to Avoid Overfitting
From Fig. 2 we can already guess that q increases too fast compared to r; the ratio between q and r may be better during the training process. So we first have to develop a description of the training process.
4.1 Training Process
We found already that the order parameters for finite temperatures (T > 0) of the statistical mechanics approach are a good description of the training process in an unrealizable learning task (Bös 1995). So we use the finite temperature order parameters also in this task. These are, again taken from the task 'linear perceptron learns linear perceptron',
q(α, a) = γ√α √[ ((1+α)a − 2α) / (a² − α) ] ,    r(α, a) = √α √[ (a² − α) / ( a((1+α)a − 2α) ) ] ,      (10)
with the temperature dependent variable

a := 1 + [β(Q − q)]⁻¹ .      (11)
[Figure 2 plot: contour lines of E_G(r, q) in the (r, q) plane; parametric curves labeled 'local min.', 'abs. min.', 'local min.'.]
Figure 2: Contour plot of E_G(r, q) defined by (4), the generalization error as a function of the two order parameters. Starting from the minimum E_G = 0 at (r, q) = (1, 5), the contour lines for E_G = 0.1, 0.2, …, 0.8 are given (dotted lines). The dashed line corresponds to E_G(0, 0) = 0.42. The solid lines are parametric curves of the order parameters (r, q) for certain training strategies. The straight line illustrates exhaustive training, the lower ones the optimal training, which will be explained in Fig. 3. Here the gain γ = 5.
The zero temperature limit corresponds to a = 1. We will now show that the decrease of the temperature dependent parameter a from ∞ to 1 describes the evolution of the order parameters during the training process. In the training process the natural parameter is the number of parallel training steps t. In each parallel training step all patterns are presented once and all weights are updated. Fig. 3 shows the evolution of the order parameters (10) as parametric curves (r, q).

The exhaustive learning curve is defined by a = 1 with the parameter α (solid line). For each α the training ends on this curve. The dotted lines illustrate the training process, with a running from infinity to 1. Simulations of the training process have shown that this theoretical curve is a good description, at least after some training steps. We will now use this description of the training process for the definition of an optimized training strategy.
4.2 Optimal temperature
The optimized training strategy chooses not a = 1, or the corresponding temperature T = 0, but the value of a (i.e. the temperature) which minimizes the generalization error E_G. In the lower solid curve indicating the parametric curve (r, q), the value of a is chosen for every α which minimizes E_G. The function E_G(a) has two minima between α = 0.5 and 0.7. The solid line always indicates the absolute minimum. The parametric curves corresponding to the local minima are given by the double-dashed and dash-dotted lines. Note that the optimized value of a is always related to an optimized temperature through equation (11). But the parameter a is also related to the number of training steps t.
[Figure 3 plot: parametric curves (r, q); legend: 'local min.', 'abs. min.', 'local min.', 'simulation'.]
Figure 3: Training process. The order parameters (10) as parametric curves (r, q) with the parameters α and a. The straight solid line corresponds to exhaustive learning, i.e. a = 1 (marks at α = 0.1, 0.2, …, 1.0). The dotted lines describe the training process for fixed α. Iterative training reduces the parameter a from ∞ to 1. Examples for α = 0.1, 0.2, 0.3, 0.4, 0.9, 0.99 are given. The lower solid line is an optimized learning curve. To achieve this curve the value of a is chosen which minimizes E_G absolutely. Between α ≈ 0.5 and 0.7 the error E_G has two minima; the double-dashed and dash-dotted lines indicate the second, local minimum of E_G. Compare with Fig. 2 to see which is the absolute and which the local minimum of E_G. A naive early stopping procedure always ends in the minimum with the smaller q, since it is the first minimum encountered during the training process (see simulation indicated with errorbars).
4.3 Early Stopping
Fig. 3 and Fig. 2 together indicate that an earlier stopping of the training process can avoid the overfitting. But in order to determine the stopping point one has to know the actual generalization error during the training. Cross-validation tries to provide an approximation of the real generalization error. The cross-validation error E_CV is defined like E_T, see (2), on a set of examples which are not used during the training. Here we calculate the optimum using the real generalization error, given by r and q, to determine the optimal point for early stopping. It is a lower bound for training with finite cross-validation sets. Some preliminary tests have shown that already small cross-validation sets approximate the real E_G quite well. Training is stopped when E_G increases. The resulting curve is given by the error bars in Fig. 3. The errorbars indicate the standard deviation of a simulation with N = 100 averaged over 50 trials.

In Fig. 4 the same results are shown as learning curves E_G(α). There one can see clearly that the early stopping strategy avoids the overfitting.
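The pitfall of naive early stopping can be made concrete with a toy sketch (not from the paper): given a validation-error trace with two minima, stopping at the first rise finds the first, local minimum and misses the absolute one. The trace values below are invented for illustration.

```python
def naive_early_stop(errors):
    """Stop at the first increase of the validation error and return the
    index of the last point before the rise (the first local minimum)."""
    for t in range(1, len(errors)):
        if errors[t] > errors[t - 1]:
            return t - 1
    return len(errors) - 1

# toy validation-error trace with two minima, the second one being global
trace = [0.50, 0.40, 0.34, 0.36, 0.38, 0.33, 0.28, 0.30, 0.35]

stop = naive_early_stop(trace)                         # index 2, error 0.34
best = min(range(len(trace)), key=trace.__getitem__)   # index 6, error 0.28
print(stop, best)                                      # 2 6
```

A more careful stopping rule would continue training for a while after the first rise and keep the best parameters seen so far, which is one way to avoid the local minimum discussed above.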
5 Summary and outlook
In this paper we have shown that overfitting can also emerge in realizable learning tasks. The calculation of a critical gain and the contour lines in Fig. 2 imply that
[Figure 4 plot: learning curves E_G versus P/N; legend: 'exh.', 'local min.', 'abs. min.', 'local min.', 'simulation'.]
Figure 4: Learning curves corresponding to the parametric curves in Fig. 3. The upper solid line shows again exhaustive training. The optimized finite temperature curve is the lower solid line. From α = 0.6 on, exhaustive and optimal training lead to identical results (see marks). The simulation for early stopping (errorbars) finds the first minimum of E_G.
the reason for the overfitting is the nonlinearity of the problem. The network adjusts slowly to the nonlinearity of the task. We have developed a method to avoid the overfitting; it can be interpreted in two ways.

Training at a finite temperature reduces overfitting. It can be realized if one trains with noisy examples. In the other interpretation one learns without noise, but stops the training earlier. The early stopping is guided by cross-validation. It was observed that early stopping is not completely simple, since it can lead to a local minimum of the generalization error. One should be aware of this possibility before one applies early stopping.
Since multilayer perceptrons are built of nonlinear perceptrons, the same effects are important for multilayer learning. A study with large scale simulations (Müller et al. 1995) has shown that overfitting occurs also in realizable multilayer learning tasks.
Acknowledgments
I would like to thank S. Amari and M. Opper for stimulating discussions, and M.
Herrmann for hints concerning the presentation.
References

S. Bös. (1995) Avoiding overfitting by finite temperature learning and cross-validation. International Conference on Artificial Neural Networks '95, Vol. 2, p. 111.

S. Bös, W. Kinzel & M. Opper. (1993) Generalization ability of perceptrons with continuous outputs. Phys. Rev. E 47:1384-1391.

J. Hertz, A. Krogh & R. G. Palmer. (1991) Introduction to the Theory of Neural Computation. Reading: Addison-Wesley.

K. R. Müller, M. Finke, N. Murata, K. Schulten & S. Amari. (1995) On large scale simulations for learning curves. Neural Computation, in press.
Modeling Saccadic Targeting in Visual Search
Gregory J. Zelinsky
Center for Visual Science
University of Rochester
Rochester, NY 14627
greg@cvs.rochester.edu
Rajesh P. N. Rao
Computer Science Department
University of Rochester
Rochester, NY 14627
rao@cs.rochester.edu
Mary M. Hayhoe
Center for Visual Science
University of Rochester
Rochester, NY 14627
mary@cvs.rochester.edu
Dana H. Ballard
Computer Science Department
University of Rochester
Rochester, NY 14627
dana@cs.rochester.edu
Abstract
Visual cognition depends critically on the ability to make rapid eye movements
known as saccades that orient the fovea over targets of interest in a visual
scene. Saccades are known to be ballistic: the pattern of muscle activation
for foveating a prespecified target location is computed prior to the movement
and visual feedback is precluded. Despite these distinctive properties, there
has been no general model of the saccadic targeting strategy employed by
the human visual system during visual search in natural scenes. This paper
proposes a model for saccadic targeting that uses iconic scene representations
derived from oriented spatial filters at multiple scales. Visual search proceeds
in a coarse-to-fine fashion with the largest scale filter responses being compared
first. The model was empirically tested by comparing its performance with
actual eye movement data from human subjects in a natural visual search task;
preliminary results indicate substantial agreement between eye movements
predicted by the model and those recorded from human subjects.
1 INTRODUCTION
Human vision relies extensively on the ability to make saccadic eye movements. These rapid
eye movements, which are made at the rate of about three per second, orient the high-acuity
foveal region of the eye over targets of interest in a visual scene. The high velocity of saccades,
reaching up to 700° per second for large movements, serves to minimize the time in flight; most
of the time is spent fixating the chosen targets.
The objective of saccades is currently best understood for reading text [13] where the eyes fixate
almost every word, sometimes skipping over small function words. In general scenes, however,
the purpose of saccades is much more difficult to analyze. It was originally suggested that
Figure 1: Eye Movements in Visual Search. (a) shows the typical pattern of multiple saccades (shown
here for two different subjects) elicited during the course of searching for the object composed of the fork
and knife. The initial fixation point is denoted by '+'. (b) depicts a summary of such movements over many
experiments as a function of the six possible locations of a target object on the table.
the movements and their resultant fixations formed a visual-motor memory (or "scan-paths") of
objects [11] but subsequent work has suggested that the role of saccades is more tightly coupled
to the momentary problem solving strategy being employed by the subject. In chess, it has
been shown that saccades are used to assess the current situation on the board in the course of
making a decision to move, but the exact information that is being represented is not yet known
[5]. In a task involving the copying of a model block pattern located on a board, fixations have
been shown to be used in accessing crucial information for different stages of the copying task
[2]. In natural language processing, there has been recent evidence that fixations reflect the
instantaneous parsing of a spoken sentence [18]. However, none of the above work addresses the
important question of what possible computational mechanisms underlie saccadic targeting.
The complexity of the targeting problem can be illustrated by the saccades employed by subjects
to solve a natural visual search task. In this task, subjects are given a 1 second preview of a
single object on a table and then instructed to determine, in the shortest possible amount of time,
whether the previewed object is among a group of one to five objects on the same table in a
subsequent view. The typical eye movements elicited are shown in Figure 1(a). Rather than a single movement to the remembered target, several saccades are typical, with each successive saccade moving closer to the goal object (Figure 1(b)).
The purpose of this paper is to describe a mechanism for programming saccades that can approximately model the saccadic targeting method used by human subjects. Previous models of human visual search have focused on simple search tasks involving elementary features such as horizontal/vertical bars of possibly different color [1,4,8] or have relied exclusively on bottom-up
input-driven saliency criteria for generating scan-paths [10, 19]. The proposed model achieves
targeting in arbitrary visual scenes by using bottom-up scene representations in conjunction with
previously memorized top-down object representations; both of these representations are iconic,
based on oriented spatial filters at multiple scales.
One of the difficult aspects of modeling saccadic targeting is that saccades are ballistic, i.e.,
their final location is computed prior to making the movement and the movement trajectory is
uninterrupted by incoming visual signals. Furthermore, owing to the structure of the retina, the
central 1.5° of the visual field is represented with a resolution that is almost 100 times greater
than that of the periphery. We resolve these issues by positing that the targeting computation
proceeds sequentially with coarse resolution information being used in the computation of target
coordinates prior to fine resolution information. The method is compared to actual eye movements
made by human subjects in the visual search task described above; the eye movements predicted
by the model are shown to be in close agreement with observed human eye movements.
R. P. N. RAO, G. J. ZELINSKY, M. M. HAYHOE, D. H. BALLARD
Figure 2: Multiscale Natural Basis Functions. The 10 oriented spatial filters used in our model to
generate iconic scene representations, shown here at three octave-separated scales. These filters resemble
the receptive field profiles of cells in the primate visual cortex [20] and have been shown to approximate
the dominant eigenvectors of natural image distributions as obtained from principal component analysis
[7,17].
2 ICONIC REPRESENTATIONS
The current implementation of our model uses a set of non-orthogonal basis functions as given by a zeroth order Gaussian G₀ and nine of its oriented derivatives as follows [6]:

G^n_{θ_n} ,   n = 1, 2, 3 ,   θ_n = 0, …, mπ/(n+1) ,   m = 1, …, n      (1)

where n denotes the order of the filter and θ_n refers to the preferred orientation of the filter
(Figure 2). The response of an image patch I centered at (x₀, y₀) to a particular basis filter G_{i,j} can be obtained by convolving the image patch with the filter:

r_{i,j}(x₀, y₀) = ∫∫ G_{i,j}(x₀ − x, y₀ − y) I(x, y) dx dy      (2)
The iconic representation for the local image patch centered at (x₀, y₀) is formed by combining into a high-dimensional vector the responses from the ten basis filters at different scales:

r_s(x₀, y₀) = [ r_{i,j,s}(x₀, y₀) ]      (3)

where i = 0, 1, 2, 3 denotes the order of the filter, j = 1, …, i + 1 denotes the different filters per order, and s = s_min, …, s_max denotes the different scales as given by the levels of a Gaussian image pyramid.
The use of multiple scales is crucial to the visual search model (see Section 3). In particular, the larger the number of scales, the greater the perspicuity of the representation, as depicted in Figure 3. A multiscale representation also allows interpolation strategies for scale invariance. The high dimensionality of the vectors makes them remarkably robust to noise due to the orthogonality inherent in high-dimensional spaces: given any vector, most of the other vectors in the space tend to be relatively uncorrelated with the given vector. The iconic representations can also be made invariant to rotations in the image plane (for a fixed scale) without additional convolutions by exploiting the property of steerability [6]. Rotations about an image plane axis are handled by storing feature vectors from different views. We refer the interested reader to [14] for more details regarding the above properties.
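The near-orthogonality claim can be checked with a small sketch (not part of the paper): correlations between a fixed high-dimensional vector and many random vectors cluster tightly around zero. The dimensionality 45 below is an arbitrary stand-in for a multiscale response vector, not a figure from the original text.

```python
import random

def correlation(u, v):
    """Cosine similarity of two vectors."""
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(x * x for x in v) ** 0.5
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

random.seed(0)
dim = 45                       # e.g. several oriented filters x several scales
model = [random.gauss(0, 1) for _ in range(dim)]
others = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(1000)]

corrs = [abs(correlation(model, v)) for v in others]
print(max(corrs) < 0.7)                  # no random vector matches the model
print(sum(corrs) / len(corrs) < 0.2)     # typical correlation is near zero
```

Only a vector deliberately constructed to match the model point would yield a correlation near 1, which is the robustness property exploited by the matching scheme described next.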
3 THE VISUAL SEARCH MODEL
Our model for visual search is derived from a model for vision that we previously proposed in
[14]. This model decomposes visual behaviors into sequences of two visual routines, one for
identifying the visual image near the fovea (the "what" routine), and another for locating a stored
prototype on the retina (the "where" routine).
[Figure 3 plots (a) and (b): distributions of distances between the model vector and all scene points, for single-scale and multiple-scale response vectors.]
Figure 3: The Effect of Scale. The distribution of distances (in terms of correlations) between the response vector for a selected model point in the dining table scene and all other points in the scene is shown for
single scale response vectors (a) and multiple scale vectors (b). Using responses from multiple scales (five
in this case) results in greater perspicuity and a sharper peak near 0.0; only one point (the model point)
had a correlation greater than 0.94 in the multiple scale case (b) whereas 936 candidate points fell in this
category in the single scale case (a).
The visual search model assumes the existence of three independent processes running concurrently: (a) a targeting process (similar to the "where" routine of [14]) that computes the next
location to be fixated; (b) an oculomotor process that accepts target locations and executes a
saccade to foveate that location (see [16] for more details); and (c) a decision process that models
the cortico-cortical dynamics of the V1 ↔ V2 ↔ V4 ↔ IT pathway related to the identification
of objects in the fovea (see [15] for more details).
Here, we focus on the saccadic targeting process. Objects of interest to the current search task are assumed to be represented by a set of previously memorized iconic feature vectors r_s^p, where s denotes the scale of the filters. The targeting algorithm computes the next location to be foveated as follows:
1. Initialize the routine by setting the current scale of analysis k to the largest scale, i.e. k = max; set S_m(x, y) = 0 for all (x, y).
2. Compute the current saliency image S_m as

   S_m(x, y) = Σ_{s=k}^{max} ‖ r_s(x, y) − r_s^p ‖²      (4)
3. Find the location to be foveated by using the following weighted population averaging (or soft max) scheme:

   (x̄, ȳ) = Σ_{(x,y)} F(S_m(x, y)) (x, y)      (5)
where F is an interpolation function. For the experiments, we chose:
F(S_m(x, y)) = exp(−S_m(x, y)/λ(k)) / Σ_{(x,y)} exp(−S_m(x, y)/λ(k))      (6)
This choice is attractive since it allows an interpretation of our algorithm as computing maximum likelihood estimates (cf. [12]) of target locations. In the above, λ(k) is decreased with k.
4. Iterate steps (2) and (3) above with k = max−1, max−2, …, until either the target object has been foveated or the number of scales has been exhausted.
Figure 4 illustrates the above targeting procedure. The case where multiple model vectors are
used per object proceeds in an analogous manner with the target location being averaged over all
the model vectors.
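The coarse-to-fine loop above can be sketched in a few lines. This is a toy sketch, not the authors' implementation: a single scalar response per scale stands in for the full filter vectors, and the grid size, λ values, and feature values are invented for illustration. A distractor that matches the prototype only at the coarsest scale pulls the first fixation estimate toward it; adding finer scales moves the fixation onto the true target, mirroring the successive saccades of Figure 1.

```python
import math

def targeting(features, prototype, lambdas):
    """Coarse-to-fine targeting sketch (steps 1-4 above).
    features[s][y][x]: scalar filter response at scale s (coarsest first);
    prototype[s]: memorized response of the target at scale s.
    Returns one soft-max fixation estimate per added scale."""
    h, w = len(features[0]), len(features[0][0])
    fixations = []
    for k in range(len(features)):          # add one finer scale per pass
        lam = lambdas[k]
        weights, z_norm = [[0.0] * w for _ in range(h)], 0.0
        for y in range(h):
            for x in range(w):
                # saliency: squared distance to the prototype, eq. (4)
                sal = sum((features[s][y][x] - prototype[s]) ** 2
                          for s in range(k + 1))
                wt = math.exp(-sal / lam)   # soft-max weight, eq. (6)
                weights[y][x] = wt
                z_norm += wt
        # weighted population average, eq. (5)
        fx = sum(weights[y][x] * x for y in range(h) for x in range(w)) / z_norm
        fy = sum(weights[y][x] * y for y in range(h) for x in range(w)) / z_norm
        fixations.append((fx, fy))
    return fixations

# synthetic 20x20 scene, 3 scales; background responses lie in [0, 0.5]
H = W = 20
features = [[[((x * 7 + y * 13 + s * 3) % 6) / 10.0 for x in range(W)]
             for y in range(H)] for s in range(3)]
prototype = [1.0, 1.0, 1.0]
features[0][5][15] = features[1][5][15] = features[2][5][15] = 1.0  # target
features[0][12][3] = 1.0                # distractor matching only coarsely
features[1][12][3], features[2][12][3] = 0.2, 0.5

fixes = targeting(features, prototype, lambdas=[0.02, 0.01, 0.005])
print([(round(fx, 1), round(fy, 1)) for fx, fy in fixes])
# → [(9.0, 8.5), (15.0, 5.0), (15.0, 5.0)]
```

With only the coarse scale the estimate lands midway between target (15, 5) and distractor (3, 12); once the finer scales rule the distractor out, the fixation converges on the target.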
Figure 4: Illustration of Saccadic Targeting. The saliency image after the inclusion of the largest (a),
intermediate (b), and smallest scale (c) as given by image distances to the prototype (the fork and knife);
the lightest points are the closest matches. (d) shows the predicted eye movements as determined by the
weighted population averaging scheme (for comparison, saccades from a human subject are given by the
dotted arrows).
4 EXPERIMENTAL RESULTS AND DISCUSSION
Eye movements from four human subjects were recorded for the search task described in Section 1
for three different scenes (dining table, work bench, and a crib) using an SRI Dual Purkinje
Eyetracker. The model was implemented on a pipeline image processor, the Datacube MV200,
which can compute convolutions at frame rate (30/sec). Figure 5 compares the model's performance to the human data. As the results show, there is remarkably good correspondence between
the eye movements observed in human subjects and those generated by the model on the same
data sets. The model has only one important parameter: the scaling function used to rate the
peaks in the saliency map. In the development of the algorithm, this was adjusted to achieve an
approximate fit to the human data.
Our model relies crucially on the existence of a coarse-to-fine matching mechanism. The
main benefit of a coarse-to-fine strategy is that it allows continuous execution of the decision/oculomotor processes, thereby increasing the probability of an early match. Coarse-to-fine
strategies have enjoyed recent popularity in computer vision with the advent of image pyramids
in tasks such as motion detection [3]. Although these methods show that considerable speedup
can be achieved by decreasing the size of window of analysis as resolution increases, our preliminary experiments suggest that this might be an inappropriate strategy for visual search: limiting
search to a small window centered on the coarse location estimate obtained from a larger scale
often resulted in significant errors since the targets frequently lay outside the search window. A
possible solution is to adaptively select the size of the search window based on the current scene
but this would require additional computational machinery.
A key question that remains is the source of sequential application of the filters in the human
visual system. A possible source is the variation in resolution of the retina. Since only very high
resolution information is at the fovea, and since this resolution falls off with distance, fine spatial
scales may be ineffective purely because the fixation point is distant from the target. However,
our preliminary experiments with modeling the variation in retinal resolution suggest that this is
probably not the sole cause. The variations at middle distances from the fovea are too small to
explain the dramatic improvement in target location experienced with the second saccade. Thus,
Modeling Saccadic Targeting in Visual Search
[Figure 5 panels: frequency histograms of endpoint error (degrees) for the first, second, and
third saccades, human subjects vs. model.]
Figure 5: Experimental Results. The graphs compare the distribution of endpoint errors (in terms of
frequency histograms) for three consecutive saccades as predicted by the model for 180 trials (right)
and as observed with four human subjects for 676 trials (left). Each of the trials contained search scenes
with one to five objects, one of the objects being the previewed model.
there are two remaining possibilities: (a) the resolution fall-off in the cortex is different from the
retinal variation in a way that supports the data, or (b) the cortical machinery is set up to match
the larger scales first. In the latter case, the observed data would result from the fact that the
oculomotor system is ready to move before all the scales can be matched, and thus the eyes move
to the current best target position. This interpretation of the data is appealing in two aspects.
First, it reflects a long history of observations on the priority of large scale channels [9], and
second, it reflects current thinking about eye movement programming suggesting that fixation
times are approximately constant and that the eyes are moved as soon as they can be during the
course of visual problem solving. The above questions can however be definitively answered
only through additional testing of human subjects followed by subsequent modeling. We expect
our saccadic targeting model to play a crucial role in this process.
Acknowledgments
This research was supported by NIH/PHS research grants 1-P41-RR09283 and 1-R24-RR06853-02, and by NSF research grants IRI-9406481 and IRI-8903582.
R. P. N. RAO, G. J. ZELINSKY, M. M. HAYHOE, D. H. BALLARD
References
[1] Subutai Ahmad and Stephen Omohundro. Efficient visual search: A connectionist solution.
In Proceeding of the 13th Annual Conference of the Cognitive Science Society. Chicago,
1991.
[2] Dana H. Ballard, Mary M. Hayhoe, and Polly K. Pook. Deictic codes for the embodiment
of cognition. Technical Report 95.1, National Resource Laboratory for the study of Brain
and Behavior, University of Rochester, January 1995.
[3] P.J. Burt. Attention mechanisms for vision in a dynamic world. In ICPR, pages 977-987,
1988.
[4] David Chapman. Vision, Instruction, and Action. PhD thesis, MIT Artificial Intelligence
Laboratory, 1990. (Technical Report 1204).
[5] W.G. Chase and H.A. Simon. Perception in chess. Cognitive Psychology, 4:55-81, 1973.
[6] William T. Freeman and Edward H. Adelson. The design and use of steerable filters. IEEE
PAMI, 13(9):891-906, September 1991.
[7] Peter lB. Hancock, Roland J. Baddeley, and Leslie S. Smith. The principal components of
natural images. Network, 3:61-70, 1992.
[8] Michael C. Mozer. The perception of multiple objects: A connectionist approach. Cambridge, MA: MIT Press, 1991.
[9] D. Navon. Forest before trees: The precedence of global features in visual perception.
Cognitive Psychology, 9:353-383, 1977.
[10] Ernst Niebur and Christof Koch. Control of selective visual attention: Modeling the "where"
pathway. This volume, 1996.
[11] D. Noton and L. Stark. Scanpaths in saccadic eye movements while viewing and recognizing
patterns. Vision Research, 11:929-942, 1971.
[12] Steven J. Nowlan. Maximum likelihood competitive learning. In Advances in Neural
Infonnation Processing Systems 2, pages 574-582. Morgan Kaufmann, 1990.
[13] J.K. O'Regan. Eye movements and reading. In E. Kowler, editor, Eye Movements and Their
Role in Visual and Cognitive Processes, pages 455-477. New York: Elsevier, 1990.
[14] Rajesh P.N. Rao and Dana H. Ballard. An active vision architecture based on iconic
representations. Artificial Intelligence (Special Issue on Vision), 78:461-505, 1995.
[15] Rajesh P.N. Rao and Dana H. Ballard. Dynamic model of visual memory predicts neural
response properties in the visual cortex. Technical Report 95.4, National Resource Laboratory for the study of Brain and Behavior, Computer Sci. Dept., University of Rochester,
November 1995.
[16] Rajesh P.N. Rao and Dana H. Ballard. Learning saccadic eye movements using multi scale
spatial filters. In G. Tesauro, D.S. Touretzky, and T.K. Leen, editors, Advances in Neural
Infonnation Processing Systems 7, pages 893-900. Cambridge, MA: MIT Press, 1995.
[17] Rajesh P.N. Rao and Dana H. Ballard. Natural basis functions and topographic memory for
face recognition. In Proc. of IJCAI, pages 10-17, 1995.
[18] M. Tanenhaus, M. Spivey-Knowlton, K. Eberhard, and J. Sedivy. Integration of visual and
linguistic information in spoken language comprehension. To appear in Science, 1995.
[19] Keiji Yamada and Garrison W. Cottrell. A model of scan paths applied to face recognition.
In Proc. 17th Annual Conf. of the Cognitive Science Society, 1995.
[20] R.A. Young. The Gaussian derivative theory of spatial vision: Analysis of cortical cell
receptive field line-weighting profiles. General Motors Research Publication GMR-4920,
1985.
Simulation of a Thalamocortical Circuit for
Computing Directional Heading in the Rat
Hugh T. Blair*
Department of Psychology
Yale University
New Haven, CT 06520-8205
tadb@minerva.cis.yale.edu
Abstract
Several regions of the rat brain contain neurons known as head-direction cells, which encode the animal's directional heading during spatial
navigation. This paper presents a biophysical model of head-direction
cell acti vity, which suggests that a thalamocortical circuit might compute the rat's head direction by integrating the angular velocity of the
head over time. The model was implemented using the neural simulator
NEURON, and makes testable predictions about the structure and function of the rat head-direction circuit.
1 HEAD-DIRECTION CELLS
As a rat navigates through space, neurons called head-direction cells encode the animal's
directional heading in the horizontal plane (Ranck, 1984; Taube, Muller, & Ranck, 1990).
Head-direction cells have been recorded in several brain areas, including the postsubiculum (Ranck, 1984) and anterior thalamus (Taube, 1995). A variety of theories have proposed that head-direction cells might play an important role in spatial learning and
navigation (Brown & Sharp, 1995; Burgess, Recce, & O'Keefe, 1994; McNaughton,
Knierim, & Wilson, 1995; Wan, Touretzky, & Redish, 1994; Zhang, 1995).
1.1 BASIC FIRING PROPERTIES
A head-direction cell fires action potentials only when the rat's head is facing in a particular direction with respect to the static surrounding environment, regardless of the animal's
location within that environment. Head-direction cells are not influenced by the position
of the rat's head with respect to its body, they are only influenced by the direction of the
*Also at the Yale Neuroengineering and Neuroscience Center (NNC), 5 Science Park North, New
Haven, CT 06511
[Figure 1 plot: firing rate vs. head direction (0-360 degrees), peaked near 160 degrees.]
Figure I: Directional Tuning Curve of a Head-Direction Cell
head with respect to the stationary reference frame of the spatial environment. Each head-direction cell has its own directional preference, so that together, the entire population of
cells can encode any direction that the animal is facing.
Figure 1 shows an example of a head-direction cell's directional tuning curve, which plots
the firing rate of the cell as a function of the rat's momentary head direction. The tuning
curve shows that this cell fires maximally when the rat's head is facing in a preferred
direction of about 160 degrees. The cell fires less rapidly for directions close to 160
degrees, and stops firing altogether for directions that are far from 160 degrees.
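The bell-shaped tuning curve of Figure 1 can be caricatured with a circular Gaussian. In this sketch the Gaussian shape, the 30-degree width, and the function names are illustrative assumptions, not values fit to the recorded cell:

```python
import math

def circ_dist(a, b):
    """Smallest absolute angular difference between two directions, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def tuning_curve(head_direction, preferred=160.0, peak_rate=1.0, width=30.0):
    """Normalized firing rate: maximal at the preferred direction, near zero far from it."""
    d = circ_dist(head_direction, preferred)
    return peak_rate * math.exp(-0.5 * (d / width) ** 2)
```

The circular distance matters: a cell preferring 160 degrees responds the same to a head direction of 170 degrees as to one of 150 degrees, and directions near 340 degrees (180 degrees away) drive it essentially not at all.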
1.2 THE VELOCITY INTEGRATION HYPOTHESIS
McNaughton, Chen, & Markus (1991) have proposed that head-direction cells might rely
on a process of dead-reckoning to calculate the rat's current head direction, based on the
previous head direction and the angular velocity at which the head is turning. That is,
head-direction cells might compute the directional position of the head by integrating the
angular velocity of the head over time. This velocity integration hypothesis is supported
by three experimental findings. First, several brain regions that are associated with head-direction cells contain angular velocity cells, neurons that fire in proportion to the angular
head velocity (McNaughton et al., 1994; Sharp, in press). Second, some head-direction
cells in postsubiculum are modulated by angular head velocity, such that their peak firing
rate is higher if the head is turning in one direction than in the other (Taube et al., 1990).
Third, it has recently been found that head-direction cells in the anterior thalamus, but not
the postsubiculum, anticipate the future direction of the rat's head (Blair & Sharp, 1995).
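The dead-reckoning computation itself is just a running sum of angular velocity, wrapped to the circle. A minimal sketch (hypothetical code, with degrees increasing in the clockwise direction as in the paper's convention):

```python
def integrate_heading(theta0, angular_velocities, dt):
    """Dead reckoning: heading[t+1] = (heading[t] + v[t] * dt) mod 360.

    theta0 is the initial head direction in degrees; angular_velocities holds one
    angular-velocity sample (deg/sec, positive = clockwise) per timestep of length
    dt seconds."""
    theta = theta0
    trace = [theta]
    for v in angular_velocities:
        theta = (theta + v * dt) % 360.0
        trace.append(theta)
    return trace
```

Note that only relative motion enters the update: the computed heading is correct only up to the accuracy of the initial direction and accumulates any error in the velocity signal, which is why such a system would need periodic correction from landmarks.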
1.3 ANTICIPATORY HEAD-DIRECTION CELLS
Blair and Sharp (1995) discovered that head-direction cells in the anterior thalamus shift
their directional preference to the left during clockwise turns, and to the right during counterclockwise turns. They showed that this shift occurs systematically as a function of head
velocity, in a way that allows these cells to anticipate the future direction of the rat's head.
To illustrate this, consider a cell that fires whenever the head will be facing a specific
direction, θ, in the near future. How would such a cell behave? There are three cases to
consider. First, imagine that the rat's head is turning clockwise, approaching the direction
θ from the left side. In this case, the anticipatory cell must fire when the head is facing to
the left of θ, because being to the left of θ and turning clockwise predicts arrival at θ in the
near future. Second, when the head is turning counterclockwise and approaching θ from
the right side, the anticipatory cell must fire when the head is to the right of θ. Third, if the
head is still, then the cell should only fire if the head is presently facing θ.
In summary, an anticipatory head direction cell should shift its directional preference to
the left during clockwise turns, to the right during counterclockwise turns, and not at all
when the head is still. This behavior can be formalized by the equation
μ(v) = θ − vτ,                                [1]
where μ(v) denotes the cell's preferred present head direction, v denotes the angular velocity
of the head, θ denotes the future head direction that the cell anticipates, and τ is a constant
time delay by which the cell's activity anticipates arrival at θ. Equation 1 assumes that μ
is measured in degrees, which increase in the clockwise direction, and that v is positive for
clockwise head turns, and negative for counterclockwise head turns. Blair & Sharp (1995)
have demonstrated that Equation 1 provides a good approximation of head-direction cell
behavior in the anterior thalamus.
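Equation 1 can be read off directly as code. In this hypothetical sketch, theta is the anticipated future direction in degrees, v is angular velocity in deg/sec (positive for clockwise turns), and tau is the anticipatory delay in seconds:

```python
def preferred_present_direction(theta, v, tau):
    """Eq. 1: mu(v) = theta - v * tau, wrapped to [0, 360) degrees.

    During clockwise turns (v > 0) the preferred present direction shifts to the
    left of theta; during counterclockwise turns (v < 0) it shifts to the right;
    when the head is still (v = 0) the cell simply prefers theta."""
    return (theta - v * tau) % 360.0
```

For example, a cell anticipating theta = 160 degrees with tau = 40 msec would, during a 400 deg/sec clockwise turn, shift its preferred present direction 16 degrees to the left, to 144 degrees.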
1.4 ANTICIPATORY TIME DELAY (τ)
Initial reports suggested that head-direction cells in the anterior thalamus anticipate the
future head direction by an average time delay of τ = 40 msec, whereas postsubicular cells
encode the present head direction, and therefore "anticipate" by τ = 0 msec (Blair &
Sharp, 1995; Taube & Muller, 1995). However, recent evidence suggests that individual
neurons in the anterior thalamus may be temporally tuned to anticipate the rat's future
head-direction by different time delays between 0-100 msec, and that postsubicular cells
may "lag behind" the present head-direction by about 10 msec (Blair & Sharp, 1996).
2 A BIOPHYSICAL MODEL
This section describes a biophysical model that accounts for the properties of head-direction cells in postsubiculum and anterior thalamus. by proposing that they might be connected to form a thalamocortical circuit. The next section presents simulation results from
an implementation of the model, using the neural simulator NEURON (Hines, 1993).
2.1 NEURAL ELEMENTS
Figure 2 illustrates a basic circuit for computing the rat's head-direction. The circuit consists of five types of cells: 1) Present Head-Direction (PHD) Cells encode the present
direction of the rat's head, 2) Anticipatory Head-Direction (AHD) Cells encode the future
direction of the rat's head, 3) Angular-Velocity (AV) Cells encode the angular velocity of
the rat's head (the CLK AV Cell is active during clockwise turns, and the CNT AV Cell is
active during counterclockwise turns), 4) the Angular Speed (AS) Cell fires in inverse proportion to the angular speed of the head, regardless of the turning direction (that is, the AS
Cell fires at a lower rate during fast turns, and at a higher rate during slow turns), 5) Angular-Velocity Modulated Head-Direction (AVHD) Cells are head-direction cells that fire
[Figure 2 diagram. Abbreviations: AT = Anterior Thalamus; MB = Mammillary Bodies;
PS = Postsubiculum; RS = Retrosplenial Cortex; RTN = Reticular Thalamic Nucleus.
Arrows denote excitatory and inhibitory connections.]
Figure 2: A Model of the Rat Head-Direction System
only when the head is turning in one direction and not the other (the CLK AVHD Cell fires
in its preferred direction only when the head is turning clockwise, and the CNT AVHD
Cell fires in its preferred direction only when the head turns counterclockwise).
2.2 FUNCTIONAL CHARACTERISTICS
In the model, AHD Cells directly excite their neighbors on either side, but indirectly
inhibit these same neighbors via the AVHD Cells, which act as inhibitory interneurons.
AHD Cells also send excitatory feedback connections to themselves (omitted from Figure
2 for clarity), so that once they become active. they remain active until they are turned off
by inhibitory input (the rate of firing can also be modulated by inhibitory input). When
the rat is not turning its head. the cell representing the current head direction fires constantly, both exciting and inhibiting its neighbors. In the steady-state condition (Le., when
the rat is not turning its head), lateral inhibition exceeds lateral excitation, and therefore
activity does not spread in either direction through the layer of AHD Cells. However.
when the rat begins turning its head, some of the AVHD Cells are turned off, allowing
activity to spread in one direction. For example. during a clockwise head tum. the CLK
AV Cell becomes active, and inhibits the layer of CNT AVHD Cells. As a result, AHD
Cells stop inhibiting their right neighbors, so activity spreads to the right through the layer
of AHD Cells. Because AHD Cells continue to inhibit their neighbors to the left, activity
is shut down in the leftward direction, in the wake of the activity spreading to the right.
The speed of propagation through the AHD layer is governed by the AS Cell. During
slow head turns, the AS Cell fires at a high rate, strongly inhibiting the AHD Cells, and
thereby slowing the speed of propagation. During fast head turns, the AS Cell fires at a
low rate, weakly inhibiting the AHD Cells, allowing activity to propagate more quickly.
Because of inhibition from AS cells, AHD cells fire faster when the head is turning than
when it is still (see Figure 4), in agreement with experimental data (Blair & Sharp, 1995).
AHD Cells send a topographic projection to PHD Cells, such that each PHD Cell receives
excitatory input from an AHD Cell that anticipates when the head will soon be facing in
the PHD Cell's preferred direction. AHD Cell activity anticipates PHD Cell activity
because there is a transmission delay between the AHD and PHD Cells (assumed to be 5
msec in the simulations presented below). Also, the weights of the connections from
AHD Cells to PHD Cells are small, so each AHD Cell must fire several action potentials
before its targeted PHD Cell can begin to fire. The time delay between AHD and PHD
Cells accounts for anticipatory firing, and corresponds to the τ parameter in Equation 1.
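The propagation scheme of Sections 2.1-2.2 can be caricatured as a one-dimensional ring in which the angular-velocity signal gates which neighbor the activity packet may move to. This discrete toy (all names hypothetical) ignores firing rates and the AS Cell's speed control, keeping only the gating logic:

```python
def step(active, n, clk, cnt):
    """One update of a ring of n AHD Cells with a single active cell.

    The active cell excites both neighbors but also inhibits them via AVHD
    interneurons. A CLK angular-velocity signal silences the interneurons on the
    clockwise side, so excitation wins there and activity moves one cell to the
    right; a CNT signal does the mirror image; with neither signal, inhibition
    balances excitation and the packet stays put."""
    if clk and not cnt:
        return (active + 1) % n
    if cnt and not clk:
        return (active - 1) % n
    return active

def simulate(n, start, av_sequence):
    """av_sequence: one (clk, cnt) pair per timestep; returns the active cell at each step."""
    active, trace = start, [start]
    for clk, cnt in av_sequence:
        active = step(active, n, clk, cnt)
        trace.append(active)
    return trace
```

A clockwise burst followed by an equal counterclockwise burst carries the packet out and back to its starting cell, the discrete analogue of the out-and-back head turn simulated in Section 3.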
2.3 ANATOMICAL CHARACTERISTICS
Each component of the model is assumed to reside in a specific brain region. AHD and
PHD Cells are assumed to reside in anterior thalamus (AT) and postsubiculum (PS),
respectively. AS Cells have been observed in PS (Sharp, in press) and retrosplenial cortex
(RS) (McNaughton, Green, & Mizumori, 1986), but the model predicts that they may also
be found in the mammillary bodies (MB), since MB receives input from PS and RS (Shibata, 1989), and MB projects to ATN. AVHD Cells have been observed in PS (Taube et
al., 1990), but the model predicts that they may also be found in the reticular thalamic
nucleus (RTN), because RTN receives input from PS/RS (Lozsadi, 1994), and RTN inhibits AT. It should be noted that lateral excitation between ATN cells has not been shown, so
this feature of the model may be incorrect. Table 1 summarizes anatomical evidence.

Table 1: Anatomical Features of the Model

    REFERENCE                                   FEATURE OF MODEL
    Chen et al., 1990; Ranck, 1984              PHD Cells in PS/RS
    Blair & Sharp, 1995                         AHD Cells in AT
    McNaughton et al., 1994; Sharp, in press    AV Cells in PS/RS
    van Groen & Wyss, 1990                      AT projects to PS
    Shibata, 1992                               AT projects to RTN
    Lozsadi, 1994                               PS/RS projects to RTN
    Prediction of model                         AVHD Cells in RTN
    Prediction of model                         AS Cells in MB

3 SIMULATION RESULTS

The model illustrated in Figure 2 has been implemented using the neural simulator NEURON (Hines, 1993). Each neural element was represented as a single spherical compartment,
30 μm in diameter, with RC time constants ranging between 15 and 30 msec. Synaptic
results presented here demonstrate the behavior of the model, and compare the properties
of the model with experimental data.
To begin each simulation, a small current was injected in to one of the AHD Cells, causing
it to initiate sustained firing. This cell represented the simulated rat's initial head direction. Head-turning behavior was simulated by injecting current into the AV and AS Cells,
with an amplitude that yielded firing proportional to the desired angular head velocity.
3.1 ACTIVITY OF HEAD-DIRECTION CELLS
Figure 3 presents a simple simulation, which illustrates the behavior of head-direction
cells in the model. The simulated rat begins by facing in the direction of 0 degrees. Over
the course of 250 msec, the rat quickly turns its head 60 degrees to the right, and then
returns to the initial starting position of 0 degrees. The average velocity of the head in this
simulation was 480 degrees/sec, which is similar to the speed at which an actual rat performs a fast head turn (Blair & Sharp, 1995). Over the course of the simulation, neural
activation propagates from the 0-degree cell to the 60-degree cell, and then back to the 0-degree cell.
3.2 COMPARISON WITH EXPERIMENTAL DATA
To examine how well the model reproduces firing properties of PS and AT cells, another
simple simulation was performed. The firing rate of the model's PHD and AHD Cells was
examined while the simulated rat performed several 360-degree revolutions in both the
clockwise and counterclockwise directions. Results are summarized in Figure 4, which
[Figure 3 panels: animal behavior (turning right, then turning left; average angular velocity
= 480 deg/sec) and the activity of PHD Cells tuned between 0 and 60 degrees, plotted
against time (msec).]
Figure 3: Simulation Example
[Figure 4 panels: AT and PS cells, experimental vs. model data, plotted against angular head
velocity (deg/sec); the left panel shows the angular separation between clockwise and counterclockwise directional preferences, and the right panel shows firing rates.]
Figure 4: Compared Properties of Real and Simulated Head-Direction Cells
compares simulation data with experimental data. The experimental data in Figure 4
shows averaged results for 21 cells recorded in AT, and 19 cells recorded in PS.
Because AT cells anticipate the future head direction, they exhibit an angular separation
between their clockwise and counterclockwise directional preferences, whereas no such
separation occurs for PS cells (see section 2.4). For AT cells, the magnitude of the angular
separation is proportional to angular head velocity, with greater separation occurring for
fast turns, and less separation for slow turns (see Eq. 1). The left panel of Figure 4 shows
that the model's PHD and AHD Cells exhibit a similar pattern of angular separation.
Blair & Sharp (1995) reported that the firing rates of AT and PS cells differ in two ways:
1) AT cells fire at a higher rate than PS cells, and 2) AT cells have a higher rate during fast
turns than during slow turns, whereas PS cells fire at the same rate, regardless of turning
speed. In Figure 4 (right panel), it can be seen that the model reproduces these findings.
4 DISCUSSION AND CONCLUSIONS
In this paper, I have presented a neural model of the rat head-direction system. The model
includes neural elements whose firing properties are similar to those of actual neurons in
the rat brain. The model suggests that a thalamocortical circuit might compute the directional position of the rat's head, by integrating angular head velocity over time.
4.1 COMPARISON WITH OTHER MODELS
McNaughton et al. (1991) proposed that neurons encoding head-direction and angular
velocity might be connected to form a linear associative mapping network. Skaggs et al.
(1995) have refined this idea into a theoretical circuit, which incorporates head-direction
and angular velocity cells. However, the Skaggs et al. (1995) circuit does not incorporate
anticipatory head-direction cells, like those found in AT. A model that does incorporate
anticipatory cells has been developed by Elga, Redish, & Touretzky (unpublished manuscript). Zhang (1995) has recently presented a theoretical analysis of the head-direction
circuit, which suggests that anticipatory head-direction cells might be influenced by both
the angular velocity and angular acceleration of the head, whereas non-anticipatory cells
may be influenced by the angular velocity only, and not the angular acceleration.
4.2 LIMITATIONS OF THE MODEL
In its current form, the model suffers some significant limitations. For example, the directional tuning curves of the model's head-direction cells are much narrower than those of
actual head-direction cells. Also, in its present form, the model can accurately track the
rat's head-direction over a rather limited range of angular head velocities. These limitations are presently being addressed in a more advanced version of the model.
Acknowledgments
This work was supported by NRSA fellowship number 1 F31 MH11102-01A1 from
NIMH, a Yale Fellowship, and the Yale Neuroengineering and Neuroscience Center
(NNC). I thank Michael Hines, Patricia Sharp, and Steve Fisher for their assistance.
References
Blair, H.T., & Sharp, P.E. (1995). Anticipatory head-direction cells in anterior thalamus:
Evidence for a thalamocortical circuit that integrates angular head velocity to compute
head direction. Journal of Neuroscience, 15, 6260-6270.
Blair, H.T., & Sharp, P.E. (1996). Temporal tuning of anticipatory head-direction cells in the
anterior thalamus of the rat. Submitted.
Brown, M., & Sharp, P.E. (1995). Simulation of spatial learning in the Morris water maze
by a neural network model of the hippocampal formation and nucleus accumbens.
Hippocampus, 5, 171-188.
Burgess, N., Recce, M., & O'Keefe, J. (1994). A model of hippocampal function. Neural
Networks, 7, 1065-1081.
Elga, A.N., Redish, A.D., & Touretzky, D.S. (1995). A model of the rodent head-direction
system. Unpublished manuscript.
Hines, M. (1993). NEURON: A program for simulation of nerve equations. In F. Eckman
(Ed.), Neural Systems: Analysis and Modeling, Norwell, MA: Kluwer Academic Publishers, pp. 127-136.
Lozsadi, D.A. (1994). Organization of cortical afferents to the rostral, limbic sector of the
rat thalamic reticular nucleus. The Journal of Comparative Neurology, 341, 520-533.
McNaughton, B.L., Chen, L.L., & Markus, E.J. (1991). Dead reckoning, landmark learning, and the sense of direction: a neurophysiological and computational hypothesis.
Journal of Cognitive Neuroscience, 3, 190-202.
McNaughton, B.L., Green, E.J., & Mizumori, S.J.Y. (1986). Representation of body
motion trajectory by rat sensory motor cortex neurons. Society for Neuroscience
Abstracts, 12, 260.
McNaughton, B.L., Knierim, J.J., & Wilson, M.A. (1995). Vector encoding and the vestibular foundations of spatial cognition: neurophysiological and computational mechanisms. In M. Gazzaniga (Ed.), The Cognitive Neurosciences. Cambridge: MIT Press.
McNaughton, B.L., Mizumori, S.J.Y., Barnes, C.A., Leonard, B.J., Marquis, M., & Green,
B.J. (1994). Cortical representation of motion during unrestrained spatial navigation in
the rat. Cerebral Cortex, 4, 27-39.
Ranck, J.B. (1984). Head-direction cells in the deep cell layers of dorsal presubiculum in
freely moving rats. Society for Neuroscience Abstracts, 12, 1524.
Shibata, H. (1989). Descending projections to the mammillary nuclei in the rat, as studied
by retrograde and anterograde transport of wheat germ agglutinin-horseradish peroxidase. The Journal of Comparative Neurology, 285, 436-452.
Shibata, H. (1992). Topographic organization of subcortical projections to the anterior thalamic nuclei in the rat. The Journal of Comparative Neurology, 323, 117-127.
Sharp, P.E. (in press). Multiple spatial/behavioral correlates for cells in the rat postsubiculum: multiple regression analysis and comparison to other hippocampal areas. Cerebral Cortex.
Skaggs, W.E., Knierim, J.J., Kudrimoti, H.S., & McNaughton, B.L. (1995). A model of
the neural basis of the rat's sense of direction. In G. Tesauro, D.S. Touretzky, & T.K.
Leen (Eds.), Advances in Neural Information Processing Systems 7. MIT Press.
Taube, J.S. (1995). Head-direction cells recorded in the anterior thalamic nuclei of freely
moving rats. Journal of Neuroscience, 15, 70-86.
Taube, J.S., & Muller, R.U. (1995). Head-direction cell activity in the anterior thalamus,
but not the postsubiculum, predicts the animal's future directional heading. Society for
Neuroscience Abstracts, 21, 946.
Taube, J.S., Muller, R.U., & Ranck, J.B. (1990). Head-direction cells recorded from the
postsubiculum in freely moving rats, I. Description and quantitative analysis. Journal
of Neuroscience, 10, 420-435.
van Groen, T., & Wyss, J.M. (1990). The postsubicular cortex in the rat: characterization
of the fourth region of subicular cortex and its connections. Journal of Comparative
Neurology, 216, 192-210.
Wan, H.S., Touretzky, D.S., & Redish, A.D. (1994). A rodent navigation model that combines place code, head-direction, and path integration information. Society for Neuroscience Abstracts, 20, 1205.
Zhang, K. (1995). Representation of spatial orientation by the intrinsic dynamics of the
head-direction cell ensemble: A theory. Submitted.
USE OF MULTI-LAYERED NETWORKS FOR
CODING SPEECH WITH PHONETIC FEATURES
Piero Cosi
Centro di Studio per le
Ricerche di Fonetica, C.N.R.,
Via Oberdan,10,
35122 Padova, Italy
Yoshua Bengio, Regis Cardin
and Renato De Mori
Computer Science Dept.
McGill University
Montreal, Canada H3A2A7
ABSTRACT
Preliminary results on speaker-independent speech
recognition are reported. A method that combines expertise on
neural networks with expertise on speech recognition is used
to build the recognition systems. For transient sounds, eventdriven property extractors with variable resolution in the
time and frequency domains are used. For sonorant speech, a
model of the human auditory system is preferred to FFT as a
front-end module.
INTRODUCTION
Combining a structural or knowledge-based approach for describing
speech units with neural networks capable of automatically learning
relations between acoustic properties and speech units is the research
effort we are attempting. The objective is that of using good
generalization models for learning speech units that could be reliably
used for many recognition tasks without having to train the system when
a new speaker comes in or a new task is considered.
Domain (speech recognition) specific knowledge is applied for
- segmentation and labeling of speech,
- definition of event-driven property extractors,
- use of an ear model as preprocessing applied to some modules,
- coding of network outputs with phonetic features,
- modularization of the speech recognition task by dividing the
workload into smaller networks performing Simpler tasks.
Optimization of learning time and of generalization for the neural
networks is sought through the use of neural networks techniques :
- use of error back-propagation for learning,
- switching between on-line learning and batch learning when
appropriate,
- convergence acceleration with local (weight specific) learning
rates,
- convergence acceleration with adaptive learning rates based on
information on the changes in the direction of the gradient,
- control of the presentation of exemplars in order to balance
exemplars among the different classes,
- training of small modules in the first place:
- simpler architecture (e.g. first find out the solution to
the linearly separable part of the problem),
- use of simple recognition task,
combined using either Waibel's glue units [Waibel 88] or with simple
heuristics.
- training on time-shifted inputs to learn time invariance and
insensitivity to errors in the segmentation preprocessing.
- controlling and improving generalization by using several test sets and
using one of them to decide when to stop training.
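As one concrete, hypothetical reading of the item above on "adaptive learning rates based on information on the changes in the direction of the gradient", a per-weight rate can be increased while successive gradients agree in sign and cut when the sign flips. This is a generic delta-bar-delta-style sketch, not necessarily the authors' exact rule.

```python
def adapt_rates(rates, grad, prev_grad, up=1.1, down=0.5,
                lo=1e-6, hi=1.0):
    """Per-weight learning-rate adaptation: grow a weight's rate while
    its gradient keeps the same sign, shrink it when the sign flips.
    A sketch of the general idea only; factors up/down are arbitrary."""
    new_rates = []
    for r, g, pg in zip(rates, grad, prev_grad):
        if g * pg > 0:        # same direction: speed up
            r *= up
        elif g * pg < 0:      # direction changed: slow down
            r *= down
        new_rates.append(min(max(r, lo), hi))  # keep rates bounded
    return new_rates

rates = adapt_rates([0.1, 0.1, 0.1], [1.0, -2.0, 0.0], [0.5, 3.0, 1.0])
# first weight sped up, second slowed down, third left unchanged
```

Such per-weight schedules pair naturally with the batch/on-line switching mentioned above, since sign agreement is only meaningful across comparable gradient estimates.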
EAR MODEL
In recent years basilar membrane, inner cell and nerve fiber behavior
have been extensively studied by auditory physiologists and
neurophysiologists and knowledge about the human auditory pathway
has become more accurate [Sachs79,80,83][Delgutte 80,84][Sinex
83]. The computational scheme proposed in this paper for modelling the
human auditory system is derived from the one proposed by S. Seneff
[Seneff 84,85,86]. The overall system structure which is illustrated in
Fig. 1 includes three blocks: the first two of them deal with peripheral
transformations occurring in the early stages of the hearing process
while the third one attempts to extract information relevant to
perception. The first two blocks represent the periphery of the earing
system. They are designed using knowledge of the rather well known
responses of the corresponding human auditory stages [Delgutte 84].
The third unit attempts to apply a useful processing strategy for the
extraction of important speech properties like spectral lines related to
formants.
The speech signal, band-limited and sampled at 16 kHz, is first prefiltered through a set of four complex zero pairs to eliminate the very
high and very low frequency components. The signal is then analyzed by
the first block, a 40-channel critical-band linear filter bank.
Filters were designed to optimally fit physiological data [Delgutte 84]
such as those observed by [N.V.S. Kiang et al.] and are implemented as a
cascade of complex high frequency zero pairs with taps after each zero
pair to individual tuned resonators. The second block of the model is
called the hair cell synapse model, it is nonlinear and is intended to
capture prominent features of the transformation from basilar
membrane vibration, represented by the outputs of the filter bank, to
probabilistic response properties of auditory nerve fibers. The outputs
of this stage, in accordance with
[Seneff 88], represent the
probability of firing as a function of time for a set of similar fibers
acting as a group. Four different neural mechanisms are modeled in this
nonlinear stage. The rectifier is applied to the signal to simulate the
high level distinct directional sensitivity present in the inner hair cell
current response. The short-term adaptation which seems due to the
neurotransmitter release in the synaptic region between the inner hair
cell and its connected nerve fibers is Simulated by the "membrane
model". The third unit represents the observed gradual loss of
synchrony in nerve fiber behaviour as stimulus frequency is
increased. The last unit is called "Rapid Adaptation", it performs
"Automatic Gain Control" and implements a model of the refractory
phenomenon of nerve fibers. The third and last block of the ear model is
the synchrony detector which implements the known "phase locking"
property of the nerve fibers. It enhances spectral peaks due to vocal
tract resonances.
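A drastically simplified sketch of two of the hair-cell-synapse operations described above -- half-wave rectification (the directional sensitivity) followed by a first-order "membrane" low-pass standing in for short-term adaptation -- might look as follows. The time constant and structure are illustrative assumptions, not the model's actual parameters.

```python
import math

def hair_cell_stage(x, dt=1.0 / 16000, tau=0.005):
    """Toy version of two hair-cell-synapse operations: half-wave
    rectification followed by a first-order 'membrane' low-pass that
    mimics short-term adaptation. Parameter values are illustrative
    assumptions, not those of the actual ear model."""
    out, state = [], 0.0
    a = dt / (dt + tau)              # smoothing coefficient
    for v in x:
        rectified = max(v, 0.0)          # directional sensitivity
        state += a * (rectified - state) # membrane smoothing
        out.append(state)
    return out

# A 1 kHz tone sampled at 16 kHz: the output is non-negative and smoothed.
tone = [math.sin(2 * math.pi * 1000 * n / 16000) for n in range(160)]
resp = hair_cell_stage(tone)
```

The real model adds loss of synchrony, rapid adaptation/AGC, and the synchrony detector downstream; this fragment only illustrates why the stage output can be read as a slowly varying firing probability.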
Figure 1 : Structure of the ear model (input signal -> 40-channel critical-band filter bank -> basilar membrane response -> hair cell synapse model -> firing probability -> synchrony detector -> synchrony spectrum).
Figure 2 : Multi-layered network with variable resolution property extractor (input cells over time feeding two hidden layers and an output layer).
PROPERTY EXTRACTORS
For many of the experiments described in this paper, learning is
performed by a set of multi-layered neural networks (MLNs) whose
execution is decided by a data-driven strategy. This strategy
analyzes morphologies of the input data and selects the execution of one
or more MLNs as well as the time and frequency resolution of the
spectral samples that are applied at the network input. An advantage of
using such specialized property extractors is that the number of
necessary input connections (and thus of connections) is then
minimized, thus improving the generalizing power of the MLNs.
Fine time resolution and gross frequency resolution are used, for
example, at the onset of a peak of signal energy, while the opposite is
used in the middle of a segment containing broad-band noise. The latter
situation will allow the duration of the segment analyzed by one
instantiation of the selected MLN to be larger than the duration of the
signal analyzed in the former case.
Property extractors (PEs) are mostly rectangular windows
subdivided into cells, as illustrated in Figure 2. Property extractors
used in the experiments reported here are described in [Bengio, De Mori
& Cardin 88]. A set of PEs form the input of a network called MLN1,
executed when a situation characterized by the following rule is detected:
SITUATION S1
((deep_dip)(t*)(peak))
or ((ns)(t*)(peak))
or ((deep_dip)(sonorant-head)(t*)(peak))
--> execute (MLN1 at t*)
(deep_dip), (peak), (ns) are symbols of the PAC alphabet representing
respectively a deep dip, a peak in the time evolution of the signal energy
and a segment with broad-band noise; t* is the time at which the first
description ends, sonorant-head is a property defined in [De Mori,
Merlo et al. 87]. Similar preconditions and networks are established for
nonsonorant segments at the end of an utterance.
Another MLN called MLN2 is executed only when frication noise is
detected. This situation is characterized by the following rule:
SITUATION S2
(pr1 = (ns)) --> execute (MLN2 every T = 20 msecs.)
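The data-driven execution strategy can be sketched as a small dispatcher that scans the PAC symbol description and schedules networks when a trigger pattern matches. The version below hard-codes crude approximations of situations S1 and S2 only; the real situation detector is considerably more elaborate.

```python
def schedule_mlns(description):
    """Select which networks to run from a PAC symbol description.
    `description` is a list of (symbol, time) pairs. A simplified
    sketch: only rough versions of the paper's two trigger rules are
    encoded, and the matching is far cruder than the real detector."""
    actions = []
    symbols = [sym for sym, _ in description]
    for i, (sym, t) in enumerate(description):
        # Situation S1 (simplified): a peak preceded by a deep dip
        # or broad-band noise triggers MLN1 at the peak time t*.
        if sym == "peak" and i > 0 and symbols[i - 1] in ("deep_dip", "ns"):
            actions.append(("MLN1", t))
        # Situation S2 (simplified): frication noise triggers MLN2.
        if sym == "ns":
            actions.append(("MLN2", t))
    return actions

acts = schedule_mlns([("deep_dip", 10), ("peak", 30), ("ns", 70)])
# -> [("MLN1", 30), ("MLN2", 70)]
```

Dispatching specialized networks this way is what keeps each MLN's input window, and hence its connection count, small.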
EXPERIMENTAL RESULTS
EXPERIMENT 1
- task : perform the classification among the following 10 letters of the
alphabet, from the E-set: {b, c, d, e, g, k, p, t, v, z}
- Input coding defined in [Bengio, De Mori & Cardin 88].
- architecture: two modules have been defined, MLN1 and MLN2. The
input units of each PE window are connected to a group of 20 hidden
units, which are connected to another group of 10 hidden units. All the
units in the last hidden layer are then connected to the 10 output units.
- database : in the learning phase, 1400 samples corresponding to 2
pronunciations of each word of the E-set by 70 speakers were used for
training MLN1 and MLN2. Ten new speakers were used for testing.
The data base contains 40 male and 40 female speakers.
- results: an overall error rate of 9.5% was obtained with a maximum
error of 20% for the letter /d/. These results are much better than the
ones we obtained before and we published recently [De Mori, Lam &
Gilloux 87]. An observation of the confusion matrix shows that most of
the errors represent cases that appear to be difficult even in human
perception.
EXPERIMENT 2
- task : similar to the one in experiment 1, i.e. to recognize the head
consonant in the context of a certain vowel: /ae/, /o/, /u/, and /a/.
-subtask 1 : classify pronunciations of the first phoneme of letters
A, K, J, Z and digit 7 into the classes {/vowel/, /k/, /j/, /z/, /s/}.
-subtask 2 : classify pronunciations of the first phoneme of the letter
and digit 4 into the classes {/vowel/, /f/}.
-subtask 3 : classify pronunciations of the first phoneme of the letter Y
and the digits 1 and 2 into the classes {/vowel/, /t/}.
-subtask 4 : classify pronunciations of the first phoneme of letters
I, R, W and digits 5 and 9 into the classes {/vowel/, /d/, /f/, /n/}.
- input coding : as for experiment 1 except that only PEs pertaining to
situation S1 were used, as the input to a single MLN.
- architecture : two layers of respectively 40 and 20 hidden units
followed by an output unit for each of the classes defined for the subtask.
- database : 80 speakers (40 males, 40 females) each pronouncing two
utterances of each letter and each digit. The first 70 speakers are used
for training, the last 10 for testing.
- results :
subtask 1 : {/vowel/, /k/, /j/, /z/, /s/} preceding vowel /ae/.
4 % error on test set.
subtask 2 : {/vowel/, /f/} preceding vowel /o/.
0 % error on test set.
subtask 3 : {/vowel/, /t/} preceding vowel /u/.
0 % error on test set.
subtask 4 : {/vowel/, /d/, /f/, /n/} preceding vowel /a/.
3 % error on test set.
EXPERIMENT 3
- task: speaker-independent vowel recognition to discriminate
among ten vowels extracted from 10 English words
{BEEP,PIT,BED,BAT,BUT,FUR,FAR,SAW,PUT,BOOT}.
- input coding : the signal processing method used for this experiment is
the one described in the section "ear model". The output of the
Generalised Synchrony Detector (GSD) was collected every 5 msecs. and
represented by a 40-coefficients vector. Vowels were automatically
singled out by an algorithm proposed in [De Mori 85] and a linear
interpolation procedure was used to reduce to 10 the variable number of
frames per vowel (the first and the last 20 ms were not considered in
the interpolation procedure).
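The frame-count normalization step -- reducing a variable number of frames per vowel to exactly 10 by linear interpolation -- can be sketched as follows. This is a generic linear resampler written for illustration, assumed rather than taken from the paper.

```python
def resample_frames(frames, target=10):
    """Linearly interpolate a variable-length list of feature frames
    (each a list of floats) to a fixed number of frames. A sketch of
    the normalization step described in the text."""
    n = len(frames)
    if n == 1:
        return [frames[0][:] for _ in range(target)]
    out = []
    for k in range(target):
        pos = k * (n - 1) / (target - 1)   # fractional source index
        i, frac = int(pos), pos - int(pos)
        j = min(i + 1, n - 1)
        out.append([(1 - frac) * a + frac * b
                    for a, b in zip(frames[i], frames[j])])
    return out

# Four 1-dimensional frames stretched to the fixed length of 10:
fixed = resample_frames([[0.0], [1.0], [2.0], [3.0]], target=10)
```

With 40-coefficient GSD vectors, each element of `frames` would simply be a 40-element list, giving the fixed 10 x 40 input expected by the network.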
- architecture: 400 input units (10 frames x 40 filters), a single
hidden layer with 20 nodes, 10 output nodes for the ten vowels.
- database: speech material consisted of 5 pronunciations of the ten
monosyllabic words by 13 speakers (7 male, 6 female) for training and
7 new speakers (3 male, 4 female) for test.
- results: In 95.4% of the cases, correct hypotheses were generated
with the highest evidence, in 98.5% of the cases correct hypotheses
were found in the top two candidates and in 99.4 % of the cases in the
top three candidates. The same experiment with FFT spectra instead of
data from the ear model gave an 87% recognition rate in similar
experimental conditions. The use of the ear model allowed to produce
spectra with a limited number of well defined spectral lines. This
represents a good use of speech knowledge according to which formants
are vowel parameters with low variance. The use of male and female
voices allowed the network to perform an excellent generalization with
samples from a limited number of speakers.
CONCLUSION
The preliminary experiments reported here on speaker normalization
combining multi-layered neural networks and speech recognition
expertise show promising results. For transient sounds, event-driven
property extractors with variable resolutions in the time and frequency
domains were used. For sonorant speech with formants, a new model of
the human auditory system was preferred to the classical FFT or LPC
representation as a front-end module. More experiments have to be
carried out to build an integrated speaker-independent phoneme
recognizer based on multiple modules and multiple front-end coding
strategies. In order to tune this system, variable depth analysis will be
used. New small modules will be designed to specifically correct the
deficiencies of trained modules. In addition, we consider strategies to
perform recognition at the word level, using as input the sequence of
outputs of the MLNs as time flows and new events are encountered.
These strategies are also useful to handle slowly varying transitions
such as those in diphtongs.
REFERENCES
Bengio Y., De Mori R. & Cardin R., (1988)"Data-Driven Execution of
Multi-Layered Networks for Automatic Speech Recognition", Proceedings
of AAAI-88, August 1988, Saint Paul, Minnesota, pp. 734-738.
Bengio Y. & De Mori R. (1988), "Speaker normalization and automatic
speech recognition uSing spectral lines and neural networks",
Proceedings of the Canadian Conference on Artificial Intelligence
(CSCSI-88), Edmonton, Alberta, May 1988.
Delgutte B. (1980), "Representation of speech-like sounds in the
discharge patterns of auditory-nerve fibers" , Journal of the Acoustical
Society of America, N. 68, pp. 843-857.
Delgutte B. & Kiang N.Y.S. (1984) , "Speech coding in the auditory
nerve", Journal of Acoustical Society of America, N. 75, pp. 866-907.
De Mori R., Laface P. & Mong Y. (1985), "Parallel algorithms for
syllable recognition in continuous speech", IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. PAMI-7, N. 1, pp. 56-69, 1985.
De Mori R., Merlo E., Palakal M. & Rouat J. (1987), "Use of procedural
knowledge for automatic speech recognition", Proceedings of the tenth
International Joint Conference on Artificial Intelligence, Milan, August
1987, pp. 840-844.
De Mori R., Lam L. & Gilloux M. (1987), "Learning and plan refinement
in a knowledge-based system for automatic speech recognition", IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-9,
No.2, pp.289-305.
Multi-Layered Networks for Coding Phonetic Features
Kiang N.Y.S., Watanabe T., Thomas E.C. & Clark L.F., "Discharge patterns
of single fibers in the cat's auditory nerve", Cambridge, MA: MIT
Press.
Rumelhart D.E., Hinton G.E. & Williams R.J. (1986),"Learning internal
representation by error propagation", Parallel Distributed Processing :
Exploration in the Microstructure of Cognition, vol. 1, pp.318-362,
MIT Press, 1986.
Seneff S. (1984), "Pitch and spectral estimation of speech based on an
auditory synchrony model", Proceedings of ICASSP-84, San Diego, CA.
Seneff S. (1985), "Pitch and spectral analysis of speech based on an
auditory synchrony model", RLE Technical Report 504 , MIT.
Seneff S. (1986), "A computational model for the peripheral auditory
system: application to speech recognition research", Proceedings of
ICASSP-86, Tokyo, pp. 37.8.1-37.8.4.
Seneff S. (1988), "A joint synchrony/mean-rate model of auditory
speech processing", Journal of Phonetics, January 1988.
Sachs M.B. & Young E.D. (1979),"Representation of steady-state vowels
in the temporal aspects of the discharge pattern of populations of
auditory nerve fibers", Journal of Acoustical Society of America, N. 66,
pp. 1381-1403.
Sachs M.B. & Young E.D. (1980), "Effects of nonlinearities on speech
encoding in the auditory nerve", Journal of Acoustical Society of
America, N. 68, pp. 858-875.
Sachs M.B. & Miller M.1. (1983), "Representation of stop consonants in
the discharge patterns of auditory-nerve fibers", Journal of Acoustical
Society of America, N. 74, pp. 502-517.
Sinex D.G. & Geisler C.D. (1983), "Responses of auditory-nerve fibers
to consonant-vowel syllables", Journal of Acoustical Society of America,
N. 73, pp. 602-615.
Waibel A. (1988),"Modularity in Neural Networks for Speech
Recognition", Proc. of the 1988 IEEE Conference on Neural Information
Processing Systems, Denver, CO.
Some results on convergent unlearning
algorithm
Serguei A. Semenov & Irina B. Shuvalova
Institute of Physics and Technology
Prechistenka St. 13/7
Moscow 119034, Russia
Abstract
In this paper we consider the probabilities of the different asymptotics of
the convergent unlearning algorithm for the Hopfield-type neural network (Plakhov & Semenov, 1994), treating the case of unbiased
random patterns. We also show that failed unlearning results in
total memory breakdown.
1
INTRODUCTION
In past years unsupervised learning schemes have aroused strong interest among
researchers, but for the time being little is known about the underlying learning mechanisms, and still fewer rigorous results, such as convergence theorems, have been obtained
in this field. One promising concept along this line is the so-called "unlearning"
for Hopfield-type neural networks (Hopfield et al., 1983; van Hemmen & Klemmer, 1992; Wimbauer et al., 1994). Elaborating on those elegant ideas, a convergent
unlearning algorithm has recently been proposed (Plakhov & Semenov, 1994) that executes without pattern presentation. It aims to correct the initial Hebbian
connectivity in order to provide extensive storage of arbitrary correlated data.
This algorithm is stated as follows. Pick at iteration step $m$, $m = 0, 1, 2, \ldots$, a
random network state $s^{(m)} = (S_1^{(m)}, \ldots, S_N^{(m)})$, with the values $S_i^{(m)} = \pm 1$ having
equal probability 1/2, and calculate the local fields generated by $s^{(m)}$:
$$h_i^{(m)} = \sum_{j=1}^{N} J_{ij}^{(m)} S_j^{(m)}, \qquad i = 1, \ldots, N,$$
and then update the synaptic weights by
$$J_{ij}^{(m+1)} = J_{ij}^{(m)} - cN^{-1} h_i^{(m)} h_j^{(m)}, \qquad i, j = 1, \ldots, N. \qquad (1)$$
Here $c > 0$ stands for the unlearning strength parameter. We stress that self-interactions, $J_{ii}$, are necessarily involved in the iteration process. The initial condition for (1) is given by the Hebb matrix, $J^{(0)} = J^{H}$:
$$J_{ij}^{H} = N^{-1} \sum_{\mu=1}^{p} \xi_i^{\mu} \xi_j^{\mu}, \qquad (2)$$
with arbitrary $(\pm 1)$-patterns $\xi^{\mu}$, $\mu = 1, \ldots, p$.
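A minimal numerical sketch of the Hebbian initialization (2) followed by unlearning iterations (1), assuming NumPy; the network size, pattern count, and unlearning strength below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, c, steps = 64, 8, 0.01, 2000

# Hebbian initialization, eq. (2): J_ij = N^{-1} sum_mu xi_i^mu xi_j^mu
xi = rng.choice([-1.0, 1.0], size=(p, N))
J = xi.T @ xi / N

# Keep the unlearning strength safely below the largest eigenvalue bound.
assert c < 1.0 / np.linalg.eigvalsh(J)[-1]

# Unlearning iterations, eq. (1); note that self-interactions J_ii
# are deliberately kept in the update.
for _ in range(steps):
    s = rng.choice([-1.0, 1.0], size=N)    # random +-1 network state
    h = J @ s                              # local fields h_i = sum_j J_ij s_j
    J -= c * np.outer(h, h) / N            # eq. (1)
```

Every local-field vector h lies in the range of the current J, so the iteration never leaves the subspace spanned by the patterns; for small enough c the rescaled matrix drifts toward the projector onto that subspace.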
For $c < c_c$, the (rescaled) synaptic matrix has been proven to converge with probability one to the projection matrix onto the linear subspace spanned by a maximal subset
of linearly independent patterns (Plakhov & Semenov, 1994). As a sufficient
condition for that convergence to occur, the value of the unlearning strength $c$ should
be less than $c_c = \lambda_{\max}^{-1}$, where $\lambda_{\max}$ denotes the largest eigenvalue of the Hebb
matrix. Very often in real-world situations there are no means to know $c_c$ in advance, and therefore it is of interest to explore the asymptotic behaviour of the iterated
synaptic matrix for arbitrary values of $c$. As will be seen, there are only three possible limiting behaviours of the normalized synaptic matrix (Plakhov, 1995; Plakhov
& Semenov, 1995). The corresponding convergence theorems relate the
spectrum dynamics to the limiting behaviour of the normalized synaptic matrix $\hat{J} = J/\|J\|$
(with $\|J\| = (\sum_{i,j=1}^{N} J_{ij}^2)^{1/2}$), which can be described in terms of $\lambda_{\min}^{(m)}$, the smallest
eigenvalue of $J^{(m)}$:
I. if λ_min^(m) = 0 for every m = 0, 1, 2, ..., with the multiplicity of the zero eigenvalue being fixed, then

(A)   lim_{m→∞} J̃_ij^(m) = s^{-1/2} P_ij,

where P marks the projection matrix onto the linear subspace C ⊂ R^N spanned by the nominated patterns ξ^μ, μ = 1, ..., p, with s = dim C ≤ p;

II. if λ_min^(m) = 0, m = 0, 1, 2, ..., but at some (at least one) steps the multiplicity of the zero eigenvalue increases, then

(B)   lim_{m→∞} J̃_ij^(m) = s'^{-1/2} P'_ij,

where P' is the projector onto some subspace C' ⊂ C, s' = dim C' < s;

III. if λ_min^(m) < 0 starting from some value of m, then

(C)   lim_{m→∞} J̃_ij^(m) = −ξ_i ξ_j,   (3)

with some (not a ±1) unit random vector ξ = (ξ_1, ..., ξ_N).
These three cases exhaust all possible asymptotic behaviours of J̃_ij^(m); that is, their total probability is unity: P_A + P_B + P_C = 1. The patterns set is supposed to be fixed.
The convergence theorems say nothing about the relative probabilities of the specific asymptotics depending on the model parameters. In this paper we present some general results elucidating this question and verify them by numerical simulation.
We show further that the limiting synaptic matrix for the case (C), which is minus the projector onto the vector ξ, cannot maintain any associative memory. A brief discussion of the retrieval properties of the intermediate case (B) is also given.
S. A. SEMENOV, I. B. SHUVALOVA
2 PROBABILITIES OF POSSIBLE LIMITING BEHAVIOURS OF J̃^(m)
The unlearning procedure under consideration is stochastic in nature. Which result of the iteration process, (A), (B) or (C), will realize depends upon the value of ε, the size and statistical properties of the patterns set {ξ^μ, μ = 1, ..., p}, and the realization of the unlearning sequence {s^(m), m = 0, 1, 2, ...}.
Under a fixed patterns set, the probabilities of appearance of each limiting behaviour of the synaptic matrix are determined by the value of the unlearning strength ε only. In this section we consider these probabilities as functions of ε.
Generally speaking, the considered probabilities exhibit strong dependence on the patterns set, making it impossible to calculate them explicitly. It is possible, however, to obtain some general knowledge concerning these probabilities, namely: P_A(ε) → 1 as ε → 0+, and hence P_B,C(ε) → 0; conversely, P_C(ε) → 1 as ε → ∞, and P_A,B(ε) → 0, because of P_A + P_B + P_C = 1. This means that the risk of failed unlearning rises as ε increases. Specifically, we are able to prove the following:
Proposition. There exist positive ε_1 and ε_2 such that P_A(ε) = 1 for 0 < ε < ε_1, and P_C(ε) = 1 for ε_2 < ε.
Before passing to the proof we bring forward an alternative formulation of the above-stated classification. After multiplying both sides of (1) by S_i^(m) S_j^(m) and summing over all i and j, we obtain in matrix notation

s^(m)T J^(m+1) s^(m) = Δ_m s^(m)T J^(m) s^(m),   (4)

where the contraction factor Δ_m = 1 − εN^{-1} s^(m)T J^(m) s^(m) controls the asymptotics of J̃^(m), as suggested by detailed analysis (Plakhov & Semenov, 1995). (Here and below the superscript T designates the transpose.) The hypotheses of the convergence theorems can thus be restated in terms of Δ_m, instead of λ_min^(m), respectively:

I. Δ_m > 0 for all m;  II. Δ_m = 0 for l steps m_1, ..., m_l;  III. Δ_m < 0 at some step m.
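In code, the contraction factor is a one-line diagnostic (a hypothetical snippet, not from the paper):

```python
import numpy as np

def contraction_factor(J, s, eps):
    """Delta_m = 1 - eps N^{-1} s^T J s, cf. (4).

    Monitoring its sign along a run classifies the outcome:
    always > 0 -> case (A); zero at some steps -> case (B); < 0 -> case (C).
    """
    N = len(s)
    return 1.0 - eps * (s @ J @ s) / N
```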
Proof. It is obvious that Δ_m ≥ 1 − ελ_max^(m), where λ_max^(m) marks the largest eigenvalue of J^(m). From (4), it follows that the sequence {λ_max^(m), m = 0, 1, 2, ...} is non-increasing, and consequently Δ_m ≥ 1 − ελ_max^H with

λ_max^H = sup_{|x|=1} x^T J^H x = sup_{|x|=1} N^{-1} Σ_{μ=1}^{p} (Σ_{i=1}^{N} ξ_i^μ x_i)^2 ≤ N^{-1} Σ_{μ=1}^{p} Σ_{i=1}^{N} (ξ_i^μ)^2 Σ_{i=1}^{N} x_i^2 = p.

From this, it is straightforward to see that, if ε < p^{-1}, then Δ_m > 0 for any m. By the convergence theorem (Plakhov & Semenov, 1995) the iteration process (1) thus leads to the limiting relation (A).
Let by definition γ = min_s N^{-1} s^T J^H s, where the minimum is taken over those (±1)-vectors s for which J^H s ≠ 0 (γ > 0, in view of the positive semidefiniteness of J^H), and put ε > γ^{-1}. Let us further denote by n the iteration step such that J^H s^(m) = 0 for m = 0, 1, ..., n − 1 and J^H s^(n) ≠ 0. Needless to say, this condition may be satisfied even at the initial step, n = 0: J^H s^(0) ≠ 0. At step n one has

Δ_n = 1 − εN^{-1} s^(n)T J^H s^(n) ≤ 1 − εγ < 0.
The latter implies loss of positive semidefiniteness of J^(m), which results in asymptotics (C) (Plakhov, 1995; Plakhov & Semenov, 1995). By choosing ε_1 = p^{-1} and ε_2 = γ^{-1} we come to the statement of the Proposition.
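The bound λ_max(J^H) ≤ p used for ε_1 is easy to check numerically (an illustrative snippet, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 60, 5
xi = rng.choice([-1.0, 1.0], size=(p, N))   # unbiased (+/-1)-patterns
JH = xi.T @ xi / N                          # Hebb matrix (2)
lam_max = np.linalg.eigvalsh(JH).max()      # always lies in [0, p]
```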
Comparison of numerical estimates of the considered probabilities with analytical approximations can be done for simple patterns statistics. In what follows the patterns are assumed to be random and unbiased.
The dependence P(ε) has been found in computer simulation with unbiased random patterns. It is worth noting, in passing, that calculating Δ_m from the current simulation data supplies a good control of the unlearning process, owing to the alternative formulation of the convergence theorems. In the simulation we calculate P_A(ε) averaged over the sets of unbiased random patterns, as well as over the realizations of the unlearning sequence. As N increases, with α = p/N remaining fixed, the curves slope steeply down, approaching the step function P_A(ε) = θ(α^{-1} − ε) (Plakhov & Semenov, 1995). Without presenting a derivation or proof, we advance reasoning suggestive of it. First, it can be checked that Δ_m is a self-averaging quantity, with mean 1 − εN^{-1} Tr J^(m) and variance vanishing as N goes to infinity. Initially one has N^{-1} Tr J^H = α, and obviously the sequence {Tr J^(m), m = 0, 1, 2, ...} is non-increasing. Therefore Δ_0 = 1 − εα, and all other Δ_m are not less than Δ_0. If one chooses ε < α^{-1}, then all Δ_m will be positive, and the case (A) will realize. On the other hand, when ε > α^{-1}, we have Δ_0 < 0, and the case (C) will take place.
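The self-averaging of Δ_0 is also easy to observe numerically (an illustrative snippet, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, eps = 200, 20, 2.0
alpha = p / N
xi = rng.choice([-1.0, 1.0], size=(p, N))
JH = xi.T @ xi / N
# Delta_0 = 1 - eps N^{-1} s^T J^H s has mean 1 - eps*alpha over random states s
deltas = [1.0 - eps * (s @ JH @ s) / N
          for s in rng.choice([-1.0, 1.0], size=(500, N))]
mean_delta = np.mean(deltas)   # close to 1 - eps*alpha for these values
```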
What is the probability for the asymptotics (B) to appear? We will adduce an argument (the detailed analysis (Plakhov & Semenov, 1995) is rather cumbersome and omitted here) indicating that this probability is quite small. First note that, for a given patterns set, it is nonzero for isolated values of ε only. Under the assumption that the patterns are random and unbiased, we have calculated the probability of an l-fold appearance of Δ_m = 0, summed over those isolated values of ε. Using a Gaussian approximation at large N, we have found that this probability scales with N as N^{l/2 + 2 − 2(l + m + 1)}. The total probability can then be obtained by summing over the integer values 0 < l < s and over all the iteration steps m = 0, 1, 2, .... As a result, the main contribution to the total probability comes from the m = 0 term, which is of the order N^{−3/2}.
3 LIMITING RETRIEVAL PROPERTIES
How does the reduction of the dimension of the "memory space" in the case (B), s → s' = s − l, affect the retrieval properties of the system? They may vary considerably depending on l. In the most probable case l = 1 it is expected that there will be a slight decrease in storage capacity, but the size of the attraction basins will change negligibly. This is corroborated by calculating the stability parameter for each pattern μ,

γ_i^μ = ξ_i^μ Σ_{j≠i} P'_ij ξ_j^μ.   (5)
Let s^(m_1) be the state vector with normalized projection on C given by V = P s^(m_1)/|P s^(m_1)|, such that

|P s^(m_1)| = (αN)^{1/2},   V_i ~ N^{-1/2},   Σ_{i=1}^{N} ξ_i^μ V_i ~ 1.

Then the stability parameter (5) is estimated by

γ_i^μ = ξ_i^μ Σ_{j≠i} (P_ij − V_i V_j) ξ_j^μ = (1 − P_ii) − (ξ_i^μ V_i Σ_{j=1}^{N} V_j ξ_j^μ − V_i^2) ≈ 1 − P_ii + O(N^{−1/2}).
Since P_ii has mean α and variance vanishing as N → ∞, we thus conclude that the stability parameter differs only slightly from that calculated for the projector rule (s = s') (Kanter & Sompolinsky, 1987).
On the other hand, in the situation 0 < s'/s ≪ 1 (the possible case s' = 0 is trivial), the system will be capable of retrieving only a few nominated patterns, and which ones we cannot specify beforehand. As mentioned above, this case realizes with a very small but finite probability.
The main effect of the self-interactions J_ii lies in a substantial decrease in storage capacity (Kanter & Sompolinsky, 1987). This is relevant when considering the cases (A) and (B). In the case (C) the system possesses an interesting dynamics, exhibiting a permanent walk over the state space. There are no fixed points at all. To show this, we write down the fixed-point condition for an arbitrary state S: S_i Σ_{j=1}^{N} J̃_ij S_j > 0, i = 1, ..., N. By using the explicit expression (3) for the limiting matrix J̃_ij and summing over the i's, we get as a result (Σ_j ξ_j S_j)^2 < 0, which is impossible.
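For small N this absence of fixed points can be verified exhaustively (an illustrative snippet, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12
xi = rng.normal(size=N)
xi /= np.linalg.norm(xi)            # a generic unit vector, as in (3)
J_lim = -np.outer(xi, xi)           # limiting matrix of case (C)

# check every +/-1 state: none satisfies S_i (J S)_i > 0 for all i
for idx in range(2 ** N):
    S = np.where((idx >> np.arange(N)) & 1, 1.0, -1.0)
    assert not np.all(S * (J_lim @ S) > 0)
```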
If the self-interactions are excluded from the local fields at the stage of the network dynamics, the latter is then driven by the energy function of the form H = −(2N)^{-1} Σ_{i≠j} J̃_ij S_i S_j. (Zero-temperature sequential dynamics, either random or regular, is assumed.)
In the rest of this section we examine the dynamics of the network equipped with the limiting synaptic matrix (3) of the case (C). We will show that in this limit the system lacks any associative memory. There is a single global maximum of H, given by S_i = sgn(ξ_i), and exponentially many shallow minima concentrated close to the hyperplane orthogonal to ξ. Moreover, it turns out that all the metastable states are unstable against a single spin flip only, whatever the realization of the limiting vector ξ. Therefore, after a spin flips, the system can relax into a new nearby energy minimum. Through a sequence of steps, each consisting of a single spin flip followed by relaxation, one can, in principle, pass from one metastable state to another.

We will prove in what follows that from any given metastable state S' one can pass to any other one S through a sequence of steps, each consisting of a single spin flip and subsequent relaxation to some new metastable state. Note that this general statement gives no indication concerning the order of spin flips when moving along a particular trajectory in the state space.
Now we turn to the proof. Let us enumerate the spins in increasing order of the absolute values of the vector components, 0 ≤ |ξ_1| ≤ ... ≤ |ξ_N|. The proof is carried out by induction on j = 1, ..., N, where j is the maximal index for which S'_j ≠ S_j. For j = 1 the statement is evident. Assuming that it holds for 1, ..., j − 1 (2 ≤ j ≤ N), let us prove it for j. One has j = max{i : S'_i ≠ S_i}. After flipping spin j in the state S', we next allow relaxation by flipping spins 1, ..., j − 1 only. The system finally reaches a state S² realizing the conditional energy minimum under fixed S_j, ..., S_N.
Let us show that S² is a true energy minimum. There are two possibilities:

(i) For some i, 1 ≤ i ≤ j − 1, one has sgn(ξ_i S²_i) = sgn(ξ^T S²). The fixed-point condition for S² can then be written as

|ξ^T S²| ≤ min{|ξ_i| : 1 ≤ i ≤ j − 1, sgn(ξ_i S²_i) = sgn(ξ^T S²)}.

From this, in view of the increasing order of the |ξ_i|'s, one gets immediately

|ξ^T S²| ≤ min{|ξ_i| : 1 ≤ i ≤ N, sgn(ξ_i S²_i) = sgn(ξ^T S²)},

which implies that S² is a true energy minimum.
(ii) If ξ^T S² = 0, the fixed-point condition for S² is automatically satisfied. Otherwise, for 1 ≤ i ≤ j − 1 one has sgn(ξ_i S²_i) = −sgn(ξ^T S²), and

ξ^T S² = −sgn(ξ^T S²) Σ_{i=1}^{j−1} |ξ_i| + Σ_{i=j}^{N} ξ_i S_i.   (6)
For the sake of definiteness, we set ξ^T S > 0. (The opposite case is treated analogously.) In this case ξ^T S² > 0, since otherwise, according to (6), it should be

0 ≥ ξ^T S² = Σ_{i=1}^{j−1} |ξ_i| + Σ_{i=j}^{N} ξ_i S_i ≥ ξ^T S,

which contradicts our setting.
One thus obtains

ξ^T S² = −Σ_{i=1}^{j−1} |ξ_i| + Σ_{i=j}^{N} ξ_i S_i ≤ ξ^T S,   (7)

and using the fixed-point condition for S one gets

ξ^T S ≤ min{|ξ_i| : ξ_i S_i > 0} ≤ min{|ξ_i| : j ≤ i ≤ N, ξ_i S_i > 0} = min{|ξ_i| : ξ_i S²_i > 0}.   (8)

In the last relation of (8) one uses that ξ_i S²_i < 0 for 1 ≤ i ≤ j − 1 and S²_i = S_i for j ≤ i ≤ N. Taking into account (7) and (8), as a result we come to the condition for S² to be a true energy minimum,

0 < ξ^T S² ≤ min{|ξ_i| : ξ_i S²_i > 0}.

According to the inductive hypothesis, since S²_i = S_i, j ≤ i ≤ N, from the state S² one can pass to S, and therefore from S' through S² to S. This proves the statement.
In general, metastable states may be grouped in clusters surrounded by high energy barriers. The meaning of the proven statement resides in excluding the possibility of even such a type of memory. Conversely, by allowing a sequence of single spin flips (for instance, this can be done at finite temperatures), it is possible to walk through the whole set of metastable states.
4 CONCLUSION
In this paper we have begun studying the probabilities of the different asymptotics of the convergent unlearning algorithm, considering the case of unbiased random patterns. We have also shown that failed unlearning results in total memory breakdown.
References
Hopfield, J.J., Feinstein, D.I. & Palmer, R.G. (1983) "Unlearning" has a stabilizing
effect in collective memories. Nature 304:158-159.
van Hemmen, J.L. & Klemmer, N. (1992) Unlearning and its relevance to REM sleep: Decorrelating correlated data. In J. G. Taylor et al. (eds.), Neural Network Dynamics, pp. 30-43. London: Springer.
Wimbauer, U., Klemmer, N. & van Hemmen, J.L. (1994) Universality of unlearning.
Neural Networks 7:261-270.
Plakhov, A.Yu. & Semenov, S.A. (1994) Neural networks: iterative unlearning
algorithm converging to the projector rule matrix. J. Phys. I France 4:253-260.
Plakhov, A.Yu. (1995) Private communication.
Plakhov, A.Yu. & Semenov, S.A. (1995) Preprint IPT.
Kanter, I. & Sompolinsky, H. (1987) Associative recall of memory without errors.
Phys. Rev. A 35:380-392.
Discriminant Adaptive Nearest Neighbor
Classification and Regression
Trevor Hastie
Department of Statistics
Sequoia Hall
Stanford University
California 94305
trevor@playfair.stanford.edu
Robert Tibshirani
Department of Statistics
University of Toronto
tibs@utstat.toronto.edu
Abstract
Nearest neighbor classification expects the class conditional probabilities to be locally constant, and suffers from bias in high dimensions We propose a locally adaptive form of nearest neighbor
classification to try to finesse this curse of dimensionality. We use
a local linear discriminant analysis to estimate an effective metric for computing neighborhoods . We determine the local decision
boundaries from centroid information, and then shrink neighborhoods in directions orthogonal to these local decision boundaries,
and elongate them parallel to the boundaries. Thereafter , any
neighborhood-based classifier can be employed, using the modified
neighborhoods. We also propose a method for global dimension
reduction, that combines local dimension information. We indicate
how these techniques can be extended to the regression problem.
1 Introduction
We consider a discrimination problem with J classes and N training observations. The training observations consist of predictor measurements x = (x_1, x_2, ..., x_p) on p predictors and the known class memberships. Our goal is to predict the class membership of an observation with predictor vector x_0.
Nearest neighbor classification is a simple and appealing approach to this problem. We find the set of K nearest neighbors in the training set to x_0 and then classify x_0 as the most frequent class among the K neighbors.
Cover & Hart (1967) show that the one nearest neighbour rule has asymptotic
error rate at most twice the Bayes rate. However in finite samples the curse of
dimensionality can severely hurt the nearest neighbor rule. The relative radius of the nearest-neighbor sphere grows like r^{1/p}, where p is the dimension and r the radius for p = 1, resulting in severe bias at the target point x. Figure 1 (left panel) illustrates the situation for a simple example. Nearest neighbor techniques are
Figure 1: In the left panel, the vertical strip denotes the NN region using only horizontal
coordinate to find the nearest neighbor for the target point (solid dot). The sphere shows
the NN region using both coordinates, and we see in this case it has extended into the
class 1 region (and found the wrong class in this instance). The middle panel shows
a spherical neighborhood containing 25 points, for a two class problem with a circular
decision boundary. The right panel shows the ellipsoidal neighborhood found by the DANN
procedure, also containing 25 points. The latter is elongated in a direction parallel to the
true decision boundary (locally constant posterior probabilities), and flattened orthogonal
to it.
based on the assumption that locally the class posterior probabilities are constant.
While that is clearly true in the vertical strip using only the horizontal coordinate, using both coordinates this is no longer true. Figure 1 (middle and right panels) shows how we
locally adapt the metric to overcome this problem, in a situation where the decision
boundary is locally linear .
2 Discriminant adaptive nearest neighbors
Consider first a standard linear discriminant (LDA) classification procedure with J classes. Let B and W denote the between- and within-class sum of squares matrices.
In LDA the data are first sphered with respect to W, then the target point is
classified to the class of the closest centroid (with a correction for the class prior
membership probabilities). Since only relative distances are relevant, any distances
in the complement of the subspace spanned by the sphered centroids can be ignored.
This complement corresponds to the null space of B.
We propose to estimate B and W locally, and use them to form a local metric that approximately behaves like the LDA metric. One such candidate is

Σ = W^{-1} B W^{-1} = W^{-1/2} (W^{-1/2} B W^{-1/2}) W^{-1/2} = W^{-1/2} B* W^{-1/2},   (1)

where B* is the between sum-of-squares in the sphered space. Consider the action of Σ as a metric for computing distances,

(x − x_0)^T Σ (x − x_0):   (2)
Discriminant Adaptive Nearest Neighbor Classification and Regression
411
• it first spheres the space using W;
• components of distance in the null space of B* are ignored;
• other components are weighted according to the eigenvalues of B* when there are more than 2 classes - directions in which the centroids are more spread out are weighted more than those in which they are close.
Thus this metric would result in neighborhoods similar to the narrow strip in figure 1 (left panel): infinitely long in the null space of B, and then deformed appropriately in the centroid subspace according to how they are placed. It is dangerous to allow neighborhoods to extend infinitely in any direction, so we need to limit this stretching. Our proposal is

Σ = W^{-1/2} [W^{-1/2} B W^{-1/2} + εI] W^{-1/2} = W^{-1/2} [B* + εI] W^{-1/2},   (3)
where ε is some small tuning parameter to be determined. The metric shrinks the neighborhood in directions in which the local class centroids differ, with the intention of ending up with a neighborhood in which the class centroids coincide (and hence nearest neighbor classification is appropriate). Given Σ, we perform K-nearest neighbor classification using the metric (2).
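The procedure just described can be sketched as follows (a simplified illustration with hypothetical helper names, not the authors' code; it uses a full local W with a small ridge for invertibility):

```python
import numpy as np

def dann_metric(X, y, x0, k_m=50, eps=1.0):
    """Estimate the DANN metric (3) from the k_m nearest training points to x0."""
    nb = np.argsort(np.linalg.norm(X - x0, axis=1))[:k_m]
    Xn, yn = X[nb], y[nb]
    p = X.shape[1]
    W = np.zeros((p, p))                 # pooled within-class covariance
    B = np.zeros((p, p))                 # between-class (centroid) covariance
    xbar = Xn.mean(axis=0)
    for c in np.unique(yn):
        Xc = Xn[yn == c]
        pi_c = len(Xc) / len(Xn)
        if len(Xc) > 1:
            W += pi_c * np.cov(Xc, rowvar=False, bias=True)
        d = Xc.mean(axis=0) - xbar
        B += pi_c * np.outer(d, d)
    evals, evecs = np.linalg.eigh(W + 1e-6 * np.eye(p))   # ridge for invertibility
    W_isqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T    # W^{-1/2}
    B_star = W_isqrt @ B @ W_isqrt
    return W_isqrt @ (B_star + eps * np.eye(p)) @ W_isqrt  # eq. (3)

def dann_classify(X, y, x0, k=5, **kw):
    """K-NN majority vote in the locally adapted metric (2)."""
    Sigma = dann_metric(X, y, x0, **kw)
    diff = X - x0
    d2 = np.einsum('ij,jk,ik->i', diff, Sigma, diff)
    vals, counts = np.unique(y[np.argsort(d2)[:k]], return_counts=True)
    return vals[np.argmax(counts)]
```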
There are several details that we briefly describe here and in more detail in Hastie
& Tibshirani (1994):
• B is defined to be the covariance of the class centroids, and W the pooled estimate of the common class covariance matrix. We estimate these locally using a spherical, compactly supported kernel (Cleveland 1979), where the bandwidth is determined by the distance of the K_M-th nearest neighbor.
• K_M above has to be supplied, as does the softening parameter ε. We somewhat arbitrarily use K_M = max(N/5, 50); so we use many more neighbors (50 or more) to determine the metric, and then typically K = 1, ..., 5 nearest neighbors in this metric to classify. We have found that the metric is relatively insensitive to different values of 0 < ε < 5, and typically use ε = 1.
• Typically the data do not support the local calculation of W (p(p + 1)/2 entries), and it can be argued that this is not necessary. We mostly resort to using the diagonal of W instead, or else use a global estimate.
Sections 4 and 5 illustrate the effectiveness of this approach on some simulated and
real examples.
3 Dimension Reduction using Local Discriminant Information
The technique described above is entirely "memory based" , in that we locally adapt
a neighborhood about a query point at the time of classification. Here we describe a
method for performing a global dimension reduction, by pooling the local dimension information over all points in the training set. In a nutshell, we consider subspaces corresponding to eigenvectors of the average local between sum-of-squares matrices.
Consider first how linear discriminant analysis (LDA) works. After sphering the data, it concentrates in the space spanned by the class centroids x̄_j, or a reduced rank space that lies close to these centroids. If x̄ denotes the overall centroid, this subspace is exactly a principal component hyperplane for the data points x̄_j − x̄, weighted by the class proportions, and is given by the eigen-decomposition of the between covariance B.

Our idea is to compute the deviations x̄_j − x̄ locally in a neighborhood around each of the N training points, and then do an overall principal components analysis for the N × J deviations. This amounts to an eigen-decomposition of the average between sum of squares matrix Σ_{i=1}^{N} B^(i)/N.
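In code, the pooling step might look like this (a hypothetical sketch, not the authors' implementation):

```python
import numpy as np

def local_between(X, y, i, k_m=50):
    """Between sum-of-squares B(i) of class centroids in a neighborhood of X[i]."""
    nb = np.argsort(np.linalg.norm(X - X[i], axis=1))[:k_m]
    Xn, yn = X[nb], y[nb]
    xbar = Xn.mean(axis=0)
    B = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(yn):
        Xc = Xn[yn == c]
        d = Xc.mean(axis=0) - xbar
        B += (len(Xc) / len(Xn)) * np.outer(d, d)
    return B

def discriminant_subspace(X, y, dim, k_m=50):
    """Top eigenvectors of the average local between matrix, sum_i B(i)/N."""
    B_bar = sum(local_between(X, y, i, k_m) for i in range(len(X))) / len(X)
    evals, evecs = np.linalg.eigh(B_bar)
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:dim]], evals[order]
```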
[Figure 2 panels: "LDA and local subspaces - K = 25"; "Local between directions"; eigenvalue plot by index.]
Figure 2: [Left panel] Two dimensional gaussian data with two classes and correlation 0.65. The solid lines are the LDA decision boundary and its equivalent subspace for classification, computed using both the between and (crucially) the within class covariance. The dashed lines were produced by the local procedure described in this section, without knowledge of the overall within covariance matrix. [Middle panel] Each line segment represents the local between information centered at that point. [Right panel] The eigenvalues of the average between matrix for the 4D sphere in 10D problem. Using these first four dimensions followed by our DANN nearest neighbor routine, we get better performance than 5-NN in the real 4D subspace.
Figure 2 (left two panels) demonstrates by a simple illustrative example that our
subspace procedure can recover the correct LDA direction without making use of
the within covariance matrix. Figure 2 (right panel) represents a two class problem
with a 4-dimensional spherical decision boundary. The data for the two classes lie
in concentric spheres in 4D, the one class lying inside the other with some overlap (a
4D version of the same 2D situation in figure 1.) In addition the are an extra 6 noise
dimensions, and for future reference we denote such a model as the "4D spheres in
lOD" problem. The decision boundary is a 4 dimensional sphere, although locally
linear. The eigenvalues show a distinct change after 4 (the correct dimension), and
using our DANN classifier in these four dimensions actually beats ordinary 5NN in
the known 4D discriminant subspace.
4 Examples
Figure 3 summarizes the results of a number of simulated examples designed to test our procedures in both favorable and unfavorable situations. In all the situations DANN outperforms 5-NN. In the cases where 5-NN is provided with the known lower-dimensional discriminant subspace, our subspace technique subDANN followed by DANN comes close to the optimal performance.
[Figure 3 panels: "Two Gaussians with Noise"; "Unstructured with Noise"; "4-D Sphere in 10-D"; "10-D Sphere in 10-D".]
Figure 3: Boxplots of error rates over 20 simulations. The top left panel has two gaussian
distributions separated in two dimensions, with 14 noise dimensions . The notation red-LDA
and red-5NN refers to these procedures in the known lower dimensional space. iter-DANN
refers to an iterated version of DANN (which appears not to help), while sub-DANN refers
to our global subspace approach, followed by DANN. The top right panel has 4 classes, each
of which is a mixture of 3-gaussians in 2-D; in addition there are 8 noise variables. The
lower two panels are versions of our sphere example .
5 Image Classification Example
Here we consider an image classification problem. The data consist of 4 LANDSAT
images in different spectral bands of a small area of the earth's surface, and the goal
is to classify into soil and vegetation types. Figure 4 shows the four spectral bands,
two in the visible spectrum (red and green) and two in the infra red spectrum.
These data are taken from the data archive of the STATLOG project (Michie et al. 1994).¹
The goal is to classify each pixel into one of 7 land types: red soil, cotton, vegetation
stubble, mixture, grey soil, damp grey soil, very damp grey soil. We extract for each
pixel its 8-neighbors, giving us (8 + 1) x 4 = 36 features (the pixel intensities) per
pixel to be classified. The data come scrambled, with 4435 training pixels and 2000
test pixels, each with their 36 features and the known classification. Included in
figure 4 is the true classification, as well as that produced by linear discriminant
analysis. The right panel compares DANN to all the procedures used in STATLOG,
and we see the results are favorable.
¹The authors thank C. Taylor and D. Spiegelhalter for making these images and data available.
[Figure 4 panels: Spectral bands 1-4; Land use (Actual); Land use (Predicted); "STATLOG results" chart of error rates by method (LDA, ..., LVQ, K-NN, ANN).]
Figure 4: The first four images are the satellite images in the four spectral bands. The
fifth image represents the known classification, and the final image is the classification
map produced by linear discriminant analysis. The right panel shows the misclassification
results of a variety of classification procedures on the satellite image test data (taken from
Michie et al. (1994)). DANN is the overall winner.
6 Local Regression
Nearest neighbor techniques are used in the regression setting as well. Local polynomial regression (Cleveland 1979) is currently very popular, where, for example, locally weighted linear surfaces are fit in modest sized neighborhoods. Analogs of K-NN classification for small K are used less frequently. In this case the response variable
is quantitative rather than a class label.
Duan & Li (1991) invented a technique called sliced inverse regression, a dimension reduction tool for situations where the regression function changes in a lower-dimensional space. They show that under symmetry conditions of the marginal distribution of X, the inverse regression curve E(X|Y) is concentrated in the same lower-dimensional subspace. They estimate the curve by slicing Y into intervals, and computing conditional means of X in each interval, followed by a principal component analysis. There are obvious similarities with our DANN procedure, and the following generalizations of DANN are suggested for regression:
• Locally we use the B matrix of the sliced means to form our DANN metric,
and then perform local regression in the deformed neighborhoods.
• The local B(i) matrices can be pooled as in subDANN to extract global
subspaces for regression. This has an apparent advantage over the Duan &
Li (1991) approach: we only require symmetry locally, a condition that is
locally encouraged by the convolution of the data with a spherical kernel.2
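The sliced-inverse-regression recipe described above — slice Y into intervals, compute the conditional means of X within each slice, then take principal components of those means — can be sketched as follows. This is a toy illustration on synthetic data; the whitening step is a crude per-coordinate standardization, which suffices here because the predictors are generated uncorrelated:

```python
import numpy as np

def sir_directions(X, y, n_slices=5):
    """Sliced inverse regression: slice y, average X within each slice,
    then take principal components of the slice means."""
    Z = (X - X.mean(0)) / X.std(0)            # crude standardization of X
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)  # equal-count slices of y
    M = np.stack([Z[s].mean(0) for s in slices])
    # principal components of the (centered) slice means
    _, _, Vt = np.linalg.svd(M - M.mean(0), full_matrices=False)
    return Vt                                 # rows are estimated directions

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X[:, 0] + 0.1 * rng.normal(size=500)      # regression depends on X[:, 0] only
V = sir_directions(X, y)
```

Since the regression function here varies only along the first coordinate, the leading estimated direction should align closely with that axis.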
7
Discussion
Short & Fukanaga (1980) proposed a technique close to ours for the two class
problem. In our terminology they used our metric with W = I and ε = 0, with
B determined locally in a neighborhood of size K_M. In effect this extends the
2We expect to be able to substantiate the claims in this section by the time of the
NIPS'95 meeting.
Discriminant Adaptive Nearest Neighbor Classification and Regression
neighborhood infinitely in the null space of the local between class directions, but
they restrict this neighborhood to the original KM observations. This amounts
to projecting the local data onto the line joining the two local centroids. In our
experiments this approach tended to perform on average 10% worse than our metric,
and we did not pursue it further. Short & Fukanaga (1981) extended this to J > 2
classes, but here their approach differs even more from ours. They computed a
weighted average of the J local centroids from the overall average, and project
the data onto it, a one dimensional projection. Myles & Hand (1990) recognized
a shortfall of the Short and Fukanaga approach, since the averaging can cause
cancellation, and proposed other metrics to avoid this, different from ours.
Friedman (1994) proposes a number of techniques for flexible metric nearest neighbor classification (and sparked our interest in the problem). These techniques use
a recursive partitioning style strategy to adaptively shrink and shape rectangular
neighborhoods around the test point.
Acknowledgement
The authors thank Jerry Friedman whose research on this problem was a source
of inspiration, and for many discussions. Trevor Hastie was supported by NSF
DMS-9504495. Robert Tibshirani was supported by a Guggenheim fellowship, and
a grant from the National Research Council of Canada.
References
Cleveland, W. (1979), 'Robust locally-weighted regression and smoothing scatterplots', Journal of the American Statistical Association 74, 829-836.
Cover, T. & Hart, P. (1967), 'Nearest neighbor pattern classification', IEEE
Trans. Inform. Theory pp. 21-27.
Duan, N. & Li, K.-C. (1991), 'Slicing regression: a link-free regression method',
Annals of Statistics pp. 505-530.
Friedman, J. (1994), Flexible metric nearest neighbour classification, Technical report, Stanford University.
Hastie, T. & Tibshirani, R. (1994), Discriminant adaptive nearest neighbor classification, Technical report, Statistics Department, Stanford University.
Michie, D., Spigelhalter, D. & Taylor, C., eds (1994), Machine Learning, Neural
and Statistical Classification, Ellis Horwood series in Artificial Intelligence,
Ellis Horwood.
Myles, J. & Hand, D. J. (1990), 'The multi-class metric problem in nearest neighbour discrimination rules', Pattern Recognition 23, 1291-1297.
Short, R. & Fukanaga, K. (1980), A new nearest neighbor distance measure, in
'Proc. 5th IEEE Int. Conf. on Pattern Recognition', pp. 81-86.
Short, R. & Fukanaga, K. (1981), 'The optimal distance measure for nearest neighbor classification', IEEE Transactions on Information Theory IT-27, 622-627.
EM Optimization of Latent-Variable
Density Models
Christopher M Bishop, Markus Svensen and Christopher K I Williams
Neural Computing Research Group
Aston University, Birmingham, B4 7ET, UK
c.m.bishop@aston.ac.uk svensjfm@aston.ac.uk c.k.i.williams@aston.ac.uk
Abstract
There is currently considerable interest in developing general nonlinear density models based on latent, or hidden, variables. Such
models have the ability to discover the presence of a relatively small
number of underlying 'causes' which, acting in combination, give
rise to the apparent complexity of the observed data set. Unfortunately, to train such models generally requires large computational
effort. In this paper we introduce a novel latent variable algorithm
which retains the general non-linear capabilities of previous models
but which uses a training procedure based on the EM algorithm.
We demonstrate the performance of the model on a toy problem
and on data from flow diagnostics for a multi-phase oil pipeline.
1
INTRODUCTION
Many conventional approaches to density estimation, such as mixture models, rely
on linear superpositions of basis functions to represent the data density. Such
approaches are unable to discover structure within the data whereby a relatively
small number of 'causes' act in combination to account for apparent complexity in
the data. There is therefore considerable interest in latent variable models in which
the density function is expressed in terms of of hidden variables. These include
density networks (MacKay, 1995) and Helmholtz machines (Dayan et al., 1995).
Much of this work has been concerned with predicting binary variables. In this
paper we focus on continuous data.
C. M. BISHOP, M. SVENSEN, C. K. I. WILLIAMS
Figure 1: The latent variable density model constructs a distribution function in t-space
in terms of a non-linear mapping y(x; W) from a latent variable x-space.
2
THE LATENT VARIABLE MODEL
Suppose we wish to model the distribution of data which lives in a D-dimensional
space t = (t_1, ..., t_D). We first introduce a transformation from the hidden variable space x = (x_1, ..., x_L) to the data space, governed by a non-linear function
y(x; W) which is parametrized by a matrix of weight parameters W. Typically
we are interested in the situation in which the dimensionality L of the latent variable space is less than the dimensionality D of the data space, since we wish to
capture the fact that the data itself has an intrinsic dimensionality which is less
than D. The transformation y(x; W) then maps the hidden variable space into an
L-dimensional non-Euclidean subspace embedded within the data space. This is
illustrated schematically for the case of L = 2 and D = 3 in Figure 1.
If we define a probability distribution p(x) on the latent variable space, this will
induce a corresponding distribution p(y) in the data space. We shall refer to p(x)
as the prior distribution of x for reasons which will become clear shortly. Since
L < D, the distribution in t-space would be confined to a manifold of dimension L
and hence would be singular. Since in reality data will only approximately live on a
lower-dimensional space, it is appropriate to include a noise model for the t vector.
We therefore define the distribution of t, for given x and W, given by a spherical
Gaussian centred on y(x; W) having variance β^{-1}, so that

    p(t|x, W) = (β/2π)^{D/2} exp{ -(β/2) ||y(x; W) - t||² }.    (1)
The distribution in t-space, for a given value of the weight matrix W, is then
obtained by integration over the x-distribution

    p(t|W) = ∫ p(t|x, W) p(x) dx.    (2)
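The generative process just described — draw x from the prior p(x), map it through y(x; W), and add spherical Gaussian noise of inverse variance β — can be sketched as follows. The mapping `y_map` below is a hypothetical smooth curve standing in for a trained y(x; W), and the uniform prior matches the choice made later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 100.0                 # inverse noise variance, so noise std = beta**-0.5
n = 1000

def y_map(x):
    # hypothetical smooth 1-D -> 2-D mapping (any smooth curve would do)
    return np.stack([x, np.sin(x)], axis=-1)

x = rng.uniform(0.0, 4.0, size=n)        # sample the latent prior p(x)
t = y_map(x) + rng.normal(scale=beta ** -0.5, size=(n, 2))   # spherical noise, eq. (1)
```

The resulting t-samples concentrate in a tube of width about β^{-1/2} around the image of the latent space, which is exactly the singular-manifold-plus-noise picture described above.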
For a given data set D = (t_1, ..., t_N) of N data points, we can determine the
weight matrix W using maximum likelihood. For convenience we introduce an
error function given by the negative log likelihood:

    E(W) = -ln ∏_{n=1}^{N} p(t_n|W) = -∑_{n=1}^{N} ln { ∫ p(t_n|x, W) p(x) dx }.    (3)
In principle we can now seek the maximum likelihood solution for the weight matrix,
once we have specified the prior distribution p(x) and the functional form of the
mapping y(x; W), by minimizing E(W). However, the integrals over x occurring
in (3), and in the corresponding expression for ∇E, will, in general, be analytically intractable. MacKay (1995) uses Monte Carlo techniques to evaluate these
integrals and conjugate gradients to find the weights. This is computationally very
intensive, however, since a Monte Carlo integration must be performed every time
the conjugate gradient algorithm requests a value for E(W) or ∇E(W). We now
show how, by a suitable choice of model, it is possible to find an EM algorithm for
determining the weights.
2.1
EM ALGORITHM
There are three key steps to finding a tractable EM algorithm for evaluating the
weights. The first is to use a generalized linear network model for the mapping
function y(x; W). Thus we write

    y(x; W) = W φ(x)    (4)

where the elements of φ(x) consist of M fixed basis functions φ_j(x), and W is a
D x M matrix with elements w_kj. Generalized linear networks possess the same
universal approximation capabilities as multi-layer adaptive networks. The price
which has to be paid, however, is that the number of basis functions must typically
grow exponentially with the dimensionality L of the input space. In the present
context this is not a serious problem since the dimensionality is governed by the latent variable space and will typically be small. In fact we are particularly interested
in visualization applications, for which L = 2.
The second important step is to use a simple Monte Carlo approximation for the
integrals over x. In general, for a function Q(x) we can write
    ∫ Q(x) p(x) dx ≈ (1/K) ∑_{i=1}^{K} Q(x^i)    (5)
where x^i represents a sample drawn from the distribution p(x). If we apply this to
(3) we obtain

    E(W) = -∑_{n=1}^{N} ln { (1/K) ∑_{i=1}^{K} p(t_n|x^{ni}, W) }    (6)
The third key step is to choose the sample of points {x^{ni}} to be the same for each
term in the summation over n. Thus we can drop the index n on x^{ni} to give

    E(W) = -∑_{n=1}^{N} ln { (1/K) ∑_{i=1}^{K} p(t_n|x^i, W) }    (7)
We now note that (7) represents the negative log likelihood under a distribution
consisting of a mixture of K kernel functions. This allows us to apply the EM
algorithm to find the maximum likelihood solution for the weights. Furthermore, as
a consequence of our choice (4) for the non-linear mapping function, it will turn out
that the M-step can be performed explicitly, leading to a solution in terms of a set
of linear equations. We note that this model corresponds to a constrained Gaussian
mixture distribution of the kind discussed in Hinton et al. (1992).
We can formulate the EM algorithm for this system as follows. Setting the derivatives of (7) with respect to w_kj to zero we obtain

    ∑_{i=1}^{K} ∑_{n=1}^{N} R_ni(W) { ∑_{j'} w_kj' φ_j'(x^i) - t_n^k } φ_j(x^i) = 0    (8)
where we have used Bayes' theorem to introduce the posterior probabilities, or
responsibilities, for the mixture components given by
    R_ni(W) = p(t_n|x^i, W) / ∑_{i'=1}^{K} p(t_n|x^{i'}, W)    (9)
Similarly, maximizing with respect to β we obtain

    1/β = (1/ND) ∑_{i=1}^{K} ∑_{n=1}^{N} R_ni(W) ||y(x^i; W) - t_n||².    (10)
The EM algorithm is obtained by supposing that, at some point in the algorithm,
the current weight matrix is given by W_old and the current value of β is β_old. Then
we can evaluate the responsibilities using these values for W and β (the E-step),
and then solve (8) for the weights to give W_new and subsequently solve (10) to give
β_new (the M-step). The two steps are repeated until a suitable convergence criterion
is reached. In practice the algorithm converges after a relatively small number of
iterations.
A more formal justification for the EM algorithm can be given by introducing
auxiliary variables to label which component is responsible for generating each data
point, and then computing the expectation with respect to the distribution of these
variables. Application of Jensen's inequality then shows that, at each iteration
of the algorithm, the error function will decrease unless it is already at a (local)
minimum, as discussed for example in Bishop (1995).
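The Jensen's-inequality step can be written out in the notation of (7) and (9) — a standard EM bound, sketched here for completeness:

```latex
\ln\Big\{ \frac{1}{K}\sum_{i=1}^{K} p(\mathbf{t}_n \mid \mathbf{x}^i, \mathbf{W}) \Big\}
\;\ge\; \sum_{i=1}^{K} R_{ni}\,
\ln\Big\{ \frac{p(\mathbf{t}_n \mid \mathbf{x}^i, \mathbf{W})}{K\,R_{ni}} \Big\},
```

with equality when the R_ni are the posteriors (9) evaluated at the current parameters. Summing over n, the E-step makes the bound tight, and maximizing the right-hand side over W and β in the M-step therefore cannot increase E(W).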
If desired, a regularization term can be added to the error function to control the
complexity of the model y(x; W). From a Bayesian viewpoint, this corresponds to
a prior distribution over weights. For a regularizer which is a quadratic function of
the weight parameters, this leads to a straightforward modification to the weight
update equations. It is convenient to write the condition (8) in matrix notation as
    (Φ^T G_old Φ + λI)(W_new)^T = Φ^T T_old    (11)

where we have included a regularization term with coefficient λ, and I denotes the
unit matrix. In (11) Φ is a K x M matrix with elements Φ_ij = φ_j(x^i), T is a K x D
matrix, and G is a K x K diagonal matrix, with elements

    T_ik = ∑_{n=1}^{N} R_in(W) t_n^k,        G_ii = ∑_{n=1}^{N} R_in(W).    (12)
We can now solve (11) for W_new using standard linear matrix inversion techniques,
based on singular value decomposition to allow for possible ill-conditioning. Note
that the matrix Φ is constant throughout the algorithm, and so need only be evaluated once at the start.
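Putting (8)-(12) together, one EM cycle can be sketched as below. Everything here is a toy stand-in — synthetic curve data, an assumed grid of latent sample points and Gaussian basis functions, and illustrative sizes — but the steps follow the recipe in the text: responsibilities from (9), the regularized linear solve (11) with the matrices (12), then the β update (10), with a least-squares initialization along the first principal component of the data as described later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K, M, lam = 200, 2, 25, 9, 1e-3     # data points, data dim, latent samples, bases

# synthetic data on a noisy curve (stand-ins for the t_n)
u = np.linspace(0, 4, N)
T_data = np.stack([u, np.sin(u)], axis=1) + 0.05 * rng.normal(size=(N, 2))

# fixed latent sample points x^i and Gaussian basis functions phi_j
x = np.linspace(0, 1, K)
centers = np.linspace(0, 1, M)
Phi = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.15) ** 2)   # K x M

# initialize W by a least-squares fit along the data's first principal component
mu = T_data.mean(0)
_, _, Vt = np.linalg.svd(T_data - mu, full_matrices=False)
proj = (T_data - mu) @ Vt[0]
line = mu + np.linspace(proj.min(), proj.max(), K)[:, None] * Vt[0]
W = np.linalg.lstsq(Phi, line, rcond=None)[0].T                      # D x M
beta = 1.0

def em_step(W, beta):
    Y = Phi @ W.T                                            # y(x^i; W), K x D
    sq = ((Y[:, None, :] - T_data[None, :, :]) ** 2).sum(-1) # K x N squared distances
    R = np.exp(-0.5 * beta * (sq - sq.min(0)))               # responsibilities, eq. (9)
    R /= R.sum(0)
    G = np.diag(R.sum(1))                                    # eq. (12)
    T_mat = R @ T_data                                       # eq. (12), K x D
    Wt = np.linalg.solve(Phi.T @ G @ Phi + lam * np.eye(M), Phi.T @ T_mat)  # eq. (11)
    W_new = Wt.T
    sq_new = (((Phi @ W_new.T)[:, None, :] - T_data[None, :, :]) ** 2).sum(-1)
    beta_new = N * D / (R * sq_new).sum()                    # eq. (10)
    return W_new, beta_new

for _ in range(20):
    W, beta = em_step(W, beta)
```

As the fit improves, the responsibility-weighted squared distances shrink and the estimated β (the inverse noise variance) grows, which is the sharpening behavior the text describes.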
Figure 2: Results from a toy problem involving data (' x') generated from a 1-dimensional
curve embedded in 2 dimensions, together with the projected sample points ('+') and
their Gaussian noise distributions (filled circles). The initial configuration, determined
by principal component analysis, is shown on the left, and an intermediate configuration,
obtained after 4 iterations of EM, is shown on the right.
3
RESULTS
We now present results from the application of this algorithm first to a toy problem
involving data in two dimensions, and then to a more realistic problem involving
12-dimensional data arising from diagnostic measurements of oil flows along multiphase pipelines.
For simplicity we choose the distribution p(x) to be uniform over the unit square.
The basis functions φ_j(x) are taken to be spherically symmetric Gaussian functions whose centres are distributed on a uniform grid in x-space, with a common
width parameter chosen so that the standard deviation is equal to the separation of
neighbouring basis functions. For both problems the weights in the network were
initialized by performing principal components analysis on the data and then finding the least-squares solution for the weights which best approximates the linear
transformation which maps latent space to target space while generating the correct
mean and variance in target space.
As a simple demonstration of this algorithm, we consider data generated from a
one-dimensional distribution embedded in two dimensions, as shown in Figure 2.
3.1
OIL FLOW DATA
Our second example arises in the problem of determining the fraction of oil in a
multi-phase pipeline carrying a mixture of oil, water and gas (Bishop and James,
1993). Each data point consists of 12 measurements taken from dual-energy gamma
densitometers measuring the attenuation of gamma beams passing through the pipe.
Synthetically generated data is used which models accurately the attenuation processes in the pipe, as well as the presence of noise (arising from photon statistics).
The three phases in the pipe (oil, water and gas) can belong to one of three different
geometrical configurations, corresponding to stratified, homogeneous, and annular
flows, and the data set consists of 1000 points distributed equally between the 3
Figure 3: The left plot shows the posterior-mean projection of the oil data in the latent
space of the non-linear model. The plot on the right shows the same data set projected
onto the first two principal components. In both plots, crosses, circles and plus-signs
represent the stratified, annular and homogeneous configurations respectively.
classes. We take the latent variable space to be two-dimensional. This is appropriate for this problem as we know that, locally, the data must have an intrinsic
dimensionality of two (neglecting noise on the data) since, for any given geometrical
configuration of the three phases, there are two degrees of freedom corresponding to
the fractions of oil and water in the pipe (the fraction of gas being redundant since
the three fractions must sum to one). It also allows us to use the latent variable
model to visualize the data by projection onto x-space.
For the purposes of visualization, we note that a data point t_n induces a posterior
distribution p(x|t_n, W*) in x-space, where W* denotes the value of the weight
matrix for the trained network. This provides considerably more information in the
visualization space than many simple techniques (which generally project each data
point onto a single point in the visualization space). For example, the posterior
distribution may be multi-modal, indicating that there is more than one region of
x-space which can claim significant responsibility for generating the data point.
However, it is often convenient to project each data point down to a unique point in
x-space. This can be done by finding the mean of the posterior distribution, which
itself can be evaluated by a simple Monte Carlo integration using quantities already
calculated in the evaluation of W* .
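A minimal sketch of this posterior-mean projection, assuming the latent sample points x^i and the mapped points y(x^i; W*) are available from training (the mapping and data point below are hypothetical):

```python
import numpy as np

def posterior_mean(t_n, x, Y, beta):
    """Posterior mean of the latent variable for one data point t_n:
    responsibilities of the latent samples x^i, then their weighted average."""
    sq = ((Y - t_n) ** 2).sum(1)                 # ||y(x^i; W*) - t_n||^2
    w = np.exp(-0.5 * beta * (sq - sq.min()))    # responsibilities up to normalization
    w /= w.sum()
    return (w * x).sum()

# hypothetical trained mapping evaluated at 1-D latent samples
x = np.linspace(0.0, 1.0, 11)
Y = np.stack([x, x ** 2], axis=1)
m = posterior_mean(np.array([0.5, 0.25]), x, Y, beta=200.0)
```

Because the example point lies on the curve at x = 0.5, the posterior mass concentrates there and the projected point lands near 0.5; a multi-modal posterior would instead average over the competing regions, which is why the full distribution carries more information than the mean alone.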
Figure 3 shows the oil data visualized in the latent-variable space in which, for each
data point, we have plotted the posterior mean vector. Again the points have been
labelled according to their multi-phase configuration. We have compared these results with those from a number of conventional techniques including factor analysis
and principal component analysis. Note that factor analysis is precisely the model
which results if a linear mapping is assumed for y(x; W), a Gaussian distribution
p(x) is chosen in the latent space, and the noise distribution in data space is taken
to be Gaussian with a diagonal covariance matrix. Of these techniques, principal
component analysis gave the best class separation (assessed subjectively) and is
illustrated in Figure 3. Comparison with the results from the non-linear model
clearly shows that the latter gives much better separation of the three classes, as a
consequence of the non-linearity permitted by the latent variable mapping .
4
DISCUSSION
There are interesting relationships between the model discussed here and a number
of well-known algorithms for unsupervised learning. We have already commented
that factor analysis is a special case of this model, involving a linear mapping from
latent space to data space. The Kohonen topographic map algorithm (Kohonen,
1995) can be regarded as an approximation to a latent variable density model of
the kind outlined here. Finally, there are interesting similarities to a 'soft' version
of the 'principal curves' algorithm (Tibshirani, 1992).
The model we have described can readily be extended to deal with the problem of
missing data, provided we assume that the missing data is ignorable and missing at
random (Little and Rubin, 1987). This involves maximizing the likelihood function
in which the missing values have been integrated out. For the model discussed here,
the integrations can be performed analytically, leading to a modified form of the
EM algorithm.
Currently we are extending the model to allow for mixed continuous and categorical
variables. We are also exploring Bayesian approaches, based on Markov chain Monte
Carlo, to replace the maximum likelihood procedure.
Acknowledgements
This work was partially supported by EPSRC grant GR/J75425: Novel Developments in Learning Theory . Markus Svensen would like to thank the staff of the
SANS group in Stockholm for their hospitality during part of this project.
References
Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press.
Bishop, C. M. and G. D. James (1993). Analysis of multiphase flows using dual-energy gamma densitometry and neural networks. Nuclear Instruments and
Methods in Physics Research A327, 580-593.
Dayan, P., G. E. Hinton, R. M. Neal, and R. S. Zemel (1995). The Helmholtz
machine. Neural Computation 7 (5), 889-904.
Hinton, G. E., C. K. I. Williams, and M. D. Revow (1992). Adaptive elastic models for hand-printed character recognition. In J. E. Moody, S. J. Hanson, and
R. P. Lippmann (Eds .), Advances in Neural Information Processing Systems
4. Morgan Kaufmann.
Kohonen, T. (1995). Self-Organizing Maps. Berlin: Springer-Verlag.
Little, R. J. A. and D. B. Rubin (1987). Statistical Analysis with Missing Data.
New York: John Wiley.
MacKay, D. J. C. (1995). Bayesian neural networks and density networks. Nuclear
Instruments and Methods in Physics Research, A 354 (1), 73-80.
Tibshirani, R. (1992). Principal curves revisited. Statistics and Computing 2,
183-190.
Stable Fitted Reinforcement Learning
Geoffrey J. Gordon
Computer Science Department
Carnegie Mellon University
Pittsburgh PA 15213
ggordon@cs.cmu.edu
Abstract
We describe the reinforcement learning problem, motivate algorithms which seek an approximation to the Q function, and present
new convergence results for two such algorithms.
1
INTRODUCTION AND BACKGROUND
Imagine an agent acting in some environment. At time t, the environment is in some
state Xt chosen from a finite set of states. The agent perceives Xt, and is allowed to
choose an action at from some finite set of actions. The environment then changes
state, so that at time (t + 1) it is in a new state Xt+1 chosen from a probability
distribution which depends only on Xt and at. Meanwhile, the agent experiences a
real-valued cost Ct, chosen from a distribution which also depends only on Xt and
at and which has finite mean and variance.
Such an environment is called a Markov decision process, or MDP. The reinforcement learning problem is to control an MDP to minimize the expected discounted
cost ∑_t γ^t c_t for some discount factor γ ∈ [0, 1]. Define the function Q so that
Q(x, a) is the cost for being in state x at time 0, choosing action a, and behaving
optimally from then on. If we can discover Q, we have solved the problem: at each
step, we may simply choose at to minimize Q(xt, at). For more information about
MDPs, see (Watkins, 1989, Bertsekas and Tsitsiklis, 1989).
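Tabular Q-learning in the cost-minimization convention used here — at each step choose the action minimizing Q(x_t, a_t), observe the cost and next state, and move Q toward the one-step target — can be sketched on a hypothetical two-state MDP (the MDP, exploration rate, and step size below are all invented for illustration):

```python
import random

random.seed(0)
gamma, alpha, eps = 0.9, 0.1, 0.2          # discount, step size, exploration rate
n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(x, a):
    """Hypothetical 2-state MDP: action 1 always costs 1, action 0 is free;
    either action may move the agent to the other state."""
    cost = float(a)
    x_next = 1 - x if random.random() < 0.5 else x
    return cost, x_next

x = 0
for _ in range(5000):
    # epsilon-greedy action choice: usually the Q-minimizing action
    if random.random() < eps:
        a = random.randrange(n_actions)
    else:
        a = min(range(n_actions), key=lambda b: Q[x][b])
    cost, x_next = step(x, a)
    target = cost + gamma * min(Q[x_next])  # cost-minimizing form of the update
    Q[x][a] += alpha * (target - Q[x][a])
    x = x_next
```

In this toy problem the free action is obviously optimal, so the learned Q values should prefer action 0 in both states; in general, convergence of such online updates is exactly the issue the convergence results below address.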
We may distinguish two classes of problems, online and offline. In the offline problem, we have a full model of the MDP: given a state and an action, we can describe
the distributions of the cost and the next state. We will be concerned with the
online problem, in which our knowledge of the MDP is limited to what we can discover by interacting with it. To solve an online problem, we may approximate the
transition and cost functions, then proceed as for an offline problem (the indirect
approach); or we may try to learn the Q function without the intermediate step
(the direct approach). Either approach may work better for any given problem: the
direct approach may not extract as much information from each observation, but
the indirect approach may introduce additional errors with its extra approximation
step. We will be concerned here only with direct algorithms.
Watkins' (1989) Q-learning algorithm can find the Q function for small MDPs,
either online or offline. Convergence with probability 1 in the online case
was proven in (Jaakkola et al., 1994, Tsitsiklis, 1994). For large MDPs, exact Q-learning is too expensive: representing the Q function requires too much
space. To overcome this difficulty, we may look for an inexpensive approximation to the Q function. In the offline case, several algorithms for this purpose
have been proven to converge (Gordon, 1995a, Tsitsiklis and Van Roy, 1994,
Baird, 1995). For the online case, there are many fewer provably convergent algorithms. As Baird (1995) points out, we cannot even rely on gradient descent for
large, stochastic problems, since we must observe two independent transitions from
a given state before we can compute an unbiased estimate of the gradient. One
of the algorithms in (Tsitsiklis and Van Roy, 1994), which uses state aggregation
to approximate the Q function, can be modified to apply to online problems; the
resulting algorithm, unlike Q-Iearning, must make repeated small updates to its
control policy, interleaved with comparatively lengthy periods of evaluation of the
changes. After submitting this paper, we were advised of the paper (Singh et al.,
1995), which contains a different algorithm for solving online MDPs. In addition,
our newer paper (Gordon, 1995b) proves results for a larger class of approximators.
There are several algorithms which can handle restricted versions of the online
problem. In the case of a Markov chain (an MDP where only one action is available
at any time step), Sutton's TD(λ) has been proven to converge for arbitrary linear
approximators (Sutton, 1988, Dayan, 1992). For decision processes with linear
transition functions and quadratic cost functions (the so-called linear quadratic
regulation problem), the algorithm of (Bradtke, 1993) is guaranteed to converge.
In practice, researchers have had mixed success with approximate reinforcement
learning (Tesauro, 1990, Boyan and Moore, 1995, Singh and Sutton, 1996).
The remainder of the paper is divided into four sections. In section 2, we summarize
convergence results for offline Q-learning, and prove some contraction properties
which will be useful later. Section 3 extends the convergence results to online
algorithms based on TD(0) and simple function approximators. Section 4 treats
nondiscounted problems, and section 5 wraps up.
2 OFFLINE DISCOUNTED PROBLEMS
Standard offline Q-learning begins with an MDP M and an initial Q function q(0).
Its goal is to learn q(n), a good approximation to the optimal Q function for M. To
accomplish this goal, it performs the series of updates q(i+1) = TM(q(i)), where the
component of TM(q(i)) corresponding to state x and action a is defined to be

    [TM(q(i))]_xa = c_xa + γ Σ_y p_xay min_b [q(i)]_yb

Here c_xa is the expected cost of performing action a in state x; p_xay is the probability
that action a from state x will lead to state y; and γ is the discount factor.
Offline Q-learning converges for discounted MDPs because TM is a contraction in
max norm. That is, for all vectors q and r,

    ||TM(q) - TM(r)|| ≤ γ ||q - r||

where ||q|| = max_{x,a} |q_xa|. Therefore, by the contraction mapping theorem, TM
has a unique fixed point q*, and the sequence q(i) converges linearly to q*.
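The contraction property is easy to observe numerically. In the sketch below (a hypothetical deterministic two-state MDP of our own; none of these numbers come from the paper), one application of the backup operator TM shrinks the max norm distance between two Q functions by at least the factor γ = 0.9, and iterating TM converges to the fixed point:

```python
# Hypothetical MDP: P[(x, a)] maps next states to probabilities,
# C[(x, a)] is the expected cost, as in the definition of TM above.
GAMMA = 0.9
STATES, ACTIONS = [0, 1], [0, 1]
P = {(x, a): {(x if a == 0 else 1 - x): 1.0} for x in STATES for a in ACTIONS}
C = {(x, a): (1.0 if a == 0 else 0.0) for x in STATES for a in ACTIONS}

def backup(q):
    """One application of TM: [TM(q)]_xa = c_xa + gamma * sum_y p_xay min_b q_yb."""
    return {(x, a): C[(x, a)] + GAMMA * sum(p * min(q[(y, b)] for b in ACTIONS)
                                            for y, p in P[(x, a)].items())
            for x in STATES for a in ACTIONS}

def max_norm_dist(q, r):
    return max(abs(q[k] - r[k]) for k in q)

q = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 5.0, (1, 1): 6.0}  # arbitrary start
r = {k: 0.0 for k in q}

# Iterate the backup operator to (numerically) reach the fixed point q*.
qstar = r
for _ in range(300):
    qstar = backup(qstar)
```

For this toy MDP the fixed point is q*(x, 1) = 0 and q*(x, 0) = 1, matching the optimal costs of moving versus staying.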
It is worth noting that a weighted version of offline Q-learning is also guaranteed
to converge. Consider the iteration

    q(i+1) = (I + aD(TM - I))(q(i))

where a is a positive learning rate and D is an arbitrary fixed nonsingular diagonal
matrix of weights. In this iteration, we update some Q values more rapidly than
others, as might occur if for instance we visited some states more frequently than
others. (We will come back to this possibility later.) This weighted iteration is a
max norm contraction, for sufficiently small a: take two Q functions q and r, with
||q - r|| = l. Suppose a is small enough that the largest element of aD is B < 1,
and let b > 0 be the smallest diagonal element of aD. Consider any state x and
action a, and write d_xa for the corresponding element of aD. We then have

    [(I - aD)q - (I - aD)r]_xa ≤ (1 - d_xa) l
    [TM(q) - TM(r)]_xa ≤ γ l
    [aD TM(q) - aD TM(r)]_xa ≤ d_xa γ l
    [(I - aD + aD TM)q - (I - aD + aD TM)r]_xa ≤ (1 - d_xa) l + d_xa γ l
                                               ≤ (1 - b(1 - γ)) l

so (I - aD + aD TM) is a max norm contraction with factor (1 - b(1 - γ)). The
fixed point of weighted Q-learning is the same as the fixed point of unweighted
Q-learning: TM(q*) = q* is equivalent to aD(TM - I)q* = 0.
The difficulty with standard (weighted or unweighted) Q-learning is that, for MDPs
with many states, it may be completely infeasible to compute TM(q) for even one
value of q. One way to avoid this difficulty is fitted Q-learning: if we can find
some function MA so that MA ∘ TM is much cheaper to compute than TM, we can
perform the fitted iteration q(i+1) = MA(TM(q(i))) instead of the standard offline
Q-learning iteration. The mapping MA implements a function approximation scheme
(see (Gordon, 1995a)); we assume that q(0) can be represented as MA(q) for some
q. The fitted offline Q-learning iteration is guaranteed to converge to a unique fixed
point if MA is a nonexpansion in max norm, and to have bounded error if MA(q*)
is near q* (Gordon, 1995a).
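State aggregation is the canonical example of such a mapping MA: it averages the Q values within each group of aggregated (state, action) pairs and writes the average back to every member. The sketch below (the grouping and values are invented) checks the two properties the convergence arguments rely on — idempotency and max norm nonexpansion:

```python
def aggregate(q, groups):
    """MA for state aggregation: replace every entry in a group by the group
    mean. Groups must partition the keys being aggregated; any key not listed
    in a group passes through unchanged."""
    out = dict(q)
    for group in groups:
        mean = sum(q[k] for k in group) / len(group)
        for k in group:
            out[k] = mean
    return out

# Invented example: aggregate the two states together, separately per action.
groups = [[("x0", 0), ("x1", 0)], [("x0", 1), ("x1", 1)]]
q = {("x0", 0): 1.0, ("x1", 0): 3.0, ("x0", 1): -2.0, ("x1", 1): 0.0}
r = {k: 0.0 for k in q}
```

Applying `aggregate` twice gives the same result as applying it once, and it never increases the max norm distance between two Q functions.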
Finally, we can define a fitted weighted Q-learning iteration:

    q(i+1) = (I + a MA D(TM - I))(q(i))

If MA is a max norm nonexpansion and MA² = MA (these conditions are satisfied,
for example, by state aggregation), then fitted weighted Q-learning is guaranteed
to converge:

    (I + a MA D(TM - I))q = ((I - MA) + MA(I + aD(TM - I)))q
                          = MA((I + aD(TM - I))q)

since MA(q) = q for q in the range of MA. (Note that q(i+1) is guaranteed to be in the
range of MA if q(i) is.) The last line is the composition of a max norm nonexpansion
with a max norm contraction, and so is a max norm contraction.
The fixed point of fitted weighted Q-learning is not necessarily the same as the fixed
point of fitted Q-learning, unless MA can represent q* exactly. However, if MA is
linear, we have that

    (I + a MA D(TM - I))(q + c) = c + MA((I + aD(TM - I))(q + c))

for any q in the range of MA and c perpendicular to the range of MA. In particular,
if we take c so that q* - c is in the range of MA, and let q = MA(q) be a fixed point
of the weighted fitted iteration, then we have

    ||q* - q|| = ||(I + a MA D(TM - I))q* - (I + a MA D(TM - I))q||
              = ||c + MA((I + aD(TM - I))q*) - MA((I + aD(TM - I))q)||
              ≤ ||c|| + (1 - b(1 - γ)) ||q* - q||

so that

    ||q* - q|| ≤ ||c|| / (b(1 - γ))

That is, if MA is linear in addition to the conditions for convergence, we can bound
the error for fitted weighted Q-learning.
For offline problems, the weighted version of fitted Q-learning is not as useful as the
unweighted version: it involves about the same amount of work per iteration, the
contraction factor may not be as good, the error bound may not be as tight, and it
requires MA² = MA in addition to the conditions for convergence of the unweighted
iteration. On the other hand, as we shall see in the next section, the weighted
algorithm can be applied to online problems.
3 ONLINE DISCOUNTED PROBLEMS
Consider the following algorithm, which is a natural generalization of TD(0) (Sutton, 1988) to Markov decision problems. (This algorithm has been called
"sarsa" (Singh and Sutton, 1996).) Start with some initial Q function q(0). Repeat the following steps for i from 0 onwards. Let π(i) be a policy chosen according
to some predetermined tradeoff between exploration and exploitation for the Q
function q(i). Now, put the agent in M's start state and allow it to follow the policy
π(i) for a random number of steps L(i). If at step t of the resulting trajectory the
agent moves from the state x_t under action a_t with cost c_t to a state y_t for which
the action b_t appears optimal, compute the estimated Bellman error

    e_t = (c_t + γ [q(i)]_{y_t b_t}) - [q(i)]_{x_t a_t}

After observing the entire trajectory, define e(i) to be the vector whose xa-th component is the sum of e_t for all t such that x_t = x and a_t = a. Then compute the
next weight vector according to the TD(0)-like update rule with learning rate a(i):

    q(i+1) = q(i) + a(i) MA e(i)
See (Gordon, 1995b) for a comment on the types of mappings MA which are appropriate for online algorithms.
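One iteration of this sarsa-like algorithm can be sketched as follows. Everything concrete here — the two-state environment, the ε-greedy exploration policy, the fixed trajectory length standing in for the random L(i), and the aggregation groups — is a hypothetical stand-in of ours, not taken from the paper:

```python
import random

GAMMA, ALPHA = 0.9, 0.5
STATES, ACTIONS = [0, 1], [0, 1]

def step(x, a):
    """Hypothetical environment: action 0 stays at cost 1, action 1 moves at cost 0."""
    return (x if a == 0 else 1 - x), (1.0 if a == 0 else 0.0)

def aggregate(e, groups):
    """MA: state aggregation, averaging within each group of (state, action) keys."""
    out = dict(e)
    for group in groups:
        mean = sum(e[k] for k in group) / len(group)
        for k in group:
            out[k] = mean
    return out

def sarsa_iteration(q, groups, rng, length=10, eps=0.2):
    """One trajectory: follow an eps-greedy policy for q, accumulate the
    Bellman errors e_t per (state, action), then do q <- q + ALPHA * MA(e)."""
    e = {k: 0.0 for k in q}
    x = rng.choice(STATES)
    for _ in range(length):
        greedy = min(ACTIONS, key=lambda b: q[(x, b)])
        a = greedy if rng.random() > eps else rng.choice(ACTIONS)
        y, c = step(x, a)
        b = min(ACTIONS, key=lambda bb: q[(y, bb)])  # apparently optimal at y
        e[(x, a)] += (c + GAMMA * q[(y, b)]) - q[(x, a)]
        x = y
    return {k: q[k] + ALPHA * v for k, v in aggregate(e, groups).items()}

# Aggregate across states, per action; the optimal Q function is constant in
# the state, so it is exactly representable and convergence is expected.
groups = [[(0, 0), (1, 0)], [(0, 1), (1, 1)]]
rng = random.Random(0)
q = {(x, a): 0.0 for x in STATES for a in ACTIONS}
for _ in range(200):
    q = sarsa_iteration(q, groups, rng)
```

Because MA averages within each group, the two Q values in a group remain exactly equal after every update, and the cheaper action (moving, cost 0) ends up with the lower aggregated Q value.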
We will assume that L(i) has the same distribution for all i and is independent of
all other events related to the i-th and subsequent trajectories, and that E(L(i)) is
bounded. Define d(i)_xa to be the expected number of times the agent visited state x
and chose action a during the i-th trajectory, given π(i). We will assume that the
policies are such that d(i)_xa > ε for some positive ε and for all i, x, and a. Let D(i)
be the diagonal matrix with elements d(i)_xa. With this notation, we can write the
expected update for the sarsa algorithm in matrix form:

    E(q(i+1) | q(i)) = (I + a(i) MA D(i)(TM - I))q(i)
With the exception of the fact that D(i) changes from iteration to iteration, this
equation looks very similar to the offline weighted fitted Q-learning update. However, the sarsa algorithm is not guaranteed to converge even in the benign case
Figure 1: A counterexample to sarsa. (a) An MDP: from the start state, the agent
may choose the upper or the lower path, but from then on its decisions are forced.
Next to each arc is its expected cost; the actual costs are randomized on each step.
Boxed pairs of arcs are aggregated, so that the agent must learn identical Q values
for arcs in the same box. We used a discount γ = .9 and a learning rate a = .1.
To ensure sufficient exploration, the agent chose an apparently suboptimal action
10% of the time. (Any other parameters would have resulted in similar behavior.
In particular, annealing a to zero wouldn't have helped.) (b) The learned Q value
for the right-hand box during the first 2000 steps.
where the Q-function is approximated by state aggregation: when we apply sarsa
to the MDP in figure 1, one of the learned Q values oscillates forever. This problem
happens because the frequency-of-update matrix D(i) can change discontinuously
when the Q function fluctuates slightly: when, by luck, the upper path through the
MDP appears better, the cost-1 arc into the goal will be followed more often and
the learned Q value will decrease, while when the lower path appears better the
cost-2 arc will be weighted more heavily and the Q value will increase. Since the
two arcs out of the initial state always have the same expected backed-up Q value
(because the states they lead to are constrained to have the same value), each path
will appear better infinitely often and the oscillation will continue forever.
On the other hand, if we can represent the optimal Q function q*, then no matter
what D(i) is, the expected sarsa update has its fixed point at q*. Since the smallest
diagonal element of D(i) is bounded away from zero and the largest is bounded
above, we can choose an a and a γ' < 1 so that (I + a MA D(i)(TM - I)) is a
contraction with fixed point q* and factor γ' for all i. Now if we let the learning
rates satisfy Σ_i a(i) = ∞ and Σ_i (a(i))² < ∞, convergence w.p.1 to q* is guaranteed
by a theorem of (Jaakkola et al., 1994). (See also the theorem in (Tsitsiklis, 1994).)
More generally, if MA is linear and can represent q* - c for some vector c, we can
bound the error between q* and the fixed point of the expected sarsa update on
iteration i: if we choose an a and a γ' < 1 as in the previous paragraph,

    ||E(q(i+1) | q(i)) - q*|| ≤ γ' ||q(i) - q*|| + 2||c||

for all i. A minor modification of the theorem of (Jaakkola et al., 1994) shows that
the distance from q(i) to the region

    { q : ||q - q*|| ≤ 2||c|| / (1 - γ') }

converges w.p.1 to zero. That is, while the sequence q(i) may not converge, the
worst it will do is oscillate in a region around q* whose size is determined by how
accurately we can represent q* and how frequently we visit the least frequent (state,
action) pair.
Finally, if we follow a fixed exploration policy on every trajectory, the matrix D(i)
will be the same for every i; in this case, because of the contraction property
proved in the previous section, convergence w.p.1 for appropriate learning rates is
guaranteed again by the theorem of (Jaakkola et al., 1994).
4 NONDISCOUNTED PROBLEMS
When M is not discounted, the Q-learning backup operator TM is no longer a max
norm contraction. Instead, as long as every policy guarantees absorption w.p.1 into
some set of cost-free terminal states, TM is a contraction in some weighted max
norm. The proofs of the previous sections still go through, if we substitute this
weighted max norm for the unweighted one in every case. In addition, the random
variables L(i) which determine when each trial ends may be set to the first step t
so that Xt is terminal, since this and all subsequent steps will have Bellman errors
of zero. This choice of L(i) is not independent of the i-th trial, but it does have a
finite mean and it does result in a constant D(i).
5 DISCUSSION
We have proven new convergence theorems for two online fitted reinforcement learning algorithms based on Watkins' (1989) Q-learning algorithm. These algorithms,
sarsa and sarsa with a fixed exploration policy, allow the use of function approximators whose mappings MA are max norm nonexpansions and satisfy MA² = MA.
The prototypical example of such a function approximator is state aggregation. For
similar results on a larger class of approximators, see (Gordon , 1995b).
Acknowledgements
This material is based on work supported under a National Science Foundation
Graduate Research Fellowship and by ARPA grant number F33615-93-1-1330. Any
opinions, findings, conclusions, or recommendations expressed in this publication
are those of the author and do not necessarily reflect the views of the National
Science Foundation, ARPA, or the United States government.
References
L. Baird. Residual algorithms: Reinforcement learning with function approximation. In Machine Learning (proceedings of the twelfth international conference),
San Francisco, CA, 1995. Morgan Kaufmann.
D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice Hall, 1989.
J. A. Boyan and A. W. Moore. Generalization in reinforcement learning: safely
approximating the value function. In G. Tesauro and D. Touretzky, editors, Advances in Neural Information Processing Systems, volume 7. Morgan Kaufmann,
1995.
S. J. Bradtke. Reinforcement learning applied to linear quadratic regulation. In S. J.
Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information
Processing Systems, volume 5. Morgan Kaufmann, 1993.
P. Dayan. The convergence of TD(λ) for general lambda. Machine Learning, 8(3-4):341-362, 1992.
G. J. Gordon. Stable function approximation in dynamic programming. In Machine
Learning (proceedings of the twelfth international conference), San Francisco, CA,
1995. Morgan Kaufmann.
G. J. Gordon. Online fitted reinforcement learning. In J . A. Boyan, A. W. Moore,
and R. S. Sutton, editors, Proceedings of the Workshop on Value Function Approximation, 1995. Proceedings are available as tech report CMU-CS-95-206.
T . Jaakkola, M.I. Jordan, and S. P. Singh. On the convergence of stochastic iterative
dynamic programming algorithms. Neural Computation, 6(6):1185-1201, 1994.
S. P. Singh, T. Jaakkola, and M. I. Jordan. Reinforcement learning with soft state
aggregation. In G. Tesauro and D. Touretzky, editors, Advances in Neural Information Processing Systems, volume 7. Morgan Kaufmann, 1995.
S. P. Singh and R. S. Sutton. Reinforcement learning with replacing eligibility
traces. Machine Learning, 1996.
R. S. Sutton. Learning to predict by the methods of temporal differences. Machine
Learning, 3(1):9-44, 1988.
G. Tesauro. Neurogammon: a neural network backgammon program. In IJCNN
Proceedings III, pages 33-39, 1990.
J. N. Tsitsiklis and B. Van Roy. Feature-based methods for large-scale dynamic
programming. Technical Report P-2277, Laboratory for Information and Decision
Systems, 1994.
J. N. Tsitsiklis. Asynchronous stochastic approximation and Q-learning. Machine
Learning, 16(3):185-202, 1994.
C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College,
Cambridge, England, 1989.
Competence Acquisition in an
Autonomous Mobile Robot using
Hardware Neural Techniques.
Geoff Jackson and Alan F. Murray
Department of Electrical Engineering
Edinburgh University
Edinburgh, EH9 3JL
Scotland, UK
gbj@ee.ed.ac.uk,afm@ee.ed.ac.uk
Abstract
In this paper we examine the practical use of hardware neural
networks in an autonomous mobile robot. We have developed a
hardware neural system based around a custom VLSI chip, EPSILON II¹, designed
applications. We present here a demonstration application of an
autonomous mobile robot that highlights the flexibility of this system. This robot gains basic mobility competence in very few training epochs using an "instinct-rule" training methodology.
1 INTRODUCTION
Though neural networks have been shown as an effective solution for a diverse range
of real-world problems, applications and especially hardware implementations have
been few and slow to emerge. For example in the DARPA neural networks study
of 1988; of the 77 neural network applications investigated only 4 had resulted in
field tested systems [Widrow, 1988]. Furthermore, none of these used dedicated
neural network hardware . It is our view that this lack of tangible successes can be
summarised by the following points:
? Most neural applications will be served optimally by fast, generic digital
computers .
? Dedicated digital neural accelerators have a limited lifetime as "the fastest" ,
as standard computers develop so rapidly.
¹Edinburgh Pulse Stream Implementation of a Learning Oriented Network.
? Analog neural VLSI is a niche technology, optimally applied at the interface
between the real world and higher-level digital processing.
This attitude has some profound implications with respect to the size, nature and
constraints we place on new hardware neural designs. After several years of research
into hardware neural network implementation, we have now concentrated on the
areas in which analog neural network technology has an "edge" over well established
digital technology.
Within the pulse stream neural network research at the University of Edinburgh,
the EPSILON chip's areas of strength can be summarised as:
?
?
Analog or digital inputs, digital outputs.
Scaleable and cascadeable design.
?
?
Modest size.
Compact, low power.
This list points naturally and strongly to problems on the boundary of the real,
analog world and digital processing, such as pre-processing/interpretation of analog
sensor data. Here a modest neural network can act as an intelligent analog-to-digital
converter presenting preprocessed information to its host. We are now engaged
in a two pronged approach, whereby development of technology to improve the
performance of pulse stream neural network chips is occurring concurrently with
a search and development of applications to which this technology can be applied.
The key requirements of this technological development are that devices must:
? Work directly with analog signals.
? Provide a moderate size network.
? Have the potential for a fully integrated solution.
In working with the above constraints and goals we have developed a new chip,
EPSILON II, and a bus based processor card incorporating it . It is our aim to
use this system to develop applications. As our first demonstration the EPSILON
processor card has been mounted on an autonomous mobile robot. In this case the
network utilises a mixture of analog and digital sensor information and performs a
mapping between input/sensor space, a mixture of analog and digital signals, and
output motor control.
2 THE EPSILON II CHIP
The EPSILON II chip has been designed around the requirements of an application
based system. It follows on from an earlier generation of pulse stream neural network
chip, the EPSILON chip [Murray, 1992].
The EPSILON II chip represents neural states as a pulse encoded signal. These pulse
encoded signals have digital signal levels which make them highly immune to noise
and ideal for inter and intra-chip communication, facilitating efficient cascading of
chips to form larger systems. The EPSILON II chip can take as inputs either pulse
encoded signals or analog voltage levels, thus facilitating the fusing of analog and
digital data in one system. Internally the chip is analog in nature allowing the
synaptic multiplication function to be carried out in compact and efficient analog
cells [Jackson, 1994].
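The idea of pulse-width encoding is easy to model in software: an activation in [0, 1] sets the duty cycle of a fixed-period digital pulse, and the value is recovered by measuring the high time, as in the chip's pulse recovery mode. The period and resolution below are illustrative choices of ours, not the chip's actual parameters:

```python
def encode_pw(value, period_ticks=100):
    """Pulse-width encode an activation in [0, 1]: hold the line high for a
    fraction of the fixed period proportional to the value (clamped)."""
    value = min(max(value, 0.0), 1.0)
    high = round(value * period_ticks)
    return [1] * high + [0] * (period_ticks - high)

def decode_pw(pulse):
    """Recover the analog value by measuring the duty cycle of the pulse."""
    return sum(pulse) / len(pulse)
```

`decode_pw(encode_pw(v))` recovers v to within half a tick, the quantization limit of the chosen period; the digital signal levels in between are what make the encoding robust to noise on chip-to-chip links.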
Table 1 shows the principal specifications of the EPSILON II chip. The EPSILON II chip is based around a 32x32 synaptic matrix allowing efficient interfacing
to digital systems. Several features of the device have been developed specifically
for applications based usage. The first of these is a programmable input mode. This
Table 1: EPSILON II Specifications
EPSILON II Chip Specifications
No. of state input pins           32
Input modes                       Analog, PW or PF
Input mode programmability        Bit programmable
No. of state outputs              32 pinned out
Output modes                      PW or PF
Digital recovery of analog I/P    Yes - PW encoded
No. of synapses                   1024
Additional autobias synapses      4 per output neuron
Weight storage                    Dynamic
Programmable activity voltage     Yes
Die size                          6.9mm x 7mm
allows each of the network inputs to be programmed as either a direct analog input
or a digital pulse encoded input. We believe that this is vital for application based
usage where it is often necessary to fuse real-world analog data with historical or
control data generated digitally. The second major feature is a pulse recovery mode .
This allows conversion of any analog input into a digital value for direct use by the
host system. Both these features are utilised in the robotics application described
in section 4 of this paper.
3 EPSILON PROCESSOR CARD
The need to embed the EPSILON chip in a processor card is driven by several
considerations. Firstly, working with pulse encoded signals requires substantial
processing to interface directly to digital systems. If the neural processor is to
be transparent to the host system and is not to become a substantial processing
overhead, then all pulse support operations must be carried out independently of
the host system. Secondly, to respond to further chip level advances and allow rapid
prototyping of new applications as they emerge, a certain amount of flexibility is
needed in the system. It is with these points in mind that the design of the flexible
EPSILON Processor Card (EPC) was undertaken.
3.1 DESIGN SPECIFICATION
The EPC has been designed to meet the following specifications. The card must:
? Operate on a conventional digital bus system.
? Be transparent to the host processor, that is, carry out all the necessary
pulse encoding and decoding.
? Carry out the refresh operations of the dynamic weights stored on the
EPSILON chip.
? Generate the ramp waveforms necessary for pulse width coding.
? Support the operation of multiple EPCs.
? Allow direct input of analog signals.
As all data used and generated by the chip is effectively of 8-bit resolution, the STE
bus, an industry standard 8-bit bus, was chosen for the bus system. This is also cost
effective and allows the use of readily available support cards such as processors,
DSP cards and analog and digital signal conditioning cards.
To allow the transparency of operation the card must perform a variety of functions.
A block diagram indicating these functions is shown in figure 1.
[Figure: block diagram of the EPSILON Processor Card; a dotted box marks the functions implemented by the FPGA (pulse-to-digital conversion, digital-to-pulse conversion, weight refresh control), alongside the weight RAM.]
Figure 1: EPSILON Processor Card
A substantial amount of digital processing is required by the card, especially in the
pulse conversion circuitry. To conform to the Eurocard standard size of the STE
specification an FPGA device is used to "absorb" most of the digital logic . A twin
mother/daughter board design is also used to isolate sensitive analog circuitry from
the digital logic. The use of the FPGA makes the card extremely versatile as it
is now easily reconfigurable to adapt to specialist applications. The dotted box of
figure 1 shows functions implemented by the FPGA device. An on board EPROM
can hold multiple FPGA configurations such that the board can be reconfigured
"on the fly" . All EPSILON support functions , such as ramp generation, weight
refresh, pulse conversion and interface control are carried out on the card. Also the
use of the FPGA means that new ideas are easily tested as all digital signal paths
go via this device. Thus a card of new functionality can be designed without the
need to design a new PCB.
3.2 SPECIALIST BUSES
The digital pulse bus is buffered out under control of the FPGA to the neural bus
along with two control signals. Handshaking between EPC's is done over these lines
to allow the transfer of pulse stream data between processors. This implies that
larger networks can be implemented with little or no increase in computation time
or overhead. A separate analog bus is included to bring analog inputs directly onto
the chip.
4 APPLICATIONS DEVELOPMENT
The over-riding reason for the development of the EPC is to allow the easy development of hardware neural network applications. We have already indicated that we
believe that this form of neural technology will find its niche where its advantages
of direct sensor interface, compactness and cost-effectiveness are of prime importance. As a good and intrinsically interesting example of this genre of applications,
we have chosen autonomous mobile robotic control as a first test for EPSILON II.
The object of this demonstrator is not to advance the state-of-the-art in robotics.
Rather it is to demonstrate analog neural VLSI in an appropriate and stimulating
context.
4.1 "INSTINCT-RULE" ROBOT
The "instinct-rule" robotic control philosophy is based on a software-controlled exemplar from the University's Department of Artificial Intelligence [Nehmzow, 1992].
The robot incorporates an EPC which interfaces all the analog sensor signals and
provides the programmable neural link between sensor/input space and the motor
drive actuators.
a) Controller Architecture.
b) Instinct rule robot.
Figure 2: "Instinct Rule" Robot
The controller architecture is shown in Figure 2. The neural network implemented on
the EPC is the plastic element that determines the mapping between sensory data
and motor actions. The majority of the monitor section is currently implemented
on a host processor and monitors the performance of the neural network. It does
this by regularly evaluating a set of instinct rules. These rules are simple
behaviour-based axioms. For example, we use the two rules listed in column one of
Table 2 to promote simple obstacle-avoidance competence in the robot.
Table 2: Instinct Rules

    Simple obstacle avoidance           Wall following
    1. Keep crash sensors inactive.     1. Keep crash sensors inactive.
    2. Move forward.                    2. Keep side sensors active.
                                        3. Move forward.
If an instinct rule is violated, the drive selector then chooses the next strongest
output (motor action) from the neural network. This action is then performed to
see if it relieves the violation. If it does, it is used as the target to train the neural
network. If it does not, the next strongest action is tried. The mechanism to
accomplish this will be described in more detail in section 4.2.
Using this scheme the robot can be initialised with random weights (i.e. no mapping
between sensors and motor control) and within a few epochs obtains basic obstacle
avoidance competence.
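The selection-and-retraining loop described above can be sketched in Python (a hypothetical illustration; the function and argument names are ours, not part of the EPC software):

```python
def select_action(net_outputs, rules_satisfied, sensors, try_action):
    """Instinct-rule action selection: if a rule is violated, try motor
    actions in decreasing order of network output strength until one
    relieves the violation; that action becomes the training target."""
    ranked = sorted(range(len(net_outputs)), key=lambda i: -net_outputs[i])
    if rules_satisfied(sensors):
        return ranked[0], None            # no violation: strongest action wins
    for action in ranked:
        new_sensors = try_action(action)  # perform the candidate action
        if rules_satisfied(new_sensors):
            return action, action         # relieved the violation: train on it
    return ranked[-1], None               # nothing relieved it; keep trying
```

Starting from random weights, repeated application of the returned training target is what lets the mapping from sensors to motor actions emerge within a few epochs.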
It is a relatively easy matter to promote more complex behaviour with the addition of other rules. For example, to achieve a wall-following behaviour, a third
G. JACKSON, A. F. MURRAY
rule is introduced as shown in column two of Table 2. Navigational tasks can be
accomplished with the addition of a "maximise navigational signal" rule. An
example of this is a light sensor mounted on the robot, producing a behaviour to
move towards a light source. Equally, a signal from a more complex, higher-level
navigational system could be used. Thus the instinct-rule controller handles basic obstacle-avoidance competence and motor/sensory interface tasks, leaving other
resources free for intensive navigational tasks.
4.2
INSTINCT RULE EVALUATION USING SOMATIC TENSION
The original instinct rule robot used binary sensor signals and evaluated performance of alternative actions for fixed, and progressively longer, periods of time
[Nehmzow, 1992]. With the EPC interfacing directly to analog sensors an improved
scheme has been developed. If we sum all sensors onto a neuron with fixed and
equal weights we gain a measure of total sensory activity. Let us call this somatic
tension as an analogy to biological signal aggregation on the soma. If we have
an instinct violation and an alternative action is performed we can monitor this
somatic tension to gauge the performance of this action. If tension decreases significantly we continue the action. If it increases significantly we choose an alternative
action. If tension remains high and roughly the same, we are in a tight situation,
for example a corner. In this case we perform actions for progressively longer
periods, continuing to monitor somatic tension for a drop.
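The somatic-tension scheme can be summarized in a short Python sketch (illustrative only; the significance threshold `eps` is our assumption, not a value from the system):

```python
def somatic_tension(sensor_values):
    # Sum of all sensor activities with fixed, equal weights: a scalar
    # analogue of "somatic tension" (aggregation on the soma).
    return sum(sensor_values)

def evaluate_action(tension_before, tension_after, eps=0.1):
    """Judge an alternative action by its effect on somatic tension."""
    delta = tension_after - tension_before
    if delta < -eps:
        return "continue"        # tension dropped significantly: keep going
    if delta > eps:
        return "switch"          # tension rose: choose an alternative action
    return "persist-longer"      # tight spot (e.g. a corner): lengthen trials
```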
4.3
RESULTS AND DISCUSSION
The instinct rule robot has been constructed and its performance is comparable with
software-controlled predecessors. Unfortunately, direct comparisons are not possible
due to unavailability of the original exemplars and differing physical characteristics
of the robots themselves. In developing the application several observations were
made concerning the behaviour of the system that would not have come to light in
a simulated environment.
In any system including real mechanics and real analog signals, imperfections and
noise are present. For example, in a real robot we cannot guarantee that a forward
motion directive will result in perfect forward motion due to inherent asymmetries
in the system. The instinct rule architecture does not assume a-priori knowledge
such as this so behaviour is not affected adversely. This was tested by retarding
one drive motor of the robot to give it a bias to one side.
In early development, as the monitor was being tuned, the robot showed a tendency to oscillatory motion, thus exhibiting undesirable behaviour that satisfies its
instincts. It could, for example, oscillate back and forth at a corner. In a simulated
environment this continues indefinitely. However, with real mechanics and noisy
analog sensors the robot breaks out of this undesirable behaviour.
These observations strengthen the arguments for hardware development aimed at
embedded systems. The robot application is but an example of the different, and
often surprising conditions that pertain in a "real" system. If neural networks are to
find applications in real-world, low-cost and analog-interface applications, these are
the conditions we must deal with, and appropriate, analog hardware is the optimal
medium for a solution.
5
CONCLUSIONS
This paper has described pulse stream neural networks that have been developed to
a system level to aid development of applications. We have therefore defined areas
of strengths of this technology along with suggestions of where this is best applied.
The strengths of this system include:
1. Direct interfacing to analog signals.
2. The ability to fuse direct analog sensor data with digital sensor data processed elsewhere in the system.
3. Distributed processing. Several EPC's may be embedded in a system to
allow multiple networks and/or multi layer networks.
4. The EPC represents a flexible system level development environment. It is
easily reconfigured for new applications or improved chip technology.
5. The EPC requires very little computational overhead from the host system
and can operate independently if needed.
A demonstration application of an instinct-rule robot has been presented, highlighting the use of neural networks as an interface between real-world analog signals and
digital control.
In conclusion we believe that the immediate future of neural analog VLSI is in small
applications based systems that interface directly to the real-world. We see this as
the primary niche area where analog VLSI neural networks will replace conventional
digital systems.
Acknowledgements
Thanks are due to Ulrich Nehmzow, University of Manchester, for discussions and
information on the instinct-rule controller and the loan of his original robot - Alder.
References
[Caudell, 1990] Caudell, M. and Butler, C. (1990). Naturally Intelligent Systems.
MIT Press, Cambridge, Ma.
[Jackson, 1994] Jackson, G., Hamilton, A., and Murray, A. F. (1994). Pulse stream
VLSI neural systems: into robotics. In Proceedings ISCAS'94, volume 6, pages
375-378. IEEE Press.
[Maren, 1990] Maren, A., Harston, C., and Pap, R. (1990). Handbook of Neural
Computing Applications. Academic Press, San Diego, Ca.
[Murray, 1992] Murray, A. F., Baxter, D. J., Churcher, S., Hamilton, A., Reekie,
H. M., and Tarassenko, L. (1992). The Edinburgh pulse stream implementation of
a learning-oriented network (EPSILON) chip. In Neural Information Processing
Systems (NIPS) Conference .
[Nehmzow, 1992] Nehmzow, U. (1992). Experiments in Competence Acquisition for
Autonomous Mobile Robots. PhD thesis, University of Edinburgh.
[Widrow, 1988] Widrow, B. (1988). DARPA Neural Network Study. AFCEA International Press.
Information through a Spiking Neuron
Charles F. Stevens and Anthony Zador
Salk Institute MNL/S
La Jolla, CA 92037
zador@salk.edu
Abstract
While it is generally agreed that neurons transmit information
about their synaptic inputs through spike trains, the code by which
this information is transmitted is not well understood. An upper
bound on the information encoded is obtained by hypothesizing
that the precise timing of each spike conveys information. Here we
develop a general approach to quantifying the information carried
by spike trains under this hypothesis, and apply it to the leaky
integrate-and-fire (IF) model of neuronal dynamics. We formulate the problem in terms of the probability distribution p(T) of
interspike intervals (ISIs), assuming that spikes are detected with
arbitrary but finite temporal resolution. In the absence of added
noise, all the variability in the ISIs could encode information, and
the information rate is simply the entropy of the ISI distribution,
H(T) = <-log2 p(T)>, times the spike rate. H(T) thus provides an exact expression for the information rate. The methods
developed here can be used to determine experimentally the information carried by spike trains, even when the lower bound of the
information rate provided by the stimulus reconstruction method
is not tight. In a preliminary series of experiments, we have used
these methods to estimate information rates of hippocampal neurons in slice in response to somatic current injection. These pilot
experiments suggest information rates as high as 6.3 bits/spike.
1
Information rate of spike trains
Cortical neurons use spike trains to communicate with other neurons. The output
of each neuron is a stochastic function of its input from the other neurons. It is of
interest to know how much each neuron is telling other neurons about its inputs.
How much information does the spike train provide about a signal? Consider noise
n(t) added to a signal s(t) to produce some total input y(t) = s(t) + n(t). This
is then passed through a (possibly stochastic) functional F to produce the output
spike train F[y(t)] -> z(t). We assume that all the information contained in the
spike train can be represented by the list of spike times; that is, there is no extra
information contained in properties such as spike height or width. Note, however,
that many characteristics of the spike train such as the mean or instantaneous rate
C. STEVENS, A. ZADOR
can be derived from this representation; if such a derivative property turns out to
be the relevant one, then this formulation can be specialized appropriately.
We will be interested, then, in the mutual information I(S(t); Z(t)) between the
input signal ensemble S(t) and the output spike train ensemble Z(t). This is defined
in terms of the entropy H(S) of the signal, the entropy H(Z) of the spike train,
and their joint entropy H(S, Z),

    I(S; Z) = H(S) + H(Z) - H(S, Z).      (1)
Note that the mutual information is symmetric, I(S; Z) = I(Z; S), since the joint
entropy H(S, Z) = H(Z, S). Note also that if the signal S(t) and the spike train
Z(t) are completely independent, then the mutual information is 0, since the joint
entropy is just the sum of the individual entropies, H(S, Z) = H(S) + H(Z). This is
completely in line with our intuition, since in this case the spike train can provide
no information about the signal.
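Eq. (1) is easy to check numerically on a small discrete joint distribution (a minimal Python sketch; the helper names are ours):

```python
import math

def entropy(probs):
    """Discrete entropy in bits, ignoring zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(S;Z) = H(S) + H(Z) - H(S,Z) for a joint distribution {(s, z): prob}."""
    ps, pz = {}, {}
    for (s, z), p in joint.items():
        ps[s] = ps.get(s, 0.0) + p
        pz[z] = pz.get(z, 0.0) + p
    return entropy(ps.values()) + entropy(pz.values()) - entropy(joint.values())
```

For an independent joint distribution this returns 0 bits, and for two perfectly correlated binary variables it returns 1 bit, as the text requires.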
1.1 Information estimation through stimulus reconstruction
Bialek and colleagues (Bialek et al., 1991) have used the reconstruction method
to obtain a strict lower bound on the mutual information in an experimental setting. This method is based on an expression mathematically equivalent to eq. (1)
involving the conditional entropy H(S|Z) of the signal given the spike train,

    I(S; Z) = H(S) - H(S|Z)
            >= H(S) - H_est(S|Z),      (2)

where H_est(S|Z) is an upper bound on the conditional entropy obtained from a
reconstruction s_est(t) of the signal. The entropy is estimated from the second-order
statistics of the reconstruction error e(t) = s(t) - s_est(t); from the maximum entropy
property of the Gaussian this is an upper bound. Intuitively, the first equation says
that the information gained about the signal by observing the spike train is just
the initial uncertainty of the signal (in the absence of knowledge of the spike train)
minus the uncertainty that remains about the signal once the spike train is known,
and the second equation says that this second uncertainty must be greater for any
particular estimate than for the optimal estimate.
1.2 Information estimation through spike train reliability
We have adopted a different approach based an equivalent expression for the mutual
information:
    I(S; Z) = H(Z) - H(Z|S).      (3)

The first term H(Z) is the entropy of the spike train, while the second, H(Z|S),
is the conditional entropy of the spike train given the signal; intuitively, this is like
the inverse repeatability of the spike train given repeated applications of the same
signal. Eq. (3) has the advantage that, if the spike train is a deterministic function
of the input, it permits exact calculation of the mutual information. This follows
from an important difference between the conditional entropy term here and in eq.
(2): whereas H(S|Z) has both a deterministic and a stochastic component, H(Z|S)
has only a stochastic component. Thus in the absence of added noise, the discrete
entropy H(Z|S) = 0, and eq. (3) reduces to I(S; Z) = H(Z).
If ISIs are independent, then H(Z) can be simply expressed in terms of the
entropy of the (discrete) ISI distribution p(T),

    H(T) = - Σ_{i=0}^{∞} p(T_i) log2 p(T_i)      (4)
as H(Z) = nH(T), where n is the number of spikes in Z. Here p(T_i) is the probability that the spike occurred in the interval iΔt to (i + 1)Δt. The assumption
of finite timing precision Δt keeps the potential information finite. The advantage
of considering the ISI distribution p(T) rather than the full spike train distribution
p(Z) is that the former is univariate while the latter is multivariate; estimating the
former requires much less data.
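As an illustration, eq. (4) can be estimated from a recorded spike train by histogramming ISIs at resolution Δt (a minimal sketch, not the authors' analysis code):

```python
import math
from collections import Counter

def isi_entropy_per_spike(spike_times, dt):
    """Entropy (bits/spike) of the empirical ISI histogram with bin width dt:
    H(T) = -sum_i p(T_i) log2 p(T_i), with bin i covering i*dt <= T < (i+1)*dt."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    counts = Counter(int(t / dt) for t in isis)   # bin index for each ISI
    n = len(isis)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Multiplying this estimate by the spike rate gives the information rate in bits/second under the zero-noise assumption of eq. (3).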
Under what conditions are ISIs independent? Correlations between ISIs can arise
either through the stimulus or the spike generation mechanism itself. Below we shall
guarantee that correlations do not arise from the spike-generator by considering the
forgetful integrate-and-fire (IF) model, in which all information about the previous
spike is eliminated by the next spike. If we further limit ourselves to temporally
uncorrelated stimuli (i.e. stimuli drawn from a white-noise ensemble), then we can
be sure that ISIs are independent, and eq. (4) can be applied.
In the presence of noise, H(T|S) must also be evaluated, to give

    I(S; T) = H(T) - H(T|S).      (5)

H(T|S) is the conditional entropy of the ISI given the signal,

    H(T|S) = - < Σ_j p(T_j|s_i(t)) log2 p(T_j|s_i(t)) >_{s_i(t)}      (6)

where p(T_j|s_i(t)) is the probability of obtaining an ISI of T_j in response to a particular stimulus s_i(t) in the presence of noise n(t). The conditional entropy can be
thought of as a quantification of the reliability of the spike-generating mechanism:
it is the average trial-to-trial variability of the spike train generated in response to
repeated applications of the same stimulus.
1.3 Maximum spike train entropy
In what follows, it will be useful to compare the information rate for the IF neuron
with the limiting case of an exponential ISI distribution, which has the maximum
entropy for any point process of the given rate (Papoulis, 1984). This provides an
upper bound on the information rate possible for any spike train, given the spike
rate and the temporal precision. Let f(T) = r e^{-rT} be an exponential distribution
with a mean spike rate r. Assuming a temporal precision of Δt, the entropy per spike
is H(T) = log2(e/(rΔt)), and the entropy per unit time for a rate r is rH(T) = r log2(e/(rΔt)).
For example, if r = 1 Hz and Δt = 0.001 sec, this gives (11.4 bits/spike)(1
spike/second) = 11.4 bits/second. That is, if we discretize a 1 Hz spike train into
1 msec bins, it is not possible for it to transmit more than 11.4 bits/second. If
we halve the bin size, the entropy increases by log2 2 = 1 bit/spike to
12.4 bits/spike, while if we double it we lose one bit/spike to get 10.4 bits/spike. Note
that at a different firing rate, e.g. r = 2 Hz, halving the bin size still increases
the entropy/spike by 1 bit/spike, but because the spike rate is twice as high, this
becomes a 2 bit/second increase in the information rate.
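These figures are easy to reproduce; a small sketch using the discretized exponential entropy H = log2(e/(rΔt)) (the helper names are ours):

```python
import math

def max_bits_per_spike(r, dt):
    """Entropy per spike of an exponential ISI distribution with mean rate r,
    discretized at temporal resolution dt: H = log2(e / (r*dt))."""
    return math.log2(math.e / (r * dt))

def max_bits_per_second(r, dt):
    # Information rate is entropy per spike times spike rate.
    return r * max_bits_per_spike(r, dt)
```

At r = 1 Hz and dt = 1 msec this gives about 11.4 bits/spike, and halving dt adds exactly log2 2 = 1 bit/spike, matching the worked example above.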
1.4 The IF model
Now we consider the functional :F describing the forgetful leaky IF model of spike
generation. Suppose we add some noise net) to a signal set), yet) = net) + set),
and threshold the sum to produce a spike train z(t) = :F[s(t) + net)]. Specifically,
suppose the voltage vet) of the neuron obeys vet) = -v(t)/r + yet), where r is the
membrane time constant, both s(t~ and net) have a white Gaussian distributions
and yet) has mean I' and variance (T ? If the voltage reaches the threshold ()o at some
time t, the neuron emits a spike at that time and resets to the initial condition Vo.
In the language of neurobiology, this model can be thought of (Tuckwell, 1988) as
the limiting case of a neuron with a leaky IF spike generating mechanism receiving
many excitatory and inhibitory synaptic inputs. Note that since the input y(t) is
white, there are no correlations in the spike train induced by the signal, and since
the neuron resets after each spike there are no correlations induced by the spike-generating mechanism. Thus ISIs are independent, and eq. (4) can be applied.
We will estimate the mutual information I(S, Z) between the ensemble of input
signals S and the ensemble of outputs Z. Since in this model ISIs are independent by
construction, we need only evaluate H(T) and H(T|S); for this we must determine
p(T), the distribution of ISIs, and p(T|s_i), the conditional distribution of ISIs for
an ensemble of signals s_i(t). Note that p(T) corresponds to the first passage time
distribution of the Ornstein-Uhlenbeck process (Tuckwell, 1988).
The neuron model we are considering has two regimes determined by the relation
of the asymptotic membrane potential (in the absence of threshold) μτ and the
threshold θ. In the suprathreshold regime, μτ > θ, threshold crossings occur even if
the signal variance is zero (σ² = 0). In the subthreshold regime, μτ <= θ, threshold
crossings occur only if σ² > 0. However, in the limit that E{T} >> τ, i.e. the mean
firing rate is low compared with the integration time constant (this can only occur
in the subthreshold regime), the ISI distribution is exponential, and its coefficient
of variation (CV) is unity (cf. (Softky and Koch, 1993)). In this low-rate regime the
firing is deterministically Poisson; by this we mean to distinguish it from the more
usual usage of Poisson neuron, the stochastic situation in which the instantaneous
firing rate parameter (the probability of firing over some interval) depends on the
stimulus (i.e. f ∝ s(t)). In the present case the exponential ISI distribution arises
from a deterministic mechanism.
At the border between these regimes, when the threshold is just equal to the asymptotic potential, θ₀ = μτ, we have an explicit and exact solution for the entire ISI
distribution (Sugiyama et al., 1970):

    p(T) = (μτ)(τ/2)^{-3/2} [e^{2T/τ} - 1]^{-3/2}
           × exp( 2T/τ - (μτ)² / (σ²τ(e^{2T/τ} - 1)) ) / ((2π)^{1/2} σ)      (7)

This is the special case where, in the absence of fluctuations (σ² = 0), the membrane
potential hovers just subthreshold. Its neurophysiological interpretation is that the
excitatory inputs just balance the inhibitory inputs, so that the neuron hovers just
on the verge of firing.
1.5 Information rates for noisy and noiseless signals
Here we compare the information rate for an IF neuron at the "balance point" μτ = θ
with the maximum entropy spike train. For simplicity and brevity we consider only
the zero-noise case, i.e. n(t) = 0. Fig. 1A shows the information per spike as a
function of the firing rate calculated from eq. (7), which was varied by changing
the signal variance σ². We assume that spikes can be resolved with a temporal
resolution of 1 msec, i.e. that the ISI distribution has bins 1 msec wide. The
dashed line shows the theoretical upper bound given by the exponential distribution;
this limit can be approached by a neuron operating far below threshold, in the
Poisson limit. For both the IF model and the upper bound, the information per
spike is a monotonically decreasing function of the spike rate; the model almost
achieves the upper bound when the mean ISI is just equal to the membrane time
constant. In the model the information saturates at very low firing rates, but for the
exponential distribution the information increases without bound. At high firing
rates the information goes to zero when the firing rate is too fast for individual ISIs
to be resolved at the temporal resolution. Fig. 1B shows that the information rate
(information per second) when the neuron is at the balance point goes through a
maximum as the firing rate increases. The maximum occurs at a lower firing rate
than for the exponential distribution (dashed line).
1.6 Bounding information rates by stimulus reconstruction
By construction, eq. (3) gives an exact expression for the information rate in this
model. We can therefore compare the lower bound provided by the stimulus reconstruction method, eq. (2) (Bialek et al., 1991). That is, we can assess how tight
a lower bound it provides. Fig. 2 shows the lower bound provided by the reconstruction (solid line) and the reliability (dashed line) methods as a function of the
firing rate. The firing rate was increased by increasing the mean μ of the input
stimulus y(t), and noise was set to 0. At low firing rates the two estimates are
nearly identical, but at high firing rates the reconstruction method substantially
underestimates the information rate . The amount of the underestimate depends on
the model parameters, and decreases as noise is added to the stimulus. The tightness of the bound is therefore an empirical question. While Bialek and colleagues
(1996) show that under the conditions of their experiments the underestimate is less
than a factor of two, it is clear that the potential for underestimate under different
conditions or in different systems is greater.
2
Discussion
While it is generally agreed that spike trains encode information about a neuron's
inputs, it is not clear how that information is encoded. One idea is that it is the
mean firing rate alone that encodes the signal, and that variability about this mean
is effectively noise. An alternative view is that it is the variability itself that encodes
the signal, i. e. that the information is encoded in the precise times at which spikes
occur. In this view the information can be expressed in terms of the interspike
interval (ISI) distribution of the spike train. This encoding scheme yields much
higher information rates than one in which only the mean rate (over some interval
longer than the typical ISI) is considered. Here we have quantified the information
content of spike trains under the latter hypothesis for a simple neuronal model.
We consider a model in which by construction the ISIs are independent, so that the
information rate (in bits/sec) can be computed directly from the information per
spike (in bits/spike) and the spike rate (in spikes/sec). The information per spike
in turn depends on the temporal precision with which spikes can be resolved (if
precision were infinite, then the information content would be infinite as well, since
any message could for example be encoded in the decimal expansion of the precise
arrival time of a single spike), the reliability of the spike transduction mechanism,
and the entropy of the lSI distribution itself. For low firing rates, when the neuron
is in the subthreshold limit, the lSI distribution is close to the theoretically maximal
exponential distribution.
Much of the recent interest in information-theoretic analyses of the neural code can
be attributed to the seminal work of Bialek and colleagues (Bialek et al., 1991; Rieke
et al., 1996), who measured the information rate for sensory neurons in a number of
systems. The present results are in broad agreement with those of DeWeese (1996),
who considered the information rate of a linear-filtered threshold crossing [1] (LFTC)
model. DeWeese developed a functional expansion, in which the first term describes
the limit in which spike times (not ISIs) are independent, and the second term is
a correction for correlations. The LFTC model differs from the present IF model
mainly in that it does not "reset" after each spike. Consequently the "natural"
[1] In the LFTC model, Gaussian signal and noise are convolved with a linear filter; the
times at which the resulting waveform crosses some threshold are called "spikes".
representation of the spike train in the LFTC model is as a sequence t_0 ... t_n of
firing times, while in the IF model the "natural" representation is as a sequence
T_1 ... T_n of ISIs. The choice is one of convenience, since the two representations are
equivalent.
The two models are complementary. In the LFTC model, results can be obtained for
colored signals and noise, while such conditions are awkward in the IF model. In the
IF model by contrast, a class of highly correlated spike trains can be conveniently
considered that are awkward in the LFTC model. That is, the independent-ISI condition required in the IF model is less restrictive than the independent-spike condition
of the LFTC model: spikes are independent iff ISIs are independent and the ISI
distribution p(T) is exponential. In particular, at high firing rates the ISI distribution can be far from exponential (and therefore the spikes far from independent)
even when the ISIs themselves are independent.
Because we have assumed that the input s(t) is white, its entropy is infinite, and the
mutual information can grow without bound as the temporal precision with which
spikes are resolved improves. Nevertheless, the spike train is transmitting only a
minute fraction of the total available information. The signal thereby saturates the
capacity of the spike train . While it is not at all clear whether this is how real
neurons actually behave, it is not implausible: a typical cortical neuron receives as
many as 10^4 synaptic inputs, and if the information rate of each input is the same as
that of the target, then the information rate impinging upon the target is 10^4-fold greater
(neglecting synaptic unreliability, which could decrease this substantially) than its
capacity.
In a preliminary series of experiments, we have used the reliability method to estimate the information rate of hippocampal neuronal spike trains in slice in response
to somatic current injection (Stevens and Zador, unpublished). Under these conditions ISIs appear to be independent, so the method developed here can be applied.
In these pilot experiments, information rates as high as 6.3 bits/spike were observed.
References
Bialek, W., Rieke, F., de Ruyter van Steveninck, R., and Warland, D. (1991). Reading a neural code. Science, 252:1854-1857.
DeWeese, M. (1996). Optimization principles for the neural code. In Hasselmo, M., editor, Advances in Neural Information Processing Systems, vol. 8. MIT Press, Cambridge, MA.
Papoulis, A. (1984). Probability, Random Variables and Stochastic Processes, 2nd edition. McGraw-Hill.
Rieke, F., Warland, D., de Ruyter van Steveninck, R., and Bialek, W. (1996). Neural Coding. MIT Press.
Softky, W. and Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neuroscience, 13:334-350.
Sugiyama, H., Moore, G., and Perkel, D. (1970). Solutions for a stochastic model of neuronal spike production. Mathematical Biosciences, 8:323-341.
Tuckwell, H. (1988). Introduction to Theoretical Neurobiology (2 vols.). Cambridge.
Information Through a Spiking Neuron
[Figure 1 plot: information rate (bits/sec) at the balance point versus firing rate (Hz).]
Figure 1: Information rate at balance point. (A; top) The information per spike
decreases monotonically with the spike rate (solid line). It is bounded above by
the entropy of the exponential limit (dashed line), which is the highest-entropy ISI
distribution for a given mean rate; this limit is approached for the IF neuron in
the subthreshold regime. The information rate goes to 0 when the firing rate is of
the same order as the temporal resolution Δt. The information per spike at the
balance point is nearly optimal when E{τ} ≈ T. (T = 50 msec; Δt = 1 msec)
(B; bottom) Information per second for the above conditions. The information rate
for both the balance point (solid curve) and the exponential distribution (dashed
curve) passes through a maximum, but the maximum is greater and occurs at a
higher rate for the latter. For firing rates much smaller than T, the rates are almost
indistinguishable. (T = 50 msec; Δt = 1 msec)
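The dashed upper bound in panel A can be reproduced numerically. For an exponential ISI distribution with mean μ = 1/rate, resolved at precision Δt, the entropy per spike is approximately log2(e·μ/Δt) bits, which decreases monotonically with the firing rate. A sketch under that assumption (this is the standard differential-entropy result for the exponential density quantized at Δt, not code from the paper):

```python
import math

def exponential_isi_bits_per_spike(rate_hz, dt=0.001):
    """Entropy per spike of an exponential ISI distribution with mean
    1/rate_hz, resolved at temporal precision dt (seconds):
    H ~ log2(e * mean_isi / dt) bits."""
    mean_isi = 1.0 / rate_hz
    return math.log2(math.e * mean_isi / dt)

bits = [exponential_isi_bits_per_spike(r) for r in (1, 10, 100)]
# Decreases monotonically with firing rate, like the dashed bound in Fig. 1A.
```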
[Figure 2 plot: information rate versus spike rate (Hz), over roughly 0-80 Hz.]
Figure 2: Estimating information by stimulus reconstruction. The information
rate estimated by the reconstruction method (solid line) and the exact information
rate (dashed line) are shown as a function of the firing rate. The reconstruction
method significantly underestimates the actual information, particularly at high
firing rates. The firing rate was varied through the mean input μ. The parameters
were: membrane time constant T = 20 msec; spike bin size Δt = 1 msec; signal
variance σ² = 0.8; threshold θ = 10.
The Capacity of a Bump
Gary William Flake*
Institute for Advanced Computer Studies
University of Maryland
College Park, MD 20742
Abstract
Recently, several researchers have reported encouraging experimental results when using Gaussian or bump-like activation functions in multilayer
perceptrons. Networks of this type usually require fewer hidden layers
and units and often learn much faster than typical sigmoidal networks.
To explain these results we consider a hyper-ridge network, which is a
simple perceptron with no hidden units and a ridge activation function. If
we are interested in partitioning p points in d dimensions into two classes
then in the limit as d approaches infinity the capacity of a hyper-ridge and
a perceptron is identical. However, we show that for p ≫ d, which is the
usual case in practice, the ratio of hyper-ridge to perceptron dichotomies
approaches p/2(d + 1).
1 Introduction
A hyper-ridge network is a simple perceptron with no hidden units and a ridge activation
function. With one output this is conveniently described as y = g(h) = g(w · x - b)
where g(h) = sgn(1 - h²). Instead of dividing an input-space into two classes with a
single hyperplane, a hyper-ridge network uses two parallel hyperplanes. All points in the
interior of the hyperplanes form one class, while all exterior points form another. For more
information on hyper-ridges, learning algorithms, and convergence issues the curious reader
should consult [3].
We wouldn't go so far as to suggest that anyone actually use a hyper-ridge for a real-world
problem, but it is interesting to note that a hyper-ridge can represent linearly inseparable
mappings such as XOR, NEGATE, SYMMETRY, and COUNT(m) [2, 3]. Moreover,
hyper-ridges are very similar to multilayer perceptrons with bump-like activation functions,
such as a Gaussian, in the way the input space is partitioned. Several researchers [6, 2,3, 5]
have independently found that Gaussian units offer many advantages over sigmoidal units.
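The XOR claim is easy to check in a few lines (our own sketch; the weight choice is one of many, and we treat the sgn(0) boundary as exterior, which the definition leaves unspecified):

```python
def hyper_ridge(x, w, b):
    """Hyper-ridge unit: +1 between two parallel hyperplanes, -1 outside.
    y = sgn(1 - h^2) with h = w . x - b."""
    h = sum(wi * xi for wi, xi in zip(w, x)) - b
    return 1 if 1 - h * h > 0 else -1   # boundary points treated as exterior

# XOR: interior class {(0,1), (1,0)}, exterior class {(0,0), (1,1)}.
# With w = (2, 2) and b = 2, h takes the values -2, 0, 0, 2 on the four
# corners, so 1 - h^2 is -3, 1, 1, -3: a linearly inseparable dichotomy.
w, b = (2.0, 2.0), 2.0
outputs = {x: hyper_ridge(x, w, b) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]}
```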
*Current address: Adaptive Information and Signal Processing Department, Siemens Corporate
Research, 755 College Road East, Princeton, NJ 08540. Email: flake@scr.siemens.com
In this paper we derive the capacity of a hyper-ridge network. Our first result is that
hyper-ridges and simple perceptrons are equivalent in the limit as the input dimension
size approaches infinity. However, when the number of patterns is far greater than the
input dimension (as is the usual case) the ratio of hyper-ridge to perceptron dichotomies
approaches p/2(d + 1), giving some evidence that bump-like activation functions offer an
advantage over the more traditional sigmoid.
The rest of this paper is divided into three more sections. In Section 2 we derive the number
of dichotomies for a hyper-ridge network. The capacities for hyper-ridges and simple
perceptrons are compared in Section 3. Finally, in Section 4 we give our conclusions.
2  The Representation Power of a Hyper-Ridge
Suppose we have p patterns in the pattern-space, ℝ^d, where d is the number of inputs of our
neural network. A dichotomy is a classification of all of the points into two distinct sets.
Clearly, there are at most 2P dichotomies that exist. We are concerned with the number of
dichotomies that a single hyper-ridge node can represent. Let the number of dichotomies
of p patterns in d dimensions be denoted as D(p, d).
For the case of D(1 , d), when p = 1 there are always two and only two dichotomies since
one can trivially include the single point or no points. Thus, D(1, d) = 2.
For the case of D(p, 1), all of the points are constrained to fall on a line. From this set
pick two points, say Xa and Xb. It is always possible to place a ridge function such that
all points between Xa and Xb (inclusive of the end points) are included in one set, and all
other points are excluded. Thus, there are p dichotomies consisting of a single point, p - 1
dichotomies consisting of two points, p - 2 dichotomies consisting of three points, and
so on. No other dichotomies besides the empty set are possible. The number of possible
hyper-ridge dichotomies in one dimension can now be expressed as
    D(p, 1) = Σ_{i=1}^{p} i + 1 = (1/2) p(p + 1) + 1,    (1)
with the extra dichotomy coming from the empty set.
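Equation 1 can be verified by brute force: in one dimension a hyper-ridge realizes exactly the contiguous runs of points, plus the empty set. A short enumeration of ours:

```python
def hyper_ridge_dichotomies_1d(p):
    """Count the distinct subsets of p collinear points realizable by an
    interval: every contiguous run of points, plus the empty set."""
    realizable = {frozenset()}           # the empty dichotomy
    for i in range(p):
        for j in range(i, p):
            realizable.add(frozenset(range(i, j + 1)))
    return len(realizable)

# Matches D(p, 1) = p(p + 1)/2 + 1 of Equation 1.
counts = [hyper_ridge_dichotomies_1d(p) for p in range(1, 7)]
```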
To derive the general form of the recurrence relationship, we would have to resort to
techniques similar to those used by Cover [1], Nilsson [7], and Gardner [4] . Because of
space considerations, we do not give the full derivation of the general form of the recurrence
relationship in this paper, but instead cite the complete derivation given in [3] . The short
version of the story is that the general form of the recurrence relationship for hyper-ridge
dichotomies is identical to the equivalent expression for simple perceptrons:
    D(p, d) = D(p - 1, d) + D(p - 1, d - 1).    (2)
(2)
All differences between the capacity of hyper-ridges and simple perceptrons are, therefore,
a consequence of the different base cases for the recurrence expression.
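The recurrence of Equation 2, together with the base cases used in this section (D(1, d) = 2 for d ≥ 0, D(p, 0) = p + 1, D(p, -1) = 1, and 0 below that), can be checked against the closed forms derived here; a memoized sketch of ours:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def D(p, d):
    """Hyper-ridge dichotomies via the recurrence of Equation 2,
    D(p, d) = D(p-1, d) + D(p-1, d-1), with base cases
    D(1, d) = 2 for d >= 0, D(p, 0) = p + 1, D(p, -1) = 1,
    and D(p, d) = 0 for d < -1."""
    if d < -1:
        return 0
    if d == -1:
        return 1
    if d == 0:
        return p + 1
    if p == 1:
        return 2
    return D(p - 1, d) + D(p - 1, d - 1)

# Consistency checks against the closed forms of this section:
assert D(5, 1) == 5 * 6 // 2 + 1     # Equation 1
assert D(4, 8) == 2 ** 4             # p < d + 2  ->  2^p
assert D(6, 4) == 2 ** 6 - 1         # p = d + 2  ->  2^p - 1
```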
To get Equation 2 into closed form, we first expand D(p, d) a total of p times, yielding
    D(p, d) = Σ_{i=0}^{p-1} C(p-1, i) D(1, d - i),    (3)

where C(n, k) denotes the binomial coefficient.
For Equation 3 it is possible for the second argument of D(1, d - i) to become zero or negative.
Taking the two identities D(p, 0) = p + 1 and D(p, -1) = 1 is the only choice that is
consistent with the recurrence relationship expressed in Equation 2. With this in mind, there
are three separate cases that we need to be concerned with: p < d + 2, p = d + 2, and
p > d + 2. When p < d + 2,

    D(p, d) = Σ_{i=0}^{p-1} C(p-1, i) D(1, d - i) = 2 Σ_{i=0}^{p-1} C(p-1, i) = 2^p,    (4)
since all of the second arguments in D(1, d - i) are always greater than or equal to zero. When
p = d + 2, the last second argument d - i in the summation will be equal to -1. Thus we can
expand Equation 3 in this case to
    D(p, d) = Σ_{i=0}^{p-1} C(p-1, i) D(1, d - i) = Σ_{i=0}^{p-1} C(p-1, i) D(1, p - 2 - i)
            = 2 Σ_{i=0}^{p-2} C(p-1, i) + 1
            = 2 (2^{p-1} - 1) + 1 = 2^p - 1.    (5)
Finally, when p > d + 2, some of the last second arguments in D(1, d - i) are always negative. We can
disregard all d - i < -1, taking D(1, d - i) equal to zero in these cases (which is consistent
with the recurrence relationship),
    D(p, d) = Σ_{i=0}^{p-1} C(p-1, i) D(1, d - i) = Σ_{i=0}^{d+1} C(p-1, i) D(1, d - i)
            = 2 Σ_{i=0}^{d} C(p-1, i) + C(p-1, d+1).    (6)
Combining Equations 4, 5, and 6 gives
    D(p, d) = { 2 Σ_{i=0}^{d} C(p-1, i) + C(p-1, d+1)   for p > d + 2
              { 2^p - 1                                  for p = d + 2    (7)
              { 2^p                                      for p < d + 2

3  Comparing Representation Power
Cover [1], Nilsson [7], and Gardner [4] have all shown that D(p, d) for simple perceptrons
obeys the rule

    D(p, d) = { 2 Σ_{i=0}^{d} C(p-1, i)   for p > d + 2
              { 2^p - 2                    for p = d + 2    (8)
              { 2^p                        for p < d + 2
The interesting case is when p > d + 2, since that is where Equations 7 and 8 differ the
most. Moreover, problems are more difficult when the number of training patterns greatly
exceeds the number of trainable weights in a neural network.
Let Dh(p, d) and Dp(p, d) denote the number of dichotomies possible for hyper-ridge networks and simple perceptrons, respectively. Additionally, let Ch and Cp denote the
respective capacities. We should expect both Dh(p, d)/2^p and Dp(p, d)/2^p to be at or around
1 for small values of p/(d + 1). At some point, for large p/(d + 1), the 2^p term should
dominate, making the ratio go to zero. The capacity of a network can loosely be defined as
the value p/(d + 1) such that D(p, d)/2^p = 1/2. This is more rigorously defined as
    C = { c : lim_{d→∞} D(c(d + 1), d) / 2^{c(d+1)} = 1/2 },

which is the point at which the transition occurs in the limit as the input dimension goes to
infinity.
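This definition can be probed numerically at c = 2: by the symmetry of the binomial coefficients, Equation 8 puts the perceptron ratio at exactly 1/2 for every d, while Equation 7 puts the hyper-ridge ratio above 1/2 by C(2d+1, d+1)/2^{2d+2}, a gap that shrinks as d grows. A sketch of ours using exact integer arithmetic:

```python
from math import comb
from fractions import Fraction

def Dp(p, d):
    """Perceptron dichotomies, Equation 8 (case p > d + 2)."""
    return 2 * sum(comb(p - 1, i) for i in range(d + 1))

def Dh(p, d):
    """Hyper-ridge dichotomies, Equation 7 (case p > d + 2)."""
    return Dp(p, d) + comb(p - 1, d + 1)

def at_capacity(D, d):
    """Exact value of D(p, d) / 2^p at p = 2(d + 1)."""
    p = 2 * (d + 1)
    return Fraction(D(p, d), 2 ** p)

# Perceptron: exactly 1/2 at p/(d + 1) = 2, for every d > 0.
# Hyper-ridge: above 1/2, converging down to 1/2 as d grows.
gaps = [float(at_capacity(Dh, d)) - 0.5 for d in (2, 10, 50)]
```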
Figures 1, 2, and 3 illustrate and compare Cp and Ch at different stages. In Figure 1
the capacities are illustrated for perceptrons and hyper-ridges, respectively, by plotting
D(p, d)/2^p versus p/(d + 1) for various values of d. On par with our intuition, the ratio
D(p, d)/2^p equals 1 for small values of p/(d + 1) but decreases to zero as p/(d + 1) increases.
Figure 2 and the left diagram of Figure 3 plot D(p, d)/2^p versus p/(d + 1) for perceptrons
and hyper-ridges, side by side, with values of d = 5, 20, and 100. As d increases, the two
curves become more similar. This fact is further illustrated in the right diagram of Figure 3
where the plot is of Dh(p, d)/Dp(p, d) versus p for various values of d. The ratio clearly
approaches 1 as d increases, but there is significant difference for smaller values of d.
The differences between Dp and Dh can be more explicitly quantified by noting that

    Dh(p, d) = Dp(p, d) + C(p-1, d+1)

for p > d + 2. This difference clearly shows up in the plots comparing the two capacities.
We will now show that the capacities are identical in the limit as d approaches infinity. To
do this, we will prove that the capacity curves for both hyper-ridges and perceptrons cross
1/2 at p/(d + 1) = 2. This fact is already widely known for perceptrons. Because of space
limitations we will handwave our way through lemma and corollary proofs. The curious
reader should consult [3] for the complete proofs.
Lemma 3.1

    lim_{n→∞} (1/2^{2n}) C(2n, n) = 0.

Short Proof. Since n approaches infinity, we can use Stirling's formula as an approximation
of the factorials. □
Corollary 3.2  For all positive integer constants a, b, and c,

    lim_{n→∞} (1/2^{2n+a}) C(2n + b, n + c) = 0.

Short Proof. When adding the constants b and c to the combination, the whole combination
can always be represented as C(2n, n) · y, where y is some multiplicative constant. Such
a constant can always be factored out of the limit. Additionally, large values of a only
increase the growth rate of the denominator. □
Lemma 3.3  For p/(d + 1) = 2, lim_{d→∞} Dp(p, d)/2^p = 1/2.

Short Proof. Consult any of Cover [1], Nilsson [7], or Gardner [4] for the full proof. □
[Figure 1 plots: D/2^p versus p/(d + 1), curves for d = 5, 20, 100.]

Figure 1: On the left, Dp(p, d)/2^p versus p/(d + 1), and on the right, Dh(p, d)/2^p versus
p/(d + 1) for various values of d. Notice that for perceptrons the curve always passes
through 1/2 at p/(d + 1) = 2. For hyper-ridges, the point where the curve passes through 1/2
decreases as d increases.
[Figure 2 plots: perceptron and hyper-ridge capacity curves versus p/(d + 1).]

Figure 2: On the left, capacity comparison for d = 5. There is considerable difference for
small values of d, especially when one considers that the capacities are normalized by 2^p.
On the right, comparison for d = 20. The difference between the two capacities is much
more subtle now that d is fairly large.
[Figure 3 plots: left, perceptron versus hyper-ridge capacity for d = 100; right, Dh/Dp versus p for d = 1, 2, 5, 10, 100.]

Figure 3: On the left, capacity comparison for d = 100. For this value of d, the capacities
are visibly indistinguishable. On the right, Dh(p, d)/Dp(p, d) versus p for various values of
d. For small values of d the capacity of a hyper-ridge is much greater than a perceptron.
As d grows, the ratio asymptotically approaches 1.
Theorem 3.4  For p/(d + 1) = 2,

    lim_{d→∞} Dh(p, d) / 2^p = 1/2.

Proof. Taking advantage of the relationship between perceptron dichotomies and hyper-ridge dichotomies allows us to expand Dh(p, d):

    lim_{d→∞} Dh(p, d) / 2^p = lim_{d→∞} Dp(p, d) / 2^p + lim_{d→∞} (1/2^p) C(p-1, d+1).

By Lemma 3.3, and substituting 2(d + 1) for p, we get:

    1/2 + lim_{d→∞} (1 / 2^{2d+2}) C(2d+1, d+1).

Finally, by Corollary 3.2 the right limit vanishes, leaving us with 1/2. □
Superficially, Theorem 3.4 would seem to indicate that there is no difference between the
representation power of a perceptron and a hyper-ridge network. However, since this result
is only valid in the limit as the number of inputs goes to infinity, it would be interesting to
know the exact relationship between Dp(d, p) and Dh(d, p) for finite values of d.
In the right diagram of Figure 3, values of Dh(d, p)/Dp(d, p) are plotted against various
values of p. The figure is slightly misleading since the ratio appears to be linear in p,
when, in fact, the ratio is only approximately linear in p. If we normalize the ratio by 1/p
and recompute it in the limit as p approaches infinity, the ratio becomes linear in d.
Theorem 3.5 establishes this rigorously.
Theorem 3.5

    lim_{p→∞} (1/p) Dh(d, p) / Dp(d, p) = 1 / 2(d + 1).

Proof. First, note that we can simplify the left-hand side of the expression to

    lim_{p→∞} (1/p) Dh(d, p) / Dp(d, p)
      = lim_{p→∞} (1/p) [Dp(d, p) + C(p-1, d+1)] / Dp(d, p)
      = lim_{p→∞} (1/p) C(p-1, d+1) / Dp(d, p).    (9)
In the next step, we will invert Equation 9, making it easier to work with. We need to show
that the new expression is equal to 2(d + 1).

    lim_{p→∞} p Dp(d, p) / C(p-1, d+1)
      = lim_{p→∞} 2p Σ_{i=0}^{d} C(p-1, i) / C(p-1, d+1)
      = lim_{p→∞} 2p Σ_{i=0}^{d} [(p-1)! / (i! (p-i-1)!)] [(d+1)! (p-d-2)! / (p-1)!]
      = lim_{p→∞} 2p Σ_{i=0}^{d} (d+1)! (p-d-2)! / (i! (p-i-1)!)
      = lim_{p→∞} [p / (p-1-d)] 2(d+1) Σ_{i=0}^{d} d! (p-d-1)! / (i! (p-i-1)!)
      = lim_{p→∞} 2(d+1) Σ_{i=0}^{d} d! (p-d-1)! / (i! (p-i-1)!).    (10)

In Equation 10, the summation can be reduced to 1 since

    lim_{p→∞} d! (p-d-1)! / (i! (p-i-1)!) = { 0  when 0 ≤ i < d
                                            { 1  when i = d
Thus, Equation 10 is equal to 2(d + 1), which proves the theorem.
□
Theorem 3.5 is valid only in the case when p ≫ d, which is typically true in interesting
classification problems. The result of the theorem gives us a good estimate of how many
more dichotomies are computable with a hyper-ridge network when compared to a simple
perceptron. When p ≫ d the equation

    Dh(d, p) / Dp(d, p) ≈ p / 2(d + 1)    (11)

is an accurate estimate of the difference between the capacities of the two architectures.
For example, taking d = 4 and p = 60 and applying the values to Equation 11 yields a
ratio of 6, which should be interpreted as meaning that one could store six times the number
of mappings in a hyper-ridge network as one could in a simple perceptron. Moreover,
Equation 11 is in agreement with the right diagram of Figure 3 for all values of p ≫ d.
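The d = 4, p = 60 example can be checked exactly against the closed forms (a sketch of ours, using Python's standard-library binomial coefficient):

```python
from math import comb

d, p = 4, 60
Dp = 2 * sum(comb(p - 1, i) for i in range(d + 1))  # Equation 8, p > d + 2
Dh = Dp + comb(p - 1, d + 1)                         # Equation 7, p > d + 2

exact_ratio = Dh / Dp              # roughly 6.11
estimate = p / (2 * (d + 1))       # 6.0, the Equation 11 estimate
```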
4  Conclusion
An interesting footnote to this work is that the VC dimension [8] of a hyper-ridge network
is identical to that of a simple perceptron, namely d. However, the real difference between
perceptrons and hyper-ridges is more noticeable in practice, especially when one considers
that linearly inseparable problems are representable by hyper-ridges.
We also know that there is no such thing as a free lunch and that generalization is sure
to suffer in just the cases when representation power is increased. Yet given all of the
comparisons between MLPs and radial basis functions (RBFs) we find it encouraging that
there may be a class of approximators that is a compromise between the local nature of
RBFs and the global structure of MLPs.
References
[1] T.M. Cover. Geometrical and statistical properties of systems of linear inequalities
with applications in pattern recognition. IEEE Transactions on Electronic Computers,
14:326-334, 1965.
[2] M.R.W. Dawson and D.P. Schopflocher. Modifying the generalized delta rule to train
networks of non-monotonic processors for pattern classification. Connection Science,
4(1), 1992.
[3] G. W. Flake. Nonmonotonic Activation Functions in Multilayer Perceptrons. PhD
thesis, University of Maryland, College Park, MD, December 1993.
[4] E. Gardner. Maximum storage capacity in neural networks. Europhysics Letters,
4:481-485,1987.
[5] F. Girosi, M. Jones, and T. Poggio. Priors, stabilizers and basis functions: from
regularization to radial, tensor and additive splines. Technical Report A.I. Memo No.
1430, C.B.C.L. Paper No. 75, MIT AI Laboratory, 1993.
[6] E. Hartman and J. D. Keeler. Predicting the future: Advantages of semilocal units.
Neural Computation, 3:566-578, 1991.
[7] N.J. Nilsson. Learning Machines: Foundations of Trainable Pattern Classifying Systems. McGraw-Hill, New York, 1965.
[8] V.N. Vapnik and A.Y. Chervonenkis. On the uniform convergence of relative frequencies
of events to their probabilities. Theory of Probability and Its Applications, 16:264-280,
1971.
A Dynamical Systems Approach for a Learnable Autonomous Robot
Jun Tani and Naohiro Fukumura
Sony Computer Science Laboratory Inc.
Takanawa Muse Building, 3-14-13 Higashi-gotanda, Shinagawa-ku,Tokyo, 141 JAPAN
Abstract
This paper discusses how a robot can learn goal-directed navigation tasks using local sensory inputs. The emphasis is that such
learning tasks could be formulated as an embedding problem of
dynamical systems: desired trajectories in a task space should be
embedded into an adequate sensory-based internal state space so
that an unique mapping from the internal state space to the motor
command could be established. The paper shows that a recurrent
neural network suffices in self-organizing such an adequate internal
state space from the temporal sensory input. In our experiments,
using a real robot with a laser range sensor, the robot navigated
robustly by achieving dynamical coherence with the environment.
It was also shown that such coherence becomes structurally stable as the global attractor is self-organized in the coupling of the
internal and the environmental dynamics.
1  Introduction
Conventionally, robot navigation problems have been formulated assuming a global
view of the world. Given a detailed map of the workspace, described in a global
coordinate system, the robot navigates to the specified goal by following this map.
However, in situations where robots have to acquire navigational knowledge based
on their own behaviors, it is important to describe the problems from the internal
views of the robots.
[Kuipers 87], [Mataric 92] and others have developed an approach based on landmark detection. The robot acquires a graph representation of landmark types as a
topological modeling of the environment through its exploratory travels using the
local sensory inputs. In navigation, the robot can identify its topological position
by anticipating the landmark types in the graph representation obtained. It is,
however, considered that this navigation strategy might be susceptible to erroneous
landmark-matching. If the robot is once lost by such a catastrophe, its recovery of
position might be difficult. We need certain mechanisms by which the
robot can recover autonomously from such failures.
We study the above problems by using the dynamical systems approach, expecting
that this approach would provide an effective representational and computational
framework. The approach focuses on the fundamental dynamical structure that
arises from coupling the internal and the environmental dynamics [Beer 95]. Here,
the objective of learning is to adapt the internal dynamical function such that the
resultant dynamical structure might generate the desired system behavior. The system's performance becomes structurally stable if the dynamical structure maintains
a sufficiently large basin of attraction against possible perturbations.
We verify our claims through the implementation of our scheme on YAMABICO
mobile robot equipped with a laser range sensor. The robot conducts navigational
tasks under the following assumptions and conditions. (1) The robot cannot access
its global position , but it navigates depending on its local sensory (range image)
input. (2) There is no explicit landmarks accessible to the robot in the adopted
workspace. (3) The robot learns tasks of cyclic routing by following guidance of a
trainer. (4) The navigation should be robust enough against possible noise in the
environment.
2  NAVIGATION ARCHITECTURE
The YAMABICO mobile robot [Yuta and Iijima 90] was used as an experimental platform. The robot can obtain range images by a range finder consisting of
laser projectors and three CCD cameras. The ranges for 24 directions, covering a
160 degree arc in front of the robot, are measured every 150 milliseconds. In our
formulation, maneuvering commands are generated as the output of a composite
system consisting of two levels [Tani and Fukumura 94]. The control level generates
a collision-free, smooth trajectory using the range image, while the navigation level
directs the control level in a macroscopic sense, responding to the sequential branching that appears in the sensory flows . The control level is fixed; the navigation level,
on the other hand, can be adapted through learning. Firstly, let us describe the
control level. The robot can sense the forward range readings of the surrounding
environment, given in robot-centered polar coordinates by ri (1 ~ i ~ N). The angular range profile Ri is obtained by smoothing the original range readings through
applying an appropriate Gaussian filter. The maneuvering focus of the robot is the
maximum (the angular direction of the largest range) in this range profile. The
robot proceeds towards the maximum of the profile (an open space in the environment). The navigation level focuses on the topological changes in the range profile
as the robot moves. As the robot moves through a given workspace, the profile gradually changes until another local peak appears when the robot reaches a branching
point. At this moment of branching the navigation level decides whether to transfer
the focus to the new local peak or to remain with the current one. Note that this branching can be quite nondeterministic in rugged obstacle environments, where the robot is likely to fail to detect branching points.
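The control rule described above (Gaussian smoothing of the raw readings, then steering toward the profile maximum) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the kernel width and sigma are arbitrary choices.

```python
import math

def gaussian_kernel(width, sigma):
    """Discrete Gaussian kernel of half-width `width`, normalized to sum to 1."""
    weights = [math.exp(-(k * k) / (2.0 * sigma * sigma))
               for k in range(-width, width + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def smoothed_profile(r, width=2, sigma=1.0):
    """Angular range profile R_i: raw readings r_i convolved with a Gaussian."""
    kernel = gaussian_kernel(width, sigma)
    n = len(r)
    profile = []
    for i in range(n):
        acc = 0.0
        for k, w in zip(range(-width, width + 1), kernel):
            j = min(max(i + k, 0), n - 1)  # clamp at the ends of the arc
            acc += w * r[j]
        profile.append(acc)
    return profile

def maneuvering_focus(profile):
    """Index of the profile maximum: the open-space direction to steer toward."""
    return max(range(len(profile)), key=lambda i: profile[i])
```

On YAMABICO the input would be the 24 readings over the 160-degree arc; the focus index then maps back to an angular heading.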
The navigation level determines the branching by utilizing the range image obtained
at branch points. Since the pertinent information in the range profile at a given
moment is assumed to be only a small fraction of the total, we employ a vector
quantization technique, known as the Kohonen network [Kohonen 82], so that the
information in the profile may be compressed into specific lower-dimensional data.
The Kohonen network employed here consists of an I-dimensional lattice with m
nodes along each dimension (l=3 and m=6 for the experiments with YAMABICO).
The range image consisting of 24 values is input to the lattice, then the most
A Dynamical Systems Approach for a Learnable Autonomous Robot
[Figure labels: p_n: sensory inputs; c_n: context units; TPM's output space of (6,6,6); R_i: range profile]
Figure 1: Neural architecture for skill-based learning.
highly activated unit in the lattice, the "winner" unit, is found. The address of
the winner unit in the lattice denotes the output vector of the network. Therefore,
the navigation level receives the sensory input compressed into three dimensional
data. The next section will describe how the robot can generate right branching
sequences upon receiving the compressed range image.
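A minimal sketch of the winner lookup in the 6x6x6 lattice (the compression step only; training the reference vectors by Kohonen's update rule is omitted, and the random reference vectors here are a stand-in for a trained lattice):

```python
import random

L, M, D = 3, 6, 24  # lattice dimensions, nodes per dimension, range-image size

def make_lattice(seed=0):
    """Random reference vectors for the m^l lattice units (untrained stand-in)."""
    rng = random.Random(seed)
    return {
        (x, y, z): [rng.random() for _ in range(D)]
        for x in range(M) for y in range(M) for z in range(M)
    }

def winner_address(lattice, image):
    """Address of the unit whose reference vector is closest to the range image.
    This 3-D address is the compressed code passed to the navigation level."""
    def sq_dist(ref):
        return sum((a - b) ** 2 for a, b in zip(ref, image))
    return min(lattice, key=lambda addr: sq_dist(lattice[addr]))
```

The 24-dimensional range image is thus reduced to a 3-dimensional lattice address, which is what the navigation level consumes.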
3 Formulation
3.1 Learning state-action map
The neural adaptation schemes are applied to the navigation level so that it can
generate an adequate state-action map for a given task. Although one might assume that such a map can be represented by a layered feed-forward network
with the sensory image as input and the motor command as output, this
is not always true. The local sensory input does not always correspond uniquely to
the true state of the robot (the sensory inputs could be the same for different robot
positions). Therefore, there exists an ambiguity in determining the motor command
solely from sensory inputs. This is a typical example of so-called non-Markovian
problems which have been discussed by Lin and Mitchell [Lin and Mitchell 92]. In
order to solve this ambiguity, a representation of contexts which are memories of
past sensory sequences is required. For this purpose, a recurrent neural network
(RNN) [Elman 90] was employed since its recurrent context states could represent
the memory of past sequences. The employed neural architecture is shown in Figure 1. The sensory input p_n and the context units c_n determine the appropriate motor command x_{n+1}. The motor command x_n takes a binary value of 0 (staying at the current branch) or 1 (a transit to a new branch). The RNN learning of sensory-motor (p_n, x_{n+1}) sequences, sampled through the supervised training, can build the
desired state-action map by self-organizing adequate internal representation in time.
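The forward pass of such an Elman-style network can be sketched as below; the weight initialization, layer sizes, and the 0.5 output threshold are illustrative assumptions, not the authors' architecture details:

```python
import math
import random

def elman_step(p, c, params):
    """One forward pass of an Elman-style RNN: hidden h = tanh(W_ph p + W_ch c + b_h);
    the hidden state becomes the next context; the output is a sigmoid unit
    thresholded to the binary branching command (0 = stay, 1 = transit)."""
    W_ph, W_ch, b_h, w_x, b_x = params
    h = [
        math.tanh(sum(wp * pi for wp, pi in zip(W_ph[j], p))
                  + sum(wc * ci for wc, ci in zip(W_ch[j], c))
                  + b_h[j])
        for j in range(len(b_h))
    ]
    y = 1.0 / (1.0 + math.exp(-(sum(w * hj for w, hj in zip(w_x, h)) + b_x)))
    x_next = 1 if y >= 0.5 else 0
    return x_next, h  # h is carried forward as the new context c_{n+1}

def random_params(n_in, n_ctx, seed=0):
    """Random weights, standing in for weights trained by BPTT."""
    rng = random.Random(seed)
    r = lambda: rng.uniform(-1, 1)
    W_ph = [[r() for _ in range(n_in)] for _ in range(n_ctx)]
    W_ch = [[r() for _ in range(n_ctx)] for _ in range(n_ctx)]
    return W_ph, W_ch, [r() for _ in range(n_ctx)], [r() for _ in range(n_ctx)], r()
```

Feeding the context back in this way is what lets identical sensory inputs at point A produce different branching commands depending on the remembered route history.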
J. TANI, N. FUKUMURA
[Figure panels (a) and (b): task space and internal state space]
Figure 2: The desired trajectories in the task space and its mapping to the internal
state space.
3.2 Embedding problem
The objective of the neural learning is to embed a task into certain global attractor
dynamics which are generated from the coupling of the internal neural function and
the environment. Figure 2 illustrates this idea. We define the internal state of the
robot by the state of the RNN. The internal dynamics, which are coupled with the
environmental dynamics through the sensory-motor loop, evolve as the robot travels
in the task space. We assume that the desired vector field in the task space forms a
global attractor, such as a fixed point for a homing task or limit cycling for a cyclic
routing task. All that the robot has to do is to follow this vector flow by means of its
internal state-action map. This requires a condition: the vector field in the internal
state space should be self-organized as being topologically equivalent to that in the
task space in order that the internal state determine the action (motor command)
uniquely. This is the embedding problem from the task space to the internal state
space, and RNN learning can attain this using various training trajectories. This analysis suggests that the trajectories in the task space can always converge into
the desired one as long as the task is embedded into the global attractor in the
internal state space.
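The convergence claim can be illustrated with a toy contractive map standing in for the coupled robot-environment dynamics: every initial state falls into the same global fixed-point attractor (the analogue of a homing task). The map and its coefficients are hypothetical, chosen only to make the attractor property visible.

```python
def step(state, a=0.5, b=1.0):
    """Contractive internal dynamics s_{n+1} = a*s_n + b with |a| < 1:
    a global fixed-point attractor at s* = b / (1 - a) = 2.0."""
    return a * state + b

def converge(s0, n=50):
    """Iterate the dynamics n times from an arbitrary initial state."""
    s = s0
    for _ in range(n):
        s = step(s)
    return s
```

A limit-cycle attractor (the cyclic-routing case) behaves analogously: trajectories started anywhere in the basin are eventually drawn onto the same cycle.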
4 Experiment
4.1 Task and training procedure
Figure 3 shows an example of the navigation task (which is adopted for the physical experiment in a later section). The task is for the robot to repeatedly loop through a figure '8' and a '0' in sequence. The task is not trivial because at the branching position
A the robot has to decide whether to go '8' or '0' depending on its memory of the
last sequence.
The robot learns this navigation task through supervision by a trainer. The trainer
repeatedly guides the robot to the desired loop from a set of arbitrarily selected
Figure 3: Cyclic routing task, in which YAMABICO has to trace a figure of eight
followed by a single loop.
Figure 4: Trace of test travels for cyclic routing.
initial locations. (The training was conducted by starting the robot from 10
arbitrarily selected initial locations in the workspace.) In actual training, the robot
moves by the navigation of the control level and stops at each branching point,
where the branching direction is taught by the trainer. The sequence of range
images and teaching branching commands at those bifurcation points are fed into
the neural architecture as training data. The objective of training RNN is to find the
optimal weight matrix that minimizes the mean square error of the training output
(branching decision) sequences associated with the sensory inputs (outputs of the Kohonen
network). The weight matrix can be obtained through an iterative calculation of
back-propagation through time (BPTT) [Rumelhart et al. 86].
4.2 Results
After the training, we examined how the robot achieves the trained task. The robot
was started from arbitrary initial positions for this test. Fig. 4 shows example test
travels. The result showed that the robot always converged to the desired loop
regardless of its starting position. The time required to converge, however, took a
Figure 5: The sequence of activations in input and context units during the cycling
travel.
certain period that depended on the case. The RNN initially could not function
correctly because of the arbitrary initial setting of the context units. However, while
the robot wandered around the workspace, the RNN became situated (recovered the
context) as it encountered pre-learned sensory sequences. Thereafter, its navigation
converged to the cycling loop.
Even after convergence, the robot could, by chance, leave the loop, under the influence of noise. However, the robot always came back to the loop after a while.
These observations indicate that the robot learned the objective navigational task
as embedded in a global attractor of limit cycling.
It is interesting to examine how the task is encoded in the internal dynamics of
the RNN. We investigated the activation patterns of RNN after its convergence
into the loop. The results are shown in Fig. 5. The input and context units at
each branching point are shown as three white and two black bars, respectively.
One cycle (the completion of the two routes, '0' and '8') is aligned vertically as one column. The figure shows four consecutive cycles. It can be seen that the robot's navigation is exposed to much noise; the sensing input vector becomes unstable at particular locations, and the number of branchings in one cycle is not constant (i.e., some branching points are nondeterministic). The rows labeled (A) and (A') are branches to the routes of '0' and '8', respectively. At this point, the sensory input receives noisy chattering of different patterns independent of (A) or (A'). The context units, on the other hand, are completely distinguishable between (A) and (A'),
which shows that the task sequence between two routes (a single loop and an eight)
is rigidly encoded internally, even in a noisy environment. In further experiments in
more rugged obstacle environments, we found that this sort of structural stability
could not always be assured. When the nondeterminism in the branching exceeds a certain limit, the desired dynamical structure cannot be preserved.
5 Summary and Discussion
The navigation learning problem was formulated from the dynamical systems perspective. Our experimental results showed that the robot can learn the goal-directed
navigation by embedding the desired task trajectories in the internal state space
through the RNN training. It was also shown that the robot achieves the navigational tasks in terms of the convergence of attractor dynamics that emerge in the coupling of the internal and the environmental dynamics. Since the dynamical coherence arising in this coupling leads to the robust navigation of the robot, the
intrinsic mechanism presented here is characterized by the term "autonomy".
Finally, it is interesting to study how robots can obtain analogical models of the
environment rather than state-action maps for adapting to flexibly changed goals.
We discuss such formulation based on the dynamical systems approach elsewhere
[Tani 96].
References
[Beer 95] R.D. Beer. A dynamical systems perspective on agent-environment interaction. Artificial Intelligence, Vol. 72, No. 1, pp. 173-215, 1995.
[Elman 90] J.L. Elman. Finding structure in time. Cognitive Science, Vol. 14, pp. 179-211, 1990.
[Kohonen 82] T. Kohonen. Self-Organized Formation of Topologically Correct Feature Maps. Biological Cybernetics, Vol. 43, pp. 59-69, 1982.
[Kuipers 87] B. Kuipers. A Qualitative Approach to Robot Exploration and Map Learning. In AAAI Workshop on Spatial Reasoning and Multi-Sensor Fusion (Chicago), 1987.
[Lin and Mitchell 92] L.-J. Lin and T.M. Mitchell. Reinforcement learning with hidden states. In Proc. of the Second Int. Conf. on Simulation of Adaptive Behavior, pp. 271-280, 1992.
[Mataric 92] M. Mataric. Integration of Representation into Goal-Driven Behavior-Based Robots. IEEE Trans. Robotics and Automation, Vol. 8, pp. 304-312, 1992.
[Rumelhart et al. 86] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing. MIT Press, 1986.
[Tani 96] J. Tani. Model-Based Learning for Mobile Robot Navigation from the Dynamical Systems Perspective. IEEE Trans. Systems, Man and Cybernetics, Part B, Special issue on robot learning, Vol. 26, No. 3, 1996.
[Tani and Fukumura 94] J. Tani and N. Fukumura. Learning goal-directed sensory-based navigation of a mobile robot. Neural Networks, Vol. 7, No. 3, pp. 553-563, 1994.
[Yuta and Iijima 90] S. Yuta and J. Iijima. State Information Panel for Inter-Processor Communication in an Autonomous Mobile Robot Controller. In Proc. of IROS '90, 1990.
Rapid Quality Estimation of Neural
Network Input Representations
Kevin J. Cherkauer
Jude W. Shavlik
Computer Sciences Department, University of Wisconsin-Madison
1210 W. Dayton St., Madison, WI 53706
{cherkauer,shavlik}@cs.wisc.edu
Abstract
The choice of an input representation for a neural network can have
a profound impact on its accuracy in classifying novel instances.
However, neural networks are typically computationally expensive
to train, making it difficult to test large numbers of alternative
representations. This paper introduces fast quality measures for
neural network representations, allowing one to quickly and accurately estimate which of a collection of possible representations
for a problem is the best. We show that our measures for ranking
representations are more accurate than a previously published measure, based on experiments with three difficult, real-world pattern
recognition problems.
1 Introduction
A key component of successful artificial neural network (ANN) applications is an
input representation that suits the problem. However, ANNs are usually costly to
train, preventing one from trying many different representations. In this paper,
we address this problem by introducing and evaluating three new measures for
quickly estimating ANN input representation quality. Two of these, called ID3leaves and min(leaves), consistently outperform Rendell and Ragavan's (1993) blurring
measure in accurately ranking different input representations for ANN learning on
three difficult, real-world datasets.
2 Representation Quality
Choosing good input representations for supervised learning systems has been
the subject of diverse research in both connectionist (Cherkauer & Shavlik, 1994;
Kambhatla & Leen, 1994) and symbolic paradigms (Almuallim & Dietterich, 1994;
K. J. CHERKAUER, J. W. SHAVLIK
Caruana & Freitag, 1994; John et al., 1994; Kira & Rendell, 1992). Two factors
of representation quality are well-recognized in this work: the ability to separate
examples of different classes (sufficiency of the representation) and the number of
features present (representational economy). We believe there is also a third important component that is often overlooked, namely the ease of learning an accurate
concept under a given representation, which we call transparency. We define transparency as the density of concepts that are both accurate (generalize well) and
simple (of low complexity) in the space of possible concepts under a given input
representation and learning algorithm. Learning an accurate concept will be more
likely if the concept space is rich in accurate concepts that are also simple, because
simple concepts require less search to find and less data to validate.
In this paper, we introduce fast transparency measures for ANN input representations. These are orders of magnitude faster than the wrapper method (John
et al., 1994), which would evaluate ANN representations by training and testing
the ANNs themselves. Our measures are based on the strong assumption that,
for a fixed input representation, information about the density of accurate, simple
concepts under a (fast) decision-tree learning algorithm will transfer to the concept
space of an ANN learning algorithm. Our experiments on three real-world datasets
demonstrate that our transparency measures are highly predictive of representation
quality for ANNs, implying that the transfer assumption holds surprisingly well for
some pattern recognition tasks even though ANNs and decision trees are believed
to work best on quite different types of problems (Quinlan, 1994).1 In addition, our
Exper. 1 shows that transparency does not depend on representational sufficiency.
Exper. 2 verifies this conclusion and also demonstrates that transparency does not
depend on representational economy. Finally, Exper. 3 examines the effects of redundant features on the transparency measures, demonstrating that the ID3leaves
measure is robust in the face of such features.
2.1 Model-Based Transparency Measures
We introduce three new "model-based" measures that estimate representational
transparency by sampling instances of roughly accurate concept models from a
decision-tree space and measuring their complexities. If simple, accurate models
are abundant, the average complexity of the sampled models will be low. If they
are sparse, we can expect a higher complexity value.
Our first measure, avg(leaves), estimates the expected complexity of accurate concepts as the average number of leaves in n randomly constructed decision trees that
correctly classify the training set:
avg(leaves) ≡ (1/n) Σ_{t=1}^{n} leaves(t)
where leaves(t) is the number of leaves in tree t. Random trees are built top-down;
features are chosen with uniform probability from those which further partition the
training examples (ignoring example class). Tree building terminates when each
leaf achieves class purity (i.e., the tree correctly classifies all the training examples).
High values of avg(leaves) indicate high concept complexity (i.e., low transparency).
The second measure, min(leaves), finds the minimum number of leaves over the n
randomly constructed trees instead of the average to reflect the fact that learning
systems try to make intelligent, not random, model choices:
min(leaves) ≡ min_{t=1,...,n} leaves(t)
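The two sampling measures can be sketched as follows. This is an illustrative reconstruction, not the paper's code; examples are represented as (feature-dict, label) pairs, and trees are grown to class purity with features chosen uniformly at random among those that still split the node.

```python
import random

def random_tree_leaves(examples, features, rng):
    """Number of leaves of one random tree grown to class purity.
    Splitting features are chosen uniformly among those whose values still
    vary in the current example set (class labels are ignored when choosing)."""
    labels = {lab for _, lab in examples}
    if len(labels) <= 1:
        return 1  # pure leaf
    splitters = [f for f in features
                 if len({ex[f] for ex, _ in examples}) > 1]
    if not splitters:
        return 1  # cannot separate further; treat as a leaf
    f = rng.choice(splitters)
    leaves = 0
    for v in {ex[f] for ex, _ in examples}:
        subset = [(ex, lab) for ex, lab in examples if ex[f] == v]
        leaves += random_tree_leaves(subset, features, rng)
    return leaves

def avg_and_min_leaves(examples, features, n=100, seed=0):
    """avg(leaves) and min(leaves) over n randomly constructed pure trees."""
    rng = random.Random(seed)
    counts = [random_tree_leaves(examples, features, rng) for _ in range(n)]
    return sum(counts) / n, min(counts)
```

On a training set where one feature alone separates the classes, min(leaves) finds the two-leaf tree almost surely, while avg(leaves) falls between the best and worst random trees.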
1We did not preselect datasets based on whether our experiments upheld the transfer
assumption. We report the results for all datasets that we have tested our transparency
measures on.
Table 1: Summary of datasets used.

Dataset    Examples   Classes   Cross-Validation Folds
DNA        20,000     6         4
NIST       3,471      10        10
Magellan   625        2         4
The third measure, ID3leaves, simply counts the number of leaves in the tree grown
by Quinlan's (1986) ID3 algorithm:
ID3leaves ≡ leaves(ID3 tree)
We always use the full ID3 tree (100% correct on the training set). This measure
assumes the complexity of the concept ID3 finds depends on the density of simple,
accurate models in its space and thus reflects the true transparency.
All these measures fix tree training-set accuracy at 100%, so simpler trees imply
more accurate generalization (Fayyad, 1994) as well as easier learning. This lets us
estimate transparency without the multiplicative additional computational expense
of cross validating each tree. It also lets us use all the training data for tree building.
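A sketch of the ID3leaves computation, growing a full information-gain tree and counting its leaves (an illustrative reconstruction of ID3, not Quinlan's original code; examples are (feature-dict, label) pairs as above):

```python
import math

def entropy(examples):
    """Class entropy of a set of (features, label) examples, in bits."""
    n = len(examples)
    counts = {}
    for _, lab in examples:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def id3_leaves(examples, features):
    """Leaves of a full ID3 tree (100% training accuracy): at each node,
    split on the feature with maximum information gain."""
    if len({lab for _, lab in examples}) <= 1:
        return 1  # pure leaf
    def gain(f):
        rem = 0.0
        for v in {ex[f] for ex, _ in examples}:
            subset = [(ex, lab) for ex, lab in examples if ex[f] == v]
            rem += len(subset) / len(examples) * entropy(subset)
        return entropy(examples) - rem
    splitters = [f for f in features if len({ex[f] for ex, _ in examples}) > 1]
    if not splitters:
        return 1
    best = max(splitters, key=gain)
    return sum(id3_leaves([(ex, lab) for ex, lab in examples if ex[best] == v],
                          features)
               for v in {ex[best] for ex, _ in examples})
```

Because ID3 greedily picks the most informative split, it tends to find a small pure tree when one exists, which is exactly what the measure exploits.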
2.2 "Blurring" as a Transparency Measure
Rendell and Ragavan (1993) address ease of learning explicitly and present a metric for quantifying it called blurring. In their framework, the less a representation
requires the use of feature interactions to produce accurate concepts, the more
transparent it is. Blurring heuristically estimates this by measuring the average
information content of a representation's individual features. Blurring is equivalent
to the (negation of the) average information gain (Quinlan, 1986) of a representation's features with respect to a training set, as we show in Cherkauer and Shavlik
(1995).
3 Evaluating the Transparency Measures
We evaluate the transparency measures on three problems: DNA (predicting gene
reading frames; Craven & Shavlik, 1993), NIST (recognizing handwritten digits;
"FI3" distribution), and Magellan (detecting volcanos in radar images of the planet
Venus; Burl et al., 1994).2 The datasets are summarized in Table l.
To assess the different transparency measures, we follow these steps for each dataset
in Exper. 1 and 2:
1. Construct several different input representations for the problem.
2. Train ANNs using each representation and test the resulting generalization
accuracy via cross validation (CV). This gives us a (costly) ground-truth
ranking of the relative qualities of the different representations.
3. For each transparency measure, compute the transparency score of each
representation. This gives us a (cheap) predicted ranking of the representations from each measure.
4. For each transparency measure, compute Spearman's rank correlation coefficient between the ground-truth and predicted rankings. The higher this
correlation, the better the transparency measure predicts the true ranking.
2On these problems, we have found that ANNs generalize 1-6 percentage points better
than decision trees using identical input representations, motivating our desire to develop
fast measures of ANN input representation quality.
Table 2: User CPU seconds on a Sun SPARCstation 10/30 for the largest representation
of each dataset. Parenthesized numbers are standard deviations over 10 runs.
Dataset    Blurring      ID3leaves      Min/Avg(leaves)   Backprop
DNA        1.68 (2.38)   1,245 (3.96)   13,444 (56.25)    212,900
NIST       2.69 (2.31)   221 (2.75)     1,558 (5.00)      501,400
Magellan   0.21 (0.15)   1 (0.07)       12 (0.13)         6,300
In Exper. 3 we rank only two representations at a time, so instead of computing a
rank correlation in step 4, we just count the number of pairs ranked correctly.
We created input representations (step 1) with an algorithm we call RS ("Representation Selector"). RS first constructs a large pool of plausible, domain-specific
Boolean features (5,460 features for DNA, 251,679 for NIST, 33,876 for Magellan).
For each CV fold, RS sorts the features by information gain on the entire training
set. Then it scans the list, selecting each feature that is not strongly pairwise dependent on any feature already selected, according to a standard χ² independence test using the χ² statistic.
This produces a single reasonable input representation, R1.3 To obtain the additional representations needed for the ranking experiments, we ran RS several times
with successively smaller subsets of the initial feature pool, created by deleting
features whose training-set information gains were above different thresholds. For
each dataset, we made nine additional representations of varying qualities, labeled
R2-R10, numbered from least to most "damaged" initial feature pool.
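RS's greedy selection pass can be sketched as below, assuming binary features given as value columns and precomputed information gains (an illustrative reconstruction; 3.841 is the 0.05 critical value of the χ² distribution with 1 degree of freedom):

```python
def chi2_dependent(xs, ys, threshold=3.841):
    """Pearson chi-square test of independence for two binary features.
    Returns True if the features are strongly pairwise dependent."""
    n = len(xs)
    obs = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(xs, ys):
        obs[(a, b)] += 1
    px = [sum(obs[(a, b)] for b in (0, 1)) for a in (0, 1)]
    py = [sum(obs[(a, b)] for a in (0, 1)) for b in (0, 1)]
    stat = 0.0
    for a in (0, 1):
        for b in (0, 1):
            exp = px[a] * py[b] / n
            if exp > 0:
                stat += (obs[(a, b)] - exp) ** 2 / exp
    return stat > threshold

def rs_select(columns, gains):
    """Greedy pass of the RS selector: scan features in order of decreasing
    information gain, keeping each one that is not chi-square dependent on
    any feature already kept."""
    order = sorted(columns, key=lambda f: gains[f], reverse=True)
    kept = []
    for f in order:
        if not any(chi2_dependent(columns[f], columns[g]) for g in kept):
            kept.append(f)
    return kept
```

A duplicate of an already-selected feature fails the independence test and is skipped, while a genuinely independent feature survives even with a lower gain.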
To get the ground-truth ranking (step 2), we trained feed-forward ANNs with backpropagation using each representation and one output unit per class. We tried
several different numbers of hidden units in one layer and used the best CV accuracy among these (Fig. 1, left) to rank each input representation for ground truth.
Each transparency measure also predicted a ranking of the representations (step 3).
A CPU time comparison is in Table 2. This table and the experiments below report
min (leaves) and avg(leaves) results from sampling 100 random trees, but sampling
only 10 trees (giving a factor 10 speedup) yields similar ranking accuracy.
Finally, in Exper. 1 and 2 we evaluate each transparency measure (step 4) using
Spearman's rank correlation coefficient, r_s = 1 - 6 Σ_i d_i² / (m(m² - 1)), between the ground-truth and predicted rankings (m is the number of representations (10); d_i is the ground-truth rank (an integer between 1 and 10) minus the transparency rank).
We evaluate the transparency measures in Exper. 3 by counting the number (out
of ten) of representation pairs each measure orders the same as ground truth.
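The rank-correlation computation follows directly from the formula above, assuming untied integer ranks:

```python
def spearman_rs(ground_truth, predicted):
    """Spearman's r_s = 1 - 6*sum(d_i^2) / (m*(m^2 - 1)), where d_i is the
    difference between the two ranks of representation i (no ties assumed)."""
    assert len(ground_truth) == len(predicted)
    m = len(ground_truth)
    d2 = sum((g - p) ** 2 for g, p in zip(ground_truth, predicted))
    return 1.0 - 6.0 * d2 / (m * (m * m - 1))
```

Identical rankings give r_s = 1, and a fully reversed ranking gives r_s = -1.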
4 Experiment 1 - Transparency vs. Sufficiency
This experiment demonstrates that our transparency measures are good predictors
of representation quality and shows that transparency does not depend on representational sufficiency (ability to separate examples). In this experiment we used
transparency to rank ten representations for each dataset and compared the rankings to the ANN ground truth using the rank correlation coefficient. RS created
the representations by adding features until each representation could completely
separate the training data into its classes. Thus, representational sufficiency was
3Though feature selection is not the focus of this paper, note that similar feature
selection algorithms have been used by others for machine learning applications (Baim,
1988; Battiti, 1994).
[Figure 1, left panels: DNA, NIST, and Magellan backprop ground-truth cross-validation accuracies for representations R1-R10. Right panels: rank correlations of each transparency measure with ground truth:]

DNA dataset        Exp1 r_s   Exp2 r_s
ID3leaves          0.99       0.95
Min(leaves)        0.99       0.94
Avg(leaves)        0.78       0.96
Blurring           0.78       0.81

NIST dataset       Exp1 r_s   Exp2 r_s
ID3leaves          1.00       1.00
Min(leaves)        1.00       1.00
Avg(leaves)        1.00       1.00
Blurring           1.00       1.00

Magellan dataset   Exp1 r_s   Exp2 r_s
ID3leaves          0.81       0.78
Min(leaves)        0.83       0.76
Avg(leaves)        0.71       0.71
Blurring           0.48       0.73

Figure 1: Left: Exper. 1 and 2 ANN CV test-set accuracies (y axis; error bars are 1 SD) used to rank the representations (x axis). Right: Exper. 1 and 2 transparency rankings compared to ground truth. r_s: rank correlation coefficient (see text).
held constant. (The number of features could vary across representations.)
The rank correlation results are shown in Fig. 1 (right). ID3leaves and min(leaves)
outperform the less sophisticated avg(leaves) and blurring measures on datasets
where there is a difference. On the NIST data, all measures produce perfect rankings. The confidence that a true correlation exists is greater than 0.95 for all
measures and datasets except blurring on the Magellan data, where it is 0.85.
The high rank correlations we observe imply that our transparency measures capture a predictive factor of representation quality. This factor does not depend on
representational sufficiency, because sufficiency was equal for all representations.
Table 3: Exper. 3 results: correct rankings (out of 10) by the transparency measures of
the corresponding representation pairs, R_i vs. R'_i, from Exper. 1 and Exper. 2.
[Table 3 body: per-dataset counts for ID3leaves, Min(leaves), Avg(leaves), and Blurring; values illegible in this copy.]

5 Experiment 2 - Transparency vs. Economy
This experiment shows that transparency does not depend on representational economy (number of features), and it verifies Exper. 1's conclusion that it does not
depend on sufficiency. It also reaffirms the predictive power of the measures.
In Exper. 1, sufficiency was held constant, but economy could vary. Exper. 2 demonstrates that transparency does not depend on economy by equalizing the number
of features and redoing the comparison. In Exper. 2, RS added extra features to
each representation used in Exper. 1 until they all contained a fixed number of features (200 for DNA, 250 for NIST, 100 for Magellan). Each Exper. 2 representation, R'_i (i = 1, ..., 10), is thus a proper superset of the corresponding Exper. 1 representation, R_i. All representations for a given dataset in Exper. 2 have an
identical number of features and allow perfect classification of the training data, so
neither economy nor sufficiency can affect the transparency scores now.
The results (Fig. 1, right) are similar to Exper. 1's. The notable changes are that
blurring is not as far behind ID3leaves and min(leaves) on the Magellan data as before, and avg(leaves) has joined the accuracy of the other two model-based measures
on the DNA. The confidence that correlations exist is above 0.95 in all cases.
Again, the high rank correlations indicate that transparency is a good predictor
of representation quality. Exper. 2 shows that transparency does not depend on
representational economy or sufficiency, as both were held constant here.
6 Experiment 3-Redundant Features
Exper. 3 tests the transparency measures' predictions when the number of redundant features varies, as ANNs can often use redundant features to advantage (Sutton
& Whitehead, 1993), an ability generally not attributed to decision trees.
Exper. 3 reuses the representations Ri and R'i (i = 1, ..., 10) from Exper. 1 and 2.
Recall that each R'i is a proper superset of Ri. The extra features in each R'i are redundant as they are not
needed to separate the training data. We show the number of Ri vs. R'i representation pairs each transparency measure ranks correctly for each dataset (Table 3). For
DNA and NIST, the redundant representations always improved ANN generalization (Fig. 1, left; 0.05 significance). Only ID3leaves predicted this correctly, finding
smaller trees with the increased flexibility afforded by the extra features. The other
measures were always incorrect because the lower-quality redundant features degraded the random trees (avg(leaves), min(leaves)) and the average information
gain (blurring). For Magellan, ANN generalization was only significantly different
for one representation pair, and all measures performed near chance.
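As a concrete illustration of why redundant features hurt the gain-based measures, here is a minimal average-information-gain ("blurring"-style) score; this is my own toy version, not the authors' code. A zero-gain redundant feature pulls the average down, the failure mode discussed above:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def info_gain(feature_values, labels):
    """Entropy reduction from splitting the examples on one feature."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

def blurring(features, labels):
    """Average information gain over all feature columns of a representation."""
    return sum(info_gain(f, labels) for f in features) / len(features)

labels = [0, 0, 1, 1]
informative = [0, 0, 1, 1]   # mirrors the label: gain = 1 bit
redundant   = [0, 0, 0, 0]   # constant feature: gain = 0 bits
print(blurring([informative, redundant], labels))  # 0.5
```

Adding the useless column halves the score even though the representation can still separate the data perfectly.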
7 Conclusions
Rapid Quality Estimation of Neural Network Input Representations

We introduced the notion of transparency (the prevalence of simple and accurate
concepts) as an important factor of input representation quality and developed inexpensive, effective ways to measure it. Empirical tests on three real-world datasets
demonstrated these measures' accuracy at ranking representations for ANN learning at much lower computational cost than training the ANNs themselves. Our
next step will be to use transparency measures as scoring functions in algorithms
that apply extensive search to find better input representations.
Acknowledgments
This work was supported by ONR grant N00014-93-1-099S, NSF grant CDA9024618 (for CM-5 use), and a NASA GSRP fellowship held by KJC.
References
Almuallim, H. & Dietterich, T. (1994). Learning Boolean concepts in the presence of
many irrelevant features. Artificial Intelligence, 69(1-2):279-305.
Baim, P. (1988). A method for attribute selection in inductive learning systems. IEEE
Transactions on Pattern Analysis & Machine Intelligence, 10(6):888-896.
Battiti, R. (1994). Using mutual information for selecting features in supervised neural
net learning. IEEE Transactions on Neural Networks, 5(4):537-550.
Burl, M., Fayyad, U., Perona, P., Smyth, P., & Burl, M. (1994). Automating the hunt
for volcanoes on Venus. In IEEE Computer Society Conf on Computer Vision & Pattern
Recognition: Proc, Seattle, WA. IEEE Computer Society Press.
Caruana, R. & Freitag, D. (1994). Greedy attribute selection. In Machine Learning: Proc
11th Intl Conf, (pp. 28-36), New Brunswick, NJ. Morgan Kaufmann.
Cherkauer, K. & Shavlik, J. (1994). Selecting salient features for machine learning
from large candidate pools through parallel decision-tree construction. In Kitano, H.
& Hendler, J., eds., Massively Parallel Artificial Intel. MIT Press, Cambridge, MA.
Cherkauer, K. & Shavlik, J. (1995). Rapidly estimating the quality of input representations for neural networks. In Working Notes, IJCAI Workshop on Data Engineering for
Inductive Learning, (pp. 99-108), Montreal, Canada.
Craven, M. & Shavlik, J. (1993). Learning to predict reading frames in E. coli DNA
sequences. In Proc 26th Hawaii Intl Conf on System Science, (pp. 773-782), Wailea, HI.
IEEE Computer Society Press.
Fayyad, U. (1994). Branching on attribute values in decision tree generation. In Proc
12th Natl Conf on Artificial Intel, (pp. 601-606), Seattle, WA. AAAI/MIT Press.
John, G., Kohavi, R., & Pfleger, K. (1994). Irrelevant features and the subset selection
problem. In Machine Learning: Proc 11th Intl Conf, (pp. 121-129), New Brunswick, NJ.
Morgan Kaufmann.
Kambhatla, N. & Leen, T. (1994). Fast non-linear dimension reduction. In Advances in
Neural Info Processing Sys (vol 6), (pp. 152-159), San Francisco, CA. Morgan Kaufmann.
Kira, K. & Rendell, L. (1992). The feature selection problem: Traditional methods and a
new algorithm. In Proc 10th Natl Conf on Artificial Intel, (pp. 129-134), San Jose, CA.
AAAI/MIT Press.
Quinlan, J. (1986). Induction of decision trees. Machine Learning, 1:81-106.
Quinlan, J. (1994). Comparing connectionist and symbolic learning methods. In Hanson,
S., Drastal, G., & Rivest, R., eds., Computational Learning Theory & Natural Learning
Systems (vol I: Constraints & Prospects). MIT Press, Cambridge, MA.
Rendell, L. & Ragavan, H. (1993). Improving the design of induction methods by analyzing algorithm functionality and data-based concept complexity. In Proc 13th Intl Joint
Conf on Artificial Intel, (pp. 952-958), Chambery, France. Morgan Kaufmann.
Sutton, R. & Whitehead, S. (1993). Online learning with random representations. In Machine Learning: Proc 10th Intl Conf, (pp. 314-321), Amherst, MA. Morgan Kaufmann.
IMPLICATIONS OF
RECURSIVE DISTRIBUTED REPRESENTATIONS
Jordan B. Pollack
Laboratory for AI Research
Ohio State University
Columbus, OH 43210
ABSTRACT
I will describe my recent results on the automatic development of fixed-width recursive distributed representations of variable-sized hierarchical data
structures. One implication of this work is that certain types of AI-style
data structures can now be represented in fixed-width analog vectors. Simple
inferences can be performed using the type of pattern associations that
neural networks excel at. Another implication arises from noting that these
representations become self-similar in the limit. Once this door to chaos is
opened, many interesting new questions about the representational basis of
intelligence emerge, and can (and will) be discussed.
INTRODUCTION
A major problem for any cognitive system is the capacity for, and the induction of the
potentially infinite structures implicated in faculties such as human language and
memory.
Classical cognitive architectures handle this problem through finite but recursive sets of
rules, such as formal grammars (Chomsky, 1957). Connectionist architectures, while
yielding intriguing insights into fault-tolerance and machine learning, have, thus far, not
handled such productive systems in an adequate fashion.
So, it is not surprising that one of the main attacks on connectionism, especially on its
application to language processing models, has been on the adequacy of such systems to
deal with apparently rule-based behaviors (Pinker & Prince, 1988) and systematicity
(Fodor & Pylyshyn, 1988).
I had earlier discussed precisely these challenges for connectionism, calling them the
generative capacity problem for language, and the representational adequacy problem for
data structures (Pollack, 1987b). These problems are actually intimately related, as the
capacity to recognize or generate novel language relies on the ability to represent the
underlying concept.
Recently, I have developed an approach to the representation problem, at least for recursive structures like sequences and trees. Recursive auto-associative memory (RAAM)
(Pollack, 1988a) automatically develops recursive distributed representations of finite
training sets of such structures, using Back-Propagation (Rumelhart et al., 1986). These
representations appear to occupy a novel position in the space of both classical and connectionist symbolic representations.
A fixed-width representation of variable-sized symbolic trees leads immediately to the
implication that simple forms of neural-network associative memories may be able to
perform inferences of a type that are thought to require complex machinery such as variable binding and unification.
But when we take seriously the infinite part of the representational adequacy problem, we
are lead into a strange intellectual area, to which the second part of this paper is
addressed.
Pollack
BACKGROUND
RECURSIVE AUTO-ASSOCIATIVE MEMORY
A RAAM is composed of two mechanisms: a compressor, and a reconstructor, which are
simultaneously trained. The job of the compressor is to encode a small set of fixed-width
patterns into a single pattern of the same width. This compression can be recursively
applied, from the bottom up, to a fixed-valence tree with distinguished labeled terminals
(leaves), resulting in a fixed-width pattern representing the entire structure. The job of
the reconstructor is to accurately decode this pattern into its parts, and then to further
decode the parts as necessary, until the terminal patterns are found, resulting in a reconstruction of the original tree.
For binary trees with k-bit binary patterns as the leaves, the compressor could be a
single-layer feedforward network with 2k inputs and k outputs, along with additional control machinery. The reconstructor could be a single-layer feedforward network with k
inputs and 2k outputs, along with a mechanism for testing whether a pattern is a terminal.
We simultaneously train these two networks in an auto-associative framework as follows.
Consider the tree, ((D (A N)) (V (P (D N)))), as one member of a training set of such trees,
where the lexical categories are pre-encoded as k-bit vectors. If the 2k-k-2k network is
successfully trained (defined below) with the following patterns (among other such patterns in the training environment), the resultant compressor and reconstructor can reliably
fonn representations for these binary trees.
input pattern          hidden pattern     output pattern
A + N               -> RAN(t)          -> A' + N'
D + RAN(t)          -> RDAN(t)         -> D' + RAN(t)'
D + N               -> RDN(t)          -> D' + N'
P + RDN(t)          -> RPDN(t)         -> P' + RDN(t)'
V + RPDN(t)         -> RVPDN(t)        -> V' + RPDN(t)'
RDAN(t) + RVPDN(t)  -> RDANVPDN(t)     -> RDAN(t)' + RVPDN(t)'
The (initially random) values of the hidden units, R(t), are part of the training environment, so it (and the representations) evolve along with the weights.1
Because the training regime involves multiple compressions, but only single reconstructions, we rely on an induction that the reconstructor works. If a reconstructed pattern, say
RPDN', is sufficiently close to the original pattern, then its parts can be reconstructed as
well.
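The compressor/reconstructor pair can be read as a recursive encode/decode loop. Below is a toy sketch of that control structure only: the trained 2k-k-2k networks are replaced by an invertible code table (my own stand-in, not the paper's learned networks), so compression is exact rather than approximate.

```python
# Toy stand-in for a RAAM: bottom-up compression of a binary tree to a single
# code, then top-down reconstruction. Leaf labels play the role of terminal
# patterns; fresh codes play the role of learned hidden patterns.
codes, parts = {}, {}

def compress(left, right):
    """Assign (or reuse) a fresh 'hidden pattern' for a pair of codes."""
    key = (left, right)
    if key not in codes:
        codes[key] = f"R{len(codes)}"
        parts[codes[key]] = key
    return codes[key]

def encode(tree):
    """Recursively compress a binary tree (leaves are strings)."""
    if isinstance(tree, str):
        return tree
    left, right = tree
    return compress(encode(left), encode(right))

def decode(code):
    """Recursively reconstruct; a code absent from `parts` is a terminal."""
    if code not in parts:
        return code
    left, right = parts[code]
    return (decode(left), decode(right))

tree = (("D", ("A", "N")), ("V", ("P", ("D", "N"))))
root = encode(tree)            # one fixed-width "pattern" for the whole tree
assert decode(root) == tree    # reconstruction recovers the structure
```

In the real RAAM the table lookups are replaced by trained feedforward networks, so reconstruction is approximate and relies on the induction described above.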
AN EXPERIMENT
The tree considered above was one member of the first experiment done on RAAM's. I
used a simple context-free parser to parse a set of lexical-category sequences into a set of
bracketed binary trees:
(D (A (A (A N))))
((D N) (P (D N)))
(V (D N))
(P (D (A N)))
((D N) V)
1 This "moving target" strategy is also used by (Elman, 1988) and (Dyer et al., 1988).
((D N) (V (D (A N))))
((D (A N)) (V (P (D N))))
Each terminal pattern (D A N V & P) was represented as a 1-bit-in-5 code padded with 5
zeros. A 20-10-20 RAAM devised the representations shown in Figure 1.
Figure 1. Representations of all the binary trees in the training set, devised by a
20-10-20 RAAM, manually clustered by phrase-type. The squares represent
values between 0 and 1 by area.
I labeled each tree and its representation by the phrase type in the grammar, and sorted
them by type. The RAAM, without having any intrinsic concepts of phrase-type, has
clearly developed a representation with similarity between members of the same type.
For example, the third feature seems to be clearly distinguishing sentences from non-sentences, the fifth feature seems to be involved in separating adjective phrases from others, while the tenth feature appears to distinguish prepositional and noun phrases from
others.2
At the same time, the representation must be keeping enough information about the subtrees in order to allow the reconstructor to accurately recover the original structure. So,
knowledge about structural regularity flows into the weights while constraints about context similarity guide the development of the representations.
RECURSIVE DISTRIBUTED REPRESENTATIONS
These vectors are a very new kind of representation, a recursive, distributed representation, hinted at by Hinton's (1988) notion of a reduced description.
They combine aspects of several disparate representations. Like feature-vectors, they are
fixed-width, similarity-based, and their content is easily accessible. Like symbols, they
combine only in syntactically well-formed ways. Like symbol-structures, they have constituency and compositionality. And, like pointers, they refer to larger symbol structures
2 In fact, by these metrics, the test case ((D N) (P (D N))) should really be classified as a sentence; since it was
not used in any other construction, there was no reason for the RAAM to believe otherwise.
which can be efficiently retrieved.
But, unlike feature-vectors, they compose. Unlike symbols, they can be compared.
Unlike symbol structures, they are fixed in size. And, unlike pointers, they have content.
Recursive distributed representations could, potentially, lead to a reintegration of syntax
and semantics at a very low level.3 Rather than having meaning-free symbols which syntactically combine, and meanings which are recursively ascribed, we could functionally
compose symbols which bear their own meanings.
IMPLICATIONS
One of the reasons for the historical split between symbolic AI and fields such as pattern
recognition or neural networks is that the structured representations AI requires do not
easily commingle with the representations offered by n-dimensional vectors.
Since recursive distributed representations form a bridge from structured representations
to n-dimensional vectors. they will allow high-level AI tasks to be accomplished with
neural networks.
ASSOCIATIVE INFERENCE
There are many kinds of inferences which seem to be very easy for humans to perform.
In fact, we must perform incredibly long chains of inferences in the act of understanding
natural language (Birnbaum, 1986).
And yet, when we consider performing those inferences using standard techniques which
involve variable binding and unification, the costs seem prohibitive. For humans, however, these inferences seem to cost no more than simple associative priming (Meyer &
Schvaneveldt, 1971).
Since RAAMs can devise representations of trees as analog patterns which can actually
be associated, they may lead to very fast neuro-logical inference engines.
For example, in a larger experiment, which was reported in (Pollack, 1988b), a 48-16-48
RAAM developed representations for a set of ternary trees. such as
(THOUGHT PAT (KNEW JOHN (LOVED MARY JOHN)))
which corresponded to a set of sentences with complex constituent structure. This
RAAM was able to represent, as points within a 16-dimensional hypercube, all cases of
(LOVED X Y) where X and Y were chosen from the set {JOHN, MARY, PAT, MAN}.
A simple test of whether or not associative inference were possible, then, would be to
build a "symmetric love" network, which would perform the simple inference: "If
(LOVED X Y) then (LOVED Y X)".
A network with 16 input and output units and 8 hidden units was successfully trained on
12 of the 16 possible associations, and worked perfectly on the remaining 4. (Note that it
accomplished this task without any explicit machinery for matching and moving X and
Y.)
One might think that in order to chain simple inferences like this one we will need many
hidden layers. But there has recently been some coincidental work showing that feed-forward networks with two layers of hidden units can compute arbitrary mappings
(Lapedes & Farber, 1988a; Lippman, 1987). Therefore, we can assume that the sequential application of associative-style inferences can be speeded up, at least by retraining, to
a simple 3-cycle process.

3 The wrong distinction is the inverse of the undifferentiated concept problem in science, such as the fusing of
the notions of heat and temperature in the 17th century (Wiser & Carey, 1983). For example, a company which
manufactured workstations based on a hardware distinction between characters and graphics had deep trouble
when trying to build a modern window system...
OPENING THE DOOR TO CHAOS
The Capacity of RAAM's
As discussed in the introduction, the question of infinite generative capacity is central. In
the domain of RAAMs the question becomes: Given a finite set of trees to represent,
how can the system then represent an infinite number of related trees.
For the syntactic-tree experiment reported above, the 20-10-20 RAAM was only able to
represent 32 new trees. The 48-16-48 RAAM was able to represent many more than it
was trained on, but not yet an infinite number in the linguistics sense.
I do not yet have any closed analytical forms for the capacity of a recursive autoassociative memory. Given that it is not really a file-cabinet or content-addressable
memory, but a memory for a gestalt of rules for recursive pattern compression and reconstruction, capacity results such as those of (Willshaw, 1981) and (Hopfield, 1982) do not
directly apply. Binary patterns are not being stored, so one cannot simply count how
many.
I have considered, however, the capacity of such a memory in the limit, where the actual
functions and analog representations are not bounded by single linear transformations
and sigmoids or by 32-bit floating point resolution.
Figure 2. A plot of the bit-interspersal function. The x and y axes represent the left and
right subtrees, and the height represents the output of the function.
Consider just a 2-1-2 recursive auto-associator. It is really a reconstructible mapping
from points in the unit square to points on the unit line. In order to work, the function
should define a parametric 1-dimensional curve in 2-space, perhaps a set of connected
splines.4 As more and more data points need to be encoded, this parametric curve will
become more convoluted to cover them. In the limit, it will no longer be a 1-dimensional
curve, but a space-filling curve with a fractional dimension.
4 (Saund, 1987) originally made the connection between auto-association and dimensionality reduction. If such
One possible functional basis for this ultimate 2-1-2 recursive auto-associator is "bit-interspersal," where the compression function would return a number, between 0 and 1,
by interleaving the bits of the binary-fractional representations of the left and right subtrees. Figure 2 depicts this function, not as a space-filling curve, but as a surface, where
no two points project to the same height. The surface is a 3-dimensional variant of a
recognizable instance of Cantor dust called the devil's staircase.
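At finite precision, bit-interspersal is easy to make concrete. A sketch (my own illustration; the paper treats the idealized infinite-precision limit): treating k-bit integers as binary fractions in [0, 1), a pair of subtree codes maps to one number from which both parts are exactly recoverable.

```python
def interleave(x, y, k):
    """Intersperse the k bits of x (even positions) and y (odd positions)."""
    z = 0
    for i in range(k):
        z |= ((x >> i) & 1) << (2 * i)       # bit i of the left subtree
        z |= ((y >> i) & 1) << (2 * i + 1)   # bit i of the right subtree
    return z

def deinterleave(z, k):
    """Invert interleave: recover the left and right subtree codes."""
    x = y = 0
    for i in range(k):
        x |= ((z >> (2 * i)) & 1) << i
        y |= ((z >> (2 * i + 1)) & 1) << i
    return x, y

# The compressed code is reconstructible, as a RAAM compressor must be.
x, y = 0b101, 0b010
assert deinterleave(interleave(x, y, 3), 3) == (x, y)
```

Applied recursively, this packs a whole binary tree into a single number, at the cost of doubling the precision needed at every level, which is why the limiting object is fractal rather than a smooth curve.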
Thus, it is my working hypothesis that alternative activation functions (i.e. other than the
usual sigmoidal or threshold), based on fractal or chaotic mathematics, are the critical
missing link between neural networks and infinite capacity systems.
Between AI and Chaos
The remainder of this paper is what is behind the door; the result of simultaneous consideration of the fields of AI, Neural Networks, Fractals, and Chaos.5 It is, in essence, a
proposal on where (I am planning) to look for fruitful interplay between these fields, and
what some interdisciplinary problems are which could be solved in this context.
There has already been some intrusion of interest in chaos in the physics-based study of
neural networks as dynamical systems. For example both (Huberman & Hogg, 1987) and
(Lapedes & Farber, 1988b) demonstrate how a network trained to predict a simple
iterated function would follow that function's bifurcations into chaos.
However, these efforts are either noticing chaos, or working with it as a domain. At the
other end of the spectrum are those relying on chaos to explain such things as the emergence of consciousness, or free will (Braitenberg, 1984, p. 65).
In between these extremes lies some very hard problems recognized by AI which, I
believe, could benefit from a new viewpoint.
Self-Similarity and the Symbol-Grounding Problem
The bifurcation between structure and form which leads to the near universality of
discrete symbolic structures with ascribed meanings has lead to a yawning gap between
cognitive and perceptual subareas of AI.
This gulf can be seen between such fields as speech recognition and language
comprehension, early versus late vision, and robotics versus planning. The low-level
tasks require numeric, sensory representations, while the high-level ones require compositional symbolic representations.6
The idea of infinitely regressing symbolic representations which bottom-out at perception
has been an unimplementable folk idea ("Turtles all the way down") in AI for quite some
time.
The reason for its lack of luster is that the amount of information in such a structure is
considered combinatorially explosive. Unless, of course, one considers self-similarity to
be an information-limiting construction.
a complete 2-1-2 RAAM could be found, it would give a unique number to every binary tree such that the
number of a tree would be an invertible function of the numbers of its two subtrees.
5 Talking about 4 disciplines is both difficult and dangerous, considering the current size of the chasm and the
mutual hostilities: AI thinks NN is just a spectre, NN thinks AI is dead, F thinks it subsumes C, and C thinks F
is just showbiz.
6 It is no surprise, then, that neural networks are much more successful at the former tasks.
While working on a new activation function for RAAMs which would magically have
this property, I have started building modular systems of RAAMs, following Ballard's
(1987) work on non-recursive auto-associators.
When viewing a RAAM as a constrained system, one can see that the terminal patterns
are overconstrained and the highest-level non-terminal patterns are unconstrained. Only
those non-terminals which are further compressed have a reasonable similarity constraint. One could imagine a cascade of RAAMs, where the highest non-terminal patterns
of a low-level RAAM (say, for encodings of letters) are the terminal patterns for a
middle-level RAAM (say, for words), whose non-terminal patterns are the terminals for a
higher-level RAAM (say, for sentences).
If all the representations were the same width, then there must be natural similarities
between the structures at different conceptual scales.
Induction Inference and Strange Automata
The problem of inductive inference,7 of developing a machine which can learn to recognize or generate a language is a pretty hard problem, even for regular languages.
In the process of extending my work on a recurrent high-order neural network called
sequential cascaded nets (Pollack, 1987a), something strange occurred.
It is always possible to completely map out any unknown finite-state machine by providing each known state with every input token, and keeping track of the states. This is, in
fact, what defines such a machine as finite.
Since a recurrent network is a dynamical system, rather than an automaton, one must
choose a fuzz-factor for comparing real numbers. For a particular network trained on a
context-free grammar, I was unable to map it out. Each time I reduced the fuzz-factor,
the machine doubled in size, much like Mandelbrot's coastline (Mandelbrot, 1982).
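That coastline effect can be reproduced with any chaotic orbit: counting "states" at a shrinking fuzz-factor keeps revealing new ones. A sketch using the logistic map as a stand-in dynamical system (my own illustration, not the trained network from the experiment):

```python
def distinct_states(orbit, eps):
    """Greedily count orbit points that differ from all kept ones by more than eps."""
    reps = []
    for x in orbit:
        if all(abs(x - r) > eps for r in reps):
            reps.append(x)
    return len(reps)

# Orbit of the chaotic logistic map x -> 4x(1 - x).
x, orbit = 0.1234, []
for _ in range(2000):
    orbit.append(x)
    x = 4.0 * x * (1.0 - x)

# Shrinking the fuzz-factor reveals ever more distinguishable states.
counts = [distinct_states(orbit, eps) for eps in (0.1, 0.01, 0.001)]
print(counts)
assert counts[0] <= counts[1] <= counts[2]
```

A finite-state machine would stop growing once the fuzz-factor fell below the true spacing of its states; a chaotic system never does.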
This suggests a bidirectional analogy between finite state automata and dynamical systems of the neural network sort.8 An automaton has an initial state, a set of states, a lexicon, and a function which produces a new state given an old state and input token. A
subset of states are distinguished as accepting states. A dynamical system has an initial
state, and an equation which defines its evolution over time, perhaps in response to
environment.
Such dynamical systems have elements known as attractor states, to which the state of
the system usually evolves. Two such varieties, limit points and limit cycles, correspond
directly to similar elements in finite-state automata, states with loops back to themselves,
and short boring cycles of states (such as the familiar "Please Login. Enter Password.
Bad Password. Please Login..... ).
But there is an element in non-linear dynamical systems which does not have a correlate
in formal automata theory, which is the notion of a chaotic, or strange, attractor, first
noticed in work on weather prediction (Lorenz, 1963). A chaotic attractor does not
repeat.
The implications for inductive inference are that while, formally, push-down automata and
Turing machines are necessary for recognizing harder classes of languages, such as
context-free or context-sensitive, respectively, the idiosyncratic state-table and external
memory of such devices make them impossible to induce. On the other hand, chaotic
dynamical systems look much like automata, and should be about as hard to induce. The
7 For a good survey see (Angluin & Smith, 1983). J. Feldman recently posed this as a "challenge" problem for
neural networks (cf. Servan-Schreiber, Cleeremans, & McClelland (this volume)).
8 Wolfram (1984) has, of course, made the analogy between dynamical systems and cellular automata.
infinite memory is internal to the state vector, and the finite-state-control is built into a
more regular, but non-linear, function.
Fractal Energy Landscapes and Natural Kinds
Hopfield (1982) described an associative memory in which each of a finite set of binary
vectors to be stored would define a local minima in some energy landscape. The
Boltzmann Machine (Ackley et al., 1985) uses a similar physical analogy along with
simulated annealing to seek the global minimum in such landscapes as well. Pineda
(1987) has a continuous version of such a memory, where the attractor states are analog
vectors.
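The Hebbian storage-and-recall scheme just described can be sketched in a few lines (a toy illustration with patterns of my own choosing, not taken from any of the cited papers):

```python
# Minimal Hopfield-style associative memory: Hebbian storage of two +/-1
# patterns, then one synchronous update pulls a corrupted probe back into
# the nearest stored local minimum of the energy landscape.
p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]   # orthogonal to p1
n = len(p1)

# Hebbian weights: W[i][j] = sum over stored patterns of p[i]*p[j], zero diagonal.
W = [[0 if i == j else p1[i] * p1[j] + p2[i] * p2[j] for j in range(n)]
     for i in range(n)]

def recall(state):
    """One synchronous update: threshold the weighted input to each unit."""
    h = [sum(W[i][j] * state[j] for j in range(n)) for i in range(n)]
    return [1 if hi >= 0 else -1 for hi in h]

probe = list(p1)
probe[0] = -probe[0]          # corrupt one bit
assert recall(probe) == p1    # the basin of attraction recovers the pattern
```

The fractal-landscape proposal above would replace this smooth, finite set of basins with minima nested at every scale.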
One can think of these energy minimization processes as a ball rolling down hills. Given a
smooth landscape, that ball will roll into a local minimum. On the other hand, if the
landscape were constructed by recursive similarity, or by a midpoint displacement technique such as those used in figures of fractal mountains, there will be an infinite number
of local minima, which will be detected based on the size of the ball. Naillon and
Theeten's report (this volume), in which an exponential number of attractors are used, is
along the proposed line.
The idea of high-dimensional feature vectors has a long history in psychological studies
of memory and representation, and is known to be inadequate from that perspective as
well as from the representational requirements of AI. But AI has no good empirical candidates for a theory of mental representation either.
Such theories generally break down when dealing with novel instances of Natural Kinds,
such as birds, chairs, and games. A robot with necessary and sufficient conditions, logical rules, or circumscribed regions in feature space cannot deal with walking into a room,
recognizing and sitting on a hand-shaped chair.
If the chairs we know form the large-scale local minima of an associative memory, then
perhaps the chairs we don't know can also be found as local minima in the same space,
albeit on a smaller scale. Of course, all the chairs we know are only smaller-scale minima
in our memory for furniture.
Fractal Compression and the Capacity of Memory
Consider something like the Mandelbrot set as the basis for a reconstructive memory.
Rather than storing all pictures, one merely has to store the "pointer" to a picture,9 and,
with the help of a simple function and large computer, the picture can be retrieved. Most
everyone has seen glossy pictures of the colorful prototype shapes of yeasts and dragons
that infinitely appear as the location and scale are changed along the chaotic boundary.
The first step in this hypothetical construction is to develop a related set with the additional property that it can be inverted in the following sense: given a rough sketch of a
picture likely to be in the set, return the best "pointer" to it.10
The second step, perhaps using normal neural-network technology, is to build an invertible non-linear mapping from the prototypes in an application domain (like chess positions,
human faces, sentences, schemata, etc.) to the largest-scale prototypes in the mathematical memory space.
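The "store only a pointer" idea can be illustrated directly. In the sketch below (an illustrative toy, not part of the original proposal; the resolution and iteration cap are arbitrary), the only stored datum is a center and window size on the complex plane — the footnoted "pointer" — and the picture is regenerated on demand by iterating z → z² + c:

```python
def reconstruct(center, window, res=32, max_iter=64):
    """Regenerate an escape-time 'picture' from a stored pointer (center, window)."""
    picture = []
    for row in range(res):
        line = []
        for col in range(res):
            c = complex(center.real + (col / res - 0.5) * window,
                        center.imag + (row / res - 0.5) * window)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z, n = z * z + c, n + 1
            line.append(n)          # escape time plays the role of a pixel value
        picture.append(line)
    return picture

# The memory "cost" is just the pointer; the image is recomputed, not stored.
pointer = (complex(-0.743, 0.131), 0.05)   # hypothetical pointer near the boundary
img = reconstruct(*pointer)
print(len(img), len(img[0]))   # → 32 32
```

However large the reconstructed picture, only the two numbers in `pointer` and the reconstruction function itself need to be kept, which is exactly the accounting argument made below.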
9 I.e., a point on the complex plane and the window size.
10 Related sets might show up with great frequency using iterated systems, like Newton's method or backpropagation. And a more precise notion of inversion, involving both representational tolerance and scale, is required.
Implications of Recursive Distributed Representations
Taken together, this hypothetical system turns out to be a look-up table for an infinite set
of similar representations which incurs no memory cost for its contents. Only the pointers
and the reconstruction function need to be stored. Such a basis for reconstructive storage
would render meaningless the recent attempts at "counting the bits" of human memory
(Hillis, 1988; Landauer, 1986).
While these two steps together sound quite fantastic, they are closely related to the RAAM
idea using a chaotic activation function. The reconstructor produces contents from
pointers, while the compressor returns pointers from contents. And the idea of a uniform
fractal basis for memory is not really too distant from the idea of a uniform basis for
visual images, such as iterated fractal surfaces based on the collage theorem (Barnsley et
al., 1985).
A moral could be that impressive demonstrations of compression, such as the bidirectional mapping from ideas to language, must be easy when one can discover the underlying regularity.
CONCLUSION
Recursive auto-associative memory can develop fixed-width recursive distributed
representations for variable-sized data-structures such as symbolic trees. Given such
representations, one implication is that complex inferences, which seemed to require
complex information-handling strategies, can be accomplished with associations.
A second implication is that the representations must become self-similar and space-filling in the limit. This implication, of fractal and chaotic structures in mental representations, may lead to a reconsideration of many fundamental decisions in computational
cognitive science.
Dissonance for cognitive scientists can be induced by comparing the infinite output of a
formal-language generator (with anybody's rules) to the boundary areas of the Mandelbrot set with its simple underlying function. Which is vaster? Which more natural?
For when one considers the relative success of fractal versus euclidean geometry at compactly describing natural objects, such as trees and coastlines, one must wonder at the
accuracy of the pervasive description of naturally-occurring mental objects as features or
propositions which bottom out at meaningless terms.
References
Ackley, D. H., Hinton, G. E. & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann Machines.
Cognitive Science. 9, 147-169.
Angluin, D. & Smith, C. H. (1983). Inductive Inference: Theory and Methods. Computing Surveys. 15, 237-269.
Ballard, D. H. (1987). Modular Learning in Neural Networks. In Proceedings of the Sixth National
Conference on Artificial Intelligence. Seattle, 279-284.
Barnsley, M. F., Ervin, V., Hardin, D. & Lancaster, J. (1985). Solution of an inverse problem for fractals and
other sets. Proceedings of the National Academy of Science. 83.
Birnbaum, L. (1986). Integrated processing in planning and understanding. Research Report 489, New Haven:
Computer Science Dept., Yale Univeristy.
Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology. Cambridge: MIT press.
Chomsky, N. (1957). Syntactic structures. The Hague: Mouton and Co.
Dyer, M. G., Flowers, M. & Wang, Y. A. (1988). Weight Matrix = Pattern of Activation: Encoding Semantic
Networks as Distributed Representations in DUAL, a PDP architecture. UCLA-Artificial
Intelligence-88-5, Los Angeles: Artificial Intelligence Laboratory, UCLA.
Elman. J. L. (1988). Finding Structure in Time. Report 8801. San Diego: Center for Research in Language.
UCSD.
Fodor, J. & Pylyshyn, A. (1988). Connectionism and Cognitive Architecture: A Critical Analysis. Cognition.
28,3-71.
Hillis, W. D. (1988). Intelligence as emergent behavior; or, the songs of Eden. Daedalus. 117, 175-190.
Hinton, G. (1988). Representing Part-Whole hierarchies in connectionist networks. In Proceedings of the
Tenth Annual Conference of the Cognitive Science Society. Montreal, 48-54.
535
536
Pollack
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities.
Proceedings of the National Academy of Sciences USA. 79, 2554-2558.
Huberman, B. A. & Hogg, T. (1987). Phase Transitions in Artificial Intelligence Systems. Artificial
Intelligence. 33, 155-172.
Kurten, K. E. (1987). Phase transitions in quasirandom neural networks. In Institute of Electrical and
Electronics Engineers First International Conference on Neural Networks. San Diego, n-197-20.
Landauer, T. K. (1986). How much do people remember? Some estimates on the quantity of learned
information in long-term memory. Cognitive Science. 10, 477-494.
Lapedes, A. S. & Farber, R. M. (1988). How Neural Nets Work. LAUR-88-418: Los Alamos.
Lapedes, A. S. & Farber, R. M. (1988). Nonlinear Signal Processing using Neural Networks: Prediction and
system modeling. Biological Cybernetics. To appear.
Lippman, R. P. (1987). An introduction to computing with neural networks. Institute of Electrical and
Electronics Engineers ASSP Magazine. April, 4- 22.
Lorenz, E. N. (1963). Deterministic Nonperiodic Flow. Journal of Atmospheric Sciences. 20, 130-141.
Mandelbrot, B. (1982). The Fractal Geometry of Nature. San Francisco: Freeman.
Meyer, D. E. & Schvaneveldt, R. W. (1971). Facilitation in recognizing pairs of words: Evidence of a
dependence between retrieval operations. Journal of Experimental Psychology. 90, 227-234.
Pineda, F. J. (1987). Generalization of Back-Propagation to Recurrent Neural Networks. Physical Review
Letters. 59, 2229-2232.
Pinker, S. & Prince, A. (1988). On Language and Connectionism: Analysis of a parallel distributed processing
model of language acquisition. Cognition. 28, 73-193.
Pollack, J. B. (1987). Cascaded Back Propagation on Dynamic Connectionist Networks. In Proceedings of the
Ninth Conference of the Cognitive Science Society. Seattle, 391-404.
Pollack, J. B. (1987). On Connectionist Models of Natural Language Processing. Ph.D. Thesis, Urbana:
Computer Science Department, University of Illinois. (Available as MCCS-87-100, Computing
Research Laboratory, Las Cruces, NM)
Pollack, J. B. (1988). Recursive Auto-Associative Memory: Devising Compositional Distributed
Representations. In Proceedings of the Tenth Annual Conference of the Cognitive Science Society.
Montreal, 33-39.
Pollack, J. B. (1988). Recursive Auto-Associative Memory: Devising Compositional Distributed
Representations. MCCS-88-124, Las Cruces: Computing Research Laboratory, New Mexico State
University.
Rumelhart, D. E., Hinton, G. & Williams, R. (1986). Learning Internal Representations through Error
Propagation. In D. E. Rumelhart, J. L. McClelland & the PDP research Group, (Eds.), Parallel
Distributed Processing: Experiments in the Microstructure of Cognition, Vol. 1. Cambridge: MIT
Press.
Saund, E. (1987). Dimensionality Reduction and Constraint in Later Vision. In Proceedings of the Ninth
Annual Conference of the Cognitive Science Society. Seattle, 908-915.
Willshaw, D. J. (1981). Holography, Associative Memory, and Inductive Generalization. In G. E. Hinton & J.
A. Anderson, (Eds.), Parallel models of associative memory. Hillsdale: Lawrence Erlbaum
Associates.
Wiser, M. & Carey, S. (1983). When heat and temperature were one. In D. Gentner & A. Stevens, (Eds.),
Mental Models. Hillsdale: Erlbaum.
Wolfram, S. (1984). Universality and Complexity in Cellular Automata. Physica. 10D, 1-35.
Active Learning in Multilayer
Perceptrons
Kenji Fukumizu
Information and Communication R&D Center, Ricoh Co., Ltd.
3-2-3, Shin-yokohama, Yokohama, 222 Japan
E-mail: fuku@ic.rdc.ricoh.co.jp
Abstract
We propose an active learning method with hidden-unit reduction,
which is devised specially for multilayer perceptrons (MLP). First,
we review our active learning method, and point out that many
Fisher-information-based methods applied to MLP have a critical
problem: the information matrix may be singular. To solve this
problem, we derive the singularity condition of an information matrix, and propose an active learning technique that is applicable to
MLP. Its effectiveness is verified through experiments.
1 INTRODUCTION
When one trains a learning machine using a set of data given by the true system, its
ability can be improved if one selects the training data actively. In this paper, we
consider the problem of active learning in multilayer perceptrons (MLP). First, we
review our method of active learning (Fukumizu et al., 1994), in which we prepare a
probability distribution and obtain training data as samples from the distribution.
This methodology leads us to an information-matrix-based criterion similar to other
existing ones (Fedorov, 1972; Pukelsheim, 1993).
Active learning techniques have been recently used with neural networks (MacKay,
1992; Cohn, 1994). Our method, however, as well as many other ones has a crucial
problem: the required inverse of an information matrix may not exist (White, 1989).
We propose an active learning technique which is applicable to three-layer perceptrons. Developing a theory on the singularity of a Fisher information matrix, we
present an active learning algorithm which keeps the information matrix nonsingular. We demonstrate the effectiveness of the algorithm through experiments.
K. FUKUMIZU
2 STATISTICALLY OPTIMAL TRAINING DATA
2.1 A CRITERION OF OPTIMALITY
We review the criterion of statistically optimal training data (Fukumizu et al., 1994).
We consider the regression problem in which the target system maps a given input
x to y according to

y = f(x) + Z,

where f(x) is a deterministic function from R^L to R^M, and Z is a random variable
whose law is a normal distribution N(0, σ²I_M) (I_M is the unit M × M matrix).
Our objective is to estimate the true function f as accurately as possible.
Let {f(x; θ)} be a parametric model for estimation. We use the maximum likelihood
estimator (MLE) θ̂ for training data {(x^(ν), y^(ν))}_{ν=1}^N, which minimizes the sum of
squared errors in this case. In theoretical derivations, we assume that the target
function f is included in the model and equal to f(·; θ₀).
We make a training example by choosing x^(ν) to try, observing the resulting output
y^(ν), and pairing them. The problem of active learning is how to determine input
data {x^(ν)}_{ν=1}^N to minimize the estimation error after training. Our approach is
a statistical one using a probability for training, r(x), and choosing {x^(ν)}_{ν=1}^N as
independent samples from r(x) to minimize the expectation of the MSE in the
actual environment:

E_MSE = σ² + E_{(x^(ν), y^(ν))} [ ∫ ||f(x; θ̂) − f(x; θ₀)||² dQ(x) ].    (1)

In the above equation, Q is the environmental probability which gives input vectors
to the true system in the actual environment, and E_{(x^(ν), y^(ν))} means the expectation
over training data. Eq. (1), therefore, shows the average error of the trained
machine that is used as a substitute of the true function in the actual environment.
2.2 REVIEW OF AN ACTIVE LEARNING METHOD
Using statistical asymptotic theory, Eq. (1) is approximated as follows:

E_MSE = σ² + (σ²/N) Tr[I(θ₀) J⁻¹(θ₀)] + O(N^(−3/2)),    (2)
where the matrices I and J are (Fisher) information matrices defined by

I(θ) = ∫ I(x; θ) dQ(x),    J(θ) = ∫ I(x; θ) r(x) dx.
The essential part of Eq. (2) is Tr[I(θ₀) J⁻¹(θ₀)], computed from the unavailable parameter θ₀. We have proposed a practical algorithm in which we replace θ₀ with θ̂,
prepare a family of probabilities {r(x; v) | v : parameter} to choose training samples,
and optimize v and θ̂ iteratively (Fukumizu et al., 1994).
Active Learning Algorithm
1. Select an initial training data set D_[0] from r(x; v_[0]), and compute θ̂_[0].
2. k := 1.
3. Compute the optimal v = v_[k] to minimize Tr[I(θ̂_[k−1]) J⁻¹(θ̂_[k−1])].
4. Choose N_k new training data from r(x; v_[k]) and let D_[k] be a union of
D_[k−1] and the new data.
5. Compute the MLE θ̂_[k] based on the training data set D_[k].
6. k := k + 1 and go to 3.
The above method utilizes a probability to generate training data. It has the
advantage of making many data in one step compared to existing ones in which
only one data point is chosen in each step, though their criteria are similar to each
other.
3 SINGULARITY OF AN INFORMATION MATRIX
3.1 A PROBLEM ON ACTIVE LEARNING IN MLP
Hereafter, we focus on active learning in three-layer perceptrons with H hidden
units, N_H = {f(x; θ)}. The map f(x; θ) is defined by

f_i(x; θ) = Σ_{j=1}^H w_ij s(Σ_{k=1}^L u_jk x_k + ζ_j) + η_i,    (1 ≤ i ≤ M),    (3)
where s(t) is the sigmoidal function: s(t) = 1/(1 + e- t ).
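As a quick check of the notation in Eq. (3), the map f(x; θ) is a few lines of NumPy. The parameter layout below (W as M×H, U as H×L, plus bias vectors ζ and η) is an assumed packaging of θ for illustration:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def mlp_forward(x, W, U, zeta, eta):
    """Eq. (3): f_i(x) = sum_j W[i,j] * s(sum_k U[j,k] x_k + zeta[j]) + eta[i]."""
    hidden = sigmoid(U @ x + zeta)   # H hidden-unit activations
    return W @ hidden + eta          # M outputs

rng = np.random.default_rng(1)
L, H, M = 4, 7, 1                    # the sizes used in the second experiment below
W, U = rng.standard_normal((M, H)), rng.standard_normal((H, L))
zeta, eta = rng.standard_normal(H), rng.standard_normal(M)
y = mlp_forward(rng.standard_normal(L), W, U, zeta, eta)
print(y.shape)   # → (1,)
```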
Our active learning method as well as many other ones requires the inverse of an
information matrix J. The information matrix of MLP, however, is not always
invertible (White, 1989). Any statistical algorithms utilizing the inverse, then,
cannot be applied directly to MLP (Hagiwara et al., 1993). Such problems do not
arise in linear models, which almost always have a nonsingular information matrix.
3.2 SINGULARITY OF AN INFORMATION MATRIX OF MLP
The following theorem shows that the information matrix of a three-layer perceptron
is singular if and only if the network has redundant hidden units. We can deduce
that if the information matrix is singular, we can make it nonsingular by eliminating
redundant hidden units without changing the input-output map.
Theorem 1. Assume r(x) is continuous and positive at any x. Then, the Fisher
information matrix J is singular if and only if at least one of the following three
conditions is satisfied:
(1) u_j := (u_j1, ..., u_jL)^T = 0 for some j.
(2) w_j := (w_1j, ..., w_Mj) = 0^T for some j.
(3) For different j1 and j2, (u_j1, ζ_j1) = (u_j2, ζ_j2) or (u_j1, ζ_j1) = −(u_j2, ζ_j2).
The rough sketch of the proof is shown below. The complete proof will appear in a
forthcoming paper (Fukumizu, 1996).
Rough sketch of the proof. We know easily that an information matrix is singular if
and only if the derivatives {∂f(x; θ)/∂θ_a}_a are linearly dependent. The sufficiency can be proved easily.
To show the necessity, we show that the derivatives are linearly independent if none
of the three conditions is satisfied. Assume a linear relation:

Σ_{j=1}^H a_ij s(u_j · x + ζ_j) + a_i0 + Σ_{j=1}^H Σ_{k=1}^L β_jk w_ij s'(u_j · x + ζ_j) x_k + Σ_{j=1}^H β_j0 w_ij s'(u_j · x + ζ_j) = 0,    (1 ≤ i ≤ M).    (4)
We can show there exists a basis of R^L, (z^(1), ..., z^(L)), such that u_j · z^(l) ≠ 0 for
∀j, ∀l, and u_j1 · z^(l) + ζ_j1 ≠ ±(u_j2 · z^(l) + ζ_j2) for j1 ≠ j2, ∀l. We replace x in
eq. (4) by z^(l)t (t ∈ R). Let m_j^(l) := u_j · z^(l), S_j^(l) := {z ∈ C | z = ((2n+1)π√−1 − ζ_j)/m_j^(l), n ∈ Z}, and D^(l) := C − ∪_j S_j^(l). The points in S_j^(l) are the singularities
of s(m_j^(l) z + ζ_j). We define holomorphic functions on D^(l) as

Φ_i^(l)(z) := Σ_{j=1}^H a_ij s(m_j^(l) z + ζ_j) + a_i0 + Σ_{j=1}^H Σ_{k=1}^L β_jk w_ij s'(m_j^(l) z + ζ_j) x_k^(l) z + Σ_{j=1}^H β_j0 w_ij s'(m_j^(l) z + ζ_j),    (1 ≤ i ≤ M).
From eq. (4), we have Φ_i^(l)(t) = 0 for all t ∈ R. Using standard arguments on isolated
singularities of holomorphic functions, we know S_j^(l) are removable singularities of
Φ_i^(l)(z), and finally obtain

w_ij Σ_{k=1}^L β_jk x_k^(l) = 0,    w_ij β_j0 = 0,    a_ij = 0,    a_i0 = 0.
It is easy to see β_jk = 0. This completes the proof.
3.3
REDUCTION PROCEDURE
We introduce the following reduction procedure based on Theorem 1. Used during BP training, it eliminates redundant hidden units and keeps the information
matrix nonsingular. The criterion of elimination is very important, because excessive elimination of hidden units degrades the approximation capacity. We propose
an algorithm which does not increase the mean squared error on average. In the
following, let s_j := s(u_j · x + ζ_j) and ε(N) := A/N for a positive number A.
Reduction Procedure
1. If ||w_j||² ∫ (s_j − s(ζ_j))² dQ < ε(N), then eliminate the j-th hidden unit,
and η_i → η_i + w_ij s(ζ_j) for all i.
2. If ||w_j||² ∫ (s_j)² dQ < ε(N), then eliminate the j-th hidden unit.
3. If ||w_j2||² ∫ (s_j1 − s_j2)² dQ < ε(N) for different j1 and j2,
then eliminate the j2-th hidden unit and w_ij1 → w_ij1 + w_ij2 for all i.
4. If ||w_j2||² ∫ (1 − s_j1 − s_j2)² dQ < ε(N) for different j1 and j2,
then eliminate the j2-th hidden unit and w_ij1 → w_ij1 − w_ij2, and η_i → η_i + w_ij2 for all i.
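Under the assumption that Q can be sampled, each integral in the procedure can be estimated by Monte Carlo. The sketch below implements checks 1 and 3 for a single-output network; the function names, sample sizes, and A = 1 are illustrative choices, not the paper's code:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def check_constant_unit(w_j, u_j, zeta_j, xs, eps):
    """Check 1: unit j is nearly constant, so it can be folded into the biases."""
    s_j = sigmoid(xs @ u_j + zeta_j)
    return np.dot(w_j, w_j) * np.mean((s_j - sigmoid(zeta_j)) ** 2) < eps

def check_duplicate_units(w_j2, u_j1, z1, u_j2, z2, xs, eps):
    """Check 3: units j1, j2 compute nearly the same function, so merge them."""
    s1, s2 = sigmoid(xs @ u_j1 + z1), sigmoid(xs @ u_j2 + z2)
    return np.dot(w_j2, w_j2) * np.mean((s1 - s2) ** 2) < eps

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 2.0, size=(5000, 3))     # Monte Carlo samples from Q
eps = 1.0 / 5000                              # epsilon(N) = A/N with A = 1
u = np.zeros(3)                               # a unit with u_j = 0 is constant
print(check_constant_unit(np.array([0.5]), u, 0.7, xs, eps))  # → True
```

When a check fires, the corresponding weight transfer (e.g. folding w_ij s(ζ_j) into η_i) keeps the input-output map essentially unchanged, which is why the mean squared error does not increase on average.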
From Theorem 1, we know that w_j, u_j, (u_j2, ζ_j2) − (u_j1, ζ_j1), or (u_j2, ζ_j2) + (u_j1, ζ_j1)
can be reduced to 0 if the information matrix is singular. Let θ̃ ∈ N_K denote
the reduced parameter from θ̂ according to the above procedure. The above four
conditions are, then, given by calculating ∫ ||f(x; θ̃) − f(x; θ̂)||² dQ.
We briefly explain how the procedure keeps the information matrix nonsingular
and does not increase E_MSE with high probability. First, suppose det J(θ₀) = 0; then
there exists θ₀' ∈ N_K (K < H) such that f(x; θ₀) = f(x; θ₀') and det J(θ₀') ≠ 0
in N_K. The elimination of hidden units up to K, of course, does not increase the
E_MSE. Therefore, we have only to consider the case in which det J(θ₀) ≠ 0 and
hidden units are eliminated.
Suppose ∫ ||f(x; θ₀') − f(x; θ₀)||² dQ > O(N⁻¹) for any reduced parameter θ₀' from
θ₀. The probability of satisfying ∫ ||f(x; θ̃) − f(x; θ̂)||² dQ < A/N is very small for
a sufficiently small A. Thus, the elimination of hidden units occurs with very tiny
probability. Next, suppose ∫ ||f(x; θ₀') − f(x; θ₀)||² dQ = O(N⁻¹). Let θ̃ ∈ N_K be
a reduced parameter made from θ̂ with the same procedure as we obtain θ₀' from
θ₀. We will show for a sufficiently small A,

E[∫ ||f(x; θ̃) − f(x; θ₀)||² dQ] ≥ E[∫ ||f(x; θ̂_K) − f(x; θ₀)||² dQ],

where θ̂_K is the MLE computed in N_K. We write θ = (θ^(1), θ^(2)), in which θ^(2) is
changed to 0 in reduction, changing the coordinate system if necessary. The Taylor
expansion and asymptotic theory give
E[∫ ||f(x; θ̂_K) − f(x; θ₀)||² dQ] ≈ ∫ ||f(x; θ₀') − f(x; θ₀)||² dQ + (σ²/N) Tr[I_11(θ₀') J_11⁻¹(θ₀')],

E[∫ ||f(x; θ̃) − f(x; θ̂)||² dQ] ≈ ∫ ||f(x; θ₀') − f(x; θ₀)||² dQ + (σ²/N) Tr[I_22(θ₀') J_22⁻¹(θ₀')],

where I_ii and J_ii denote the local information matrices w.r.t. θ^(i) (i = 1, 2). Thus,

E[∫ ||f(x; θ̃) − f(x; θ₀)||² dQ] − E[∫ ||f(x; θ̂_K) − f(x; θ₀)||² dQ]
≈ −E[∫ ||f(x; θ̃) − f(x; θ̂)||² dQ] − (σ²/N) Tr[I_11(θ₀') J_11⁻¹(θ₀')]
+ (σ²/N) Tr[I_22(θ₀') J_22⁻¹(θ₀')] + E[∫ ||f(x; θ̃) − f(x; θ₀)||² dQ].
Since the sum of the last two terms is positive, the l.h.s. is positive if E[∫ ||f(x; θ̂_K) −
f(x; θ̃)||² dQ] < B/N for a sufficiently small B. Although we cannot know the value
of this expectation, we can make the probability of holding this inequality very high
by taking a small A.
4 ACTIVE LEARNING WITH REDUCTION PROCEDURE
The reduction procedure keeps the information matrix nonsingular and makes the
active learning algorithm applicable to MLP even with surplus hidden units.
Active Learning with Hidden Unit Reduction
1. Select an initial training data set D_[0] from r(x; v_[0]), and compute θ̂_[0].
2. k := 1, and do REDUCTION PROCEDURE.
3. Compute the optimal v = v_[k] to minimize Tr[I(θ̂_[k−1]) J⁻¹(θ̂_[k−1])], using
the steepest descent method.
4. Choose N_k new training data from r(x; v_[k]) and let D_[k] be a union of
D_[k−1] and the new data.
5. Compute the MLE θ̂_[k] based on the training data D_[k] using BP with
REDUCTION PROCEDURE.
6. k := k + 1 and go to 3.
The BP with reduction procedure is applicable not only to active learning, but to
a variety of statistical techniques that require the inverse of an information matrix.
We do not discuss it in this paper, however.
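For intuition on step 3 of both algorithms, the criterion Tr[I(θ̂) J⁻¹(θ̂)] can be estimated by Monte Carlo from the model's output gradients. The sketch below does this for a simple linear model (where J is safely nonsingular, sidestepping the MLP singularity issue); the sampling distributions and sizes are illustrative assumptions:

```python
import numpy as np

def info_matrix(xs, grad):
    """Average of grad(x) grad(x)^T over samples: the (unscaled) Fisher information."""
    G = np.array([grad(x) for x in xs])
    return G.T @ G / len(G)

def criterion(xs_env, xs_train, grad):
    """Tr[I J^{-1}]: I from the environment Q, J from the training density r."""
    I = info_matrix(xs_env, grad)
    J = info_matrix(xs_train, grad)
    return np.trace(I @ np.linalg.inv(J))

# Linear model f(x; theta) = theta0 + theta1 * x, so the gradient is (1, x).
grad = lambda x: np.array([1.0, x])
rng = np.random.default_rng(0)
xs_env = rng.normal(0.0, 2.0, 10000)          # environment Q = N(0, 4)
narrow = rng.normal(0.0, 0.5, 10000)          # training concentrated near 0
wide = rng.normal(0.0, 2.0, 10000)            # training matched to Q
print(criterion(xs_env, narrow, grad) > criterion(xs_env, wide, grad))  # → True
```

Training data concentrated far from where the environment queries the model inflates Tr[I J⁻¹], so minimizing the criterion over v pushes the training density toward the informative regions of Q, which is what the steepest-descent step does in the algorithm above.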
[Figure 1 appeared here. Left panel: averaged learning curves (MSE, log scale, vs. number of training data) for active and passive learning, each with [Avg−Sd, Avg+Sd] bands. Right panel: a learning curve and the number of hidden units in a typical run.]
Figure 1: Active/Passive Learning: f(x) = s(x)
5 EXPERIMENTS
We demonstrate the effect of the proposed active learning algorithm through experiments. First we use a three-layer model with 1 input unit, 3 hidden units, and
1 output unit. The true function f is an MLP network with 1 hidden unit; the
information matrix is then singular at θ₀. The environmental probability, Q, is
a normal distribution N(0, 4). We evaluate the generalization error in the actual
environment using the following mean squared error of the function values:
∫ ||f(x; θ) − f(x)||² dQ.
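Such an integral has no closed form for an MLP, but it is straightforward to estimate by sampling from Q. A small Monte-Carlo sketch of this evaluation (our own illustration, with scalar outputs):

```python
import math
import random

def mse_under_Q(f_hat, f_true, sigma=2.0, n=20000, seed=1):
    # Monte-Carlo estimate of E_Q[ ||f_hat(x) - f_true(x)||^2 ]
    # with the environmental distribution Q = N(0, sigma^2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        total += (f_hat(x) - f_true(x)) ** 2
    return total / n
```

The estimate converges to the true generalization error in the actual environment as n grows.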
We set the deviation in the true system to 0.01. As the family of distributions
for training, {r(x; v)}, a mixture model of 4 normal distributions is used. In each
step of active learning, 100 new samples are added. A network is trained using
online BP, presented with all training data 10000 times in each step, and the reduction procedure is applied once every 100 cycles between the 5000th and 10000th cycles. We
run 30 trainings, changing the seed of the random numbers. For comparison, we train a
network passively, based on training samples given by the probability Q.
Fig.1 shows the averaged learning curves of active/passive learning and the number
of hidden units in a typical learning curve. The advantage of the proposed active
learning algorithm is clear. We find that the algorithm has the expected effect on
a simple, ideal approximation problem.
Second, we apply the algorithm to a problem in which the true function is not
included in the MLP model. We use an MLP with 4 input units, 7 hidden units, and 1
output unit. The true function is given by f(x) = erf(x_1), where erf(t) is the error
function. The graph of the error function resembles that of the sigmoidal function,
while they never coincide under any affine transform. We set Q = N(0, 25 × I_4). We
train a network actively/passively based on 10 data sets, and evaluate the MSEs of
the function values. Other conditions are the same as those of the first experiment.
Fig. 2 shows the averaged learning curves and the number of hidden units in a
typical learning curve. We find that the active learning algorithm reduces the errors
even though the theoretical condition is not perfectly satisfied in this case. This suggests
the robustness of our active learning algorithm.
Active Learning in Multilayer Perceptrons
Figure 2: Active/Passive Learning: f(x) = erf(x_1). (Left: averaged learning curves of active and passive learning against the number of training data. Right: the learning curve and the number of hidden units in a typical run.)

6
CONCLUSION
We review statistical active learning methods and point out a problem in their application to MLP: the required inverse of an information matrix does not exist if the
network has redundant hidden units. We characterize the singularity condition of
an information matrix and propose an active learning algorithm which is applicable
to MLP with any number of hidden units. The effectiveness of the algorithm is
verified through computer simulations, even when the theoretical assumptions are
not perfectly satisfied.
References
D. A. Cohn. (1994) Neural network exploration using optimal experiment design.
In J. Cowan et al. (eds.), Advances in Neural Information Processing Systems 6,
679-686. San Mateo, CA: Morgan Kaufmann.
V. V. Fedorov. (1972) Theory of Optimal Experiments. NY: Academic Press.
K. Fukumizu. (1996) A Regularity Condition of the Information Matrix of a Multilayer Perceptron Network. Neural Networks, to appear.
K. Fukumizu & S. Watanabe. (1994) Error Estimation and Learning Data Arrangement for Neural Networks. Proc. IEEE Int. Conf. Neural Networks: 777-780.
K. Hagiwara, N. Toda & S. Usui. (1993) On the problem of applying AIC to
determine the structure of a layered feed-forward neural network. Proc. 1993 Int.
Joint Conf. Neural Networks: 2263-2266.
D. MacKay. (1992) Information-based objective functions for active data selection.
Neural Computation 4(4): 305-318.
F. Pukelsheim. (1993) Optimal Design of Experiments. NY: John Wiley & Sons.
H. White. (1989) Learning in artificial neural networks: A statistical perspective.
Neural Computation 1(4): 425-464.
Investment Learning with Hierarchical PSOMs
Jorg Walter and Helge Ritter
Department of Information Science
University of Bielefeld, D-33615 Bielefeld, Germany
Email: {walter.helge}@techfak.uni-bielefeld.de
Abstract
We propose a hierarchical scheme for rapid learning of context dependent
"skills" that is based on the recently introduced "Parameterized SelfOrganizing Map" ("PSOM"). The underlying idea is to first invest some
learning effort to specialize the system into a rapid learner for a more
restricted range of contexts.
The specialization is carried out by a prior "investment learning stage",
during which the system acquires a set of basis mappings or "skills" for
a set of prototypical contexts. Adaptation of a "skill" to a new context
can then be achieved by interpolating in the space of the basis mappings
and thus can be extremely rapid.
We demonstrate the potential of this approach for the task of a 3D visuomotor map for a Puma robot and two cameras. This includes the forward and backward robot kinematics in 3D end effector coordinates, the
2D+2D retina coordinates and also the 6D joint angles. After the investment phase the transformation can be learned for a new camera set-up
with a single observation.
1 Introduction
Most current applications of neural network learning algorithms suffer from a large number
of required training examples. This may not be a problem when data are abundant, but in
many application domains, for example in robotics, training examples are costly and the
benefits of learning can only be exploited when significant progress can be made within a
very small number of learning examples.
In the present contribution, we propose in section 3 a hierarchically structured learning approach which can be applied to many learning tasks that require system identification from
a limited set of observations. The idea builds on the recently introduced "Parameterized
Self-Organizing Maps" ("PSOMs"), whose strength is learning maps from a very small
number of training examples [8, 10, 11].
In [8], the feasibility of the approach was demonstrated in the domain of robotics, among
them, the learning of the inverse kinematics transform of a full 6-degree of freedom (DOF)
Puma robot. In [10], two improvements were introduced, both achieve a significant increase in mapping accuracy and computational efficiency. In the next section, we give a
short summary of the PSOM algorithm; it is described in more detail in [11], which also
presents applications in the domain of visual learning.
2 The PSOM Algorithm
A Parameterized Self-Organizing Map is a parametrized, m-dimensional hyper-surface
M = {w(s) ∈ X ⊆ ℝ^d | s ∈ S ⊆ ℝ^m} that is embedded in some higher-dimensional
vector space X. M is used in a very similar way as the standard discrete self-organizing
map: given a distance measure dist(x, x') and an input vector x, a best-match location
s*(x) is determined by minimizing
s* := argmin_{s ∈ S} dist(x, w(s))    (1)
The associated "best-match vector" w(s*) provides the best approximation of input x in the
manifold M. If we require dist(·) to vary only in a subspace X^in of X (i.e., dist(x, x') =
dist(Px, Px'), where the diagonal matrix P projects into X^in), s*(x) actually will only
depend on Px. The projection (1−P)w(s*(x)) ∈ X^out of w(s*(x)) lies in the orthogonal
subspace X^out and can be viewed as a (non-linear) associative completion of a fragmentary
input x of which only the part Px is reliable. It is this associative mapping that we will
exploit in applications of the PSOM.
Figure 1: Best-match s* and associative completion w(s*(x)) of input x1, x2 (Px) given in the input subspace X^in. Here in this simple case, the m = 1 dimensional manifold M is constructed to pass through four data vectors (square marked). The left side shows the d = 3 dimensional embedding space X = X^in × X^out and the right side depicts the best-match parameter s*(x) on the parameter manifold S together with the "hyper-lattice" A of parameter values (indicated by white squares) belonging to the data vectors.

M is constructed as a manifold that passes through a given set D of data examples (Fig. 1 depicts the situation schematically). To this end, we assign to each data sample a point a ∈ S and denote the associated data sample by w_a. The set A of the assigned parameter values a should provide a good discrete "model" of the topology of our data set (Fig. 1 right). The assignment between data vectors and points a must be made in a topology-preserving fashion to ensure good interpolation by the manifold M that is obtained by the following steps.
For each point a ∈ A, we construct a "basis function" H(·, a; A), or simplified¹ H(·, a):
S → ℝ, that obeys (i) H(a_i, a_j) = 1 for i = j and vanishes at all other points of A, i ≠ j
(orthonormality condition), and (ii) Σ_{a∈A} H(a, s) = 1 for all s ("partition of unity" condition). We will mainly be concerned with the case of A being an m-dimensional rectangular
hyper-lattice; in this case, the functions H(·, a) can be constructed as products of Lagrange
interpolation polynomials, see [11]. Then,
w(s) = Σ_{a∈A} H(s, a) w_a.    (2)
defines a manifold M that passes through all data examples. Minimizing dist(·) in Eq. 1
can be done by some iterative procedure, such as gradient descent or, preferably, the
Levenberg-Marquardt algorithm [11]. This makes M into the attractor manifold of a (discrete-time) dynamical system. Since M contains the data set D, any at least m-dimensional
"fragment" of a data example x = w ∈ D will be attracted to the correct completion w.
Inputs x ∉ D will be attracted to some approximating manifold point.
This approach is in many ways the continuous analog of the standard discrete self-organizing map. Particularly attractive features are (i) that the construction of the map
manifold is direct from a small set of training vectors, without any need for time-consuming adaptation sequences, (ii) the capability of associative completion, which allows
one to freely redefine variables as inputs or outputs (by changing dist(·) on demand; e.g. one
can reverse the mapping direction), and (iii) the possibility of having attractor manifolds
instead of just attractor points.
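A one-dimensional instance (m = 1, as in Fig. 1) makes the construction concrete. The sketch below builds w(s) from Lagrange basis polynomials (Eq. 2) and finds the best match of Eq. 1 by brute-force search rather than Levenberg-Marquardt; the data set is a hypothetical parabola, chosen only so the expected completion is easy to check:

```python
def lagrange_basis(knots, j, s):
    # H(s, a_j): 1 at knots[j], 0 at every other knot (orthonormality),
    # and the basis sums to 1 for every s (partition of unity)
    v = 1.0
    for i, a in enumerate(knots):
        if i != j:
            v *= (s - a) / (knots[j] - a)
    return v

def psom_curve(knots, data, s):
    # w(s) = sum_a H(s, a) * w_a   (Eq. 2); data[j] is the vector w_{a_j}
    dim = len(data[0])
    return [sum(lagrange_basis(knots, j, s) * data[j][k] for j in range(len(data)))
            for k in range(dim)]

def best_match(knots, data, x_in, steps=2000):
    # Eq. 1 by brute force; here the input subspace X^in is the first coordinate
    best_s, best_d = None, float("inf")
    lo, hi = min(knots), max(knots)
    for t in range(steps + 1):
        s = lo + (hi - lo) * t / steps
        d = (psom_curve(knots, data, s)[0] - x_in) ** 2
        if d < best_d:
            best_s, best_d = s, d
    return best_s
```

Given only the first coordinate of a data-like point, the best match s* recovers it and w(s*) supplies the associative completion in the remaining coordinates.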
3
Hierarchical PSOMs: Structuring Learning
Rapid learning requires that the structure of the learner is well matched to its task. However, if one does not want to pre-structure the learner by hand, learning again seems to be
the only way to achieve the necessary pre-structuring. This leads to the idea of structuring
learning itself and motivates splitting learning into two stages:
(i) The earlier stage is considered as an "investment stage" that may be slow and that may
require a larger number of examples. It has the task to pre-structure the system in such a
way that in the later stage,
(ii) the now specialized system can learn fast and with extremely few examples.
To be concrete, we consider specialized mappings or "skills", which are dependent on
the state of the system or system environment. Pre-structuring the system is achieved by
learning a set of basis mappings, each in a prototypical system context or environment state
("investment phase".) This imposes a strong need for an efficient learning tool- efficient
in particular with respect to the number of required training data points.
The PSOM network appears as a very attractive solution: Fig. 2 shows a hierarchical
arrangement of two PSOMs. The mapping from input to output spaces is learned and performed by the "Transformation-PSOM" ("T-PSOM").
During the first learning stage, the investment learning phase, the T-PSOM is used to learn
a set of basis mappings T_j: X1 ↔ X2, or context-dependent "skills", each of which gets encoded as an internal parameter or "weight" set w_j. The
¹ In contrast to kernel methods, the basis functions may depend on the relative position to all other
knots. However, we drop the dependency H(a, s) = H(a, s; A) on the latter in our notation.
Figure 2: The transforming "T-PSOM" maps between input and output spaces (changing direction
on demand). In a particular environmental context, the correct transformation is learned and encoded
in the internal parameter or weight set w. Together with a characteristic environment observation
u_ref, the weight set w is employed as a training vector for the second-level "Meta-PSOM". After
learning a structured set of mappings, the Meta-PSOM is able to generalize the mapping to a new
environment. When encountering any change, the environment observation u_ref gives input to the
Meta-PSOM and determines the new weight set w for the basis T-PSOM.
second level PSOM ("Meta-PSOM") is responsible for learning the association between
the weight sets Wj of the first level T-PSOM and their situational contexts.
The system context is characterized by a suitable environment observation, denoted u_ref;
see Fig. 2.
The context situations are chosen such that the associated basis mappings already capture a
significant amount of the underlying model structure, while still being sufficiently general
to capture the variations with respect to which system environment identification is desired.
For the training of the second-level Meta-PSOM, each constructed T-PSOM weight set w_j
serves, together with its associated environment observation u_ref,j, as a high-dimensional
training data vector.
Rapid learning is the return on the effort invested in the longer pre-training phase. As a result, the task of learning the "skill" associated with an unknown system context now takes
the form of an immediate Meta-PSOM → T-PSOM mapping: the Meta-PSOM maps the
new system context observation u_ref,new into the parameter set w_new for the T-PSOM.
Equipped with w_new, the T-PSOM provides the desired mapping T_new.
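This division of labor can be caricatured in a few lines. In the toy below, each basis "skill" is a single scaling weight learned in a prototype context, and plain linear interpolation stands in for the Meta-PSOM; all values and names are invented for illustration, not taken from the paper:

```python
def interp1(knots, values, u):
    # piecewise-linear interpolation; a crude stand-in for the Meta-PSOM
    for u0, v0, u1, v1 in zip(knots, values, knots[1:], values[1:]):
        if u0 <= u <= u1:
            t = (u - u0) / (u1 - u0)
            return v0 + t * (v1 - v0)
    raise ValueError("context outside the prototype range")

# investment phase: basis "skills" T_j(x) = a_j * x, learned in
# prototype contexts u_j (hypothetical values)
contexts = [0.0, 1.0, 2.0]
weights = [1.0, 3.0, 5.0]

def t_psom(x, w):
    # the basis mapping, parameterized by its "weight set" (one number here)
    return w * x

def rapid_adapt(u_new):
    # one context observation -> interpolated weight set for the T-PSOM
    return interp1(contexts, weights, u_new)
```

A single observation of the new context suffices: `rapid_adapt` produces a full weight set, and the T-PSOM is immediately usable with it.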
4
Rapid Learning of a Stereo Visuo-motor Map
In the following, we demonstrate the potential of the investment learning approach with
the task of fast learning of 3D visuo-motor maps for a robot manipulator seen by a pair of
movable cameras. Thus, in this demonstration, each situated context is given by a particular
camera arrangement, and the associated "skill" is the mapping between camera and robot
coordinates.
The Puma robot is positioned behind a table and the entire scene is displayed on two windows on a computer monitor. By mouse-pointing, a user can, for example, select on the
monitor one point and the position on a line appearing in the other window, to indicate a
good position for the robot end effector, see Fig. 3. This requires computing the transformation T between pixel coordinates u = (u^L, u^R) on the monitor images and the corresponding world coordinates x in the robot reference frame, or, alternatively, the corresponding
six robot joint angles θ (6 DOF). Here we demonstrate an integrated solution, offering both
solutions with the same network.
The T-PSOM learns each individual basis mapping T_j by visiting a rectangular grid set
of end effector positions (here a 3×3×3 grid in x of size 40 × 40 × 30 cm³) jointly with
Figure 3: Rapid learning of the 3D visuo-motor coordination for two cameras. The basis T-PSOM
(m = 3) is capable of mapping to and from three coordinate systems: Cartesian robot world coordinates, the robot joint angles (6-DOF), and the location of the end-effector in coordinates of the
two camera retinas. Since the left and right camera can be relocated independently, the weight set of
T-PSOM is split, and parts W L, W R are learned in two separate Meta-PSOMs ("L" and "R").
the joint angle tuple θ_i and the location in camera retina coordinates (2D in each camera),
u_i^L and u_i^R. Thus the training vectors w_ai for the construction of the T-PSOM are the tuples
(x_i, θ_i, u_i^L, u_i^R).
However, each Tj solves the mapping task only for the current camera arrangement, for
which Tj was learned. Thus there is not yet any particular advantage to other, specialized
methods for camera calibration [1]. The important point is that we will now employ the
Meta-PSOM to interpolate in the space of the mappings {T_j}.
To keep the number of prototype mappings manageable, we reduce some DOFs of the
cameras by calling for a fixed focal length, camera tripod height, and twist joint. To constrain
the elevation and azimuth viewing angles, we require one landmark x_fix to remain visible
in a constant image position. This leaves two free parameters per camera, which can now
be determined by one extra observation of a chosen auxiliary world reference point x_ref.
We denote the camera image coordinates of x_ref by u_ref = (u_ref^L, u_ref^R). By reuse of the
cameras as "environment sensors", u_ref now implicitly encodes the two camera positions.
In the investing pre-training phase, nine mappings T_j are learned by the T-PSOM, each
camera visiting a 3 × 3 grid, sharing the set of visited robot positions x_i. As Fig. 2 suggests,
normally the entire weight set w serves as part of the training vector to the Meta-PSOM.
Here the problem becomes factorized, since the left and right camera change tripod place
independently: the weight set of the T-PSOM is split, and the two parts can be learned in
separate Meta-PSOMs. Each training vector w_aj for the left-camera Meta-PSOM consists
of the context observation u_ref^L and the T-PSOM weight set part w^L = (u_1^L, ..., u_27^L)
(analogously for the right-camera Meta-PSOM).

This enables, in the following phase, rapid learning for new, unknown camera places.
On the basis of one single observation u_ref, the desired transformation T is constructed.
As visualized in Fig. 3, u_ref serves as the input to the second-level Meta-PSOMs. Their
outputs are interpolations between previously learned weight sets, and they project directly
into the weight set of the basis-level T-PSOM.
The resulting T-PSOM can map in various directions. This is achieved by specifying a
suitable distance function dist(·) via the projection matrix P, e.g.:

x(u) = F_T-PSOM^{u→x}(u; w^L(u_ref^L), w^R(u_ref^R))    (3)
θ(u) = F_T-PSOM^{u→θ}(u; w^L(u_ref^L), w^R(u_ref^R))    (4)

u(x) = F_T-PSOM^{x→u}(x; w^L(u_ref^L), w^R(u_ref^R))    (5)

w^L(u_ref^L) = F_Meta-PSOM,L(u_ref^L; ω^L); analogously w^R(u_ref^R)    (6)
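The direction switching in Eqs. 3 to 5 comes entirely from the projection matrix P in the distance function. A minimal stand-alone illustration with a hypothetical one-dimensional manifold embedded in 2D (not the robot mapping itself): masking the first coordinate queries the forward direction, masking the second queries the inverse.

```python
def masked_dist(x, w, mask):
    # dist restricted by the diagonal 0/1 projection P: only masked coordinates count
    return sum(m * (a - b) ** 2 for m, a, b in zip(mask, x, w))

def complete(x, mask, curve, lo=-3.0, hi=3.0, steps=4000):
    # associative completion: brute-force best-match search over s, then read w(s*)
    best_w, best_d = None, float("inf")
    for t in range(steps + 1):
        s = lo + (hi - lo) * t / steps
        w = curve(s)
        d = masked_dist(x, w, mask)
        if d < best_d:
            best_w, best_d = w, d
    return best_w

# a hypothetical manifold: the line x2 = 2*x1 + 1, parameterized by s
line = lambda s: [s, 2.0 * s + 1.0]
```

The same stored manifold answers both queries; only the mask (the diagonal of P) changes between the forward and the inverse use.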
Table 1 shows experimental results averaged over 100 random locations x (from within the
range of the training set) seen in 10 different camera set-ups, from within the 3 × 3 square
grid of the training positions, located at a normal distance of about 125 cm (center to work
space center, 1 m², total range of about 55-210 cm), covering a disparity angle range of
25°-150°. For identification of the positions x in image coordinates, a tiny light source
was installed at the manipulator tip, and a simple procedure automated the finding of u with
about ±0.8 pixel accuracy. For the achieved precision it is important to share the same
set of robot positions x_i, and that the sets are topologically ordered, here as a 3×3×3 goal
position grid (i) and two 3 × 3 camera location (j) grids.
Mapping Direction                          | Direct trained T-PSOM | NRMS  | T-PSOM with Meta-PSOM | NRMS
pixel u → x_robot (Cartesian error Δx)     | 1.4 mm                | 0.008 | 4.4 mm                | 0.025
Cartesian x → u (pixel error Δu)           | 1.2 pix               | 0.010 | 3.3 pix               | 0.025
pixel u → θ_robot (Cartesian error Δx)     | 3.8 mm                | 0.023 | 5.4 mm                | 0.030
Table 1: Mean Euclidean deviation (mm or pixel) and normalized root mean square error (NRMS)
for 1000 points total in comparison of a direct trained T-PSOM and the described hierarchical MetaPSOM network, in the rapid learning mode after one single observation.
5 Discussion and Conclusion
A crucial question is how to structure systems, such that learning can be efficient. In
the present paper, we demonstrated a hierarchical approach that is motivated by a decomposition of the learning phase into two different stages: A longer, initial learning phase
"invests" effort into a gradual and domain-specific specialization of the system. This investment learning does not yet produce the final solution, but instead pre-structures the
system such that the subsequently final specialization to a particular solution (within the
chosen domain) can be achieved extremely rapidly.
To implement this approach, we used a hierarchical architecture of mappings. While in
principle various kinds of network types could be used for this mappings, a practically
feasible solution must be based on a network type that allows to construct the required
basis mappings from rather small number of training examples. In addition, since we use
interpolation in weight space, similar mappings should give rise to similar weight sets to
make interpolation meaningful. PSOMs meet these requirements very well, since they allow
a direct, non-iterative construction of smooth mappings from rather small data sets. They
achieve this by generalizing the discrete self-organizing map [3, 9] into a continuous map
manifold, such that interpolation for new data points can benefit from topology information
that is not available to most other methods.
While PSOMs resemble local models [4, 5, 6] in that there is no interference between
different training points, their use of an orthogonal set of basis functions to construct the
map manifold puts them in an intermediate position between the extremes of local and of
fully distributed models.
A further very useful property in the present context is the ability of PSOMs to work as
an attractor network with a continuous attractor manifold. Thus a PSOM needs no fixed
designation of variables as inputs and outputs; instead, the projection matrix P can be used
to freely partition the full set of variables into input and output values. Values of the latter
are obtained by a process of associative completion.
Technically, the investment learning phase is realized by learning a set of prototypical basis
mappings, represented as weight sets of a T-PSOM, that attempt to cover the range of tasks
in the given domain. The capability for subsequent rapid specialization within the domain
is then provided by an additional mapping that maps a situational context into a suitable
combination of the previously learned prototypical basis mappings. The construction of
this mapping again is solved with a PSOM ("Meta"-PSOM) that interpolates in the space
of prototypical basis mappings that were constructed during the "investment phase".
We demonstrated the potential of this approach with the task of 3D visuo-motor mapping,
learnable with a single observation after repositioning a pair of cameras.
The achieved accuracy of 4.4 mm after learning from a single observation compares very
well with the distance range of 0.5-2.1 m of traversed positions. As further data become
available, the T-PSOM can certainly be fine-tuned to improve the performance to the level
of the directly trained T-PSOM.
The presented arrangement of a basis T-PSOM and two Meta-PSOMs further demonstrates
the possibility of splitting hierarchical learning into independently changing domain sets. When
the number of involved free context parameters grows, this factorization becomes increasingly
crucial for keeping the number of pre-trained prototype mappings manageable.
References
[1] K. Fu, R. Gonzalez and C. Lee. Robotics: Control, Sensing, Vision, and Intelligence. McGraw-Hill, 1987.
[2] F. Girosi and T. Poggio. Networks and the best approximation property. Biol. Cybern., 63(3):169-176, 1990.
[3] T. Kohonen. Self-Organization and Associative Memory. Springer, Heidelberg, 1984.
[4] J. Moody and C. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1:281-294, 1989.
[5] S. Omohundro. Bumptrees for efficient function, constraint, and classification learning. In NIPS*3, pages 693-699. Morgan Kaufmann Publishers, 1991.
[6] J. Platt. A resource-allocating network for function interpolation. Neural Computation, 3:213-255, 1991.
[7] M. Powell. Radial basis functions for multivariable interpolation: A review, pages 143-167. Clarendon Press, Oxford, 1987.
[8] H. Ritter. Parametrized self-organizing maps. In S. Gielen and B. Kappen, editors, ICANN'93 Proceedings, Amsterdam, pages 568-575. Springer Verlag, Berlin, 1993.
[9] H. Ritter, T. Martinetz, and K. Schulten. Neural Computation and Self-organizing Maps. Addison Wesley, 1992.
[10] J. Walter and H. Ritter. Local PSOMs and Chebyshev PSOMs - improving the parametrised self-organizing maps. In Proc. ICANN, Paris, volume 1, pages 95-102, October 1995.
[11] J. Walter and H. Ritter. Rapid learning with parametrized self-organizing maps. Neurocomputing, Special Issue, (in press), 1996.
Explorations with the Dynamic Wave
Model
Thomas P. Rebotier
Department of Cognitive Science
UCSD, 9500 Gilman Dr
La Jolla, CA 92093-0515
rebotier@cogsci.ucsd.edu

Jeffrey L. Elman
Department of Cognitive Science
UCSD, 9500 Gilman Dr
La Jolla, CA 92093-0515
elman@cogsci.ucsd.edu
Abstract
Following Shrager and Johnson (1995) we study growth of logical function complexity in a network swept by two overlapping
waves: one of pruning , and the other of Hebbian reinforcement of
connections. Results indicate a significant spatial gradient in the
appearance of both linearly separable and non linearly separable
functions of the two inputs of the network; the n.l.s. cells are much
sparser and their slope of appearance is sensitive to parameters in
a highly non-linear way.
1 INTRODUCTION
Both the complexity of the brain (and the concomitant difficulty of encoding that complexity through any direct genetic mapping), as well as the apparently high degree of cortical plasticity, suggest that a great deal of cortical structure is emergent rather than pre-specified. Several neural models have explored the emergence of complexity. Von der Malsburg (1973) studied the grouping of orientation selectivity by competitive Hebbian synaptic modification. Linsker (1986a, 1986b and 1986c) showed how spatial selection cells (off-center on-surround), orientation-selective cells, and finally orientation columns, emerge in successive layers from random input by simple, Hebbian-like learning rules. Miller (1992, 1994) studied the emergence of orientation-selective columns from activity-dependent competition between on-center and off-center inputs.
Kerzsberg, Changeux and Dehaene (1992) studied a model with a dual-aspect learning mechanism: Hebbian reinforcement of the connection strengths in case of correlated activity, and gradual pruning of immature connections. Cells in this model
were organized on a 2D grid , connected to each other according to a probability exponentially decreasing with distance , and received inputs from two different sources,
A and B, which might or might not be correlated. The analysis of the network revealed 17 different kinds of cells: those whose output after several cycles depended
on the network's initial state, and the 16 possible logical functions of two inputs.
Kerzsberg et al. found that learning and pruning created different patches of cells
implementing common logical functions, with strong excitation within the patches
and inhibition between patches.
Shrager and Johnson (1995) extended that work by giving the network structure in
space (structuring the inputs in intricated stripes) or in time, by having a Hebbian
learning occur in a spatiotemporal wave that passed through the network rather
than occurring everywhere simultaneously. Their motivation was to see if these
learning conditions might create a cascade of increasingly complex functions. The
approach was also motivated by developmental findings in humans and monkeys
suggesting a move of the peak of maximal plasticity from the primary sensory
and motor areas towards parietal and then frontal regions. Shrager and Johnson
classified the logical functions into three groups: the constants (order 0), those that
depend on one input only (order 1), those that depend on both inputs (order 2).
They found that a slow wave favored the growth of order 2 cells, whereas a fast
wave favored order 1 cells. However, they only varied the connection reinforcement
(the growth Trophic Factor), so that the still diffuse pruning affected the rightmost
connections before they could stabilize, resulting in an overall decrease which had
to be compensated for in the analysis.
In this work, we followed Shrager and Johnson in their study of the effect of a
dynamic wave of learning. We present three novel features. Firstly, both the growth
trophic factor (hereafter, TF) and the probability of pruning (by analogy, "death
factor", DF) travel in gaussian-shaped waves. Second, we classify the cells in 4, not
3, orders: order 3 is made of the non-linearly separable logical functions, whereas
the order 2 is now restricted to linearly separable logical functions of both inputs.
Third. we use an overall measure of network performance: the slope of appearance
of units of a given order. The density is neglected as a measure not related to the
specific effects we are looking for, namely, spatial changes in complexity. Thus, each
run of our network can be analyzed using 4 values: the slopes for units of order 0,
1, 2 and 3 (See Table 1.). This extreme summarization of functional information
allows us to explore systematically many parameters and to study their influence
over how complexity grows in space.
Table 1: Orders of logical complexity

    ORDER   FUNCTIONS
    0       True, False
    1       A, !A, B, !B
    2       A.B, !A.B, A.!B, !A.!B, AvB, !AvB, Av!B, !Av!B
    3       A xor B, A==B
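The classification in Table 1 can be checked mechanically. The sketch below (ours, not from the paper) enumerates all 16 Boolean functions of two inputs and assigns each its order; for two inputs, the only functions of both inputs that are not linearly separable are XOR and XNOR:

```python
from itertools import product

def order(f):
    """Order of a Boolean function f(a, b), following Table 1:
    0 = constant, 1 = depends on a single input,
    2 = linearly separable function of both inputs,
    3 = not linearly separable (for two inputs: XOR, XNOR)."""
    dep_a = any(f(0, b) != f(1, b) for b in (0, 1))
    dep_b = any(f(a, 0) != f(a, 1) for a in (0, 1))
    if not dep_a and not dep_b:
        return 0
    if not (dep_a and dep_b):
        return 1
    # linear separability: is there a threshold unit w1*a + w2*b >= t
    # that reproduces the truth table?  A small integer search suffices.
    for w1, w2, t in product((-1, 0, 1), (-1, 0, 1), (-1, 0, 1, 2)):
        if all(int(w1 * a + w2 * b >= t) == f(a, b)
               for a, b in product((0, 1), repeat=2)):
            return 2
    return 3

# enumerate all 16 truth tables on two inputs and tally the orders
counts = {0: 0, 1: 0, 2: 0, 3: 0}
for bits in product((0, 1), repeat=4):
    counts[order(lambda a, b, bits=bits: bits[2 * a + b])] += 1
print(counts)  # {0: 2, 1: 4, 2: 8, 3: 2}
```

The tally matches Table 1: two constants, four single-input functions, eight linearly separable functions of both inputs, and two non-linearly-separable ones.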
2 METHODS
Our basic network consisted of 4 columns of 50 units (one simulation verified the
scaling up of results, see section 3.2). Internal connections had a gaussian bandwidth and did not wrap around . All initial connections were of weight 1, so that the
connectivity weights given as parameters specified a number of labile connections.
Early investigations were made with a set of manually chosen parameters ("MANUAL"). Afterwards, two sets of parameters were determined by a Genetic Algorithm
(see Goldberg 1989): the first, "SYM", by maximizing the slope of appearance of order 3 units only, the second, "ASY", by optimizing jointly the appearance of order 2 and order 3 units. The "SYM" network keeps a symmetrical rate of presentation between inputs A and B. In contrast, the "ASY" net presents input
B much more often than input A. Parameters are specified in Table 2 and are in "natural" units: bandwidths and distances are in "cells apart", the trophic factor is homogeneous to a weight, pruning is a total probability. Initial values and pruning necessitated random number generation. We used a linear congruential generator (see p284 in Press 1988), so that given the same seed, two different machines could
produce exactly the same run. All the points of each Figure are means of several
(usually 40) runs with different random seeds and share the same series of random
seeds.
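The reproducibility device is worth a concrete sketch. Below is a minimal linear congruential generator (our own; the multiplier and increment are common textbook constants, not necessarily the ones the paper took from Press 1988). Because it uses only integer arithmetic, the same seed yields a bit-identical stream on any machine:

```python
class LCG:
    """Linear congruential generator x_{n+1} = (a*x_n + c) mod m.
    Pure integer arithmetic, so the same seed reproduces exactly the
    same stream of numbers on two different machines."""
    def __init__(self, seed, a=1664525, c=1013904223, m=2 ** 32):
        self.state, self.a, self.c, self.m = seed % m, a, c, m

    def uniform(self):
        """Next pseudo-random number in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

g1, g2 = LCG(42), LCG(42)
# two runs with the same seed are identical, as the paper requires
assert [g1.uniform() for _ in range(5)] == [g2.uniform() for _ in range(5)]
```

Sharing the same series of seeds across conditions, as the authors do, then makes every point of a figure a paired comparison over identical random draws.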
Table 2: Default parameters

    name  MAN.   SYM.   ASY.   description
    Wae   8.5    6.20   12     mean ini. weight of A excitatory connections
    Wai   6.5    5.2    9.7    mean ini. weight of A inhibitory connections
    Wbe   8.5    8.5    13.4   mean ini. weight of B excitatory connections
    Wbi   6.5    6.5    14.1   mean ini. weight of B inhibitory connections
    Wne   5.0    6.5    9.9    mean ini. density of internal excitatory connections
    Wni   3.5    1.24   12.4   mean ini. density of internal inhibitory connections
    DW    0.2    0.20   0.28   relative variation in initial weights
    Bne   7.0    1.26   0.65   bandwidth of internal excitatory connections
    Bni   7.0    2.86   0.03   bandwidth of internal inhibitory connections
    Cdw   0.7    0.68   0.98   celerity of dynamic wave
    Ddw   1.5    3.0    -3.2   distance between the peaks of both waves
    Wtf   9.87   17.6   16.4   base level of TF (=highest available weight)
    Btf   0.6    0.6    0.6    bandwidth of TF dynamic wave
    Tst   3.5    1.87   3.3    threshold of stabilisation (pruning stop)
    Bdf   0.6    0.64   0.5    bandwidth of DF dynamic wave
    Pdf   0.65   0.62   0.12   base level of DF (total proba. of degeneration)
    Pa    0.5    0.5    0.06   probability of A alone in the stimulus set
    Pb    0.5    0.5    0.81   probability of B alone in the stimulus set
    Pab   0.00   0.00   0.00   probability of simultaneous A and B
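To make the wave parameters concrete, here is our own schematic reading of the two travelling profiles (it is not the paper's exact update rule): the trophic factor TF and the death factor DF are each taken as a gaussian bump over cell position, both moving at celerity Cdw, with the two peaks separated by Ddw (which peak leads is our assumption). Values follow the MANUAL column of Table 2:

```python
import math

def wave(pos, t, peak0, celerity, bandwidth, base):
    """Value of a travelling gaussian bump at cell position `pos`, step `t`:
    height `base`, width `bandwidth`, peak starting at `peak0` and
    moving by `celerity` cells per step."""
    peak = peak0 + celerity * t
    return base * math.exp(-((pos - peak) ** 2) / (2.0 * bandwidth ** 2))

# MANUAL-column values from Table 2
Cdw, Ddw = 0.7, 1.5    # celerity of the wave, separation of the two peaks
Wtf, Btf = 9.87, 0.6   # base level and bandwidth of the trophic factor (TF)
Pdf, Bdf = 0.65, 0.6   # base level and bandwidth of the death factor (DF)

def trophic_factor(pos, t):
    # reinforcement wave; we place its peak at the origin at t = 0
    return wave(pos, t, peak0=0.0, celerity=Cdw, bandwidth=Btf, base=Wtf)

def death_factor(pos, t):
    # pruning wave; here assumed to trail the TF peak by Ddw cells
    return wave(pos, t, peak0=-Ddw, celerity=Cdw, bandwidth=Bdf, base=Pdf)

# at any time step the TF peak has height Wtf, and the DF peak trails it
assert abs(trophic_factor(Cdw * 10, 10) - Wtf) < 1e-12
assert abs(death_factor(Cdw * 10 - Ddw, 10) - Pdf) < 1e-12
```

A cell far behind both bumps has stopped learning and being pruned, which is how structure can freeze in a spatial gradient as the waves pass.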
3 RESULTS

3.1 RESULTS FORMAT
All Figures have the same format and summarize 40 runs per point unless otherwise specified. The top graph presents the mean slope of appearance of all 4 orders of complexity (see Table 1) on the y axis, as a function of different values of the experimentally manipulated parameter, on the x axis. The bottom left graph shows the mean slope for order 2, surrounded by a gray area one standard deviation below and above. The bottom right graph shows the mean slope for order 3, also with a 1-s.d. surrounding area. The slopes have not been normalized, and come from networks whose columns are 50 units high, so that a slope of 1.0 indicates that the number of such units increases on average by one unit per column, ie, by 3 units.
Improving Policies without Measuring
Merits
Peter Dayan¹
CBCL
E25-201, MIT
Cambridge, MA 02139
Satinder P Singh
Harlequin, Inc
1 Cambridge Center
Cambridge, MA 02142
dayan@ai.mit.edu
singh@harlequin.com
Abstract
Performing policy iteration in dynamic programming should only
require knowledge of relative rather than absolute measures of the
utility of actions (Werbos, 1991) - what Baird (1993) calls the advantages of actions at states. Nevertheless, most existing methods
in dynamic programming (including Baird's) compute some form of
absolute utility function . For smooth problems, advantages satisfy
two differential consistency conditions (including the requirement
that they be free of curl), and we show that enforcing these can lead
to appropriate policy improvement solely in terms of advantages.
1 Introduction
In deciding how to change a policy at a state, an agent only needs to know the
differences (called advantages) between the total return based on taking each action
a for one step and then following the policy forever after, and the total return
based on always following the policy (the conventional value of the state under the
policy). The advantages are like differentials - they do not depend on the local levels
of the total return. Indeed, Werbos (1991) defined Dual Heuristic Programming
(DHP), using these facts, learning the derivatives of these total returns with respect
to the state. For instance, in a conventional undiscounted maze problem with a
¹We are grateful to Larry Saul, Tommi Jaakkola and Mike Jordan for comments, and
Andy Barto for pointing out the connection to Werbos' DHP. This work was supported by
NSERC, MIT, and grants to Professor Michael I Jordan from ATR Human Information
Processing Research and Siemens Corporation.
penalty for each move, the advantages for the actions might typically be -1,0
or 1, whereas the values vary between 0 and the maximum distance to the goal.
Advantages should therefore be easier to represent than absolute value functions in a
generalising system such as a neural network and, possibly, easier to learn. Although
the advantages are differential, existing methods for learning them, notably Baird
(1993), require the agent simultaneously to learn the total return from each state.
The underlying trouble is that advantages do not appear to satisfy any form of a
Bellman equation. Whereas it is clear that the value of a state should be closely
related to the value of its neighbours, it is not obvious that the advantage of action
a at a state should be equally closely related to its advantages nearby.
In this paper, we show that under some circumstances it is possible to use a solely
advantage-based scheme for policy iteration using the spatial derivatives of the
value function rather than the value function itself. Advantages satisfy a particular
consistency condition, and, given a model of the dynamics and reward structure
of the environment, an agent can use this condition to directly acquire the spatial
derivatives of the value function. It turns out that the condition alone may not
impose enough constraints to specify these derivatives (this is a consequence of the
problem described above) - however the value function is like a potential function
for these derivatives, and this allows extra constraints to be imposed.
2 Continuous DP, Advantages and Curl
Consider the problem of controlling a deterministic system to minimise V*(x_0) = min_{u(t)} ∫_0^∞ r(y(t), u(t)) dt, where y(t) ∈ R^n is the state at time t, u(t) ∈ R^m is the control, y(0) = x_0, and ẏ(t) = f(y(t), u(t)). This is a simplified form of a classic variational problem since r and f do not depend on time t explicitly, but only through y(t), and there are no stopping time or terminal conditions on y(t) (see Peterson, 1993; Atkeson, 1994, for recent methods for solving such problems). This means that the optimal u(t) can be written as a function of y(t) and that V*(x_0) is a function of x_0 and not t. We do not treat the cases in which the infinite integrals do not converge comfortably and we will also assume adequate continuity and differentiability.
The solution by advantages: This problem can be solved by writing down the
Hamilton-Jacobi-Bellman (HJB) equation (see Dreyfus, 1965) which V*(x) satisfies:

    0 = min_u [ r(x, u) + f(x, u) · ∇_x V*(x) ]        (1)
This is the continuous space/time analogue of the conventional Bellman
equation (Bellman, 1957) for discrete, non-discounted, deterministic decision problems, which says that for the optimal value function V*, 0 = min_a [r(x, a) + V*(f(x, a)) − V*(x)], where starting the process at state x and using action a incurs a cost r(x, a) and leaves the process in state f(x, a). This, and
its obvious stochastic extension to Markov decision processes, lie at the heart of
temporal difference methods for reinforcement learning (Sutton, 1988; Barto, Sutton & Watkins, 1989; Watkins, 1989). Equation 1 describes what the optimal value
function must satisfy. Discrete dynamic programming also comes with a method
called value iteration which starts with any function Vo(x), improves it sequentially,
and converges to the optimum.
The alternative method, policy iteration (Howard, 1960), operates in the space of
policies, ie functions w(x). Starting with w(x), the method requires evaluating everywhere the value function V^w(x) = ∫_0^∞ r(y(t), w(y(t))) dt, where y(0) = x, and ẏ(t) = f(y(t), w(y(t))). It turns out that V^w satisfies a close relative of equation 1:

    0 = r(x, w(x)) + f(x, w(x)) · ∇_x V^w(x)        (2)

In policy iteration, w(x) is improved by choosing the minimising action:

    w'(x) = argmin_u [ r(x, u) + f(x, u) · ∇_x V^w(x) ]        (3)

as the new action. For discrete Markov decision problems, the equivalent of this process of policy improvement is guaranteed to improve upon w.
In the discrete case and for an analogue of value iteration, Baird (1993) defined the
optimal advantage function A*(x, a) = [Q*(x, a) − max_b Q*(x, b)] / δt, where δt is effectively a characteristic time for the process which was taken to be 1 above, and the optimal Q function (Watkins, 1989) is Q*(x, a) = r(x, a) + V*(f(x, a)), where V*(y) = max_b Q*(y, b). It turns out (Baird, 1993) that in the discrete case, one can
cast the whole of policy iteration in terms of advantages. In the continuous case,
we define advantages directly as

    A^w(x, u) = r(x, u) + f(x, u) · ∇_x V^w(x)        (4)

This equation indicates how the spatial derivatives of V^w determine the advantages. Note that the consistency condition in equation 2 can be written as A^w(x, w(x)) = 0. Policy iteration can proceed using

    w'(x) = argmin_u A^w(x, u).        (5)
Doing without V^w: We can now state more precisely the intent of this paper: a) the consistency condition in equation 2 provides constraints on the spatial derivatives ∇_x V^w(x), at least given a model of r and f; b) equation 4 indicates how these spatial derivatives can be used to determine the advantages, again using a model; and c) equation 5 shows that the advantages tout court can be used to improve the policy. Therefore, one apparently should have no need to know V^w(x) but just its spatial derivatives in order to do policy iteration.
Didactic Example - LQR: To make the discussion more concrete, consider the case of a one-dimensional linear quadratic regulator (LQR). The task is to minimise V*(x_0) = min_{u(t)} ∫_0^∞ αx(t)² + βu(t)² dt by choosing u(t), where α, β > 0, ẋ(t) = −[ax(t) + u(t)] and x(0) = x_0. It is well known (eg Athans & Falb, 1966) that the solution to this problem is that V*(x) = k*x²/2 where k* = (α + β(u*)²)/(a + u*) and u(t) = (−a + √(a² + α/β)) x(t). Knowing the form of the problem, we consider policies w that make u(t) = wx(t) and require h(x, k) ≡ ∇_x V^w(x) = kx, where the correct value of k = (α + βw²)/(a + w). The consistency condition in equation 2 evaluated at state x implies that 0 = (α + βw²)x² − h(x, k)(a + w)x. Doing online gradient descent in the square inconsistency at samples x_n gives k_{n+1} = k_n − ε ∂[(α + βw²)x_n² − k_n x_n (a + w) x_n]² / ∂k_n, which will reduce the square inconsistency for small enough ε unless x = 0. As required, the square inconsistency can only be zero for all values of x if k = (α + βw²)/(a + w). The advantage of performing action v (note this is not vx) at state x is, from equation 4, A^w(x, v) = αx² + βv² − (ax + v)(α + βw²)x/(a + w), which, minimising over v (equation 5), gives u(x) = w'x where w' = (α + βw²)/(2β(a + w)), which is the Newton-Raphson iteration to solve the quadratic equation that determines the optimal policy. In this case, without ever explicitly forming V^w(x), we have been able to learn an optimal policy. This was based, at least conceptually, on samples x_n from the interaction of the agent with the world.
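The scalar LQR recursion is easy to verify numerically. The sketch below (our own, with arbitrary constants α, β, a) alternates exact policy evaluation, k = (α + βw²)/(a + w), with the advantage-minimising improvement w' = k/(2β), and checks convergence to the closed-form optimal gain:

```python
import math

alpha, beta, a = 1.0, 0.5, 0.3    # arbitrary positive constants (ours)

w = 0.0                           # initial linear policy u(x) = w * x
for _ in range(50):
    # exact policy evaluation: h(x) = k*x with k = (alpha + beta w^2)/(a + w)
    k = (alpha + beta * w ** 2) / (a + w)
    # advantage A^w(x, v) = alpha x^2 + beta v^2 - (a x + v) k x is
    # minimised over v at v = k x / (2 beta), so the new gain is:
    w = k / (2.0 * beta)          # the Newton-Raphson step of the text

w_star = -a + math.sqrt(a ** 2 + alpha / beta)   # closed-form optimal gain
assert abs(w - w_star) < 1e-9    # converged without ever forming V^w(x)
```

The full value function V^w(x) = kx²/2 is never constructed; only its spatial derivative kx and the advantages built from it are used.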
The curl condition: The astute reader will have noticed a problem. The consistency condition in equation 2 constrains the spatial derivatives ∇_x V^w in only one direction at every point - along the route f(x, w(x)) taken according to the policy there. However, in evaluating actions by evaluating their advantages, we need to know ∇_x V^w in all the directions accessible through f(x, u) at state x. The quadratic regulation task was only solved because we employed a function approximator (which was linear in this case: h(x, k) = kx). For the case of LQR, the restriction that h be linear allowed information about f(x', w(x')) · ∇_{x'} V^w(x') at distant states x' and for the policy actions w(x') there to determine f(x, u) · ∇_x V^w(x) at state x but for non-policy actions u. If we had tried to represent h(x, k) using a more flexible approximator such as radial basis functions, it might not have worked. In general, if we didn't know the form of ∇_x V^w(x), we cannot rely on the function approximator to generalize correctly.
There is one piece of information that we have yet to use - the function h(x, k) ≡ ∇_x V^w(x) (with parameters k, and in general non-linear) is the gradient of something - it represents a conservative vector field. Therefore its curl should vanish (∇_x × h(x, k) = 0). Two ways to try to satisfy this are to represent h as a suitably weighted combination of functions that satisfy this condition or to use its square as an additional error during the process of setting the parameters k. Even in the case of the LQR, but in more than one dimension, it turns out to be essential to use the curl condition. For the multi-dimensional case we know that V^w(x) = xᵀK^w x/2 for some symmetric matrix K^w, but enforcing zero curl is the only way to enforce this symmetry.

The curl condition says that knowing how some component of ∇_x V^w(x) changes in some direction (eg ∂[∇_x V^w(x)]₂/∂x₁) does provide information about how some other component changes in a different direction (eg ∂[∇_x V^w(x)]₁/∂x₂). This information is only useful up to constants of integration, and smoothness conditions will be necessary to apply it.
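Because any field of the form h = Σᵢ cᵢ ∇φ(·, zᵢ) is itself a gradient, its curl vanishes identically - the first of the two strategies above. A quick finite-difference check (our own sketch, with arbitrary centres and coefficients):

```python
import math

def grad_phi(p, z, a=1.0):
    """Gradient with respect to p of phi(p, z) = exp(-a*|p - z|^2) in 2D."""
    e = math.exp(-a * ((p[0] - z[0]) ** 2 + (p[1] - z[1]) ** 2))
    return (-2 * a * (p[0] - z[0]) * e, -2 * a * (p[1] - z[1]) * e)

def h(p, centres, coeffs):
    """Conservative field h(p) = sum_i c_i * grad_p phi(p, z_i)."""
    gs = [grad_phi(p, z) for z in centres]
    return (sum(c * g[0] for c, g in zip(coeffs, gs)),
            sum(c * g[1] for c, g in zip(coeffs, gs)))

centres = [(0.2, -0.3), (-0.5, 0.4), (0.1, 0.8)]   # arbitrary
coeffs = [1.3, -0.7, 2.1]                          # arbitrary

# 2D curl: d(h_y)/dx - d(h_x)/dy, by central differences at a test point
x0, y0, eps = 0.15, 0.25, 1e-5
dhy_dx = (h((x0 + eps, y0), centres, coeffs)[1]
          - h((x0 - eps, y0), centres, coeffs)[1]) / (2 * eps)
dhx_dy = (h((x0, y0 + eps), centres, coeffs)[0]
          - h((x0, y0 - eps), centres, coeffs)[0]) / (2 * eps)
assert abs(dhy_dx - dhx_dy) < 1e-6   # curl-free, as a gradient field must be
```

A freely parameterised field (e.g. independent radial basis expansions for each component of h) would have no such guarantee, which is why the representation in the next section builds the constraint in.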
3 Simulations
We tested the method of approximating h^w(x) = ∇_x V^w(x) as a linearly weighted combination of local conservative vector fields h^w(x) = Σ_{i=1}^m c_i ∇_x φ(x, z_i), where the c_i are the approximation weights that are set by enforcing equation 2, and φ(x, z_i) = e^{−α|x − z_i|²} are standard radial basis functions (Broomhead & Lowe, 1988; Poggio & Girosi, 1990). We enforced this condition at a discrete set {x_k} of 100 points scattered in the state space, using as a policy explicit vectors u_k at those locations, and employed 49 similarly scattered centres z_i. Issues of learning to approximate conservative and non-conservative vector fields using such sums have been discussed by Mussa-Ivaldi (1992). One advantage of using this representation is that ψ(x) = Σ_{i=1}^m c_i φ(x, z_i) can be seen as the system's effective policy evaluation function V^w(x), at least modulo an arbitrary constant (we call this an un-normalised value function).
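Enforcing equation 2 at the sample points is linear in the coefficients c_i, so the fit reduces to a least-squares problem. A sketch with numpy (our own illustrative setup: an LQR-style cost and the dynamics f(x, u) = −x + u, with the sample and centre counts of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, a = 100, 49, 1.0                      # samples, centres, RBF width
X = rng.uniform(-1.0, 1.0, size=(K, 2))     # sample states x_k
U = rng.uniform(-0.25, 0.25, size=(K, 2))   # policy actions u_k at the x_k
Z = rng.uniform(-1.0, 1.0, size=(M, 2))     # RBF centres z_i

F = -X + U                                  # dynamics f(x, u) = -x + u
r = 5.0 * np.sum(X ** 2, axis=1) + np.sum(U ** 2, axis=1)  # LQR-style cost

# grad_x phi(x_k, z_i) for phi = exp(-a |x - z|^2), all (k, i) pairs at once
diff = X[:, None, :] - Z[None, :, :]             # shape (K, M, 2)
phi = np.exp(-a * np.sum(diff ** 2, axis=-1))    # shape (K, M)
grad = -2.0 * a * diff * phi[:, :, None]         # shape (K, M, 2)

# equation 2 at each sample: sum_i c_i f_k . grad phi(x_k, z_i) = -r_k
A = np.einsum('kd,kmd->km', F, grad)             # shape (K, M)
c, *_ = np.linalg.lstsq(A, -r, rcond=None)       # Moore-Penrose solution

# a least-squares fit can only lower the residual relative to c = 0
assert np.linalg.norm(A @ c + r) <= np.linalg.norm(r)
```

With 100 conditions and 49 coefficients the system is over-determined, which is why the pseudo-inverse (least-squares) solution is the natural choice in the text that follows.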
We chose two 2-dimensional problems to prove that the system works. They share
the same dynamics ẋ(t) = −x(t) + u(t), but have different cost functions:

    r_LQR(x(t), u(t)) = 5|x(t)|² + |u(t)|²
    r_SP(x(t), u(t)) = |x(t)|² + √(1 + |u(t)|²)
r_LQR makes for a standard linear quadratic regulation problem, which has a quadratic optimal value function and a linear optimal controller as before (although now we are using limited range basis functions instead of using the more appropriate linear form). r_SP has a mixture of a quadratic term in x(t), which encourages the
state to move towards the origin, and a more nearly linear cost term in u(t), which
would tend to encourage a constant speed. All the sample points Xk and radial
basis function centres z_i were selected within the {−1, 1}² square. We started from
a randomly chosen policy with both components of Uk being samples from the uniform distribution U( -.25, .25). This was chosen so that the overall dynamics of the
system, including the -x(t) component should lead the agent towards the origin.
Figure 1a shows the initial values of u_k in the regulator case, where the circles are at the leading edges of the local policies which point in the directions shown with relative magnitudes given by the length of the lines, and (for scale) the central object is the square {−0.1, 0.1}². The 'policy' lines are centred at the 100 x_k points. Using
the basis function representation, equation 2 is an over-determined linear system,
and so, the standard Moore-Penrose pseudo-inverse was used to find an approximate solution. The un-normalised approximate value function corresponding to
this policy is shown in figure 1b. Its bowl-like character is a feature of the optimal value function. For the LQR case, it is straightforward to perform the optimisation in equation 5 analytically, using the values for h^w(x_k) determined by the c_i. Figure 1c,d show the policy and its associated un-normalised value function after 4 iterations. By this point, the policy and value functions are essentially optimal - the policy shows the agent moves inwards from all x_k and the magnitudes are linearly related to the distances from the centre. Figure 1e,f show the same at the end point for r_SP. One major difference is that we performed the optimisation in equation 5
over a discrete set of values for Uk rather than analytically. The tendency for the
agent to maintain a constant speed is apparent except right near the origin. The
bowl is not centred exactly at (0,0) - which is an approximation error.
4 Discussion
This paper has addressed the question of whether it is possible to perform policy
iteration using just differential quantities like advantages. We showed that using a
conventional consistency condition and a curl constraint on the spatial derivatives of
the value function it is possible to learn enough about the value function for a policy
to improve upon that policy. Generalisation can be key to the whole scheme. We
showed this working on an LQR problem and a more challenging non-LQR case. We
only treated 'smooth' problems - addressing discontinuities in the value function,
which imply undifferentiability, is clearly key. Care must be taken in interpreting
this result. The most challenging problem is the error metric for the approximation.
The consistency condition may either under-specify or over-specify the parameters.
In the former case, just as for standard approximation theory, one needs prior
information to regularise the gradient surface. For many problems there may be
spatial discontinuities in the policy evaluation, and therefore this is particularly
difficult. If the parameters are over-specified (and, for good generalisation, one
would generally be working in this regime), we need to evaluate inconsistencies.
Inconsistencies cost exactly to the degree that the optimisation in equation 5 is
compromised - but this is impossible to quantify. Note that this problem is not
Figure 1: a-d) Policies and un-normalised value functions for the r_LQR and e-f) for the r_SP problem.
confined to the current scheme of learning the derivatives of the value function it also impacts algorithms based on learning the value function itself. It is also
unreasonable to specify the actions Uk only at the points Xk. In general, one would
either need a parameterised function for u(x) whose parameters would be updated in
the light of performing the optimisations in equation 5 (or some sort of interpolation
scheme), or alternatively one could generate u on the fly using the learned values
of h(x) .
If there is a discount factor, ie V*(x_0) = min_{u(t)} ∫_0^∞ e^{−λt} r(y(t), u(t)) dt, then 0 = r(x, w(x)) − λV^w(x) + f(x, w(x)) · ∇_x V^w(x) is the equivalent consistency condition to equation 2 (see also Baird, 1993) and so it is no longer possible to learn ∇_x V^w(x) without ever considering V^w(x) itself. One can still optimise parameterised forms for V^w as in section 3, except that the once arbitrary constant is no longer free.
The discrete analogue to the differential consistency condition in equation 2 amounts to the tautology that given current policy π, ∀x, A^π(x, π(x)) = 0. As in the continuous case, this only provides information about V^π(f(x, π(x))) − V^π(x) and not V^π(f(x, a)) − V^π(x) for other actions a which are needed for policy improvement. There is an equivalent to the curl condition: if there is a cycle in the undirected transition graph, then the weighted sum of the advantages for the actions along the cycle is equal to the equivalently weighted sum of payoffs along the cycle, where the weights are +1 if the action respects the cycle and −1 otherwise. This gives a consistency condition that A^π has to satisfy - and, just as in the constants of integration for the differential case, it requires grounding: A^π(z, a) = 0 for some z in the cycle. It is certainly not true that all discrete problems will have sufficient cycles to specify A^π completely - in an extreme case, the undirected version of the directed transition graphs might contain no cycles at all. In the continuous case, if the updates are sufficiently smooth, this is not possible. For stochastic problems, the consistency condition equivalent to equation 2 will involve an integral, which,
the consistency condition equivalent to equation 2 will involve an integral, which,
if doable, would permit the application of our method .
Werbos's (1991) DHP and Mitchell and Thrun's (1993) explanation-based Q-learning also study differential forms of the Bellman equation based on differentiating the discrete Bellman equation (or its Q-function equivalent) with respect to
the state. This is certainly fine as an additional constraint that V* or Q* must
satisfy (as used by Mitchell and Thrun and Werbos' Globalized version of DHP) ,
but by itself, it does not enforce the curl condition, and is insufficient for the whole
of policy improvement.
References
Athans, M & Falb, PL (1966). Optimal Control. New York, NY: McGraw-Hill.
Atkeson, CG (1994). Using Local Trajectory Optimizers To Speed Up Global Optimization in Dynamic Programming. In NIPS 6.
Baird, LC, IIIrd (1993). Advantage Updating. Technical report, Wright Laboratory,
Wright-Patterson Air Force Base.
Barto, AG, Bradtke, SJ & Singh, SP (1995). Learning to act using real-time dynamic programming. Artificial Intelligence, 72, 81-138.
Barto, AG, Sutton , RS & Watkins, CJCH (1990) . Learning and sequential decision
making. In M Gabriel & J Moore, editors, Learning and Computational Neuroscience: Foundations of Adaptive Networks. Cambridge, MA: MIT Press, Bradford
Books.
Bellman, RE (1957). Dynamic Programming. Princeton, NJ: Princeton University
Press.
Broomhead , DS & Lowe, D (1988). Multivariable functional interpolation and
adaptive networks. Complex Systems, 2, 321-55.
Dreyfus, SE (1965). Dynamic Programming and the Calculus of Variations. New
York, NY: Academic Press.
Howard, RA (1960). Dynamic Programming and Markov Processes. New York,
NY: Technology Press & Wiley.
Mitchell, TM & Thrun, SB (1993). Explanation-based neural network learning for
robot control. In NIPS 5.
Mussa-Ivaldi, FA (1992). From basis functions to basis fields: Vector field approximation from sparse data. Biological Cybernetics, 67, 479-489.
Peterson, JK (1993). On-Line estimation of optimal value functions. In NIPS 5.
Poggio, T & Girosi, F (1990) . A theory of networks for learning. Science, 247,
978-982.
Sutton, RS (1988). Learning to predict by the methods of temporal differences.
Machine Learning, 3, pp 9-44.
Watkins, CJCH (1989). Learning from Delayed Rewards. PhD Thesis. University
of Cambridge, England.
Werbos, P (1991). A menu of designs for reinforcement learning over time. In
WT Miller IIIrd, RS Sutton & P Werbos, editors, Neural Networks for Control.
Cambridge, MA: MIT Press, 67-96.
Factorial Hidden Markov Models
Zoubin Ghahramani
zoubin@psyche.mit.edu
Department of Computer Science
University of Toronto
Toronto, ON M5S 1A4
Canada
Michael I. Jordan
jordan@psyche.mit.edu
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
USA
Abstract
We present a framework for learning in hidden Markov models with
distributed state representations. Within this framework , we derive a learning algorithm based on the Expectation-Maximization
(EM) procedure for maximum likelihood estimation. Analogous to
the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the
combinatorial nature of the hidden state representation, the exact
E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that
both the mean field approximation and Gibbs sampling are viable
alternatives to the computationally expensive exact algorithm.
1 Introduction
A problem of fundamental interest to machine learning is time series modeling. Due
to the simplicity and efficiency of its parameter estimation algorithm, the hidden
Markov model (HMM) has emerged as one of the basic statistical tools for modeling
discrete time series, finding widespread application in the areas of speech recognition (Rabiner and Juang, 1986) and computational molecular biology (Baldi et al.,
1994). An HMM is essentially a mixture model, encoding information about the
history of a time series in the value of a single multinomial variable (the hidden
state). This multinomial assumption allows an efficient parameter estimation algorithm to be derived (the Baum-Welch algorithm). However, it also severely limits
the representational capacity of HMMs. For example, to represent 30 bits of information about the history of a time sequence, an HMM would need 2^30 distinct
states. On the other hand an HMM with a distributed state representation could
achieve the same task with 30 binary units (Williams and Hinton, 1991). This paper
addresses the problem of deriving efficient learning algorithms for hidden Markov
models with distributed state representations.
Factorial Hidden Markov Models
473
The need for distributed state representations in HMMs can be motivated in two
ways. First, such representations allow the state space to be decomposed into
features that naturally decouple the dynamics of a single process generating the
time series. Second, distributed state representations simplify the task of modeling
time series generated by the interaction of multiple independent processes. For
example, a speech signal generated by the superposition of multiple simultaneous
speakers can be potentially modeled with such an architecture.
Williams and Hinton (1991) first formulated the problem of learning in HMMs with
distributed state representation and proposed a solution based on deterministic
Boltzmann learning. The approach presented in this paper is similar to Williams
and Hinton's in that it is also based on a statistical mechanical formulation of hidden
Markov models. However, our learning algorithm is quite different in that it makes
use of the special structure of HMMs with distributed state representation, resulting
in a more efficient learning procedure. Anticipating the results in section 2, this
learning algorithm both obviates the need for the two-phase procedure of Boltzmann
machines, and has an exact M-step. A different approach comes from Saul and
Jordan (1995), who derived a set of rules for computing the gradients required for
learning in HMMs with distributed state spaces. However, their methods can only
be applied to a limited class of architectures.
2 Factorial hidden Markov models
Hidden Markov models are a generalization of mixture models. At any time step,
the probability density over the observables defined by an HMM is a mixture of
the densities defined by each state in the underlying Markov model. Temporal
dependencies are introduced by specifying that the prior probability of the state at
time t depends on the state at time t -1 through a transition matrix, P (Figure 1a).
Another generalization of mixture models, the cooperative vector quantizer (CVQ;
Hinton and Zemel, 1994 ), provides a natural formalism for distributed state representations in HMMs. Whereas in simple mixture models each data point must be
accounted for by a single mixture component, in CVQs each data point is accounted
for by the combination of contributions from many mixture components, one from
each separate vector quantizer. The total probability density modeled by a CVQ
is also a mixture model; however this mixture density is assumed to factorize into
a product of densities, each density associated with one of the vector quantizers.
Thus, the CVQ is a mixture model with distributed representations for the mixture
components.
Factorial hidden Markov models^1 combine the state transition structure of HMMs
with the distributed representations of CVQs (Figure 1b). Each of the d underlying
Markov models has a discrete state s_i^t at time t and transition probability matrix
P_i. As in the CVQ, the states are mutually exclusive within each vector quantizer
and we assume real-valued outputs. The sequence of observable output vectors is
generated from a normal distribution with mean given by the weighted combination
of the states of the underlying Markov models:

y^t ~ N( Σ_{i=1}^d W_i s_i^t , C ),

where C is a common covariance matrix. The k-valued states s_i^t are represented as
1 We refer to HMMs with distributed state as factorial HMMs as the features of the
distributed state factorize the total state representation.
474
Z. GHAHRAMANI. M. I. JORDAN
discrete column vectors with a 1 in one position and 0 everywhere else; the mean of
the observable is therefore a combination of columns from each of the W_i matrices.
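The generative process just described can be sketched directly in code. The following is a minimal NumPy sketch, not the authors' implementation; the dimensions, the convention that P[i][j, l] = P(s_i^t = j | s_i^{t-1} = l), and all parameter values are illustrative assumptions.

```python
import numpy as np

def sample_factorial_hmm(T, W, P, pi, C, rng):
    """Sample T steps from a factorial HMM with d independent chains.

    W  : list of d (D x k) output matrices, one per chain
    P  : list of d (k x k) column-stochastic transition matrices,
         P[i][j, l] = P(s_i^t = j | s_i^{t-1} = l)
    pi : list of d length-k initial state distributions
    C  : (D x D) shared output covariance
    """
    d, k, D = len(W), W[0].shape[1], W[0].shape[0]
    states = np.zeros((T, d), dtype=int)
    Y = np.zeros((T, D))
    for t in range(T):
        for i in range(d):
            p = pi[i] if t == 0 else P[i][:, states[t - 1, i]]
            states[t, i] = rng.choice(k, p=p)
        # the output mean is the sum of one column from each W_i
        mean = sum(W[i][:, states[t, i]] for i in range(d))
        Y[t] = rng.multivariate_normal(mean, C)
    return states, Y
```

With a one-hot encoding of each state, `W[i][:, states[t, i]]` is exactly the term W_i s_i^t in the weighted combination above.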
Figure 1. a) Hidden Markov model. b) Factorial hidden Markov model.
We capture the above probability model by defining the energy of a sequence of T
states and observations, {(s^t, y^t)}_{t=1}^T, which we abbreviate to {s, y}, as:

H({s,y}) = (1/2) Σ_{t=1}^T [y^t − Σ_{i=1}^d W_i s_i^t]' C^{-1} [y^t − Σ_{i=1}^d W_i s_i^t] − Σ_{t=2}^T Σ_{i=1}^d s_i^{t'} A_i s_i^{t-1},   (1)
where [A_i]_{jl} = log P(s_{ij}^t | s_{il}^{t-1}) such that Σ_{j=1}^k e^{[A_i]_{jl}} = 1, and ' denotes matrix
transpose. Priors for the initial state, s_i^1, are introduced by setting the second term
in (1) to − Σ_{i=1}^d s_i^{1'} log π_i. The probability model is defined from this energy by
the Boltzmann distribution
P({s,y}) = (1/Z) exp{−H({s,y})}.   (2)
Note that like in the CVQ (Ghahramani, 1995), the unclamped partition function
Z = ∫ d{y} Σ_{s} exp{−H({s,y})},
evaluates to a constant, independent of the parameters. This can be shown by
first integrating the Gaussian variables, removing all dependency on {y}, and then
summing over the states using the constraint on e^{[A_i]_{jl}}.
The EM algorithm for Factorial HMMs
As in HMMs, the parameters of a factorial HMM can be estimated via the EM
(Baum-Welch) algorithm. This procedure iterates between assuming the current
parameters to compute probabilities over the hidden states (E-step), and using
these probabilities to maximize the expected log likelihood of the parameters (M-step).
Using the likelihood (2), the expected log likelihood of the parameters is
Q(φ^{new} | φ) = ⟨−H({s,y}) − log Z⟩_c,   (3)
where φ = {W_i, P_i, C}_{i=1}^d denotes the current parameters, and ⟨·⟩_c denotes expectation given the clamped observation sequence and φ. Given the observation
sequence, the only random variables are the hidden states. Expanding equation (3)
and limiting the expectation to these random variables we find that the statistics
that need to be computed for the E-step are ⟨s_i^t⟩_c, ⟨s_i^t s_j^{t'}⟩_c, and ⟨s_i^t s_i^{t-1'}⟩_c. Note
that in standard HMM notation (Rabiner and Juang, 1986), ⟨s_i^t⟩_c corresponds to
γ_t and ⟨s_i^t s_i^{t-1'}⟩_c corresponds to ξ_t, whereas ⟨s_i^t s_j^{t'}⟩_c has no analogue when there
is only a single underlying Markov model. The M-step uses these expectations to
maximize Q with respect to the parameters.
The constant partition function allowed us to drop the second term in (3). Therefore, unlike the Boltzmann machine, the expected log likelihood does not depend
on statistics collected in an unclamped phase of learning, resulting in much faster
learning than the traditional Boltzmann machine (Neal, 1992).
M-step
Setting the derivatives of Q with respect to the output weights to zero, we obtain
a linear system of equations for W:
W^{new} = [ Σ_{N,t} ⟨s s'⟩_c ]† [ Σ_{N,t} ⟨s⟩_c y^{t'} ],

where s and W are the vector and matrix of concatenated s_i and W_i, respectively, Σ_N denotes summation over a data set of N sequences, and † is the Moore-Penrose pseudo-inverse. To estimate the log transition probabilities we solve
∂Q/∂[A_i]_{jl} = 0 subject to the constraint Σ_j e^{[A_i]_{jl}} = 1, obtaining

[A_i]_{jl}^{new} = log ( Σ_{N,t} ⟨s_{ij}^t s_{il}^{t-1}⟩_c / Σ_{N,t,j} ⟨s_{ij}^t s_{il}^{t-1}⟩_c ).   (4)
The covariance matrix can be similarly estimated:
C^{new} = Σ_{N,t} y^t y^{t'} − Σ_{N,t} y^t ⟨s⟩_c' ⟨s s'⟩_c† ⟨s⟩_c y^{t'}.
The M-step equations can therefore be solved analytically; furthermore, for a single
underlying Markov chain, they reduce to the traditional Baum-Welch re-estimation
equations.
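Because the M-step is analytic, the weight update can be written as one pseudo-inverse solve over the accumulated sufficient statistics. The sketch below is illustrative (array shapes and the flattened "concatenated state" layout are assumptions, not the paper's code), and for brevity it sums over a single sequence rather than a data set of N sequences.

```python
import numpy as np

def m_step_W(S_mean, SS_mean, Y):
    """Closed-form M-step for the output weights.

    S_mean  : (T, dk) expected concatenated state vectors <s>_c
    SS_mean : (T, dk, dk) expected outer products <s s'>_c
    Y       : (T, D) observations
    Returns the (D, dk) weight matrix (the concatenation of the W_i).
    """
    A = SS_mean.sum(axis=0)            # sum_t <s s'>_c
    B = S_mean.T @ Y                   # sum_t <s>_c y^t'
    return (np.linalg.pinv(A) @ B).T   # transpose back to D x dk
```

As a sanity check, if the "expectations" are exact states and the outputs are noise-free, the solve recovers the generating weights.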
E-step
Unfortunately, as in the simpler CVQ, the exact E-step for factorial HMMs is computationally intractable. For example, the expectation of the jth unit in vector i at
time step t, given {y}, is:

⟨s_{ij}^t⟩_c = P(s_{ij}^t = 1 | {y}, φ) = Σ_{j_1,…,j_d (omitting j_i)} P(s_{1 j_1}^t = 1, …, s_{ij}^t = 1, …, s_{d j_d}^t = 1 | {y}, φ).
Although the Markov property can be used to obtain a forward-backward-like factorization of this expectation across time steps, the sum over all possible configurations of the other hidden units within each time step is unavoidable. For a data set
of N sequences of length T, the full E-step calculated through the forward-backward
procedure has time complexity O(NT k^{2d}). Although more careful bookkeeping can
reduce the complexity to O(NT d k^{d+1}), the exponential time cannot be avoided.
This intractability of the exact E-step is due inherently to the cooperative nature
of the model: the setting of one vector determines the mean of the observable only
if all the other vectors are fixed.
Rather than summing over all possible hidden state patterns to compute the exact
expectations, a natural approach is to approximate them through a Monte Carlo
method such as Gibbs sampling. The procedure starts with a clamped observable
sequence {y} and a random setting of the hidden states {s_i^t}. At each time step,
each state vector is updated stochastically according to its probability distribution
conditioned on the setting of all the other state vectors: s_i^t ~ P(s_i^t | {y}, {s_j^τ : j ≠ i or τ ≠ t}, φ). These conditional distributions are straightforward to compute and
a full pass of Gibbs sampling requires O(NTkd) operations. The first- and second-order statistics needed to estimate ⟨s_i^t⟩_c, ⟨s_i^t s_j^{t'}⟩_c, and ⟨s_i^t s_i^{t-1'}⟩_c are collected using
the s_{ij}^t's visited and the probabilities estimated during this sampling process.
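The conditional distribution used in one Gibbs update can be sketched as follows. This is a hedged illustration, not the authors' code: it assumes the Gaussian-output energy above, a transition convention P[i][j, l] = P(s_i^t = j | s_i^{t-1} = l), and one-hot states indexed by integers.

```python
import numpy as np

def gibbs_conditional(i, t, states, Y, W, P, pi, Cinv):
    """P(s_i^t = j | {y}, all other states), up to normalization.

    states : (T, d) integer state indices for every chain and time step
    W      : list of d (D x k) output matrices; P, pi as in the model
    Cinv   : (D x D) inverse output covariance
    """
    d, k = len(W), W[0].shape[1]
    T = Y.shape[0]
    # output mean contributed by the other chains at time t
    mu_rest = sum(W[m][:, states[t, m]] for m in range(d) if m != i)
    logp = np.zeros(k)
    for j in range(k):
        r = Y[t] - mu_rest - W[i][:, j]
        logp[j] = -0.5 * r @ Cinv @ r
        logp[j] += np.log(pi[i][j]) if t == 0 else np.log(P[i][j, states[t - 1, i]])
        if t + 1 < T:
            logp[j] += np.log(P[i][states[t + 1, i], j])
    logp -= logp.max()          # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum()
```

A full sweep resamples each s_i^t from this distribution in turn, which is what gives the O(NTkd) cost per pass.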
Mean field approximation
A different approach to computing the expectations in an intractable system is
given by mean field theory. A mean field approximation for factorial HMMs can be
obtained by defining the energy function
H_MF({s,y}) = (1/2) Σ_t [y^t − μ^t]' C^{-1} [y^t − μ^t] − Σ_{t,i} s_i^{t'} log m_i^t,
which results in a completely factorized approximation to probability density (2):
P̂({s,y}) ∝ Π_t exp{ −(1/2) [y^t − μ^t]' C^{-1} [y^t − μ^t] } Π_{t,i,j} (m_{ij}^t)^{s_{ij}^t}   (5)
In this approximation, the observables are independently Gaussian distributed with
mean μ^t and each hidden state vector is multinomially distributed with mean m_i^t.
This approximation is made as tight as possible by choosing the mean field parameters μ^t and m_i^t that minimize the Kullback-Leibler divergence

KL(P̂ ‖ P) ≡ ⟨log P̂⟩_P̂ − ⟨log P⟩_P̂,

where ⟨·⟩_P̂ denotes expectation over the mean field distribution (5). With the
observables clamped, μ^t can be set equal to the observable y^t. Minimizing KL(P̂ ‖ P)
with respect to the mean field parameters for the states results in a fixed-point
equation which can be iterated until convergence:
m_i^{t, new} = σ{ W_i' C^{-1} [y^t − ŷ^t] + W_i' C^{-1} W_i m_i^t − (1/2) diag{W_i' C^{-1} W_i} + A_i m_i^{t-1} + A_i' m_i^{t+1} }   (6)

where ŷ^t ≡ Σ_i W_i m_i^t and σ{·} is the softmax exponential, normalized over each
hidden state vector. The first term is the projection of the error in the observable
onto the weights of state vector i: the more a hidden unit can reduce this error, the
larger its mean field parameter. The next three terms arise from the fact that ⟨(s_{ij}^t)^2⟩_P̂
is equal to m_{ij}^t and not (m_{ij}^t)^2. The last two terms introduce dependencies forward
and backward in time. Each state vector is asynchronously updated using (6), at
a time cost of O(NTkd) per iteration. Convergence is diagnosed by monitoring
the KL divergence in the mean field distribution between successive time steps; in
practice convergence is very rapid (about 2 to 10 iterations of (6)).
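One asynchronous pass of the fixed-point iteration (6) can be sketched as below. This is an illustrative reading of the update, not the authors' code; in particular, the placement of the transposes on the A_i terms is an assumption derived from the energy function, and the boundary steps simply drop the missing past/future term.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mean_field_step(m, Y, W, A, Cinv):
    """One asynchronous pass of the fixed-point updates (eq. 6 sketch).

    m : (T, d, k) mean field parameters, one simplex per chain per time step
    A : list of d (k x k) log transition matrices [A_i]_{jl}
    """
    T, d, k = m.shape
    for t in range(T):
        for i in range(d):
            yhat = sum(W[j].dot(m[t, j]) for j in range(d))  # current y-hat
            WC = W[i].T @ Cinv
            x = (WC @ (Y[t] - yhat)
                 + WC @ W[i] @ m[t, i]
                 - 0.5 * np.diag(WC @ W[i]))
            if t > 0:
                x += A[i] @ m[t - 1, i]       # dependency on the past
            if t + 1 < T:
                x += A[i].T @ m[t + 1, i]     # dependency on the future
            m[t, i] = softmax(x)
    return m
```

Each update renormalizes m_i^t through the softmax, so every chain's parameters remain on the simplex throughout the iteration.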
Table 1: Comparison of factorial HMM on four problems of varying size
Columns: d, k, Alg, #, Train, Test, Cycles, Time/Cycle. Rows compare a standard HMM with k^d states (HMM) against factorial HMMs trained with the exact E-step (Exact), Gibbs sampling (Gibbs), and the mean field approximation (MF), for problem sizes (d, k) = (3, 2), (3, 3), (5, 2), and (5, 3).
Table 1. Data was generated from a factorial HMM with d underlying Markov models of
k states each. The training set was 10 sequences of length 20 where the observable was a
4-dimensional vector; the test set was 20 such sequences. HMM indicates a hidden Markov
model with k^d states; the other algorithms are factorial HMMs with d underlying k-state
models. Gibbs sampling used 10 samples of each state. The algorithms were run until
convergence, as monitored by relative change in the likelihood, or a maximum of 100 cycles.
The # column indicates number of runs. The Train and Test columns show the log likelihood
± one standard deviation on the two data sets. The last column indicates approximate time
per cycle on a Silicon Graphics R4400 processor running Matlab.
3 Empirical Results
We compared three EM algorithms for learning in factorial HMMs-using Gibbs
sampling, mean field approximation, and the exact (exponential) E-step, on the
basis of performance and speed on randomly generated problems. Problems were
generated from a factorial HMM structure, the parameters of which were sampled from a uniform [0,1] distribution, and appropriately normalized to satisfy the
sum-to-one constraints of the transition matrices and priors. Also included in the
comparison was a traditional HMM with as many states (k d ) as the factorial HMM.
Table 1 summarizes the results. Even for moderately large state spaces (d ≥ 3 and k ≥ 3) the standard HMM with k^d states suffers from severe overfitting.
Furthermore, both the standard HMM and the exact E-step factorial HMM are
extremely slow on the larger problems. The Gibbs sampling and mean field approximations offer roughly comparable performance at a great increase in speed.
4 Discussion
The basic contribution of this paper is a learning algorithm for hidden Markov
models with distributed state representations. The standard Baum-Welch procedure is intractable for such architectures as the size of the state space generated
from the cross product of d k-valued features is O(k^d), and the time complexity of
Baum-Welch is quadratic in this size. More importantly, unless special constraints
are applied to this cross-product HMM architecture, the number of parameters also
grows as O(k^{2d}), which can result in severe overfitting.
The architecture for factorial HMMs presented in this paper did not include any
coupling between the underlying Markov chains. It is possible to extend the algorithm presented to architectures which incorporate such couplings. However, these
couplings must be introduced with caution as they may result either in an exponential growth in parameters or in a loss of the constant partition function property.
The learning algorithm derived in this paper assumed real-valued observables. The
algorithm can also be derived for HMMs with discrete observables, an architecture
closely related to sigmoid belief networks (Neal, 1992). However, the nonlinearities
induced by discrete observables make both the E-step and M-step of the algorithm
more difficult.
In conclusion, we have presented Gibbs sampling and mean field learning algorithms
for factorial hidden Markov models. Such models incorporate the time series modeling capabilities of hidden Markov models and the advantages of distributed representations for the state space. Future work will concentrate on a more efficient
mean field approximation in which the forward-backward algorithm is used to compute the E-step exactly within each Markov chain, and mean field theory is used to
handle interactions between chains (Saul and Jordan, 1996).
Acknowledgements
This project was supported in part by a grant from the McDonnell-Pew Foundation, by a
grant from ATR Human Information Processing Research Laboratories, by a grant from
Siemens Corporation, and by grant N00014-94-1-0777 from the Office of Naval Research.
References
Baldi, P., Chauvin, Y., Hunkapiller, T., and McClure, M. (1994). Hidden Markov models
of biological primary sequence information. Proc. Nat. Acad. Sci. (USA), 91(3):1059-1063.
Ghahramani, Z. (1995). Factorial learning and the EM algorithm. In Tesauro, G., Touretzky, D., and Leen, T., editors, Advances in Neural Information Processing Systems
7. MIT Press, Cambridge, MA.
Hinton, G. and Zemel, R. (1994). Autoencoders, minimum description length, and
Helmholtz free energy. In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural Information Processing Systems 6. Morgan Kaufmanm Publishers,
San Francisco, CA .
Neal, R. (1992). Connectionist learning of belief networks. Artificial Intelligence, 56:71-113.
Rabiner, L. and Juang, B. (1986). An introduction to hidden Markov models. IEEE
Acoustics, Speech & Signal Processing Magazine, 3:4-16.
Saul, L. and Jordan, M. (1995). Boltzmann chains and hidden Markov models. In Tesauro,
G., Touretzky, D., and Leen, T., editors, Advances in Neural Information Processing
Systems 7. MIT Press, Cambridge, MA.
Saul, L. and Jordan, M. (1996). Exploiting tractable substructures in intractable networks. In Touretzky, D., Mozer, M., and Hasselmo, M., editors, Advances in Neural
Information Processing Systems 8. MIT Press.
Williams, C. and Hinton, G. (1991). Mean field networks that learn to discriminate temporally distorted strings. In Touretzky, D., Elman, J., Sejnowski, T., and Hinton, G.,
editors, Connectionist Models: Proceedings of the 1990 Summer School, pages 18-22.
Morgan Kaufmann Publishers, San Mateo, CA.
A Predictive Switching Model of
Cerebellar Movement Control
Andrew G. Barto
Jay T. Buckingham
Department of Computer Science
University of Massachusetts
Amherst, MA 01003-4610
barto@cs.umass.edu
James C. Houk
Department of Physiology
Northwestern University Medical School
303 East Chicago Ave
Chicago, Illinois 60611-3008
houk@acns.nwu.edu
Abstract
We present a hypothesis about how the cerebellum could participate in regulating movement in the presence of significant feedback
delays without resorting to a forward model of the motor plant. We
show how a simplified cerebellar model can learn to control endpoint positioning of a nonlinear spring-mass system with realistic
delays in both afferent and efferent pathways. The model's operation involves prediction, but instead of predicting sensory input, it
directly regulates movement by reacting in an anticipatory fashion
to input patterns that include delayed sensory feedback.
1 INTRODUCTION
The existence of significant delays in sensorimotor feedback pathways has led several
researchers to suggest that the cerebellum might function as a forward model of the
motor plant in order to predict the sensory consequences of motor commands before
actual feedback is available; e.g., (Ito, 1984; Keeler, 1990; Miall et al., 1993). While
we agree that there are many potential roles for forward models in motor control
systems, as discussed, e.g., in (Wolpert et al., 1995), we present a hypothesis about
how the cerebellum could participate in regulating movement in the presence of significant feedback delays without resorting to a forward model. We show how a very
simplified version of the adjustable pattern generator (APG) model being developed
by Houk and colleagues (Berthier et al., 1993; Houk et al., 1995) can learn to control endpoint positioning of a nonlinear spring-mass system with significant delays
in both afferent and efferent pathways. Although much simpler than a multilink
dynamic arm, control of this spring-mass system involves some of the challenges
critical in the control of a more realistic motor system and serves to illustrate the
principles we propose. Preliminary results appear in (Buckingham et al., 1995).
Figure 1: Pulse-step control of a movement from initial position z_0 = 0 to target
endpoint position z_T = .05. Panel A: Top-The pulse-step command. Middle-Velocity
as a function of time. Bottom-Position as a function of time. Panel B:
Switching curve. The dashed line plots states of the spring-mass system at which
the command should switch from pulse to step so that the mass will stick at the
endpoint z_T = .05 starting from different initial states. The bold line shows the
phase-plane trajectory of the movement shown in Panel A.
2
NONLINEAR VISCOSITY
An important aspect of the model is that the plant being controlled has a form of
nonlinear viscosity, brought about in animals through a combination of muscle and
spinal reflex properties. To illustrate this, we use a nonlinear spring-mass model
based on studies of human wrist movement (Wu et al., 1990):
m z'' + b (z')^(1/5) + k(z - z_eq) = 0,    (1)

where z is the position (in meters) of an object of mass m (kg) attached to the
spring, z_eq is the resting, or equilibrium, position, b is a damping coefficient, and k
is the spring's stiffness. Setting m = 1, b = 4, and k = 60 produces trajectories that
are qualitatively similar to those observed in human wrist movement (Wu et al.,
1990).
This one-fifth power law viscosity gives the system the potential to produce fast
movements that terminate with little or no oscillation. However, the principle of
setting the equilibrium position to the desired movement endpoint does not work in
practice because the system tends to "stick" at non-equilibrium positions, thereafter
drifting extremely slowly toward the equilibrium position, z_eq. We call the position
at which the mass sticks (which we define as the position at which its absolute
velocity falls and remains below .005 m/s) the endpoint of a movement, denoted z_e.
Thus, endpoint control of this system is not entirely straightforward. The approach
taken by our model is to switch the value of the control signal, z_eq, at a precisely
placed point during a movement. This is similar to virtual trajectory control, except
that here the commanded equilibrium position need not equal the desired endpoint
either before or after the switch.
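This pulse-step scheme is easy to check numerically. The sketch below Euler-integrates Eq. (1) with the parameter values quoted above (m = 1, b = 4, k = 60); the pulse duration d is chosen by hand here, standing in for the switch time that the model described later has to learn, and all numerical choices beyond those taken from the text are illustrative.

```python
import numpy as np

M, B, K = 1.0, 4.0, 60.0  # mass, damping coefficient, stiffness (Wu et al., 1990)

def simulate(z_eq_of_t, T=1.0, dt=1e-3, z0=0.0, v0=0.0):
    """Euler-integrate m z'' + b sign(z') |z'|^(1/5) + k (z - z_eq(t)) = 0."""
    z, v = z0, v0
    for step in range(int(T / dt)):
        damping = B * np.sign(v) * abs(v) ** 0.2
        a = -(damping + K * (z - z_eq_of_t(step * dt))) / M
        v += a * dt
        z += v * dt
    return z, v

def pulse_step_endpoint(d, z_p=0.1, z_s=0.04):
    """Final state under a pulse (command z_p for d seconds), then a step (z_s)."""
    return simulate(lambda t: z_p if t < d else z_s)
```

Because the one-fifth-power damping is strong at low speeds, the mass freezes wherever its velocity collapses, so the endpoint is set by the switch time: a longer pulse carries the mass further before the braking step phase.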
Panel A of Fig. 1 shows an example of this type of control. The objective is to
move the mass from an initial position z_0 = 0 to a target endpoint z_T = .05. The
control signal is the pulse-step shown in the top graph, where z_p = .1 and z_s = .04
[Figure 2 block diagram: target input and an extracerebellar corrective command; mossy fiber signals pass through a sparse expansive encoding to the PC, whose command drives the spring-mass system, and the state feeds back as input.]
Figure 2: The simplified model. PC, Purkinje cell; MFs, mossy fibers; PFs, parallel
fibers; CF, climbing fiber. The labels A and B mark places in the feedback loop to
which we refer in discussing the model's behavior.
respectively denote the pulse and step values, and d denotes the pulse duration.
The mass sticks near the target endpoint z_T = .05, which is different from both
equilibrium positions. If the switch had occurred sooner (later), the mass would
have undershot (overshot) the target endpoint.
The bold trajectory in Panel B of Fig. 1 is the phase-plane portrait of this movement. During its initial phase, the state follows the trajectory that would eventually
lead to equilibrium position z_p. When the pulse ends, the state switches to the trajectory that would eventually lead to equilibrium position z_s, which allows a rapid
approach to the target endpoint z_T = .05, where the mass sticks before reaching z_s.
The dashed line plots pairs of positions and velocities at which the switch should
occur so that movements starting from different initial states will reach the endpoint
z_T = .05. This switching curve has to vary as a function of the target endpoint.
3
THE MODEL'S ARCHITECTURE
The simplified model (Fig. 2) consists of a unit representing a Purkinje cell (PC)
whose input is derived from a sparse expansive encoding of mossy fiber (MF) input
representing the target position, z_T, which remains fixed throughout a movement,
delayed information about the state of the spring-mass system, and the current
motor command, z_eq.¹ Patterns of MF activity are recoded to form sparse activity
patterns over a large number (here 8000) of binary parallel fibers (PFs) which
synapse upon the PC unit, along the lines suggested by Marr (Marr, 1969) and the
CMAC model of Albus (Albus, 1971). While some liberties have been taken with
this representation, the delay distributions are within the range observed for the
intermediate cerebellum of the monkey (Van Kan et al., 1993).
Also as in Marr and Albus, the PC unit is trained by a signal representing the
activity of a climbing fiber (CF), whose response properties are described below.
Occasional corrective commands, also discussed below, are assumed to be generated
1 In this model, 256 Gaussian radial basis function (RBF) units represent the target
position, 400 RBF units represent the position of the mass (i.e., the length of the spring),
with centers distributed uniformly across an appropriate range of positions and with delays
distributed according to a Gaussian of mean 15msec and standard deviation 6msec. This
distribution is truncated so that the minimum delay is 5msec. This delay distribution
is represented by τ1 in Fig. 2. Another 400 RBF units similarly represent mass velocity.
An additional 4 MF inputs are efference copy signals that simply copy the current motor
command.
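For concreteness, the delayed radial-basis-function coding of one of these input groups (the 400 position-coding units) might be set up as below. The centre spacing, basis width, position range, and history layout are illustrative choices not specified in the text; only the unit count and the truncated-Gaussian delay distribution come from the footnote, and clipping at 5 msec is a crude stand-in for true truncation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400                                # position-coding MF units (from the text)
centers = np.linspace(-0.1, 0.1, N)    # assumed position range
width = 4 * (centers[1] - centers[0])  # assumed basis width
# Delays ~ Gaussian(mean 15 ms, sd 6 ms), clipped at the 5 ms minimum.
delays = np.maximum(5e-3, rng.normal(15e-3, 6e-3, N))

def encode(history, t, dt):
    """RBF activity at time t: unit i sees the position signal delayed by
    delays[i]; history[k] holds the position at time k*dt."""
    idx = np.maximum(0, np.round((t - delays) / dt)).astype(int)
    delayed_x = np.asarray(history)[idx]
    return np.exp(-((delayed_x - centers) / width) ** 2)
```

Because each unit reads the state at its own latency, a downstream learner never sees the instantaneous state; it must act on this temporally smeared code, which is the situation the PC faces in Fig. 2.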
by an extracerebellar system. The PC's output determines the motor command
through a simple transformation. The model includes efferent and CF delays,
both equal to 20msec (τ2 and τ3, respectively, in Fig. 2). These delays are also
within the physiological range for these pathways (Gellman et al., 1983). How this
model is related to the full APG model and its justification in terms of the anatomy
and physiology of the cerebellum and premotor circuits are discussed extensively
elsewhere (Berthier et al., 1993; Houk et al., 1995).
The PC unit is a linear threshold unit with hysteresis. Let s(t) = Σ_i w_i(t) φ_i(t),
where φ_i(t) denotes the activity of PF i at time t and w_i(t) is the weight at time
step t of the synapse by which PF i influences the PC unit. The output of the PC
unit at time t, denoted y(t), is the PC's activity state, high or low, at time t, which
represents a high or a low frequency of simple spike activity. PC activation depends
on two thresholds: θ_high and θ_low < θ_high. The activity state switches from low
to high when s(t) > θ_high, and it switches from high to low when s(t) < θ_low. If
θ_high = θ_low, the PC unit is the usual linear threshold unit. Although hysteresis is
not strictly necessary for the control task we present here, it accelerates learning:
A PC can more easily learn when to switch states than it can learn to maintain
the correct output on a moment-to-moment basis. The bistability of this PC unit
is a simplified representation of multistability that could be produced by dendritic
zones of hysteresis arising from ionic mechanisms (Houk et al., 1995).
Because PC activity inhibits premotor circuits, PC state low corresponds to the
pulse phase of the motor command, which sets a "far" equilibrium position, z_p; PC
state high corresponds to the step phase, which sets a "near" equilibrium position,
z_s. Thus, the pulse ends when the PC state switches from low to high. Because the
precise switching point determines where the mass sticks, this single binary PC can
bring the mass to any target endpoint in a considerable range by switching state at
the right moment during a movement.
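The bistable threshold unit described above can be sketched directly. The threshold values and string-valued states below are arbitrary illustrative choices; only the two-threshold switching logic comes from the text.

```python
class HysteresisUnit:
    """Linear threshold unit with hysteresis: switches low -> high when the
    weighted sum s exceeds theta_high, and high -> low when s falls below
    theta_low, with theta_low < theta_high."""

    def __init__(self, theta_low=-1.0, theta_high=1.0, state="low"):
        assert theta_low < theta_high
        self.theta_low, self.theta_high = theta_low, theta_high
        self.state = state

    def step(self, s):
        if self.state == "low" and s > self.theta_high:
            self.state = "high"
        elif self.state == "high" and s < self.theta_low:
            self.state = "low"
        return self.state
```

For inputs between the two thresholds the unit simply holds its last state, so a learner only has to drive s decisively past a threshold at the right moment rather than maintain a correct output continuously.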
4
LEARNING
Learning is based on the idea that corrective movements following inaccurate movements provide training information by triggering CF responses. These responses
are presumed to be proprioceptively triggered by the onset of a corrective movement, being suppressed during the movement itself. Corrective movements can be
generated when a cerebellar module generates an additional pulse phase of the motor command, or through the action of a system other than the cerebellum. The
second, extracerebellar, source of corrective movements only needs to operate when
small corrections are needed.
The learning mechanism has to adjust the PC weights, Wi, so that the PC switches
state at the correct moment during a movement. This is difficult because training information is significantly delayed due to the combined effects of movement
duration and delays in the relevant feedback pathways. The relevant PC activity
is completed well before a corrective movement triggers a CF response. To learn
under these conditions, the learning mechanism needs to modify synaptic actions
that occurred prior to the CF's discharge. The APG model adopts Klopf's (Klopf,
1982) idea of a synaptic "eligibility trace" whereby appropriate synaptic activity
sets up a synaptically-local memory trace that renders the synapse "eligible" for
modification if and when the appropriate training information arrives within a short
time period.
The learning rule has two components: one implements a form of long-term depression (LTD); the other implements a much weaker form of long-term potentiation
(LTP). It works as follows. Whenever the CF fires (c(t) = 1), the weights of all
the eligible synapses decrease. A synapse is eligible if its presynaptic parallel fiber
was active in the past when the PC switched from low to high, with the degree of
eligibility decreasing with the time since that state switch. This makes the PC less
likely to switch to high in future situations represented by patterns of PF activity
similar to the pattern present when the eligibility-initiating switch occurred. This
has the effect of increasing the duration of the PC pause, which increases the duration of the pulse phase of the motor command. Superimposed on weight decreases
are much smaller weight increases that occur for any synapse whose presynaptic
PF is active when the PC switches from low to high, irrespective of CF activity.
This makes the PC more likely to switch to high under similar circumstances in the
future, which decreases the duration of the pulse phase of the movement command.
To define this mathematically, let η(t) detect when the PC's activity state switches
from low to high: η(t) = 0 unless y(t-1) = low and y(t) = high, in which case
η(t) = 1. The eligibility trace for synapse i at time step t, denoted e_i(t), is set to 1
whenever η(t) = 1 and thereafter decays geometrically toward zero until it is reset
to 1 when η is again set to 1 by another upward switch of PC activity level. Then
the learning rule is given for t = 1, 2, ..., by:

Δw_i(t) = -α c(t) e_i(t) + β η(t) φ_i(t),

where α and β, with α > β, are positive parameters respectively determining the
rate of LTD and LTP. See (Houk et al., 1995) for a discussion of this learning rule
in light of physiological data and cellular mechanisms.

Figure 3: Model behavior. Panel A: early in learning; Panel B: late in learning.
Assume that at time step 0, z_T has just been switched from 0 to .05. Shown are
the time courses of the PC's weighted sum, s, activation state, y, and the position
and velocity of the mass.
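In code, this two-component rule might look as follows. The rates α = .0004 and β = .00004 and the 1%-per-step trace decay are the values used in the simulations of Section 5; restricting the trace reset to PFs active at the upward switch follows the prose description of eligibility, and the ordering of trace decay and weight update within a step is an assumption of this sketch.

```python
import numpy as np

ALPHA, BETA, DECAY = 4e-4, 4e-5, 0.99  # LTD rate, LTP rate, trace decay

def learning_step(w, e, phi, switched_up, cf):
    """One step of  dw_i = -alpha*c*e_i + beta*eta*phi_i.
    switched_up is eta(t) = 1 (a low -> high PC transition);
    cf is c(t) = 1 (a climbing-fiber discharge); phi is the binary PF vector."""
    if switched_up:
        e = np.where(phi > 0, 1.0, e)  # mark PFs active at the switch as eligible
        w = w + BETA * phi             # weak LTP at the switch
    else:
        e = DECAY * e                  # geometric decay of eligibility
    if cf:
        w = w - ALPHA * e              # LTD on eligible synapses
    return w, e
```

Since α is ten times β, one climbing-fiber-signalled error can outweigh many small LTP increments, so pause durations are stretched until movements stop triggering corrections.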
5
SIMULATIONS
We performed a number of simulations of the simplified APG model learning to
control the nonlinear spring-mass system. We trained each version of the model to
move the mass from initial positions selected randomly from the interval [-.02, .02]
to a target position randomly set to .03, .04, or .05. We set the pulse height, z_p,
and the step height, z_s, to .1 and .04 respectively. Each simulation consisted of
a series of trial movements. The parameters of the learning rule, which were not
optimized, were α = .0004 and β = .00004. Eligibility traces decayed 1% per time
step.
Figure 3 shows time courses of relevant variables at different stages in learning to
move to target endpoint z_T = .05 from initial position z_0 = 0. Early in learning
(Panel A), the PC has learned to switch to low at the beginning of the trial but
switches back to high too soon, which causes the mass to undershoot the target.
Because of this undershoot, the CF fires at the end of the movement due to a final
very small corrective movement generated by an extracerebellar system. The mass
sticks at z_e = .027. Late in learning (Panel B), the mass sticks at z_e = .049, and
the CF does not fire. Note that to accomplish this, the PC state has to switch to
high well before (about 150ms) the endpoint is reached.
Figure 4 shows three representations of the switching curve learned by a version of
the model for target z_T = .05. As an aid to understanding the model's behavior, all
the proprioceptive signals in this version of the model had the same delay of 30ms
(τ1 in Fig. 2) instead of the more realistic distribution of delays described above.
Hence the total loop delay (τ1 + τ2) was 50ms. The curve labeled "spring switch",
which closely coincides with the optimal switching curve (also shown), plots states
that the spring-mass system passes through when the command input to the spring
switches. In other words, this is the switching curve as seen from the point marked
A in Fig. 2. That this coincides with the optimal switching curve shows that the
model learned to behave correctly. The movement trajectory crosses this curve
about 150ms before the movement ends.
[Figure 4 plot: curves labeled "proprioceptive input", "spring switch", and "PC switch", together with the optimal switching curve and a movement trajectory; position (m) on the horizontal axis.]
Figure 4: Phase-plane portraits of switching curves implemented by the model after learning. Four switching curves and
one movement trajectory are shown. See
text for explanation.
The curve labeled "PC switch", on
the other hand, plots states that the
spring-mass system passes through
when the PC unit switches state: it
is the switching curve as seen from
the point marked B in Fig. 2 (assuming the expansive encoding involves no
delay). The state of the spring-mass
system crosses this curve 20ms before
it reaches the "spring switch" curve.
One can see, therefore, that the PC
unit learned to switch its activity state
20ms before the motor command must
switch state at the spring itself, appropriately compensating for the 20ms latency of the efferent pathway.
We can also ask what is the state of the
spring-mass system that the PC actually "sees", via proprioceptive signals,
when it has to switch state. When the
PC has to switch states, that is, when
the spring-mass state reaches switching curve "PC switch", the PC is actually receiving via its PF input a description of the system state that occurred a significant
time earlier (τ1 = 30ms in Fig. 2). Switching curve "proprioceptive input" in Fig. 4
is the locus of system states that the PC is sensing when it has to switch. The PC
has learned to do this by learning, on the basis of delayed CF training information,
to switch when it sees PF patterns that code spring-mass states that lie on curve
"proprioceptive input".
6
DISCUSSION
The model we have presented is most closely related to adaptive control methods
known as direct predictive adaptive controllers (Goodwin & Sin, 1984). Feedback
delays pose no particular difficulties despite the fact that no use is made of a forward model of the motor plant. Instead of producing predictions of proprioceptive
feedback, the model uses its predictive capabilities to directly produce appropriately timed motor commands. Although the nonlinear viscosity of the spring-mass
system renders linear control principles inapplicable, it actually makes the control
problem easier for an appropriate controller. Fast movements can be performed with
little or no oscillation. We believe that similar nonlinearities in actual motor plants
have significant implications for motor control. A critical feature of this model's
learning mechanism is its use of eligibility traces to bridge the temporal gap between
a PC's activity and the consequences of this activity on the movement endpoint.
Cellular studies are needed to explore this important issue. Although nothing in the
present paper suggests how this might extend to more complex control problems,
one of the objectives of the full APG model is to explore how the collective behavior
of multiple APG modules might accomplish more complex control.
Acknowledgements
This work was supported by NIH 1-50 MH 48185-04.
References
Albus, JS (1971). A theory of cerebellar function. Mathematical Biosciences, 10,
25-61.
Berthier, NE, Singh, SP, Barto, AG, & Houk, JC (1993). Distributed representations of limb motor programs in arrays of adjustable pattern generators.
Journal of Cognitive Neuroscience, 5, 56-78.
Buckingham, JT, Barto, AG, & Houk, JC (1995). Adaptive predictive control with
a cerebellar model. In: Proceedings of the 1995 World Congress on Neural
Networks, 1-373-1-380.
Gellman, R, Gibson, AR, & Houk, JC (1983). Somatosensory properties of the
inferior olive of the cat. J. Comp. Neurology, 215, 228-243.
Goodwin, GC & Sin, KS (1984). Adaptive Filtering Prediction and Control. Englewood Cliffs, N.J.: Prentice-Hall.
Houk, JC, Buckingham, JT, & Barto, AG (1995). Models of the cerebellum and
motor learning. Behavioral and Brain Sciences, in press.
Ito, M (1984). The Cerebellum and Neural Control. New York: Raven Press.
Keeler, JD (1990). A dynamical system view of cerebellar function. Physica D, 42,
396-410.
Klopf, AH (1982). The Hedonistic Neuron: A Theory of Memory, Learning, and
Intelligence. Washington, D.C.: Hemishere.
Marr, D (1969). A theory of cerebellar cortex. J. Physiol. London, 202, 437-470.
Miall, RC, Weir, DJ, Wolpert, DM, & Stein, JF (1993). Is the cerebellum a Smith
predictor? Journal of Motor Behavior, 25, 203-216.
Van Kan, PLE, Gibson, AR, & Houk, JC (1993). Movement-related inputs to
intermediate cerebellum of the monkey. Journal of Neurophysiology, 69, 74-94.
Wolpert, DM, Ghahramani, Z, & Jordan, MI (1995). Forward dynamic models
in human motor control: Psychophysical evidence. In: Advances in Neural
Information Processing Systems 7, (G Tesauro, DS Touretzky, & TK Leen,
eds), Cambridge, MA: MIT Press.
Wu, CH, Houk, JC, Young, KY, & Miller, LE (1990). Nonlinear damping of limb
motion. In: Multiple Muscle Systems: Biomechanics and Movement Organization, (J Winters & S Woo, eds). New York: Springer-Verlag.
The Role of Activity in Synaptic
Competition at the Neuromuscular
Junction
Samuel R. H. Joseph
Centre for Cognitive Science
Edinburgh University
Edinburgh, U.K.
email: sam@cns.ed.ac.uk
David J. Willshaw
Centre for Cognitive Science
Edinburgh University
Edinburgh, U.K.
email: david@cns.ed.ac.uk
Abstract
An extended version of the dual constraint model of motor endplate morphogenesis is presented that includes activity dependent
and independent competition. It is supported by a wide range of
recent neurophysiological evidence that indicates a strong relationship between synaptic efficacy and survival. The computational
model is justified at the molecular level and its predictions match
the developmental and regenerative behaviour of real synapses.
1
INTRODUCTION
The neuromuscular junction (NMJ) of mammalian skeletal muscle is one of the
most extensively studied areas of the nervous system. One aspect of its development that it shares with many other parts of the nervous system is its achievement
of single innervation, one axon terminal connecting to one muscle fibre, after an
initial state of polyinnervation. The presence of electrical activity is associated
with this transition, but the exact relationship is far from clear. Understanding
how activity interacts with the morphogenesis of neural systems could provide us
with insights into methods for constructing artificial neural networks. With that in
mind, this paper examines how some of the conflicting ideas about the development
of neuromuscular connections can be resolved.
2
EXPERIMENTAL FINDINGS
The extent to which a muscle is innervated can be expressed in terms of the motor
unit size - the number of fibres contacted by a given motor axon. Following removal
of some motor axons at birth, the average size of the remaining motor units after
withdrawal of poly innervation is larger than normal (Fladby & Jansen, 1987). This
strongly suggests that individual motor axons successfully innervate more fibres
as a result of the absence of their neighbours. It is appealing to interpret this as
a competitive process where terminals from different axons compete for the same
muscle endplate. Since each terminal is made up of a number of synapses the
process can be viewed as the co-existence of synapses from the same terminal and
the elimination of synapses from different terminals on the same end plate.
2.1
THE EFFECTS OF ELECTRICAL ACTIVITY
There is a strong activity dependent component to synapse elimination. Paralysis
or stimulation of selected motor units appears to favour the more active motor
terminals (Colman & Lichtman, 1992), while inactive axon terminals tend to coexist.
Recent work also shows that active synaptic sites can destabilise inactive synapses
in their vicinity (Balice-Gordon & Lichtman, 1994). These findings support the
idea that more active terminals have a competitive advantage over their inactive
fellows , and that this competition takes place at a synaptic level.
Activity independent competition has been demonstrated in the rat lumbrical muscle (Ribchester, 1993). This muscle is innervated by the sural and the lateral plantar
nerves. If the sural nerve is damaged the lateral plantar nerve will expand its territory to the extent that it innervates the entire muscle. On subsequent reinnervation
the regenerating sural nerve may displace some of the lateral plantar nerve terminals. If the muscle is paralysed during reinnervation more lateral plantar nerve
terminals are displaced than in the normal case, indicating that competition between inactive terminals does take place, and that paralysis can give an advantage
to some terminals.
3
MODELS AND MECHANISMS
If the nerve terminals are competing with each other for dominance of motor endplates, what is the mechanism behind it? As mentioned above, activity is thought
to play an important role in affecting the competitive chances of a terminal, but
in most models the terminals compete for some kind of trophic resource (Gouze et
al., 1983; Willshaw, 1981). It is possible to create models that use competition for
either a postsynaptic (endplate) resource or a presynaptic (motor axon) resource.
Both types of model have advantages and disadvantages, which leads naturally to
the possibility of combining the two into a single model.
3.1
BENNET AND ROBINSON'S DUAL CONSTRAINT MODEL
The dual constraint model (DCM) (Bennet & Robinson, 1989), as extended by
Rasmussen & Willshaw (1993), is based on a reversible reaction between molecules
from a presynaptic resource A and a postsynaptic resource B. This reaction takes
place in the synaptic cleft and produces a binding complex C which is essential for
the terminal's survival. Each motor axon and muscle fibre has a limited amount of
their particular resource and the size of each terminal is proportional to the amount
of the binding complex at that terminal. The model achieves single innervation and
a perturbation analysis performed by Rasmussen & Willshaw (1993) showed that
this single innervation state is stable. However, for the DCM to function the forward
rate of the reaction had to be made proportional to the size of the terminal, which
was difficult to justify other than suggesting it was related to electrical activity.
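The character of this competition can be illustrated with a toy simulation. The equations below are not Bennet and Robinson's actual formulation; they are a minimal sketch in which the forward rate is proportional to the amount of complex C already present at a terminal (the size-dependent rate noted above) and the A and B resources are conserved, with all rate constants chosen arbitrarily.

```python
import numpy as np

def compete(a_tot, b_tot=1.0, k_f=1.0, k_b=0.2, c0=0.05, dt=0.01, steps=20000):
    """Two terminals on one muscle fibre. For terminal i:
        dC_i/dt = C_i * (k_f * A_i_free * B_free - k_b),
    where A_i_free = a_tot[i] - C_i (presynaptic resource left in axon i)
    and B_free = b_tot - sum(C) (postsynaptic resource left on the fibre)."""
    c = np.full(2, c0)
    a_tot = np.asarray(a_tot, dtype=float)
    for _ in range(steps):
        a_free = np.maximum(0.0, a_tot - c)
        b_free = max(0.0, b_tot - c.sum())
        c = np.maximum(0.0, c + dt * c * (k_f * a_free * b_free - k_b))
    return c

c = compete(a_tot=[1.0, 0.25])  # axon 0 is resource-rich, axon 1 is poor
```

Because the forward rate scales with C, a modest resource advantage compounds: the richer terminal monopolises the fibre's B while the poorer terminal's complex dissolves, leaving the fibre singly innervated.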
3.2
SELECTIVE MECHANISMS
While the synapses in the surviving presynaptic terminal are allowed to coexist,
synapses from other axons are eliminated. How do synapses make a distinction
between synapses in their own terminal and those in others? There are two possibilities: (i) Synchronous transmitter release in the synaptic boutons of a motor
neuron could distinguish synapses, allowing them to compete as cartels rather than
individuals (Colman & Lichtman, 1992). (ii) The synapses could be employing selective recognition mechanisms, e.g. the 'induced-fit' model (Ribchester & Barry,
1994).
A selective mechanism implies that all the synapses of a given motor neuron can
be identified by a molecular substrate. In the induced-fit model each motor neuron is associated with a specific isoform of a cellular adhesion molecule (CAM);
the synapses compete by attempting to induce all the CAMs on the end plate into
the conformation associated with their neuron. This kind of model can be used to
account for much of the developmental and regenerative processes of the NMJ. However, it has difficulty explaining Balice-Gordon & Lichtman's (1994) focal blockade
experiments which show competition between synapses distinguished only by the
presence of activity. If, instead, activity is responsible for the distinction of friend
from foe, how can competition take place at the terminal level when activity is not
present? Could we resolve this dilemma by extending the dual constraint model?
4
EXTENDING THE DUAL CONSTRAINT MODEL
Tentative suggestions can be made for the identity of the 'mystery molecules' in
the DCM. According to McMahan (1990) a protein called agrin is synthesised in
the cell bodies of motor neurons and transported down their axons to the muscle.
When this protein binds to the surface of the developing muscle, it causes acetylcholine receptors (AChRs), and other components of the postsynaptic apparatus,
to aggregate on the myotube surface in the vicinity of the activated agrin.
Other work (Wallace, 1988) has provided insights into the mechanism used by agrin
to cause the aggregation of the postsynaptic apparatus. Initially, AChR aggregates,
or 'speckles', are free to diffuse laterally in the myotube plasma membrane (Axelrod
et al., 1976). When agrin binds to an agrin-specific receptor, AChR speckles in the
immediate vicinity of the agrin-receptor complex are immobilised. As more speckles
are trapped larger patches are formed, until a steady state is reached. Such a patch
will remain so long as agrin is bound to its receptor and Ca++ and energy supplies
are available.
Following AChR activation by acetylcholine, Ca++ enters the postsynaptic cell.
Since Ca++ is required for both the formation and maintenance of AChR aggregates,
The Role of Activity in Synaptic Competition at the Neuromuscular Junction
a feedback loop is possible whereby the bigger a patch is the more Ca++ it will
have available when the receptors are activated. Crucially, depolarisation of nonjunctional regions blocks AChR expression (Andreose et al., 1995) and it is AChR
activation at the NMJ that causes depolarisation of the postsynaptic cell. So it
seems that agrin is a candidate for molecule A, but what about B or C? It is
tempting to posit AChR as molecule B since it is the critical postsynaptic resource.
However, since agrin does not bind directly to the acetylcholine receptor, a different
sort of reaction is required.
4.1
A DIFFERENT SORT OF REACTION
If AChR is molecule B, and one agrin molecule can attract at least 160 AChRs
(Nitkin et al., 1987), the simple reversible reaction of the DCM is ruled out. Alternatively, AChR could exist in either a free, B_f, or a bound, B_b, state, being converted
through the mediation of A. B_b would now play the role of C in the DCM. It is
possible to devise a rate equation for the change in the number of receptors at a
nerve terminal over time:
dB_b/dt = αA B_f − βB_b   (1)

where α and β are rate constants. The increase in bound AChR over time is
proportional to the amount of agrin at a junction and the number of free receptors
in the endplate area, while the decrease is proportional to the amount of bound
AChRs. The rate equation (1) can be used as the basis of an extended DCM if four
other factors are considered: (i) Agrin stays active as receptors accumulate, so the
conservation equations for A and B are:

A_0 = A_n + Σ_{j=1}^{M} A_{nj},    B_0 = B_{mf} + Σ_{i=1}^{N} B_{imb}   (2)
where the subscript 0 indicates the fixed resource available to each muscle or neuron,
the lettered subscripts indicate the amount of that substance that is present in the
neuron n, muscle fibre m and terminal nm, and there are N motor neurons and M
muscle fibres. (ii) The size of a terminal is proportional to the number of bound
AChRs, so if we assume the anterograde flow is evenly divided between the l_n
terminals of neuron n, the transport equation for agrin is:
(3)
where λ and μ are transport rate constants and the retrograde flow is assumed
proportional to the amount of agrin at the terminal and inversely proportional to
the size of the terminal. (iii) AChRs are free to diffuse laterally across the surface
of the muscle, so the forward reaction rate will be related to the probability of an
AChR speckle intersecting a terminal, which is itself proportional to the terminal
diameter. (iv) The influx of Ca++ through AChRs on the surface of the endplate
will also affect the forward reaction rate in proportion to the area of the terminal.
Taking B_b to be proportional to the volume of the postsynaptic apparatus, these
last two terms are proportional to B_b^{1/3} and B_b^{2/3} respectively. This gives the final
rate equation:
(4)
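As an illustrative sketch (not code from the paper), the basic rate law of equation (1) can be integrated numerically once the conservation constraint B_f = B_0 − B_b is imposed; all parameter values below are arbitrary choices for illustration.

```python
# Euler integration of the rate law of equation (1),
#   dB_b/dt = alpha * A * B_f - beta * B_b,
# with the conservation constraint B_f = B_0 - B_b.
# All parameter values are arbitrary and purely illustrative.

def simulate_binding(alpha=0.5, beta=0.1, A=1.0, B0=100.0, dt=0.01, steps=20000):
    Bb = 0.0                      # start with no bound receptor
    for _ in range(steps):
        Bf = B0 - Bb              # free receptor left in the endplate pool
        Bb += dt * (alpha * A * Bf - beta * Bb)
    return Bb

def steady_state(alpha=0.5, beta=0.1, A=1.0, B0=100.0):
    # Fixed point: alpha * A * (B0 - Bb) = beta * Bb
    return alpha * A * B0 / (alpha * A + beta)

print(simulate_binding(), steady_state())
```

The integration settles onto the fixed point, where agrin-mediated capture of free receptors balances the unbinding of bound ones.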
S. R. H. JOSEPH, D. J. WILLSHAW
Equations (3) and (4) are similar to those in the original DCM, only now we have
been able to justify the dependence of the forward reaction rate on the size of the
terminal, B_{nmb}. We can also resolve the distinction paradox, as follows.
4.2
RESOLVING THE DISTINCTION PARADOX
In terms of distinguishing between synapses it seems plausible that concurrently
active synapses (i.e. those belonging to the same neuron) will protect themselves
from the negative effects of depolarisation. In paralysed systems, synapses will benefit from the AChR-accumulating effects of the agrin molecules in those synapses
nearby (i.e. those in the same terminal). It was suggested (Jennings, 1994) that
competition between synapses of the same terminal was seen after focal blockade
because active AChRs help stabilise the receptors around them and suppress those
further away. This fits in with the stabilisation role of Ca++ in this model and
the suppressive effects of depolarisation, as well as the physical range of these effects during 'heterosynaptic suppression' (Lo & Poo, 1991). It seems that Jennings'
mechanism, although originally speculative, is actually quite a plausible explanation and one that fits in well with the extended DCM. The critical effect in the
XDCM is that if the system is paralysed during development there is a change in
the dependency of the forward reaction rate on the size of an individual terminal.
This gives the reinnervating terminals a small initial advantage due to their more
competitive diameter/volume ratios. As we shall see in the next section, this allows
us to demonstrate activity independent competition.
5
SIMULATING THE EXTENDED DCM
In terms of achieving single innervation the extended DCM performs just as well
as the original, and when subjected to the same perturbation analysis it has been
demonstrated to be stable. Simulating a number of systems with as many muscle
fibres and motor neurons as found in real muscles allowed a direct comparison of
model findings with experimental data (figure 1) .
[Figure 1 plot: experimental data points and a simulation curve, plotted against days after birth]
Figure 1: Elimination of Polyinnervation in Rat soleus muscle and Simulation
Figure 2 shows nerve dominance histograms of reinnervation in both the rat lumbrical muscle and its extended DCM simulation. Both compare the results produced
when the system is paralysed from the outset of reinnervation (removal of the B_{nmb}^{2/3}
term from equation (4)) with the normal situation. Note that in both the simulation and the experiment the percentage of fibres singly innervated by the reinnervating sural nerve is increased in the paralysis case. Inactive sural nerve terminals
are displacing more inactive lateral plantar nerve terminals (activity independent
competition). They can achieve this because during paralysis the terminals with
the largest diameters capture more receptors, while the terminals with the largest
volumes lose more agrin; so small reinnervating terminals do a little better. However, if activity is present the receptors are captured in proportion to a terminal's
volume, so there's no advantage to a small terminal's larger diameter/volume ratio.
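The diameter/volume argument can be made concrete with a toy calculation (numbers illustrative, not from the published simulation): if the capture rate scales as B^p, the per-unit-size advantage of a small terminal over a large one is (B_small/B_large)^{p−1}, which is larger under the paralysed, diameter-dominated rule (p = 1/3) than under the activity-driven, area-dominated rule (p = 2/3).

```python
# Per-unit-size capture advantage of a small terminal over a large one
# when the capture rate scales as B**p. Terminal sizes are illustrative only.

def per_capita_rate(B, p):
    """Capture rate per unit of terminal size, for a rate proportional to B**p."""
    return B ** (p - 1.0)

def small_terminal_advantage(B_small, B_large, p):
    return per_capita_rate(B_small, p) / per_capita_rate(B_large, p)

# Paralysed regime: forward rate follows terminal diameter, B**(1/3).
paralysed = small_terminal_advantage(5.0, 20.0, 1.0 / 3.0)
# Active regime: forward rate follows terminal volume/area, B**(2/3).
active = small_terminal_advantage(5.0, 20.0, 2.0 / 3.0)

print(paralysed, active)  # the small terminal's relative edge is greater when paralysed
```

In both regimes the sublinear rate favours the smaller terminal per unit of size, but the effect is strongest in the paralysed case, which is why small reinnervating terminals "do a little better" there.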
[Figure 2 plots: nerve dominance histograms (experimental and simulation), with bars for fibres singly innervated by the lateral plantar nerve (Single LPN), multiply innervated (Multi), and singly innervated by the sural nerve (Single SN)]
Figure 2: Types of Innervation by Lateral Plantar and Sural Nerves
6
DISCUSSION
The extensions to the DCM outlined here demonstrate both activity dependent
and independent competition and provide greater biochemical plausibility. However this is still only a phenomenological demonstration and further experimental
work is required to ascertain its validity. There is a need for illumination concerning the specific chemical mechanisms that underlie agrin's aggregational effects
and the roles that both Ca++ and depolarisation play in junctional dynamics. An
important connection made here is one between synaptic efficiency and junctional
survival. Ca++ and NO have both been implicated in Hebbian mechanisms (Bliss
& Collingridge, 1993) and perhaps some of the principles uncovered here may be
applicable to neuroneuronic synapses. This work should be followed up with a direct
model of synaptic interaction at the NMJ that includes the presynaptic effects of
depolarisation, allowing the efficacy of the synapse to be related to its biochemistry;
an important step forward in our understanding of nervous system plasticity. Relating changes in synaptic efficiency to neural morphogenesis may also give insights
into the construction of artificial neural networks.
Acknowledgements
We are grateful to Michael Joseph and Bruce Graham for critical reading of the
manuscript and to the M.R.C. for funding this work.
References
Andreose J. S., Fumagalli G. & Lømo T. (1995) Number of junctional acetylcholine
receptors: control by neural and muscular influences in the rat. Journal of Physiology 483.2:397-406.
Axelrod D., Ravdin P., Koppel D. E., Schlessinger J., Webb W. W., Elson E. L. &
Podleski T. R. (1976) Lateral motion offluorescently labelled acetylcholine receptors
in membranes of developing muscle fibers. Proc. Natl. Acad. Sci. USA 73:4594-4598.
Balice-Gordon R. J. & Lichtman J. W. (1994) Long-term synapse loss induced by
focal blockade of postsynaptic receptors. Nature 372:519-524.
Bennett M. R. & Robinson J. (1989) Growth and elimination of nerve terminals
during polyneuronal innervation of muscle cells: a trophic hypothesis. Proc. Royal
Soc. Lond. [Biol] 235:299-320.
Bliss T. V. P. & Collingridge G. L. (1993) A synaptic model of memory: long-term
potentiation in the hippocampus. Nature 361:31-39.
Colman H. & Lichtman J. W. (1992) 'Cartellian' competition at the neuromuscular
junction. Trends in Neuroscience 15, 6:197-199.
Fladby T. & Jansen J. K. S. (1987) Postnatal loss of synaptic terminals in the
partially denervated mouse soleus muscle. Acta. Physiol. Scand 129:239-246.
Gouze J. L., Lasry J. M. & Changeux J.-P. (1983) Selective stabilization of muscle
innervation during development: A mathematical model. Biol Cybern. 46:207-215.
Jennings C. (1994) Death of a synapse. Nature 372:498-499.
Lo Y. J. & Poo M. M. (1991) Activity-dependent synapse competition in vitro:
heterosynaptic suppression of developing synapses. Science 254:1019-1022.
McMahan U. J. (1990) The Agrin Hypothesis. Cold Spring Harbour Symp. Quant.
Biol. 55:407-419.
Nitkin R. M., Smith M. A., Magill C., Fallon J. R., Yao Y. -M. M., Wallace B. G.
& McMahan U. J. (1987) Identification of agrin, a synaptic organising protein from
Torpedo electric organ. Journal Cell Biology 105:2471-2478.
Rasmussen C. E. & Willshaw D. J. (1993) Presynaptic and postsynatic competition
in models for the development of neuromuscular connections. B. Cyb. 68:409-419.
Ribchester R. R. (1993) Co-existence and elimination of convergent motor nerve
terminals in reinnervated and paralysed adult rat skeletal muscle. J. Phys. 466:
421-441.
Ribchester R. R. & Barry J. A. (1994) Spatial Versus Consumptive Competition at
Polyneuronally Innervated Neuromuscular Junctions. Exp. Physiology 79:465-494.
Wallace B. G. (1988) Regulation of agrin-induced acetylcholine receptor aggregation
by Ca++ and phorbol ester. Journal of Cell Biol. 107:267-278.
Willshaw D. J. (1981) The establishment and the subsequent elimination of polyneuronal innervation of developing muscle: theoretical considerations. Proc. Royal Soc.
Lond. B212: 233-252.
Geometry of Early Stopping in Linear
Networks
Robert Dodier *
Dept. of Computer Science
University of Colorado
Boulder, CO 80309
Abstract
A theory of early stopping as applied to linear models is presented.
The backpropagation learning algorithm is modeled as gradient
descent in continuous time. Given a training set and a validation
set, all weight vectors found by early stopping must lie on a certain quadric surface, usually an ellipsoid. Given a training set and
a candidate early stopping weight vector, all validation sets have
least-squares weights lying on a certain plane. This latter fact can
be exploited to estimate the probability of stopping at any given
point along the trajectory from the initial weight vector to the leastsquares weights derived from the training set, and to estimate the
probability that training goes on indefinitely. The prospects for
extending this theory to nonlinear models are discussed.
1
INTRODUCTION
'Early stopping' is the following training procedure:
Split the available data into a training set and a "validation" set.
Start with initial weights close to zero. Apply gradient descent
(backpropagation) on the training data. If the error on the validation set increases over time, stop training.
This training method, as applied to neural networks, is of relatively recent origin.
The earliest references include Morgan and Bourlard [4] and Weigend et al. [7].
*Address correspondence to: dodier@cs.colorado.edu
Finnoff et al. [2] studied early stopping empirically. While the goal of a theory of
early stopping is to analyze its application to nonlinear approximators such as sigmoidal networks, this paper will deal mainly with linear systems and only marginally
with nonlinear systems. Baldi and Chauvin [1] and Wang et al. [6] have also analyzed linear systems.
The main result of this paper can be summarized as follows. It can be shown
(see Sec. 5) that the most probable stopping point on a given trajectory (fixing
the training set and initial weights) is the same no matter what the size of the
validation set. That is, the most probable stopping point (considering all possible
validation sets) for a finite validation set is the same as for an infinite validation
set. (If the validation data is unlimited, then the validation error is the same as the
true generalization error.) However, for finite validation sets there is a dispersion
of stopping points around the best (most probable and least generalization error)
stopping point, and this increases the expected generalization error. See Figure 1
for an illustration of these ideas.
2
MATHEMATICAL PRELIMINARIES
In what follows, backpropagation will be modeled as a process in continuous time.
This corresponds to letting the learning rate approach zero. This continuum model
simplifies the necessary algebra while preserving the important properties of early
stopping. Let the inputs be denoted X = (x_ij), so that x_ij is the j'th component of
the i'th observation; there are p components of each of the n observations. Likewise,
let y = (y_i) be the (scalar) outputs observed when the inputs are X. Our regression
model will be a linear model, y_i = w′x_i + ε_i, i = 1, …, n. Here ε_i represents
independent, identically distributed (i.i.d.) Gaussian noise, ε_i ∼ N(0, σ²). Let
E(w) = ½‖Xw − y‖² be one-half the usual sum of squared errors.
The error gradient with respect to the weights is ∇E(w) = w′X′X − y′X. The
backprop algorithm is modeled as ẇ = −∇E(w). The least-squares solution, at
which ∇E(w) = 0, is w_LS = (X′X)⁻¹X′y. Note the appearance here of the
input correlation matrix, X′X = (Σ_{k=1}^{n} x_{ki} x_{kj}). The properties of this matrix
determine, to a large extent, the properties of the least-squares solutions we find. It
turns out that as the number of observations n increases without bound, the matrix
σ²(X′X)⁻¹ converges with probability one to the population covariance matrix of
the weights. We will find that the correlation matrix plays an important role in the
analysis of early stopping.
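The continuum model and its fixed point can be checked with a small numerical sketch (synthetic data, not the paper's experiments): discretised gradient descent on E(w) converges to the closed-form least-squares weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))                  # synthetic inputs
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)    # noisy linear outputs

# Closed-form least-squares weights, where grad E(w) = X'X w - X'y = 0.
w_ls = np.linalg.solve(X.T @ X, X.T @ y)

# Small-step gradient descent approximating the continuum model w' = -grad E(w).
w = np.zeros(p)
eta = 1e-3
for _ in range(20000):
    w -= eta * (X.T @ (X @ w - y))

print(np.allclose(w, w_ls, atol=1e-6))
```

A small learning rate stands in for the zero-learning-rate limit of the continuum model; the two solutions agree to numerical precision.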
We can rewrite the error E using a diagonalization of the correlation matrix X′X = SΛS′. Omitting a few steps of algebra,

E(w) = ½ Σ_{k=1}^{p} λ_k v_k² + ½ y′(y − X w_LS)   (1)

where v = S′(w − w_LS) and Λ = diag(λ_1, …, λ_p). In this sum we see that the magnitude of the k'th term is proportional to the corresponding characteristic value,
so moving w toward w_LS in the direction corresponding to the largest characteristic value yields the greatest reduction of error. Likewise, moving in the direction
corresponding to the smallest characteristic value gives the least reduction of error.
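The decomposition of Eq. 1 is easy to verify numerically on synthetic data (a sanity check, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 4
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

M = X.T @ X
lam, S = np.linalg.eigh(M)                 # X'X = S diag(lam) S'
w_ls = np.linalg.solve(M, X.T @ y)

def E(w):
    r = X @ w - y
    return 0.5 * r @ r                     # one-half sum of squared errors

w = rng.normal(size=p)                     # an arbitrary weight vector
v = S.T @ (w - w_ls)                       # coordinates in the eigenbasis
rhs = 0.5 * np.sum(lam * v**2) + 0.5 * y @ (y - X @ w_ls)
print(np.isclose(E(w), rhs))
```

The second term is the irreducible residual error at w_LS; only the first term depends on w, through the eigen-coordinates v.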
So far, we have implicitly considered only one set of data; we have assumed all data
is used for training. Now let us distinguish training data, X t and Yt, from validation
data, Xv and Yv ; there are nt training and nv validation data. Now each set of
data has its own least-squares weight vector, Wt and Wv , and its own error gradient,
∇Et(w) and ∇Ev(w). Also define Mt = X′tXt and Mv = X′vXv for convenience.
The early stopping method can be analyzed in terms of the these pairs of matrices,
gradients, and least-squares weight vectors.
3
THE MAGIC ELLIPSOID
Consider the early stopping criterion, dEv/dt(w) = 0. Applying the chain rule,

dEv/dt = ∇Ev · dw/dt = −∇Ev · ∇Et,   (2)
where the last equality follows from the definition of gradient descent. So the early
stopping criterion is the same as saying
∇Et · ∇Ev = 0,   (3)
that is, at an early stopping point, the training and validation error gradients are
perpendicular, if they are not zero.
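A minimal sketch of the whole procedure on synthetic data (illustrative only): descend on the training error and stop as soon as ∇Et · ∇Ev becomes non-positive, i.e. as soon as the validation error stops decreasing.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 3
w_star = np.array([1.0, -1.0, 2.0])        # "true" weights, arbitrary

def make_set(n):
    X = rng.normal(size=(n, p))
    return X, X @ w_star + 0.5 * rng.normal(size=n)

Xt, yt = make_set(20)    # training set
Xv, yv = make_set(20)    # validation set

def grad(X, y, w):
    return X.T @ (X @ w - y)

w = np.zeros(p)
eta = 1e-3
w_stop = None
for _ in range(200000):
    gt, gv = grad(Xt, yt, w), grad(Xv, yv, w)
    if gt @ gv <= 0:     # dEv/dt = -grad(Et).grad(Ev) >= 0: validation error stops falling
        w_stop = w.copy()
        break
    w -= eta * gt

wt_ls = np.linalg.solve(Xt.T @ Xt, Xt.T @ yt)
print(w_stop if w_stop is not None else wt_ls)
```

Depending on the draw of validation data, the loop either stops somewhere on the trajectory (possibly immediately, the "don't start" case) or runs all the way to the training least-squares weights (the "never stopping" case discussed below).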
Consider now the set of all points in the weight space such that the training and
validation error gradients are perpendicular. These are the points at which early
stopping may stop. It turns out that this set of points has an easily described shape.
The condition given by Eq. 3 is equivalent to
(w − wt)′MtMv(w − wv) = 0.   (4)
Note that all correlation matrices are symmetric, so M′tMv = MtMv. We see that
Eq. 4 gives a quadratic form. Let us put Eq. 4 into a standard form. Toward this
end, let us define some useful terms. Let
M = MtMv,   (5)
M̄ = ½(M + M′) = ½(MtMv + MvMt),   (6)
w̄ = ½(wt + wv),   (7)
Δw = wt − wv,   (8)

and

ŵ = w̄ − ¼M̄⁻¹(M − M′)Δw.   (9)
Now an important result can be stated. The proof is omitted.
Proposition 1. ∇Et · ∇Ev = 0 is equivalent to

(w − ŵ)′M̄(w − ŵ) = ¼Δw′[M̄ + ¼(M′ − M)M̄⁻¹(M − M′)]Δw. □   (10)
The matrix M̄ of the quadratic form given by Eq. 10 is "usually" positive definite.
As the number of observations nt and nv of training and validation data increase
without bound, M̄ converges to a positive definite matrix. In what follows it will
always be assumed that M̄ is indeed positive definite. Given this, the locus defined
by ∇Et ⊥ ∇Ev is an ellipsoid. The centroid is ŵ, the orientation is determined by
the characteristic vectors of M̄, and the length of the k'th semiaxis is √(c/λ_k), where
c is the constant on the right-hand side of Eq. 10 and λ_k is the k'th characteristic
value of M̄.
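The completed-square identity behind Eq. 10 can be checked numerically: for any w, (w − wt)′M(w − wv) = (w − ŵ)′M̄(w − ŵ) − c, with c the constant of Eq. 10, so the surface ∇Et ⊥ ∇Ev is exactly the level set where the two sides balance. A quick sketch with random matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
p = 3

def random_corr(p):
    A = rng.normal(size=(p, p))
    return A @ A.T                      # symmetric positive semidefinite

Mt, Mv = random_corr(p), random_corr(p)
wt, wv = rng.normal(size=p), rng.normal(size=p)

M = Mt @ Mv
Mbar = 0.5 * (M + M.T)                  # symmetric part of M
dw = wt - wv
wbar = 0.5 * (wt + wv)
what = wbar - 0.25 * np.linalg.solve(Mbar, (M - M.T) @ dw)
inner = Mbar + 0.25 * (M.T - M) @ np.linalg.inv(Mbar) @ (M - M.T)
c = 0.25 * dw @ inner @ dw

w = rng.normal(size=p)                  # any point in weight space
lhs = (w - wt) @ M @ (w - wv)           # the quadratic form of Eq. 4
rhs = (w - what) @ Mbar @ (w - what) - c
print(np.isclose(lhs, rhs))
```

Setting lhs = 0 then recovers Eq. 10 directly.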
4
THE MAGIC PLANE
Given the least-squares weight vector Wt derived from the training data and a
candidate early stopping weight vector Wes, any least-squares weight vector Wv
from a validation set must lie on a certain plane, the 'magic plane.' The proof of
this statement is omitted.
Proposition 2. The condition that wt, wv, and wes all lie on the magic ellipsoid,

(wt − ŵ)′M̄(wt − ŵ) = (wv − ŵ)′M̄(wv − ŵ) = (wes − ŵ)′M̄(wes − ŵ) = c,   (11)

implies

(wt − wes)′M̄wv = (wt − wes)′M̄wes. □   (12)

This shows that wv lies on a plane, the magic plane, with normal M̄(wt − wes).
The reader will note a certain difficulty here, namely that M = MtM v depends on
the particular validation set used, as does W v. However, we can make progress by
considering only a fixed correlation matrix Mv and letting W v vary. Let us suppose
the inputs (x_1, x_2, …, x_p) are i.i.d. Gaussian random variables with mean zero and
some covariance Σ. (Here the inputs are random but they are observed exactly, so
the error model y = w′x + ε still applies.) Then

⟨Mv⟩ = ⟨X′vXv⟩ = nvΣ,

so in Eq. 12 let us replace Mv with its expected value nvΣ. That is, we can
approximate Eq. 12 with
(13)
Now consider the probability that a particular point w(t) on the trajectory from
w(0) to wt is an early stopping point, that is, ∇Et(w(t)) · ∇Ev(w(t)) = 0. This is
exactly the probability that Eq. 12 is satisfied, and approximately the probability
that Eq. 13 is satisfied. This latter approximation is easy to calculate: it is the
mass of an infinitesimally-thin slab cutting through the distribution of least-squares
validation weight vectors. Given the usual additive noise model y = w′x + ε with ε
being i.i.d. Gaussian distributed noise with mean zero and variance σ², the least-squares weights are approximately distributed as

wv ∼ N(w⋆, σ²(X′vXv)⁻¹)   (14)

when the number of data is large.
Consider now the plane Π = {w : w′n̂ = k}. The probability mass on this plane as
it cuts through a Gaussian distribution N(μ, C) is then

p_Π(k, n̂) = (2π n̂′Cn̂)^{−1/2} exp(−½ (k − n̂′μ)²/(n̂′Cn̂)) ds   (15)
where ds denotes an infinitesimal arc length. (See, for example, Sec. VIII-9.3 of
von Mises [3].)
Figure 1: Histogram of early stopping points along a trajectory, with bins of equal
arc length. An approximation to the probability of stopping (Eq. 16) is superimposed. Altogether 1000 validation sets were generated for a certain training set; of
these, 288 gave "don't start" solutions, 701 gave early stopping solutions (which are
binned here) somewhere on the trajectory, and 11 gave "don't stop" solutions.
5
PROBABILITY OF STOPPING AT A GIVEN POINT
Let us apply Eq. 15 to the problem at hand. Our normal is n̂ = nvΣMt(wt − wes)
and the offset is k = n̂′wes. A formal statement of the approximation of p_Π can
now be made.
Proposition 3. Assuming the validation correlation matrix X′vXv equals the mean
correlation matrix nvΣ, the probability of stopping at a point wes = w(t) on the
trajectory from w(0) to wt is approximately
with
(17)
How useful is this approximation? Simulations were carried out in which the initial
weight vector w(O) and the training data (nt = 20) were fixed, and many validation
sets of size nv = 20 were generated (without fixing X′vXv). The trajectory was
divided into segments of equal length and histograms of the number of early stopping
weights on each segment were constructed. A typical example is shown in Figure 1.
It can be seen that the empirical histogram is well-approximated by Eq. 16.
If for some w(t) on the trajectory the magic plane cuts through the true weights
w⋆, then p_Π will have a peak at t. As the number of validation data nv increases,
the variance of Wv decreases and the peak narrows, but the position w(t) of the
peak does not move. As nv → ∞ the peak becomes a spike at w(t). That is, the
peak of Po for a finite validation set is the same as if we had access to the true
generalization error. In this sense, early stopping does the right thing.
It has been observed that when early stopping is employed, the validation error
may decrease forever and never rise - thus the 'early stopping' procedure yields the
least-squares weights. How common is this phenomenon? Let us consider a fixed
training set and a fixed initial weight vector, so that the trajectory is fixed. Letting
the validation set range over all possible realizations, let us denote by P_Π(t) =
P_Π(k(t), n̂(t)) the probability that training stops at time t or later. 1 − P_Π(0) is the
probability that validation error rises immediately upon beginning training, and let
us agree that P_Π(∞) denotes the probability that validation error never increases.
This P_Π(t) is approximately the mass that is "behind" the plane n̂′wv = n̂′wes,
"behind" meaning the points wv such that (wv − wes)′n̂ < 0. (The identification
of P_Π with the mass to one side of the plane is not exact because intersections of
magic planes are ignored.) As Eq. 15 has the form of a Gaussian p.d.f., it is easy
to show that

P_Π(k, n̂) = G((k − n̂′μ)/(n̂′Cn̂)^{1/2}),   (18)

where G denotes the standard Gaussian c.d.f., G(z) = (2π)^{−1/2} ∫_{−∞}^{z} exp(−t²/2) dt.
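Eq. 18 is the standard fact that a linear functional of a Gaussian vector is itself Gaussian; a quick Monte Carlo check with arbitrary μ, C, and n̂ (all values illustrative):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
p = 3
mu = rng.normal(size=p)              # arbitrary Gaussian mean
A = rng.normal(size=(p, p))
C = A @ A.T + np.eye(p)              # arbitrary positive-definite covariance
nhat = rng.normal(size=p)            # plane normal
k = 0.7                              # plane offset: {w : nhat'w = k}

def G(z):                            # standard Gaussian cdf
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Eq. 18: mass of N(mu, C) on the side {w : nhat'w < k}.
analytic = G((k - nhat @ mu) / np.sqrt(nhat @ C @ nhat))

samples = rng.multivariate_normal(mu, C, size=200_000)
empirical = np.mean(samples @ nhat < k)
print(analytic, empirical)
```

Since n̂′w ∼ N(n̂′μ, n̂′Cn̂), the mass on one side of the plane reduces to a one-dimensional Gaussian tail, which the sampled estimate reproduces.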
Recall that we take the normal n̂ of the magic plane through wes as n̂ = ΣMt(wt − wes). For t = 0 there is no problem with Eq. 18 and an approximation for the
"never-starting" probability is stated in the next proposition.
Proposition 4. The probability that validation error increases immediately upon
beginning training ("never starting"), assuming the validation correlation matrix
X′vXv equals the mean correlation matrix nvΣ, is approximately

1 − P_Π(0) = 1 − G( (√nv/σ) (w⋆ − w(0))′ΣMt(wt − w(0)) / [(wt − w(0))′MtΣMt(wt − w(0))]^{1/2} ). □   (19)
With similar arguments we can develop an approximation to the "never-stopping"
probability.
Proposition 5. The probability that training continues indefinitely ("never stopping"), assuming the validation correlation matrix X′vXv equals the mean correlation matrix nvΣ, is approximately

P_Π(∞) = G( (√nv/σ) (w⋆ − wt)′MtΣ(±s⋆) / [(s⋆)′Σs⋆]^{1/2} ).   (20)

In Eq. 20 pick +s⋆ if (wt − w(0))′s⋆ > 0, otherwise pick −s⋆. □
Simulations are in good agreement with the estimates given by Propositions 4 and
5.
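The "never starting" probability of Proposition 4 can also be estimated directly by Monte Carlo (synthetic data, illustrative only): by Eq. 2, validation error rises immediately iff ∇Et · ∇Ev < 0 at w(0), so we sample validation sets and count how often that occurs.

```python
import numpy as np

rng = np.random.default_rng(4)
p, nt, nv = 3, 20, 20
w_star = np.array([1.0, -1.0, 0.5])  # "true" weights, arbitrary
sigma = 0.5

Xt = rng.normal(size=(nt, p))
yt = Xt @ w_star + sigma * rng.normal(size=nt)
w0 = np.zeros(p)
gt0 = Xt.T @ (Xt @ w0 - yt)          # training gradient at the initial weights

# Validation error rises immediately iff dEv/dt = -grad(Et).grad(Ev) > 0
# at w(0), i.e. iff grad(Et).grad(Ev) < 0 there.
trials, never_started = 5000, 0
for _ in range(trials):
    Xv = rng.normal(size=(nv, p))
    yv = Xv @ w_star + sigma * rng.normal(size=nv)
    gv0 = Xv.T @ (Xv @ w0 - yv)
    if gt0 @ gv0 < 0:
        never_started += 1

print(never_started / trials)        # empirical "don't start" probability
```

With the initial weights at the origin and the true weights well away from it, both gradients at w(0) usually point the same way, so the empirical "don't start" rate is small here; moving w(0) or shrinking nv inflates it.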
6
EXTENDING THE THEORY TO NONLINEAR
SYSTEMS
It may be possible to extend the theory presented in this paper to nonlinear approximators. The elementary concepts carryover unchanged, although it will be more
difficult to describe them algebraically. In a nonlinear early stopping problem, there
will be a surface corresponding to the magic ellipsoid on which 'VEt ...L 'V E v , but
this surface may be nonconvex or not simply connected. Likewise, corresponding
to the magic plane there will be a surface on which least-squares validation weights
must fall, but this surface need not be flat or unbounded.
It is customary in the world of statistics to apply results derived for linear systems
to nonlinear systems by assuming the number of data is very large and various
regularity conditions hold. If the errors ?. are additive, the least-squares weights
again have a Gaussian distribution. As in the linear case, the Hessian of the total
error appears as the inverse of the covariance of the least-squares weights. In this
asymptotic (large data) regime, the standard results for linear regression carryover
to nonlinear regression mostly unchanged. This suggests that the linear theory of
early stopping will also apply to nonlinear regression models, such as sigmoidal
networks, when there is much data.
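As a concrete reminder of the procedure the theory analyzes, here is a minimal early-stopping sketch for a linear regression. All data sizes, the noise level, and the learning rate are invented for illustration: gradient descent on the training error is halted the first time the validation error increases, which is the stopping event whose probabilities Propositions 4 and 5 approximate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression: y = X w* + noise, with separate training and
# validation sets (sizes n_t and n_v are arbitrary illustrative choices).
n_t, n_v, d = 80, 40, 5
w_true = rng.normal(size=d)
X_t, X_v = rng.normal(size=(n_t, d)), rng.normal(size=(n_v, d))
y_t = X_t @ w_true + 0.5 * rng.normal(size=n_t)
y_v = X_v @ w_true + 0.5 * rng.normal(size=n_v)

def mse(X, y, w):
    return np.mean((X @ w - y) ** 2)

# Gradient descent from w(0) = 0, stopping the first time the
# validation error rises above its best value so far.
w = np.zeros(d)
lr = 0.01
best_w, best_err = w.copy(), mse(X_v, y_v, w)
for step in range(10_000):
    w -= lr * (2.0 / n_t) * (X_t.T @ (X_t @ w - y_t))
    err = mse(X_v, y_v, w)
    if err > best_err:          # validation error increased: stop early
        break
    best_w, best_err = w.copy(), err

# The early-stopped weights do at least as well on validation as w(0).
print(best_err <= mse(X_v, y_v, np.zeros(d)))   # → True
```

A run that breaks on the very first step corresponds to "never starting"; a run that exhausts the loop without breaking corresponds to "never stopping."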
However, it should be noted that the asymptotic regression theory is purely local
- it describes only what happens in the neighborhood of the least-squares weights.
As the outcome of early stopping depends upon the initial weights and the trajectory taken through the weight space, any local theory will not suffice to analyze
early stopping. Nonlinear effects such as local minima and non-quadratic basins
cannot be accounted for by a linear or asymptotically linear theory, and these may
play important roles in nonlinear regression problems. This may invalidate direct
extrapolations of linear results to nonlinear networks, such as that given by Wang
and Venkatesh [5].
7 ACKNOWLEDGMENTS
This research was supported by NSF Presidential Young Investigator award IRI-9058450 and grant 90-21 from the James S. McDonnell Foundation to Michael C. Mozer.
References
[1] Baldi, P., and Y. Chauvin. "Temporal Evolution of Generalization during Learning in Linear Networks," Neural Computation 3, 589-603 (Winter 1991).
[2] Finnoff, W., F. Hergert, and H. G. Zimmermann. "Extended Regularization
Methods for Nonconvergent Model Selection," in Advances in NIPS 5, S. Hanson, J. Cowan, and C. L. Giles, eds., pp 228-235. San Mateo, CA: Morgan
Kaufmann Publishers. 1993.
[3] von Mises, R. Mathematical Theory of Probability and Statistics. New York:
Academic Press. 1964.
[4] Morgan, N., and H. Bourlard. "Generalization and Parameter Estimation in
Feedforward Nets: Some Experiments," in Advances in NIPS 2, D. Touretzky,
ed., pp 630-637. San Mateo, CA: Morgan Kaufmann. 1990.
[5] Wang, C., and S. Venkatesh. "Temporal Dynamics of Generalization in Neural
Networks," in Advances in NIPS 7, G. Tesauro, D. Touretzky, and T. Leen, eds.
pp 263-270. Cambridge, MA: MIT Press. 1995.
[6] Wang, C., S. Venkatesh, J. S. Judd. "Optimal Stopping and Effective Machine
Complexity in Learning," in Advances in NIPS 6, J. Cowan, G. Tesauro, and J.
Alspector, eds., pp 303-310. San Francisco: Morgan Kaufmann. 1994.
[7] Weigend, A., B. Huberman, and D. Rumelhart. "Predicting the Future: A Connectionist Approach," Int'l J. Neural Systems 1, 193-209 (1990).
[The title and author names of this paper are illegible in the source scan. The author affiliations are the Department of Computer Science, University of Ottawa, Ottawa, Ont., Canada, and the Department of Physics, University of Ottawa, Ottawa, Ont., Canada.]
Abstract

We present a statistical method that exactly learns the class of constant depth μ-perceptron networks with weights taken from {-1, +1} and arbitrary thresholds when the distribution that generates the input examples is a member of the family of product distributions. These networks, also known as nonoverlapping perceptron networks or read-once formulas over a weighted threshold basis, are loop-free neural nets in which each node has only one outgoing weight. With arbitrarily high probability, the learner is able to exactly identify the connectivity (or skeleton) of the target μ-perceptron network by using a new statistical test which exploits the strong unimodality property of sums of independent random variables.
1 INTRODUCTION

From a computational learning theory perspective, it is well known that efficient learning of nontrivial neural network function classes is possible only when either the learner is able to use membership queries or the distribution that generates the input examples is not arbitrary but a member of some well-defined family.

[The remainder of this paper, from the introduction through the references, is illegible in the source scan; no further text could be recovered.]
? ? ? v??@? ? }? ??? o?l?t???o?Du?Q? q
?l? ?u???lov/?@v ??? ? ov??2?????????c?? ??? ?????????? ? ?4???L?h? ? ? ? ? ? ?
? ? ? ? ????? ? ? ? ?$??????L? ?? ? ? ?z??? ? ? ? ????6o$??Z? u$?Q???o????
?bo??? ???}?ewo? ? } ???b~?}c???y
????b?\?
? ???no? } u ?)? ???u$~?? ? ?? ? ?)? ? ??? u v ?o?c??? ? ?+? ? ? k?8??? ? ? ? v'?h}?u ? v ?l????? ? }0.?? } ? ? ~ o ?K? }???
??
????o ~ s??Z?l????~ ?
?t? ???lov ?p?oVu???}u~Q??v ?O? ? ? ? ???$? ? ? ? ? o. u)?c?o.??
? o~??? ? v ? ???}'? v
?
? ?
?
? ? ???$??? ???$??? ? ? ? ? ? ? ?e????c? ? ? ? ? ? ? ? ? ? ??c? ? ?? ? ? ? ?? ? wtuv
}~?
ov?? } }
?
?/u? } o ? ??"
? ! o ~ ?u$$
v # ? ? ?
? u$?$
? %'& ? ? ? )? (} ? ? ? ? k ? +? *-? , ?? ??o ?yV?b?/
? . ? ?? 1? 0 2 ? ? 3 ? ? ? ? ? ? ??
?
7 ? ? ?
?
?
?
8
?
?
h
?
]
?
?
?
?
=
+
<
>
?
B
@
A
?
?
o
~
?
D
?
C
2
v
?
F
E
}
~
?
?
3
?
?
?
G
\
%
~
}
?
?
?
I
6
H
?
J
?? >LK-M
465
4:9 ;
? ?
? W ?Q? X?? oZ? YQ[\? ? ? ?
NO ? ? Q
u P} ~ ?S
? R ?? ? ?? ?O?-? ?? ? ? ? ? ? n+?? ? } ? R o ? ~D??o ???} ~ vt+o TVU4o ? vt?? ? ? ? L
y O * sO ?^? ]L* ? L_
\ o q L
` > m ? uv? o ? ? R/a ? ? n?oq
}?u ? y ? ? ?'u8
~ b-c ? v ? d
? 'edgf?k??? >+h ? ? ?h} u ~ v ? ? ? /
? ? i o ? ] ? jo E}$~ e? k ?2? ?? ? ??v ? ????}~? ^} l
??~ on
? mZ}? ? o ~ ??b? ??o? '
? p ?t}? ? }?? ?
? r=? ?2? ? 3 ?c?8? ? Q? E8o?
A2?
? o u8?@? q }???u ? ??? }c? ? } ? -? c ? q
y )A-t= O??
?t? s
? ? ? ? ??u????@o /u ??w? v f c+? xjA h ? ? ? vQ? (2}4?o?Q?o??????? o v og
? yv?
S
s z ? u ? ?2? ? ??~?? ?t? ??? ov2? ? s ? {s}L~-I? ? ?J??I?
? ?^???w?????V?"?+? ? E$o ?? ?? ??? ?? @ ? AQAt? ?????^ ? ? #?} ? ? ?ov=? +? ? n } ???2} ~???F? ? f +? ? ? w o |
? ? ? }O~ }c? ? ???? ?
o?
?
? o ? k ? ? ? ? ??
? ?l??? ~ }??} ? v ? ?
?c? ? ? ??? ' ? ? ??? ?
? ?
?
Eo ? A?A ? O AI? * ?
?
??
?
m #:? ? ???? ? ? ? ? ? ?? ? ? ? ? ? v ?b? ???Z? a ? \ uq???u8v???e? n M ? -? Q? +?8-? M ? ? ? vF? ?}+q }u~?v u ? ? q???? ? o? ?zooq j} ?
?A V
y
?
?
?
?
?
?
?
?
?
? ? ?"? ??2?L?h? ? ? ? ? 2 ?
? ? ?2???? ? ?$?
??o~?? ? q
u} ??????c}???? ?
?? ? ?2?F? ? ?????c? ??
?
?
?
?
?e??L? ??
??? Q? xZ ???? m} ?$? o~?? ? ? ! ?\??}???
j??$m | ? ? ??v?Z?V?? k Q? ?QO ? ? ?
? ? }v"
? m?o ? ?c } :???Qo ??N} . 9 v ? ? }u ~ v ? v ?B? . +o ? w??u8??? ' ????? ? ?$uq p ? } ~??}c? ? ? ? ? -? ?
?
Z?
?
W ?
?
?
?
?
?
?2?? ? ?
? ? ?? ?
?? ? ?
? ?2V
? ?"\? ? ? ? ? ? ? ?
? ? ? ? ? ? ? ? ? ? ? ? ? ??q
? ?D?z? ? ? ? ?$?4???? ? L
??? ? ? ? ?6? O ??]
w b ? u? ? ? ~ } ?+?l??? ?? > ? ? ? ?h} ? ~v2? v2???t~ o ? u ??
? ? ????e? b?.?} ? ? ? o v ?} ? o~?? ??? ? o ? ?~o? ? b ? ? ? ????~ ?
????
? m@B
? ?lo v? ? ? ? ? ? ' ? ? 3 ? ? ? ? 0 ?? Eo q s
?
>t
? > ?)?+?y2w ? ? ? u??}co??2?
? ?? ? o ~ ?uv/|?u L
? ? uv6y
? ?m \ u q
? k v ? ? ? n y ??kc
? > ? ???
? ? ? } o???+o?6???}Z??} ? ? v ? ? q
} ? ? s
? ? ^?S? ? ? ?|??? \ o?? ? ? ? ^O?> @ Q)> ?
?
Maps in Hand-Written Digit Recognition
Yoonsuck Choe, Joseph Sirosh, and Risto Miikkulainen
Department of Computer Sciences
The University of Texas at Austin
Austin, TX 78712
yschoe,sirosh,risto@cs.utexas.edu
Abstract
An application of laterally interconnected self-organizing maps
(LISSOM) to handwritten digit recognition is presented. The lateral connections learn the correlations of activity between units on
the map. The resulting excitatory connections focus the activity
into local patches and the inhibitory connections decorrelate redundant activity on the map. The map thus forms internal representations that are easy to recognize with e.g. a perceptron network. The
recognition rate on a subset of NIST database 3 is 4.0% higher with
LISSOM than with a regular Self-Organizing Map (SOM) as the
front end, and 15.8% higher than recognition of raw input bitmaps
directly. These results form a promising starting point for building
pattern recognition systems with a LISSOM map as a front end.
1 Introduction
Hand-written digit recognition has become one of the touchstone problems in neural
networks recently. Large databases of training examples such as the NIST (National
Institute of Standards and Technology) Special Database 3 have become available,
and real-world applications with clear practical value, such as recognizing zip codes
in letters, have emerged. Diverse architectures with varying learning rules have
been proposed, including feed-forward networks (Denker et al. 1989; le Cun et al.
1990; Martin and Pittman 1990), self-organizing maps (Allinson et al. 1994), and
dedicated approaches such as the neocognitron (Fukushima and Wake 1990) .
The problem is difficult because handwriting varies a lot, some digits are easily
confusable, and recognition must be based on small but crucial differences. For example, the digits 3 and 8, 4 and 9, and 1 and 7 have several overlapping segments,
and the differences are often lost in the noise. Thus, hand-written digit recognition can be seen as a process of identifying the distinct features and producing an
internal representation where the significant differences are magnified, making the
recognition easier.
In this paper, the Laterally Interconnected Synergetically Self-Organizing Map architecture (LISSOM; Sirosh and Miikkulainen 1994, 1995, 1996) was employed to
form such a separable representation. The lateral inhibitory connections of the LISSOM map decorrelate features in the input, retaining only those differences that are
the most significant . Using LISSOM as a front end, the actual recognition can be
performed by any standard neural network architecture, such as the perceptron.
The experiments showed that while direct recognition of the digit bitmaps with a
simple perceptron network is successful 72.3% of the time, and recognizing them
using a standard self-organizing map (SOM) as the front end 84.1% of the time,
the recognition rate is 88.1 % based on the LISSOM network . These results suggest
that LISSOM can serve as an effective front end for real-world handwritten character
recognition systems.
2 The Recognition System
2.1 Overall architecture
The system consists of two networks: a 20 x 20 LISSOM map performs the feature
analysis and decorrelation of the input, and a single layer of 10 perceptrons the final
recognition (Figure 1 (a)) . The input digit is represented as a bitmap on the 32 x 32
input layer. Each LISSOM unit is fully connected to the input layer through the afferent connections, and to the other units in the map through lateral excitatory and
inhibitory connections (Figure 1 (b)). The excitatory connections are short range,
connecting only to the closest neighbors of the unit, but the inhibitory connections
cover the whole map. The perceptron layer consists of 10 units, corresponding to
digits 0 to 9. The perceptrons are fully connected to the LISSOM map, receiving the full activation pattern on the map as their input . The perceptron weights
are learned through the delta rule, and the LISSOM afferent and lateral weights
through Hebbian learning.
2.2 LISSOM Activity Generation and Weight Adaptation
The afferent and lateral weights in LISSOM are learned through Hebbian adaptation. A bitmap image is presented to the input layer , and the initial activity of
the map is calculated as the weighted sum of the input. For unit (i, j), the initial response \eta_{ij} is

    \eta_{ij} = \sigma\Big( \sum_{a,b} \xi_{ab}\,\mu_{ij,ab} \Big),    (1)
where \xi_{ab} is the activation of input unit (a, b), \mu_{ij,ab} is the afferent weight connecting
input unit (a, b) to map unit (i, j), and \sigma is a piecewise linear approximation of
the sigmoid activation function. The activity is then settled through the lateral
connections. Each new activity \eta_{ij}(t) at step t depends on the afferent activation
and the lateral excitation and inhibition:
    \eta_{ij}(t) = \sigma\Big( \sum_{a,b} \xi_{ab}\,\mu_{ij,ab} + \gamma_e \sum_{k,l} E_{ij,kl}\,\eta_{kl}(t-1) - \gamma_i \sum_{k,l} I_{ij,kl}\,\eta_{kl}(t-1) \Big),    (2)
where E_{ij,kl} and I_{ij,kl} are the excitatory and inhibitory connection weights from
map unit (k, l) to (i, j), \eta_{kl}(t-1) is the activation of unit (k, l) during the
previous time step, and the constants \gamma_e and \gamma_i control the relative strength of the
lateral excitation and inhibition.
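As a concrete illustration, the settling dynamics of Equations 1 and 2 can be sketched in NumPy. The array shapes, the piecewise-linear thresholds, the gamma values, and the step count below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def settle(xi, mu, E, I, gamma_e=0.9, gamma_i=0.9, steps=10):
    """Compute the settled LISSOM activity for one input (Eqs. 1-2).

    xi   : 2-D input bitmap, e.g. shape (32, 32)
    mu   : afferent weights, shape map_shape + xi.shape
    E, I : excitatory / inhibitory lateral weights, shape map_shape + map_shape
    """
    def sigma(s, lo=0.1, hi=1.0):
        # Piecewise linear approximation of the sigmoid (illustrative thresholds).
        return np.clip((s - lo) / (hi - lo), 0.0, 1.0)

    # Eq. 1: initial response from the afferent input alone.
    aff = np.einsum('ijab,ab->ij', mu, xi)
    eta = sigma(aff)
    # Eq. 2: settle through lateral excitation and inhibition.
    for _ in range(steps):
        exc = np.einsum('ijkl,kl->ij', E, eta)
        inh = np.einsum('ijkl,kl->ij', I, eta)
        eta = sigma(aff + gamma_e * exc - gamma_i * inh)
    return eta
```

Because the sigmoid approximation clips its output, every settled activity lies in [0, 1] regardless of the lateral input magnitudes.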
After the activity has settled, the afferent and lateral weights are modified according
to the Hebb rule. Afferent weights are normalized so that the length of the weight
[Figure 1 diagram: (a) the 32x32 input layer feeding a 20x20 LISSOM map, which feeds the 10-unit output layer; (b) a map unit (i, j) with its short-range excitatory and longer-range inhibitory lateral connections.]
Figure 1: The system architecture. (a) The input layer is activated according to the
bitmap image of digit 6. The activation propagates through the afferent connections to
the LISSOM map, and settles through its lateral connections into a stable pattern. This
pattern is the internal representation of the input that is then recognized by the perceptron
layer. Through the connections from LISSOM to the perceptrons, the unit representing 6
is strongly activated, with weak activations on other units such as 3 and 8. (b) The lateral
connections to unit (i, j), indicated by the dark square, are shown. The neighborhood
of excitatory connections (lightly shaded) is elevated from the map for a clearer view.
The units in the excitatory region also have inhibitory lateral connections (indicated by
medium shading) to the center unit. The excitatory radius is 1 and the inhibitory radius
3 in this case.
vector remains the same; lateral weights are normalized to keep the sum of weights
constant (Sirosh and Miikkulainen 1994):
    \mu_{ij,mn}(t+1) = \frac{ \mu_{ij,mn}(t) + \alpha_{inp}\,\eta_{ij}\,\xi_{mn} }{ \sqrt{ \sum_{mn} \big[ \mu_{ij,mn}(t) + \alpha_{inp}\,\eta_{ij}\,\xi_{mn} \big]^2 } },    (3)

    w_{ij,kl}(t+1) = \frac{ w_{ij,kl}(t) + \alpha\,\eta_{ij}\,\eta_{kl} }{ \sum_{kl} \big[ w_{ij,kl}(t) + \alpha\,\eta_{ij}\,\eta_{kl} \big] },    (4)
where \mu_{ij,mn} is the afferent weight from input unit (m, n) to map unit (i, j), and
\alpha_{inp} is the input learning rate; w_{ij,kl} is the lateral weight (either excitatory E_{ij,kl}
or inhibitory I_{ij,kl}) from map unit (k, l) to (i, j), and \alpha is the lateral learning rate
(either \alpha_{exc} or \alpha_{inh}).
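A minimal sketch of the Hebbian updates of Equations 3 and 4, with illustrative learning rates and one lateral weight set at a time (the same update applies separately to the excitatory and inhibitory weights); here the constant lateral weight sum is taken to be 1:

```python
import numpy as np

def hebbian_update(eta, xi, mu, w, alpha_inp=0.01, alpha=0.01):
    """One Hebbian adaptation step for the LISSOM weights (Eqs. 3-4).

    eta : settled map activity, shape (H, W)
    xi  : input bitmap, shape (R, C)
    mu  : afferent weights, shape (H, W, R, C); renormalized so each
          unit's afferent weight vector keeps unit Euclidean length
    w   : one set of lateral weights, shape (H, W, H, W); renormalized
          so each unit's lateral weights keep a constant (unit) sum
    """
    # Eq. 3: Hebbian increment, then divisive length normalization.
    mu_new = mu + alpha_inp * eta[:, :, None, None] * xi[None, None, :, :]
    mu_new /= np.sqrt(np.sum(mu_new ** 2, axis=(2, 3), keepdims=True))
    # Eq. 4: Hebbian increment, then sum normalization.
    w_new = w + alpha * eta[:, :, None, None] * eta[None, None, :, :]
    w_new /= np.sum(w_new, axis=(2, 3), keepdims=True)
    return mu_new, w_new
```

The normalizations are what keep the Hebbian terms from growing without bound: after every step, each afferent weight vector has length 1 and each lateral weight set sums to the same constant.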
2.3 Perceptron Output Generation and Weight Adaptation
The perceptrons at the output of the system receive the activation pattern on the
LISSOM map as their input. The perceptrons are trained after the LISSOM map
has been organized. The activation for the perceptron unit O_m is

    O_m = C \sum_{i,j} \eta_{ij}\,v_{ij,m},    (5)
where C is a scaling constant, \eta_{ij} is the activity of LISSOM map unit (i, j), and v_{ij,m} is the
connection weight between LISSOM map unit (i, j) and output layer unit m. The
delta rule is used to train the perceptrons: the weight adaptation is proportional to
the map activity and the difference between the output and the target:
    v_{ij,m}(t+1) = v_{ij,m}(t) + \alpha_{out}\,\eta_{ij}\,(\zeta_m - O_m),    (6)

where \alpha_{out} is the learning rate of the perceptron weights, \eta_{ij} is the LISSOM map
unit activity, and \zeta_m is the target activation for unit m (\zeta_m = 1 if the correct digit is
m, 0 otherwise).
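Equations 5 and 6 amount to a linear readout trained by the delta rule. A minimal sketch, with the scaling constant and learning rate as illustrative assumptions:

```python
import numpy as np

def perceptron_step(eta, v, target_digit, c=1.0, alpha_out=0.05):
    """Perceptron output (Eq. 5) and delta-rule update (Eq. 6).

    eta : LISSOM activity pattern, shape (H, W)
    v   : weights to the ten digit units, shape (H, W, 10)
    """
    # Eq. 5: each output unit is a scaled weighted sum of the map activity.
    o = c * np.einsum('ij,ijm->m', eta, v)
    # Eq. 6: move the weights toward the one-hot target zeta.
    zeta = np.zeros(10)
    zeta[target_digit] = 1.0
    v_new = v + alpha_out * eta[:, :, None] * (zeta - o)[None, None, :]
    return o, v_new
```

Each update pushes the target unit's activation up and the others' down in proportion to the map activity, so repeated presentations of a pattern raise the correct unit's output.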
  Representation   Training      Test
  LISSOM           93.0 / 0.76   88.1 / 3.10
  SOM              84.5 / 0.68   84.1 / 1.71
  Raw Input        99.2 / 0.06   72.3 / 5.06
Table 1: Final Recognition Results. The average recognition percentage and its
variance over the 10 different splits are shown for the training and test sets. The
differences in each set are statistically significant with p > .9999.
3 Experiments
A subset of 2992 patterns from the NIST Database 3 was used as training and
testing data.1 The patterns were normalized to make sure that each example had
an equal effect on the LISSOM map (Sirosh and Miikkulainen 1994). LISSOM
was trained with 2000 patterns. Of these, 1700 were used to train the perceptron
layer, and the remaining 300 were used as the validation set to determine when
to stop training the perceptrons. The final recognition performance of the whole
system was measured on the remaining 992 patterns, which neither LISSOM nor
the perceptrons had seen during training . The experiment was repeated 10 times
with different random splits of the 2992 input patterns into training, validation ,
and testing sets.
The LISSOM map can be organized starting from initially random weights. However, if the input dimensionality is large, as it is in case of the 32 X 32 bitmaps,
each unit on the map is activated roughly to the same degree, and it is difficult to
bootstrap the self-organizing process (Sirosh and Miikkulainen 1994, 1996). The
standard Self-Organizing Map algorithm can be used to preorganize the map in
this case. The SOM performs preliminary feature analysis of the input, and forms
a coarse topological map of the input space. This map can then be used as the
starting point for the LISSOM algorithm, which modifies the topological organization and learns lateral connections that decorrelate and represent a more clear
categorization of the input patterns.
The initial self-organizing map was formed in 8 epochs over the training set, gradually reducing the neighborhood radius from 20 to 8. The lateral connections were
then added to the system, and over another 30 epochs, the afferent and lateral
weights of the map were adapted according to equations 3 and 4. In the beginning,
the excitation radius was set to 8 and the inhibition radius to 20. The excitation
radius was gradually decreased to 1 making the activity patterns more concentrated
and causing the units to become more selective to particular types of input patterns. For comparison, the initial self-organized map was also trained for another 30
epochs, gradually decreasing the neighborhood size to 1 as well. The final afferent
weights for the SOM and LISSOM maps are shown in figures 2 and 3.
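The two-stage schedule above (SOM pre-organization, then LISSOM training with a shrinking excitation radius) can be sketched as a plain loop over radii; the linear interpolation between the stated endpoint radii is an assumption, since the paper gives only the endpoints:

```python
def radius_schedule(start, end, epochs):
    """Linearly shrink an integer radius from start to end over the given epochs."""
    if epochs == 1:
        return [start]
    step = (start - end) / (epochs - 1)
    return [round(start - step * e) for e in range(epochs)]

# Stage 1: SOM pre-organization, neighborhood radius 20 -> 8 over 8 epochs.
som_radii = radius_schedule(20, 8, 8)

# Stage 2: LISSOM training, excitation radius 8 -> 1 over 30 epochs
# (the inhibition radius stays at 20 throughout).
lissom_exc_radii = radius_schedule(8, 1, 30)
```

Shrinking the excitation radius is what concentrates the activity patterns over training, making each unit increasingly selective to particular input types.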
After the SOM and LISSOM maps were organized, a complete set of activation
patterns on the two maps were collected. These patterns then formed the training
input for the perceptron layer. Two separate versions were each trained for 500
epochs, one with SOM and the other with LISSOM patterns. A third perceptron
layer was trained directly with the input bitmaps as well.
Recognition performance was measured by counting how often the most highly active perceptron unit was the correct one. The results were averaged over the 10
different splits. On average, the final LISSOM+perceptron system correctly recognized 88.1% of the 992 pattern test sets. This is significantly better than the 84.1%
1 Downloadable at ftp://sequoyah.ncsl.nist.gov/pub/databases/.
Figure 2: Final Afferent Weights of the SOM map. The digit-like patterns represent
the afferent weights of each map unit projected on the input layer. For example, the lower
left corner represents the afferent weights of unit (0,0). High weight values are shown in
black and low in white. The pattern of weights shows the input pattern to which this unit
is most sensitive (6 in this case). There are local clusters sensitive to each digit category.
of the SOM+perceptron system, and the 72.3% achieved by the perceptron layer
alone (Table 1). These results suggest that the internal representations generated
by the LISSOM map are more distinct and easier to recognize than the raw input
patterns and the representations generated by the SOM map.
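A minimal sketch of the evaluation criterion used above, counting how often the most highly active perceptron unit matches the correct digit:

```python
import numpy as np

def recognition_rate(outputs, labels):
    """Fraction of patterns whose most active output unit is the correct digit.

    outputs : (N, 10) perceptron activations, one row per test pattern
    labels  : (N,) correct digit for each pattern
    """
    predictions = np.argmax(outputs, axis=1)
    return float(np.mean(predictions == labels))
```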
4 Discussion
The architecture was motivated by the hypothesis that the lateral inhibitory connections of the LISSOM map would decorrelate and force the map activity patterns
to become more distinct . The recognition could then be performed by even the
simplest classification architectures, such as the perceptron. Indeed, the LISSOM
representations were easier to recognize than the SOM patterns, which lends evidential support to the hypothesis. In additional experiments , the percept ron output
layer was replaced by a two-weight-Iayer backpropagation network and a Hebbian
associator net, and trained with the same patterns as the perceptrons. The recognition results were practically the same for the perceptron, backpropagation, and
Hebbian output networks, indicating that the internal representations formed by
the LISSOM map are the crucially important part of the recognition system.
A comparison of the learning curves reveals two interesting effects (figure 4). First,
even though the perceptron net trained with the raw input patterns initially performs well on the test set, its generalization decreases dramatically during training.
This is because the net only learns to memorize the training examples, which does
not help much with new noisy patterns. Good internal representations are therefore crucial for generalization. Second, even though initially the settling process
of the LISSOM map forms patterns that are significantly easier to recognize than
Figure 3: Final Afferent Weights of the LISSOM map. The squares identify the
above-average inhibitory lateral connections to unit (10,4) (indicated by the thick square).
Note that inhibition comes mostly from areas of similar functionality (i.e. areas sensitive to
similar input), thereby decorrelating the map activity and forming a sparser representation
of the input .
the initial, unsettled patterns (formed through the afferent connections only), this
difference becomes insignificant later during training. The afferent connections are
modified according to the final, settled patterns, and gradually learn to anticipate
the decorrelated internal representations that the lateral connections form.
5
Conclusion
The experiments reported in this paper show that LISSOM forms internal representations of the input patterns that are easier to categorize than the raw inputs and
the patterns on the SOM map, and suggest that LISSOM can form a useful front
end for character recognition systems, and perhaps for other pattern recognition
systems as well (such as speech) . The main direction of future work is to apply
the approach to larger data sets, including the full NIST 3 database, to use a more
powerful recognition network instead of the perceptron, and to increase the map
size to obtain a richer representation of the input space.
Acknowledgements
This research was supported in part by National Science Foundation under grant
#IRI-9309273. Computer time for the simulations was provided by the Pittsburgh
Supercomputing Center under grants IRI930005P and IRI940004P, and by a High
Performance Computer Time Grant from the University of Texas at Austin .
References
Allinson, N. M., Johnson , M. J., and Moon, K. J. (1994). Digital realisation of selforganising maps. In Touretzky, D. S., editor, Advances in Neural Information
Processing Systems 6. San Mateo, CA: Morgan Kaufmann.
[Figure 4 plot: test-set recognition accuracy (%) versus training epochs (0-500) for four representations: settled LISSOM, unsettled LISSOM, SOM, and raw input.]
Figure 4: Comparison of the learning curves. A perceptron network was trained to
recognize four different kinds of internal representations: the settled LISSOM patterns,
the LISSOM patterns before settling, the patterns on the final SOM network, and raw
input bitmaps. The recognition accuracy on the test set was then measured and averaged
over 10 simulations. The generalization of the raw input + perceptron system decreases
rapidly as the net learns to memorize the training patterns. The difference of using settled
and unsettled LISSOM patterns diminishes as the afferent weights of LISSOM learn to
take into account the decorrelation performed by the lateral weights.
Denker, J. S., Gardner, W. R., Graf, H. P., Henderson, D., Howard, R. E., Hubbard,
W., Jackel, L. D., Baird, H. S., and Guyon, I. (1989). Neural network recognizer
for hand-written zip code digits. In Touretzky, D . S., editor, Advances in Neural
Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann .
Fukushima, K., and Wake, N. (1990). Alphanumeric character recognition by
neocognitron. In Advanced Neural Computers, 263- 270. Elsevier Science Publishers B.V . (North-Holland).
le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard,
W., and Jackel, L. D. (1990). Handwritten digit recognition with a backpropagation network. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2. San Mateo, CA: Morgan Kaufmann.
Martin, G. L., and Pittman, J. A. (1990). Recognizing hand-printed letters and
digits. In Touretzky, D. S., editor, Advances in Neural Information Processing
Systems 2. San Mateo, CA: Morgan Kaufmann.
Sirosh, J. , and Miikkulainen, R. (1994). Cooperative self-organization of afferent
and lateral connections in cortical maps. Biological Cybernetics, 71:66-78.
Sirosh, J., and Miikkulainen, R. (1995). Ocular dominance and patterned lateral
connections in a self-organizing model of the primary visual cortex. In Tesauro,
G ., Touretzky, D. S., and Leen, T . K., editors, Advances in Neural Information
Processing Systems 7. Cambridge, MA: MIT Press.
Sirosh, J., and Miikkulainen, R. (1996). Topographic receptive fields and patterned
lateral interaction in a self-organizing model of the primary visual cortex. Neural Computation (in press).
HETEROGENEOUS NEURAL NETWORKS FOR
ADAPTIVE BEHAVIOR IN DYNAMIC ENVIRONMENTS
Randall D. Beer                             Hillel J. Chiel        Leon S. Sterling
Dept. of Computer Engineering and Science   Biology Dept.          CS Dept.
& CAISR                                     & CAISR                & CAISR
Center for Automation and Intelligent Systems Research (CAISR)
Case Western Reserve University
Cleveland, OH 44106
ABSTRACT
Research in artificial neural networks has generally emphasized
homogeneous architectures. In contrast, the nervous systems of natural
animals exhibit great heterogeneity in both their elements and patterns
of interconnection. This heterogeneity is crucial to the flexible
generation of behavior which is essential for survival in a complex,
dynamic environment. It may also provide powerful insights into the
design of artificial neural networks. In this paper, we describe a
heterogeneous neural network for controlling the walking of a
simulated insect. This controller is inspired by the neuroethological
and neurobiological literature on insect locomotion. It exhibits a
variety of statically stable gaits at different speeds simply by varying
the tonic activity of a single cell. It can also adapt to perturbations as a
natural consequence of its design.
INTRODUCTION
Even very simple animals exhibit a dazzling variety of complex behaviors which they
continuously adapt to the changing circumstances of their environment. Nervous systems
evolved in order to generate appropriate behavior in dynamic, uncertain situations and
thus insure the survival of the organisms containing them. The function of a nervous
system is closely tied to its structure. Indeed, the heterogeneity of nervous systems has
been found to be crucial to those few behaviors for which the underlying neural mechanisms have been worked out in any detail [Selverston, 1988]. There is every reason to
believe that this conclusion will remain valid as more complex nervous systems are studied:
The brain as an "organ" is much more diversified than, for example, the
kidney or the liver. If the performance of relatively few liver cells is
known in detail, there is a good chance of defining the role of the whole
organ. In the brain, different cells perform different, specific tasks...
Only rarely can aggregates of neurons be treated as though they were
homogeneous. Above all, the cells in the brain are connected with one
another according to a complicated but specific design that is of far
greater complexity than the connections between cells in other organs.
([Kuffler, Nicholls, & Martin, 1984], p. 4)
In contrast to research on biological nervous systems, work in artificial neural networks
has primarily emphasized uniform networks of simple processing units with a regular interconnection scheme. These homogeneous networks typically depend upon some general learning procedure to train them to perform specific tasks. This approach has certain
advantages. Such networks are analytically tractable and one can often prove theorems
about their behavior. Furthermore, such networks have interesting computational properties with immediate practical applications. In addition, the necessity of training these networks has resulted in a resurgence of interest in learning, and new training procedures are
being developed. When these procedures succeed, they allow the rapid construction of
networks which perform difficult tasks.
However, we believe that the role of learning may have been overemphasized in artificial
neural networks, and that the architectures and heterogeneity of biological nervous systems have been unduly neglected. We may learn a great deal from more careful study of
the design of biological nervous systems and the relationship of this design to behavior.
Toward this end, we are exploring the ways in which the architecture of the nervous
systems of simpler organisms can be utilized in the design of artificial neural networks.
We are particularly interested in developing neural networks capable of continuously
synthesizing appropriate behavior in dynamic, underspecified, and uncertain
environments of the sort encountered by natural animals.
THE ARTIFICIAL INSECT PROJECT
In order to address these issues, we have begun to construct a simulated insect which we
call Periplaneta computatrix. Our ultimate goal is to design a nervous system capable of
endowing this insect with all of the behaviors required for long-term survival in a complex and dynamic simulated environment similar to that of natural insects. The skills required to survive in this environment include the basic abilities to move around, to find
and consume food when necessary, and to escape from predators. In this paper, we focus
on the design of that portion of the insect's nervous system which controls its locomotion.
In designing this insect and the nervous system which controls it, we are inspired by the
biological literature. It is important to emphasize, however, that this is not a modeling
project. We are not attempting to reproduce the experimental data on a particular animal;
rather, we are using insights gleaned from Biology to design neural networks capable of
generating similar behaviors. In this manner, we hope to gain a better understanding of
the role heterogeneity plays in the generation of behavior by nervous systems, and to abstract design principles for use in artificial neural networks.
Figure 1. Periplaneta computatrix
BODY
The body of our artificial insect is shown in Figure 1. It is loosely based on the American
Cockroach, Periplaneta americana [Bell & Adiyodi, 1981]. However, it is a reasonable
abstraction of the bodies of most insects. It consists of an abdomen, head, six legs with
feet, two antennae, and two cerci in the rear. The mouth can open and close and contains
tactile and chemical sensors. The antennae also contain tactile and chemical sensors.
The cerci contain tactile and wind sensors. The feet may be either up or down. When a
foot is down, it appears as a black square. Finally, a leg can apply forces which translate
and rotate the body whenever its foot is down.
In addition, though the insect is only two-dimensional, it is capable of "falling down."
Whenever its center of mass falls outside of the polygon formed by its supporting feet,
the insect becomes statically unstable. If this condition persists for any length of time,
then we say that the insect has "fallen down" and the legs are no longer able to move the
body.
NEURAL MODEL
The essential challenge of the Artificial Insect Project is to design neural controllers capable of generating the behaviors necessary to the insect's survival. The neural model
that we are currently using to construct our controllers is shown in Figure 2. It represents
the firing frequency of a cell as a function of its input potential. We have used saturating
linear threshold functions for this relationship (see inset). The RC characteristics of the
cell membrane are also represented. These cells are interconnected by weighted synapses
which can cause currents to flow through this membrane. Finally, our model includes the
possibility of additional intrinsic currents which may be time and voltage dependent.
These currents allow us to capture some of the intrinsic properties which make real neurons unique and have proven to be important components of the neural mechanisms underlying many behaviors.
Figure 2. Neural Model (diagram: intrinsic currents I(V), synaptic currents, cell membrane, and firing properties; details not reproduced)
For example, a pacemaker cell is a neuron which is capable of endogenously producing
rhythmic bursting. Pacemakers have been implicated in a number of temporally patterned behaviors and play a crucial role in our locomotion controller. As described by
Kandel (1976, pp. 260-268), a pacemaker cell exhibits the following characteristics: (1)
when it is sufficiently inhibited, it is silent, (2) when it is sufficiently excited, it bursts
continuously, (3) between these extremes, the interburst interval is a continuous function
of the membrane potential, (4) a transient excitation which causes the cell to fire between
bursts can reset the bursting rhythm, and (5) a transient inhibition which prematurely terminates a burst can also reset the bursting rhythm.
These characteristics can be reproduced with our neural model through the addition of
two intrinsic currents. IH is a depolarizing current which tends to pull the membrane potential above threshold. IL is a hyperpolarizing current which tends to pull the membrane
potential below threshold. These currents change according to the following rules: (1)
IH is triggered whenever the cell goes above threshold or IL terminates, and it then remains active for a fixed period of time, and (2) IL is triggered whenever IH terminates,
and it then remains active for a variable period of time whose duration is a function of the
membrane potential. In our work to date, the voltage dependence of IL has been linear.
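The two-current pacemaker rules can be captured in a small simulation. The sketch below is a loose interpretation, not the paper's implementation: the currents switch the potential instantaneously (no RC dynamics here), and all durations and gains are invented. It reproduces properties (1)-(3): silence under weak drive, faster bursting under stronger drive, and an interburst interval that shrinks as excitation increases.

```python
def simulate_pacemaker(n_steps, drive, threshold=0.5,
                       ih_duration=5, il_max=20, il_gain=10.0):
    """Return a 0/1 burst trace for a constant input drive."""
    ih_left = il_left = 0
    trace = []
    for _ in range(n_steps):
        v = drive + (1.0 if ih_left else 0.0) - (1.0 if il_left else 0.0)
        # Rule (1): IH triggers when the cell crosses threshold
        # (and, implicitly in this sketch, when IL has just terminated).
        if ih_left == 0 and il_left == 0 and v > threshold:
            ih_left = ih_duration
        trace.append(1 if ih_left else 0)
        if ih_left:
            ih_left -= 1
            if ih_left == 0:
                # Rule (2): IL triggers when IH terminates; its duration
                # falls linearly with the drive, so stronger excitation
                # gives shorter interburst intervals (property (3)).
                il_left = max(1, int(il_max - il_gain * drive))
        elif il_left:
            il_left -= 1
    return trace

print(sum(simulate_pacemaker(60, 0.2)))  # silent below threshold: 0
print(sum(simulate_pacemaker(60, 1.0)))  # slow bursting: 20
print(sum(simulate_pacemaker(60, 1.5)))  # faster bursting: 30
```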
LOCOMOTION
An animal's ability to move around its environment is fundamental to many of its other
behaviors. In most insects, this requirement is fulfilled by six-legged walking. Thus, this
was the first capability we sought to provide to P. computatrix. Walking involves the
generation of temporally patterned forces and stepping movements such that the insect
maintains a steady forward motion at a variety of speeds without falling down. Though
we do not address all of these issues here, it is worth pointing out that locomotion is an
interesting adaptive behavior in its own right. An insect robustly solves this complex coordination problem in real time in the presence of variations in load and terrain, developmental changes, and damage to the walking apparatus itself [Graham, 1985].
LEG CONTROLLER
The most basic components of walking are the rhythmic movements of each individual
leg. These consist of a swing phase, in which the foot is up and the leg is swinging forward, and a stance phase, in which the foot is down and the leg is swinging back, propelling the body forward. In our controller, these rhythmic movements are produced by the
leg controller circuit shown in Figure 3. There is one command neuron, C, for the entire
controller and six copies of the remainder of this circuit, one for each leg.
The rhythmic leg movements are primarily generated centrally by the portion of the leg
controller shown in solid lines in Figure 3. Each leg is controlled by three motor neurons.
The stance and swing motor neurons determine the force with which the leg is swung
backward or forward, respectively, and the foot motor neuron controls whether the foot is
up or down. Normally, the foot is down and the stance motor neuron is active, pushing
the leg back and producing a stance phase. Periodically, however, this state is interrupted
by a burst from the pacemaker neuron P. This burst inhibits the foot and stance motor
neurons and excites the swing motor neuron, lifting the foot and swinging the leg forward. When this burst terminates, another stance phase begins. Rhythmic bursting in P
thus produces the basic swing/stance cycle required for walking. The force applied during each stance phase as well as the time between bursts in P depend upon the level of excitation supplied by the command neuron C. This basic design is based on the flexor
burst-generator model of cockroach walking [Pearson, 1976].
In order to properly time the transitions between the swing and stance phases, the controller must have some information about where the legs actually are. The simplest way to
provide this information is to add sensors which signal when a leg has reached an extreme forward or backward angle, as shown with dashed lines in Figure 3. When the leg
is all the way back, the backward angle sensor encourages P to initiate a swing by exciting it. When the leg is all the way forward, the forward angle sensor encourages P to terminate the swing by inhibiting it. These sensors serve to reinforce and fine-tune the centrally generated stepping rhythm. They were inspired by the hair plate receptors in P. americana, which seem to play a similar role in its locomotion [Pearson, 1976].
The RC characteristics of our neural model cause delays at the end of each swing before
the next stance phase begins. This pause produces a "jerky" walk which we sought to
avoid. In order to smooth out this effect, we added a stance reflex comprised of the dotted connections shown in Figure 3. This reflex gives the motor neurons a slight "kick" in
the right direction to begin a stance whenever the leg is swung all the way forward and is
also inspired by the cockroach [Pearson, 1976].
Figure 3. Leg Controller Circuit (diagram: backward and forward angle sensors; stance, foot, and swing motor neurons; excitatory and inhibitory connections; details not reproduced)
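As a rough sketch of the leg controller's logic (my reading of the description above, not the authors' code): while the pacemaker P bursts, the foot lifts and the leg swings forward; otherwise the foot is down and the stance motor neuron pushes it back. Angle limits and step sizes are invented.

```python
def leg_step(angle, foot_down, pacemaker_bursting,
             forward_limit=1.0, backward_limit=-1.0, rate=0.1):
    """One tick of a single leg: returns the new (angle, foot_down)."""
    if pacemaker_bursting:      # swing phase: foot up, leg moves forward
        foot_down = False
        angle = min(angle + rate, forward_limit)
    else:                       # stance phase: foot down, leg pushes back
        foot_down = True
        angle = max(angle - rate, backward_limit)
    return angle, foot_down

# Driving the leg with a stand-in for P's burst rhythm produces the
# basic swing/stance cycle.
angle, foot_down = 0.0, True
phases = []
for t in range(20):
    bursting = (t % 10) < 3     # 3 ticks of swing, then 7 of stance
    angle, foot_down = leg_step(angle, foot_down, bursting)
    phases.append("swing" if not foot_down else "stance")
print(phases[:5])  # ['swing', 'swing', 'swing', 'stance', 'stance']
```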
Figure 4. Central Coupling between Pacemakers
LOCOMOTION CONTROLLER
In order for these six individual leg controllers to serve as the basis for a locomotion controller, we must address the issue of stability. Arbitrary patterns of leg movements will
not, in general, lead to successful locomotion. Rather, the movements of each leg must
be synchronized in such a way as to continuously maintain stability.
A good rule of thumb is that adjacent legs should be discouraged from swinging at the
same time. As shown in Figure 4, this constraint was implemented by mutual inhibition
between the pacemakers of adjacent legs. So, for example, when leg L2 is swinging, legs
L1, L3 and R2 are discouraged from also swinging, but legs R1 and R3 are unaffected (see
Figure 5a for leg labelings). This coupling scheme is also derived from Pearson's (1976)
work.
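The mutual-inhibition constraint can be written down as an adjacency map. The wiring below is a hypothetical reconstruction from the L2 example in the text (leg labels L1-L3 and R1-R3 as in Figure 5a), not the paper's actual connection list.

```python
# Hypothetical adjacency for Figure 4's coupling: each pacemaker
# inhibits the pacemakers of the legs adjacent to it.
ADJACENT = {
    "L1": ["L2", "R1"], "L2": ["L1", "L3", "R2"], "L3": ["L2", "R3"],
    "R1": ["R2", "L1"], "R2": ["R1", "R3", "L2"], "R3": ["R2", "L3"],
}

def swing_inhibition(leg, swinging_legs):
    """Inhibition a leg's pacemaker receives from currently swinging legs."""
    return sum(1 for other in swinging_legs if other in ADJACENT[leg])

# The example from the text: while L2 swings, L1, L3 and R2 are
# discouraged from swinging, but R1 and R3 are unaffected.
print([leg for leg in ADJACENT if swing_inhibition(leg, {"L2"}) > 0])
# ['L1', 'L3', 'R2']
```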
The gaits adopted by the controller described above depend in general upon the initial angles of the legs. To further enhance stability, it is desirable to impose some reliable order
to the stepping sequence. Many animals exhibit a stepping sequence known as a metachronal wave, in which a wave of stepping progresses from back to front. In insects, for example, the back leg swings, then the middle one, then the front one on each side of the
body. This sequence is achieved in our controller by slightly increasing the leg angle
ranges of the rear legs, lowering their stepping frequency. Under these conditions, the
rear leg oscillators entrain the middle and front ones, and produce metachronal waves
[Graham, 1977].
RESULTS
When this controller is embedded in the body of our simulated insect, it reliably produces
successful walking. We have found that the insect can be made to walk at different
speeds with a variety of gaits simply by varying the firing frequency of the command
neuron C. Observed gaits range from the wave gait, in which the metachronal waves on
each side of the body are very nearly separated, to the tripod gait, in which the front and
back legs on each side of the body step with the middle leg on the opposite side. These
gaits fall out of the interaction between the dynamics of the neural controller and the
body in which it is embedded.
Figure 5. (A) Description of Some Gaits Observed in Natural Insects (from [Wilson, 1966]). (B) Selected Gaits Observed in P. computatrix. [Gait diagrams not reproduced.]
If the legs are labeled as shown at the top of Figure 5a, then gaits may be conveniently
described by their stepping patterns. In this representation, a black bar is displayed during the swing phase of each leg. The space between bars represents the stance phase.
Selected gaits observed in P. computatrix at different speeds are shown in Figure 5b as
the command neuron firing frequency is varied from lowest (top) to highest (bottom). At
the lower speeds, the metachronal waves on each side of the body are very apparent. The
metachronal waves can still be discerned in faster walks. However, they increasingly
overlap as the stance phases shorten, until the tripod gait appears at the highest speeds.
This sequence of gaits bears a strong resemblance to some of those that have been described for natural insects, as shown in Figure 5a [Wilson, 1966].
In order to study the robustness of this controller and to gain insight into the detailed
mechanisms of its operation, we have begun a series of lesion studies. Such studies ex-
583
584
Beer, Chiel and Sterling
amine the behavioral effects of selective damage to a neural controller. This study is still
in progress and we only report a few preliminary results here. In general, we have been
repeatedly surprised by the intricacy of the dynamics of this controller. For example, removal of all of the forward angle sensors resulted in a complete breakdown of the
metachronal wave at low speeds. However, at higher speeds, the gait was virtually unaffected. Only brief periods of instability caused by the occasional overlap of the slightly
longer than normal swing phases were observed in the tripod gait, but the insect did not
fall down. Lesioning single forward angle sensors often dynamically produced compensatory phase shifts in the other legs. Lesions of selected central connections produced
similarly interesting effects. In general, our studies seem to suggest subtle interactions
between the central and peripheral components of the controller which deserve much
more exploration.
Finally, we have observed the phenomenon of reflex stepping in P. computatrix. When the
central locomotion system is completely shut down by strongly inhibiting the command
neuron and the insect is continuously pushed from behind, it is still capable of producing
an uncoordinated kind of walking. As the insect is pushed forward, a leg whose foot is
down bends back until the backward angle sensor initiates a swing by exciting the pacemaker neuron P. When the leg has swung all the way forward, the stance reflex triggered
by the forward angle sensor puts the foot down and the cycle repeats.
Brooks (1989) has described a semi-distributed locomotion controller for an insect-like
autonomous robot. We are very much in agreement with his general approach.
However, his controller is not as fully distributed as the one described above. It relies on
a central leg lift sequencer which must be modified to produce different gaits. Donner
(1985) has also implemented a distributed hexapod locomotion controller inspired by an
early model of Wilson's (1966). His design used individual leg controllers driven by leg
load and position information. These leg controllers were coupled by forward excitation
from posterior legs. Thus, his stepping movements were produced by reflex-driven peripheral oscillators rather than the central oscillators used in our model. He did not report
the generation of the series of gaits shown in Figure 5a. Donner also demonstrated the
ability of his controller to adapt to a missing leg. We have experimented with leg amputations as well, but with mixed success. We feel that more accurate three-dimensional
load information than we currently model is necessary for the proper handling of amputations. Neither of these other locomotion controllers utilize neural networks.
CONCLUSIONS AND FUTURE WORK
We have described a heterogeneous neural network for controlling the walking of a simulated insect. This controller is completely distributed yet capable of reliably producing a
range of statically stable gaits at different walking speeds simply by varying the tonic activity of a single command neuron. Lesion studies have demonstrated that the controller
is robust, and suggested that subtle interactions and dynamic compensatory mechanisms
are responsible for this robustness.
This controller is serving as the basis for a number of other behaviors. We have already
implemented wandering, and are currently experimenting with controllers for recoil responses and edge following. In the near future, we plan to implement feeding behavior
and an escape response, resulting in what we feel is the minimum complement of behaviors necessary for survival in an insect-like environment. Finally, we wish to introduce
plasticity into these controllers so that they may better adapt to the exigencies of particular environments. We believe that learning is best viewed as a means by which additional
flexibility can be added to an existing controller.
The locomotion controller described in this paper was inspired by the literature on insect
locomotion. The further development of P. compUlalrix will continue to draw inspiration
from the neuroethology and neurobiology of simpler natural organisms. In trying to design autonomous organisms using principles gleaned from Biology, we may both improve our understanding of natural nervous systems and discover design principles of use
to the construction of artificial ones. A robot with "only" the behavioral repertoire and
adaptability of an insect would be an impressive achievement indeed. In particular, we
have argued in this paper for a more careful consideration of the intrinsic architecture and
heterogeneity of biological nervous systems in the design of artificial neural networks.
The locomotion controller we have described above only hints at how productive such an
approach can be.
References
Bell, W.J. and K.G. Adiyodi eds (1981). The American Cockroach. New York: Chapman
and Hall.
Brooks, R.A. (1989). A robot that walks: emergent behaviors from a carefully evolved
network. Neural Computation 1(1).
Donner, M. (1987). Real-time control of walking (Progress in Computer Science, Volume 7). Cambridge, MA: Birkhauser Boston, Inc.
Graham, D. (1977). Simulation of a model for the coordination of leg movements in free
walking insects. Biological Cybernetics 26:187-198.
Graham, D. (1985). Pattern and control of walking in insects: Advances in Insect
PhYSiology 18:31-140.
Kandel, E.R. (1976). Cellular Basis of Behavior: An Introduction to Behavioral
Neurobiology. W.H. Freeman.
Kuffler, S.W., Nicholls, J.G., and Martin, A. R. (1984). From Neuron to Brain: A
Cellular Approach to the Function of the Nervous System. Sunderland, MA: Sinauer
Associates Inc.
Pearson, K. (1976). The control of walking. Scientific American 235:72-86.
Selverston, A.I. (1988). A consideration of invertebrate central pattern generators as computational data bases. Neural Networks 1:109-117.
Wilson, D.M. (1966). Insect walking. Annual Review of Entomology 11:103-122.
| 115 |@word beep:1 middle:3 open:1 simulation:1 excited:1 solid:1 initial:1 necessity:1 contains:1 series:2 existing:1 donner:3 current:9 nt:1 yet:1 must:4 interrupted:1 periodically:1 hyperpolarizing:1 plasticity:1 motor:7 pacemaker:7 selected:3 nervous:16 shut:1 ji2:1 simpler:2 uncoordinated:1 rc:2 burst:8 surprised:1 prove:1 consists:1 acti:1 behavioral:3 introduce:1 manner:1 indeed:2 rapid:1 behavior:25 brain:4 inspired:6 freeman:1 food:1 increasing:1 becomes:1 cleveland:1 project:3 insure:1 underlying:2 circuit:3 mass:1 begin:3 lowest:1 what:1 evolved:2 kind:1 developed:1 selverston:2 every:1 ti:1 dlv:1 control:6 unit:1 normally:1 producing:4 before:1 engineering:1 persists:1 tends:2 apparatus:1 consequence:1 hexapod:1 receptor:1 firing:4 black:2 studied:1 bursting:4 dynamically:1 patterned:2 range:3 practical:1 unique:1 responsible:1 implement:1 procedure:3 sequencer:1 bell:2 physiology:1 regular:1 suggest:1 lesioning:1 close:1 bend:1 put:1 instability:1 demonstrated:2 center:2 missing:1 go:1 kidney:1 duration:1 l:1 swinging:6 shorten:1 insight:3 periplaneta:3 rule:2 pull:2 oh:1 his:5 stability:3 variation:1 chiel:5 autonomous:2 feel:2 controlling:2 construction:2 play:1 homogeneous:3 designing:1 locomotion:16 agreement:1 associate:1 element:1 particularly:1 utilized:1 walking:15 underspecified:1 breakdown:1 labeled:1 observed:6 role:5 bottom:1 tripod:3 capture:1 connected:1 cycle:2 movement:8 highest:2 environment:9 developmental:1 complexity:1 productive:1 dynamic:8 neglected:1 legged:1 depend:3 serve:2 upon:3 basis:3 completely:2 emergent:1 polygon:1 represented:1 train:1 separated:1 describe:1 artificial:11 aggregate:1 lift:1 outside:1 hillel:1 pearson:5 whose:2 apparent:1 consume:1 say:1 interconnection:2 ability:3 antenna:2 itself:1 reproduced:1 advantage:1 triggered:3 sequence:4 gait:18 interconnected:1 interaction:3 reset:2 remainder:1 date:1 translate:1 flexibility:1 description:1 achievement:1 requirement:1 nicholls:2 generating:2 produce:5 
coupling:2 liver:2 excites:1 progress:3 sa:4 strong:1 solves:1 implemented:3 c:1 involves:1 synchronized:1 direction:1 foot:15 closely:1 exploration:1 transient:2 a1l:1 argued:1 feeding:1 preliminary:1 repertoire:1 biological:6 exploring:1 around:2 sufficiently:2 hall:1 normal:1 great:2 reserve:1 pointing:1 inhibiting:2 cen:1 sought:2 early:1 currently:3 coordination:2 organ:3 weighted:1 hope:1 sensor:13 modified:1 rather:3 avoid:1 neuroethology:1 varying:3 voltage:2 command:6 wilson:4 derived:1 focus:1 properly:1 experimenting:1 contrast:2 cerci:2 abstraction:1 rear:3 dependent:1 typically:1 entire:1 sunderland:1 reproduce:1 selective:1 labelings:1 interested:1 issue:3 flexible:1 ill:1 insect:35 development:1 animal:6 plan:1 mutual:1 construct:2 chapman:1 biology:3 represents:2 survive:1 nearly:1 future:2 report:2 intelligent:1 escape:2 few:3 primarily:2 inhibited:1 hint:1 resulted:2 ve:1 individual:3 sterling:5 phase:12 fire:1 maintain:1 interest:1 possibility:1 extreme:2 behind:1 accurate:1 caisr:2 capable:8 edge:1 necessary:4 loosely:1 irm:1 walk:4 re:1 amm:1 uncertain:2 modeling:1 uniform:1 comprised:1 delay:1 successful:2 front:4 fundamental:1 enhance:1 continuously:5 central:7 containing:1 american:3 potential:5 automation:1 includes:1 inc:2 caused:1 wind:1 portion:2 wave:8 reached:1 sort:1 maintains:1 complicated:1 capability:1 predator:1 jerky:1 depolarizing:1 square:1 formed:1 il:4 characteristic:4 t3:1 fallen:1 thumb:1 produced:4 worth:1 cybernetics:1 unaffected:2 synapsis:1 whenever:5 synaptic:1 ed:1 amputation:2 frequency:5 pp:1 gain:2 begun:2 subtle:2 adaptability:1 carefully:1 actually:1 back:7 ea:1 appears:2 higher:1 response:1 discerned:1 though:3 strongly:1 furthermore:1 until:2 western:1 resemblance:1 scientific:1 believe:3 effect:3 contain:2 swing:13 analytically:1 inspiration:1 chemical:2 stance:16 deal:1 adjacent:2 ll:4 during:2 encourages:2 excitation:3 rhythm:3 steady:1 ide:1 trying:1 plate:1 complete:1 gleaned:2 motion:1 consideration:2 
endowing:1 plq:1 mt:1 stepping:9 volume:1 organism:4 slight:1 he:1 cambridge:1 similarly:1 l3:4 stable:2 robot:3 longer:2 impressive:1 inhibition:2 add:1 base:1 playa:2 posterior:1 own:1 driven:2 certain:1 success:1 continue:1 flring:1 minimum:1 greater:1 additional:2 impose:1 determine:1 period:3 signal:1 propenies:1 dashed:1 ii:12 desirable:1 semi:1 smooth:1 adapt:4 long:1 controlled:1 basic:4 hair:1 heterogeneous:7 controller:37 circumstance:1 achieved:1 cell:12 addition:3 fine:1 interval:1 crucial:3 virtually:1 flow:1 seem:2 call:1 near:1 presence:1 kick:1 iii:6 variety:4 architecture:4 opposite:1 silent:1 abdomen:1 shift:1 whether:1 six:4 ultimate:1 tactile:3 wandering:1 york:1 cause:3 repeatedly:1 detailed:1 tune:1 simplest:1 generate:1 supplied:1 amine:1 inhibitory:1 it3:3 dotted:1 fulfilled:1 serving:1 threshold:4 falling:2 changing:1 neither:1 ce:1 utilize:1 backward:5 lowering:1 interburst:1 angle:11 powerful:1 flexor:1 reasonable:1 discover:1 draw:1 lime:1 graham:4 pushed:2 hi:2 ct:1 centrally:1 encountered:1 annual:1 activity:2 constraint:1 worked:1 ri:1 invertebrate:1 speed:9 leon:1 statically:3 relatively:1 martin:2 inhibits:1 developing:1 according:2 peripheral:2 membrane:7 remain:1 terminates:4 slightly:2 increasingly:1 randall:1 leg:50 handling:1 remains:2 r3:1 mechanism:4 initiate:2 tractable:1 end:2 adopted:1 operation:1 apply:1 occasional:1 appropriate:2 robustly:1 robustness:2 top:2 include:1 pushing:1 move:3 overemphasized:1 added:2 already:1 damage:2 fa:1 dependence:1 exhibit:5 discouraged:2 reinforce:1 simulated:5 unstable:1 cellular:2 reason:1 toward:1 length:1 relationship:2 difficult:1 resurgence:1 synthesizing:1 design:16 reliably:2 proper:1 perform:3 neuron:19 displayed:1 supporting:1 immediate:1 heterogeneity:6 neurobiology:2 situation:1 tonic:2 defining:1 head:1 prematurely:1 perturbation:1 varied:1 arbitrary:1 complement:1 required:3 connection:5 compensatory:2 unduly:1 brook:2 address:3 able:1 bar:2 deserve:1 below:1 pattern:5 
suggested:1 challenge:1 reliable:1 mouth:1 overlap:2 natural:8 treated:1 force:4 endogenously:1 pause:1 cockroach:4 scheme:2 improve:1 brief:1 temporally:2 coupled:1 review:1 literature:3 understanding:2 l2:2 ptq:1 removal:1 sinauer:1 embedded:2 fully:1 bear:1 mixed:1 generation:4 interesting:3 proven:1 generator:2 beer:5 principle:3 exciting:2 lo:1 excitatory:1 repeat:1 copy:1 free:1 implicated:1 side:4 kuffler:2 allow:1 fall:3 rhythmic:5 distributed:4 valid:1 transition:1 forward:16 made:1 adaptive:6 far:1 skill:1 emphasize:1 neurobiological:1 active:2 terrain:1 continuous:1 learn:1 terminate:1 robust:1 complex:5 did:2 whole:1 lesion:3 body:12 position:1 wish:1 kandel:2 entrain:1 tied:1 stq:1 theorem:1 down:13 load:3 specific:3 emphasized:2 inset:1 americana:2 r2:2 experimented:1 survival:5 swung:3 essential:2 intrinsic:5 ih:3 consist:1 sponses:1 lifting:1 boston:1 intricacy:1 lt:1 simply:3 conveniently:1 saturating:1 diversified:1 reflex:5 chance:1 relies:1 ma:2 succeed:1 goal:1 viewed:1 careful:2 oscillator:3 change:2 birkhauser:1 experimental:1 rarely:1 rotate:1 dept:3 phenomenon:1 ex:1 |
166 | 1,150 | Primitive Manipulation Learning with
Connectionism
Yoky Matsuoka
The Artificial Intelligence Laboratory
NE43-819
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
Infants' manipulative exploratory behavior within the environment
is a vehicle of cognitive stimulation[McCall 1974]. During this time,
infants practice and perfect sensorimotor patterns that become behavioral modules which will be seriated and imbedded in more complex actions. This paper explores the development of such primitive
learning systems using an embodied light-weight hand which will
be used for a humanoid being developed at the MIT Artificial Intelligence Laboratory[Brooks and Stein 1993]. Primitive grasping
procedures are learned from sensory inputs using a connectionist
reinforcement algorithm while two submodules preprocess sensory
data to recognize the hardness of objects and detect shear using
competitive learning and back-propagation algorithm strategies,
respectively. This system is not only consistent and quick during the initial learning stage, but also adaptable to new situations
after training is completed.
1
INTRODUCTION
Learning manipulation in an unpredictable, changing environment is a complex task.
It requires a nonlinear controller to respond in a nonlinear system that contains a
significant amount of sensory inputs and noise [Miller , et al 1990]. Investigating the
human manipulation learning system and implementing it in a physical system has
not been done due to its complexity and too many unknown parameters. Conventional adaptive control theory assumes too many parameters that are constantly
changing in a real environment [Sutton, et al 1991, Williams 1988]. For an embodied hand, even the simplest form of learning process requires a more intelligent
control network. Wiener [Wiener 1948] has proposed the idea of "Connectionism" ,
which suggests that a muscle is controlled by affecting the gain of the "efferent-
| 1150 |@word laboratory:2 strategy:1 imbedded:1 human:1 during:2 implementing:1 initial:1 contains:1 connectionism:2 shear:1 stimulation:1 physical:1 infant:2 intelligence:2 unknown:1 significant:1 cambridge:1 situation:1 mit:1 become:1 behavioral:1 manipulation:3 learned:1 hardness:1 detect:1 behavior:1 brook:1 muscle:1 pattern:1 unpredictable:1 development:1 developed:1 controlled:1 controller:1 embodied:2 connectionist:1 intelligent:1 control:2 affecting:1 recognize:1 sutton:1 humanoid:1 consistent:1 suggests:1 light:1 submodules:1 practice:1 idea:1 institute:1 procedure:1 sensory:3 reinforcement:1 adaptive:1 action:1 conventional:1 quick:1 amount:1 stein:1 primitive:3 williams:1 investigating:1 too:2 simplest:1 explores:1 exploratory:1 complex:2 changing:2 cognitive:1 noise:1 module:1 respond:1 grasping:1 vehicle:1 ne43:1 environment:3 competitive:1 complexity:1 wiener:2 miller:1 preprocess:1 artificial:2 sensorimotor:1 efferent:1 gain:1 massachusetts:1 constantly:1 ma:1 back:1 adaptable:1 mccall:1 done:1 stage:1 hand:2 assumes:1 perfect:1 nonlinear:2 propagation:1 object:1 completed:1 matsuoka:1 |
Learning long-term dependencies
is not as difficult with NARX networks
Tsungnan Lin*
Department of Electrical Engineering
Princeton University
Princeton, NJ 08540
Peter Tino
Dept. of Computer Science and Engineering
Slovak Technical University
Ilkovicova 3, 812 19 Bratislava, Slovakia
Bill G. Horne
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
C. Lee Giles†
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
Abstract
It has recently been shown that gradient descent learning algorithms for recurrent neural networks can perform poorly on tasks
that involve long-term dependencies. In this paper we explore
this problem for a class of architectures called NARX networks,
which have powerful representational capabilities. Previous work
reported that gradient descent learning is more effective in NARX
networks than in recurrent networks with "hidden states". We
show that although NARX networks do not circumvent the problem of long-term dependencies, they can greatly improve performance on such problems. We present some experimental 'results
that show that NARX networks can often retain information for
two to three times as long as conventional recurrent networks.
1
Introduction
Recurrent Neural Networks (RNNs) are capable of representing arbitrary nonlinear dynamical systems [19, 20]. However, learning simple behavior can be quite
*Also with NEC Research Institute.
†Also with UMIACS, University of Maryland, College Park, MD 20742
578
T. LIN, B. G. HORNE, P. TINO, C. L. GILES
difficult using gradient descent. For example, even though these systems are Turing equivalent, it has been difficult to get them to successfully learn small finite
state machines from example strings encoded as temporal sequences. Recently, it
has been demonstrated that at least part of this difficulty can be attributed to
long-term dependencies, i.e. when the desired output at time T depends on inputs
presented at times t ≪ T. In [13] it was reported that RNNs were able to learn short
term musical structure using gradient based methods, but had difficulty capturing
global behavior. These ideas were recently formalized in [2], which showed that if
a system is to robustly latch information, then the fraction of the gradient due to
information n time steps in the past approaches zero as n becomes large.
Several approaches have been suggested to circumvent this problem. For example, gradient-based methods can be abandoned in favor of alternative optimization
methods [2, 15]. However, the algorithms investigated so far either perform just
as poorly on problems involving long-term dependencies, or, when they are better,
require far more computational resources [2]. Another possibility is to modify conventional gradient descent by more heavily weighing the fraction of the gradient due
to information far in the past, but there is no guarantee that such a modified algorithm would converge to a minimum of the error surface being searched [2]. Another
suggestion has been to alter the input data so that it represents a reduced description
that makes global features more explicit and more readily detectable [7, 13, 16, 17].
However, this approach may fail if short term dependencies are equally as important. Finally, it has been suggested that a network architecture that operates on
multiple time scales might be useful [5, 6].
In this paper, we also propose an architectural approach to deal with long-term
dependencies [11]. We focus on a class of architectures based upon Nonlinear AutoRegressive models with eXogenous inputs (NARX models), and are therefore
called NARX networks [3, 14]. This is a powerful class of models which has recently
been shown to be computationally equivalent to Turing machines [18]. Furthermore, previous work has shown that gradient descent learning is more effective
in NARX networks than in recurrent network architectures with "hidden states"
when applied to problems including grammatical inference and nonlinear system
identification [8]. Typically, these networks converge much faster and generalize
better than other networks. The results in this paper give an explanation of this
phenomenon.
2
Vanishing gradients and long-term dependencies
Bengio et al. [2] have analytically explained why learning problems with long-term
dependencies are difficult. They argue that for many practical applications the goal
of the network must be to robustly latch information, i.e. the network must be
able to store information for a long period of time in the presence of noise. More
specifically, they argue that latching of information is accomplished when the states
of the network stay within the vicinity of a hyperbolic attractor, and robustness
to noise is accomplished if the states of the network are contained in the reduced
attracting set of that attractor, i.e. those set of points at which the eigenvalues of
the Jacobian are contained within the unit circle.
In algorithms such as Backpropagation Through Time (BPTT), the gradient of
the cost function C is written assuming that the weights at different time
Figure 1: NARX network.
indices are independent and computing the partial gradient with respect to these
weights. The total gradient is then equal to the sum of these partial gradients.
It can be easily shown that the weight updates are proportional to

    Δw ∝ Σ_p Σ_{τ=1}^{T} (d_p − y_p(T)) · ∇_{x(T)} y_p(T) · J_x(T, T − τ) · ∇_w x(τ),

where y_p(T) and d_p are the actual and desired (or target) output for the p-th
pattern¹, x(t) is the state vector of the network at time t, and J_x(T, T − τ) =
∇_{x(τ)} x(T) denotes the Jacobian of the network expanded over T − τ time steps.
In [2], it was shown that if the network robustly latches information, then J_x(T, n)
is an exponentially decreasing function of n, so that lim_{n→∞} J_x(T, n) = 0. This
implies that the portion of ∇_w C due to information at times τ ≪ T is insignificant
compared to the portion at times near T. This vanishing gradient is the essential
reason why gradient descent methods are not sufficiently powerful to discover a
relationship between target outputs and inputs that occur at a much earlier time.
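This vanishing effect is easy to reproduce numerically. The sketch below (our illustration, not from the paper) iterates the one-node recurrent network x(t) = tanh(w x(t − 1)) that is analyzed in Section 3 and accumulates the chain-rule product that forms J(T, n):

```python
import math

def jacobian_chain(w, x0, n_steps):
    """Run x(t) = tanh(w * x(t-1)) and return dx(T)/dx(T-n) for n = 0..n_steps.

    By the chain rule, the Jacobian over n steps is the product of the
    per-step derivatives w * (1 - x(t)^2) along the trajectory.
    """
    xs = [x0]
    for _ in range(n_steps):
        xs.append(math.tanh(w * xs[-1]))
    # per-step derivative evaluated at each visited state x(1)..x(n_steps)
    d = [w * (1.0 - x * x) for x in xs[1:]]
    jac = [1.0]
    for dk in reversed(d):          # jac[n] = dx(T)/dx(T-n)
        jac.append(jac[-1] * dk)
    return jac

jac = jacobian_chain(w=1.25, x0=0.6, n_steps=30)
# each per-step derivative is below 1 near the attractors at +-0.710,
# so |J(T, n)| shrinks exponentially with n
```

Every factor in the product is smaller than one in magnitude near the attractor, so the gradient contribution from n steps back decays geometrically, exactly as argued above.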
3
NARX networks
An important class of discrete- time nonlinear systems is the Nonlinear AutoRegressive with eXogenous inputs (NARX) model [3, 10, 12, 21]:
y(t) = f(u(t − D_u), …, u(t − 1), u(t), y(t − D_y), …, y(t − 1)),

where u(t) and y(t) represent the input and output of the network at time t, D_u and D_y
are the input and output order, and f is a nonlinear function. When the function
f can be approximated by a Multilayer Perceptron, the resulting system is called a
NARX network [3, 14].
In this paper we shall consider NARX networks with zero input order and a one
dimensional output. However there is no reason why our results could not be
extended to networks with higher input orders. Since the states of a discrete-time
¹We deal only with problems in which the target output is presented at the end of the sequence.
Figure 2: Results for the latching problem. (a) Plots of J(t, n) as a function of n.
(b) Plots of the ratio J(t, n)/Σ_{τ=1}^t J(t, τ) as a function of n.
dynamical system can always be associated with the unit-delay elements in the
realization of the system, we can then describe such a network in a state space form
x_1(t + 1) = f(u(t), x_1(t), …, x_D(t)),
x_i(t + 1) = x_{i−1}(t),   i = 2, …, D,          (1)
with y(t) = x_1(t + 1).
If the Jacobian of this system has all of its eigenvalues inside the unit circle at each
time step, then the states of the network will be in the reduced attracting set of some
hyperbolic attractor, and thus the system will be robustly latched at that time. As
with any other RNN, this implies that lim_{n→∞} J_x(t, n) = 0. Thus, NARX networks
will also suffer from vanishing gradients and the long-term dependencies problem.
However, we find in the simulation results that follow that NARX networks are
often much better at discovering long-term dependencies than conventional RNNs .
An intuitive reason why output delays can help long-term dependencies can be
found by considering how gradients are calculated using the Backpropagation
Through Time algorithm. BPTT involves two phases: unfolding the network in
time and backpropagating the error through the unfolded network. When a NARX
network is unfolded in time, the output delays will appear as jump-ahead connections in the unfolded network. Intuitively, these jump-ahead connections provide a
shorter path for propagating gradient information, thus reducing the sensitivity of
the network to long-term dependencies. However, this intuitive reasoning is only
valid if the total gradient through these jump- ahead pathways is greater than the
gradient through the layer-to-layer pathways.
It is possible to derive analytical results for some simple toy problems to show
that NARX networks are indeed less sensitive to long-term dependencies. Here
we give one such example, which is based upon the latching problem described
in [2]. Consider the one-node autonomous recurrent network described by x(t) =
tanh(w x(t − 1)), where w = 1.25, which has two stable fixed points at ±0.710
and one unstable fixed point at zero. The one-node, autonomous NARX network
x(t) = tanh(Σ_{r=1}^D w_r x(t − r)) has the same fixed points as long as Σ_{r=1}^D w_r = w.
Assume the state of the network has reached equilibrium at the positive stable fixed
point and there are no external inputs. For simplicity, we only consider the Jacobian
J(t, n) = ∂x(t)/∂x(t − n), which will be a component of the gradient ∇_w C. Figure 2a shows
plots of J(t, n) with respect to n for D = 1, D = 3, and D = 6 with w_i = w/D.
These plots show that the effect of output delays is to flatten out the curves and
place more emphasis on the gradient due to terms farther in the past. Note that the
gradient contribution due to short term dependencies is deemphasized. In Figure 2b
we show plots of the ratio J(t, n)/Σ_{τ=1}^t J(t, τ), which illustrates the percentage of the total
gradient that can be attributed to information n time steps in the past. These plots
show that this percentage is larger for the network with output delays, and thus
one would expect that these networks would be able to more effectively deal with
long-term dependencies.
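The analytical example above can be checked in a few lines of code. The recursion below is our sketch: it linearizes the network at the fixed point x* = 0.710, so each of the D delayed states contributes a constant per-step gain g = (1 − x*²)·w/D, and it reproduces the qualitative behavior of Figure 2: output delays flatten J(n) and shift gradient mass toward the past.

```python
def jacobian_at_fixed_point(delays, w=1.25, x_star=0.710, n_max=30):
    """J(n) = dx(t)/dx(t-n) for x(t) = tanh(sum_r w_r x(t-r)) with
    w_r = w/delays, linearized at the fixed point x*.  Each of the last
    `delays` states feeds back with constant gain g = (1 - x*^2) * w / delays.
    """
    g = (1.0 - x_star * x_star) * (w / delays)
    j = [1.0]
    for n in range(1, n_max + 1):
        j.append(g * sum(j[n - r] for r in range(1, min(delays, n) + 1)))
    return j

j1 = jacobian_at_fixed_point(1)
j6 = jacobian_at_fixed_point(6)
# with D = 6 output delays, far more of the gradient survives 10+ steps back,
# while the D = 1 network concentrates its gradient on the most recent step
```

With D = 1 the recursion collapses to the pure geometric decay J(n) = gⁿ, while for D = 6 the jump-ahead terms keep J(10) roughly an order of magnitude larger, matching the flattened curves in Figure 2a.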
4
4.1
Experimental results
The latching problem
We explored a slight modification on the latching problem described in [2], which
is a minimal task designed as a test that must necessarily be passed in order for
a network to robustly latch information. In this task there are three inputs u_1(t),
u_2(t), and a noise input e(t), and a single output y(t). Both u_1(t) and u_2(t) are
zero for all times t > 1. At time t = 1, u_1(1) = 1 and u_2(1) = 0 for samples from
class 1, and u_1(1) = 0 and u_2(1) = 1 for samples from class 2. The noise input e(t)
is drawn uniformly from [−b, b] when L < t ≤ T; otherwise e(t) = 0 when t ≤ L.
The network used to solve this problem is a NARX network consisting of a single
neuron, where the parameters h_i^t are adjustable and the recurrent weights w_r are fixed.²
We fixed the recurrent feedback weights to w_r = 1.25/D, which gives the autonomous
network two stable fixed points at ±0.710, as described in Section 3. It can be
shown [4] that the network is robust to perturbations in the range [-0.155,0.155].
Thus, the uniform noise in e(t) was restricted to this range.
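A data generator for this task might look like the following (our sketch; the paper gives no code, and the 0-indexed time convention inside the function is ours):

```python
import random

def make_latching_example(cls, T, L, b, seed=None):
    """One input sequence for the latching task: u1/u2 carry the class
    label only at t = 1; e(t) is uniform noise on (L, T] and zero before."""
    rng = random.Random(seed)
    u1 = [0.0] * T
    u2 = [0.0] * T
    if cls == 1:
        u1[0] = 1.0          # index 0 corresponds to t = 1 in the paper
    else:
        u2[0] = 1.0
    e = [rng.uniform(-b, b) if t + 1 > L else 0.0 for t in range(T)]
    target = 0.8 if cls == 1 else -0.8
    return u1, u2, e, target

u1, u2, e, tgt = make_latching_example(1, T=20, L=5, b=0.155, seed=0)
```

Setting b = 0.155 keeps the noise inside the robustness range quoted above, so a correctly latched network should never be knocked out of its attractor.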
For each simulation, we generated 30 strings from each class, each with a different
e(t). The initial values of h_i^t for each simulation were also chosen from the same
distribution that defines e(t). For strings from class one, a target value of 0.8 was
chosen, for class two -0.8 was chosen. The network was run using a simple BPTT
algorithm with a learning rate of 0.1 for a maximum of 100 epochs. (We found that
the network converged to some solution consistently within a few dozen epochs.) If
the simulation exceeded 100 epochs and did not correctly classify all strings then
the simulation was ruled a failure. We varied T from 10 to 200 in increments of 2.
For each value of T, we ran 50 simulations. Figure 3a shows a plot of the percentage
of those runs that were successful for each case. It is clear from these plots that
²Although this description may appear different from the one in [2], it can be shown
that they are actually identical experiments for D = 1.
Figure 3: (a) Plots of percentage of successful simulations as a function of T, the
length of the input strings. (b) Plots of the final classification rate with respect to
different length input strings.
the NARX networks become increasingly less sensitive to long-term dependencies
as the output order is increased.
4.2
The parity problem
In the parity problem, the task is to classify sequences depending on whether or not
the number of 1s in the input string is odd. We generated 20 strings of different
lengths from 3 to 5 and added uniformly distributed noise in the range [-0.2,0.2] at
the end of each string. The length of input noise varied from 0 to 50. We arbitrarily
chose 0.7 and −0.7 to represent the symbols "1" and "0". The target is only given
at the end of each string. Three different networks with different number of output
delays were run on this problem in order to evaluate the capability of the network
to learn long-term dependencies. In order to make the networks comparable, we
chose networks in which the number of weights was roughly equal. For networks
with one to three delays, 5, 4 and 3 hidden neurons were chosen respectively, giving
21, 21, and 19 trainable weights. Initial weight values were randomly generated
between -0.5 and 0.5 for 10 trials.
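A matching generator for the parity task can be sketched as follows (our illustration; the ±0.7 encoding of the end-of-sequence target is an assumption, since the paper only specifies the input symbol encoding):

```python
import random

def make_parity_example(bits, noise_len, rng):
    """Encode bits as +/-0.7 symbols, append uniform noise in [-0.2, 0.2],
    and return the sequence together with its odd-parity target."""
    seq = [0.7 if b else -0.7 for b in bits]
    seq += [rng.uniform(-0.2, 0.2) for _ in range(noise_len)]
    target = 0.7 if sum(bits) % 2 == 1 else -0.7
    return seq, target

rng = random.Random(0)
seq, target = make_parity_example([1, 0, 1], noise_len=10, rng=rng)
```

The noise tail forces the network to hold the running parity across many irrelevant time steps, which is exactly the long-term dependency being tested.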
Fig. 3b shows the average classification rate with respect to different length of input
noise. When the length of the noise is less than 5, all three of the networks can
learn all the sequences with the classification rate near to 100%. When the length
increases to between 10 and 35, the classification rate of networks with one feedback
delay drops quickly to about 60% while the rate of those networks with two or three
feedback delays still remains about 80%.
5
Conclusion
In this paper we considered an architectural approach to dealing with the problem of
learning long-term dependencies. We explored the ability of a class of architectures
called NARX networks to solve such problems. This has been observed previously,
in the sense that gradient descent learning appeared to be more effective in NARX
networks than in RNNs [8]. We presented an analytical example that showed that
the gradients do not vanish as quickly in NARX networks as they do in networks
without multiple delays when the network is operating at a fixed point. We also
presented two experimental problems which show that NARX networks can outperform networks with single delays on some simple problems involving long-term
dependencies.
We speculate that similar results could be obtained for other networks. In particular
we hypothesize that any network that uses tapped delay feedback [1, 9] would
demonstrate improved performance on problems involving long-term dependencies.
Acknowledgements
We would like to thank A. Back and Y. Bengio for many useful suggestions.
References
[1] A.D. Back and A.C. Tsoi. FIR and IIR synapses, a new neural network architecture for time
series modeling. Neural Computation, 3(3):375-385, 1991.
[2] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult.
IEEE Trans. on Neural Networks, 5(2):157-166, 1994.
[3] S. Chen, S.A. Billings, and P.M. Grant. Non-linear system identification using neural networks.
International Journal of Control, 51(6):1191-1214, 1990.
[4] P. Frasconi, M. Gori, M. Maggini, and G. Soda. Unified integration of explicit knowledge and
learning by example in recurrent networks. IEEE Trans. on Knowledge and Data Engineering, 7(2):340-346,
1995.
[5] M. Gori, M. Maggini, and G. Soda. Scheduling of modular architectures for inductive inference of
regular grammars. In ECAI'94 Work. on Comb. Sym. and Connectionist Proc., pages 78-87.
[6] S. El Hihi and Y. Bengio. Hierarchical recurrent neural networks for long-term dependencies. In
NIPS 8, 1996. (In this Proceedings.)
[7] S. Hochreiter and J. Schmidhuber. Long short term memory. Technical Report FKI-207-95,
Technische Universitat Munchen, 1995.
[8] B.G. Horne and C.L. Giles. An experimental comparison of recurrent neural networks. In NIPS 7,
pages 697-704, 1995.
[9] R.R. Leighton and B.C. Conrath. The autoregressive backpropagation algorithm. In Proceedings
of the International Joint Conference on Neural Networks, volume 2, pages 369-377, July 1991.
[10] I.J. Leontaritis and S.A. Billings. Input-output parametric models for non-linear systems: Part
I: deterministic non-linear systems. International Journal of Control, 41(2):303-328, 1985.
[11] T.N. Lin, B.G. Horne, P. Tino, and C.L. Giles. Learning long-term dependencies is not as difficult
with NARX recurrent neural networks. Technical Report UMIACS-TR-95-78 and CS-TR-3500,
Univ. of Maryland, 1995.
[12] L. Ljung. System identification: Theory for the user. Prentice-Hall, 1987.
[13] M.C. Mozer. Induction of multiscale temporal structure. In J.E. Moody, S.J. Hanson, and R.P.
Lippmann, editors, NIPS 4, pages 275-282, 1992.
[14] K.S. Narendra and K. Parthasarathy. Identification and control of dynamical systems using neural
networks. IEEE Trans. on Neural Networks, 1:4-27, March 1990.
[15] G.V. Puskorius and L.A. Feldkamp. Recurrent network training with the decoupled extended
Kalman filter. In Proc. 1992 SPIE Conf. on the Sci. of ANN, Orlando, Florida, April 1992.
[16] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression.
Neural Computation, 4(2):234-242, 1992.
[17] J. Schmidhuber. Learning unambiguous reduced sequence descriptions. In NIPS 4, pages 291-298, 1992.
[18] H.T. Siegelmann, B.G. Horne, and C.L. Giles. Computational capabilities of NARX neural networks.
IEEE Trans. on Systems, Man and Cybernetics, 1996. Accepted.
[19] H.T. Siegelmann and E.D. Sontag. On the computational power of neural networks. Journal of
Computer and System Science, 50(1):132-150, 1995.
[20] E.D. Sontag. Systems combining linearity and saturations and relations to neural networks.
Technical Report SYCON-92-01, Rutgers Center for Systems and Control, 1992.
[21] H. Su, T. McAvoy, and P. Werbos. Long-term predictions of chemical processes using recurrent
neural networks: A parallel training approach. Ind. Eng. Chem. Res., 31:1338, 1992.
Extracting Tree-Structured
Representations of Trained Networks
Mark W. Craven and Jude W. Shavlik
Computer Sciences Department
University of Wisconsin-Madison
1210 West Dayton St.
Madison, WI 53706
craven@cs.wisc.edu, shavlik@cs.wisc.edu
Abstract
A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We
present a novel algorithm , TREPAN, for extracting comprehensible ,
symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the
concept represented by a given network. Our experiments demonstrate that TREPAN is able to produce decision trees that maintain
a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our
algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.
1
Introduction
For many learning tasks , it is important to produce classifiers that are not only
highly accurate, but also easily understood by humans. Neural networks are limited in this respect, since they are usually difficult to interpret after training. In
contrast to neural networks, the solutions formed by "symbolic" learning systems
(e.g., Quinlan, 1993) are usually much more amenable to human comprehension.
We present a novel algorithm, TREPAN, for extracting comprehensible, symbolic
representations from trained neural networks. TREPAN queries a given network
to induce a decision tree that describes the concept represented by the network.
We evaluate our algorithm using several real-world problem domains , and present
results that demonstrate that TREPAN is able to produce decision trees that are
accurate and comprehensible, and maintain a high level of fidelity to the networks
from which they were extracted. Unlike previous work in this area, our algorithm
is very general in its applicability, and scales well to large networks and problems
with high-dimensional input spaces.
The task that we address is defined as follows: given a trained network and the
data on which it was trained, produce a concept description that is comprehensible,
yet classifies instances in the same way as the network. The concept description
produced by our algorithm is a decision tree, like those generated using popular
decision-tree induction algorithms (Breiman et al., 1984; Quinlan, 1993).
There are several reasons why the comprehensibility of induced concept descriptions
is often an important consideration. If the designers and end-users of a learning
system are to be confident in the performance of the system, they must understand
how it arrives at its decisions . Learning systems may also play an important role
in the process of scientific discovery. A system may discover salient features and
relationships in the input data whose importance was not previously recognized. If
the representations formed by the learner are comprehensible, then these discoveries
can be made accessible to human review. However, for many problems in which
comprehensibility is important, neural networks provide better generalization than
common symbolic learning algorithms. It is in these domains that it is important
to be able to extract comprehensible concept descriptions from trained networks.
2
Extracting Decision Trees
Our approach views the task of extracting a comprehensible concept description
from a trained network as an inductive learning problem. In this learning task,
the target concept is the function represented by the network, and the concept
description produced by our learning algorithm is a decision tree that approximates
the network. However, unlike most inductive learning problems, we have available
an oracle that is able to answer queries during the learning process. Since the
target function is simply the concept represented by the network, the oracle uses the
network to answer queries. The advantage of learning with queries, as opposed to
ordinary training examples, is that they can be used to garner information precisely
where it is needed during the learning process .
Our algorithm, as shown in Table 1, is similar to conventional decision-tree algorithms, such as CART (Breiman et al., 1984) and C4.5 (Quinlan, 1993), which
learn directly from a training set. However, TREPAN is substantially different from
these conventional algorithms in a number of respects, which we detail below.
The Oracle. The role of the oracle is to determine the class (as predicted by
the network) of each instance that is presented as a query. Queries to the oracle,
however, do not have to be complete instances, but instead can specify constraints
on the values that the features can take. In the latter case, the oracle generates
a complete instance by randomly selecting values for each feature, while ensuring
that the constraints are satisfied. In order to generate these random values, TREPAN
uses the training data to model each feature's marginal distribution. TREPAN uses
frequency counts to model the distributions of discrete-valued features, and a kernel
density estimation method (Silverman, 1986) to model continuous features. As
shown in Table 1, the oracle is used for three different purposes: (i) to determine
the class labels for the network's training examples; (ii) to select splits for each of
the tree's internal nodes; (iii) and to determine if a node covers instances of only
one class. These aspects of the algorithm are discussed in more detail below .
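For discrete-valued features, the oracle's instance generation can be sketched as below (our illustration: the feature names and the predicate representation of path constraints are ours, and the kernel-density handling of continuous features is omitted):

```python
import random

def draw_instance(marginals, constraints, rng):
    """Generate a complete query instance: each feature is sampled from
    its empirical marginal (frequency counts over the training data),
    keeping only values that satisfy the constraints on the current path."""
    instance = {}
    for feature, observed in marginals.items():
        allowed = constraints.get(feature, lambda v: True)
        candidates = [v for v in observed if allowed(v)]
        instance[feature] = rng.choice(candidates)
    return instance

# marginals modeled by raw value lists, so frequent values are drawn more often
marginals = {"color": ["red", "red", "blue"], "size": [1, 2, 2, 3]}
constraints = {"color": lambda v: v != "blue"}   # a split on the path to this node
inst = draw_instance(marginals, constraints, random.Random(0))
# inst["color"] is always "red"; inst["size"] follows the training counts
```

Filtering the candidate list before sampling is one simple way to guarantee that every generated instance reaches the node whose constraints were supplied.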
Tree Expansion. Unlike most decision-tree algorithms, which grow trees in a
depth-first manner, TREPAN grows trees using a best-first expansion. The notion
Table 1: The TREPAN algorithm.
TREPAN(training_examples, features)
    Queue := ∅                                             /* sorted queue of nodes to expand */
    for each example E ∈ training_examples                 /* use net to label examples */
        class label for E := ORACLE(E)
    initialize the root of the tree, T, as a leaf node
    put (T, training_examples, {}) into Queue
    while Queue is not empty and size(T) < tree_size_limit /* expand a node */
        remove node N from head of Queue
        examples_N := example set stored with N
        constraints_N := constraint set stored with N
        use features to build set of candidate splits
        use examples_N and calls to ORACLE(constraints_N) to evaluate splits
        S := best binary split
        search for best m-of-n split, S', using S as a seed
        make N an internal node with split S'
        for each outcome, s, of S'                         /* make children nodes */
            make C, a new child node of N
            constraints_C := constraints_N ∪ {S' = s}
            use calls to ORACLE(constraints_C) to determine if C should remain a leaf
            otherwise
                examples_C := members of examples_N with outcome s on split S'
                put (C, examples_C, constraints_C) into Queue
    return T
of the best node, in this case, is the one at which there is the greatest potential
to increase the fidelity of the extracted tree to the network. The function used
to evaluate node n is f(n) = reach(n) x (1 - fidelity(n)) , where reach(n) is the
estimated fraction of instances that reach n when passed through the tree, and
fidelity(n) is the estimated fidelity of the tree to the network for those instances.
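In code, the best-first queue can be kept as a heap ordered by f(n) (a sketch; the use of heapq and the tuple node representation are our choices, not the paper's):

```python
import heapq

def priority(reach, fidelity):
    """Best-first score f(n) = reach(n) * (1 - fidelity(n)); a larger value
    means expanding the node can recover more fidelity to the network."""
    return reach * (1.0 - fidelity)

# heapq is a min-heap, so push negated scores to pop the best node first
queue = []
for name, reach, fid in [("root", 1.0, 0.90),
                         ("left", 0.6, 0.70),
                         ("right", 0.4, 0.99)]:
    heapq.heappush(queue, (-priority(reach, fid), name))
best = heapq.heappop(queue)[1]
# best == "left": 0.6 * 0.30 = 0.18 beats 1.0 * 0.10 and 0.4 * 0.01
```

Note how a node reached by fewer instances can still be expanded first when its current fidelity is poor, which is exactly the trade-off the product encodes.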
Split Types. The role of internal nodes in a decision tree is to partition the input
space in order to increase the separation of instances of different classes. In C4.5,
each of these splits is based on a single feature. Our algorithm, like Murphy and
Pazzani's (1991) ID2-of-3 algorithm, forms trees that use m-of-n expressions for
its splits. An m-of-n expression is a Boolean expression that is specified by an
integer threshold, m, and a set of n Boolean conditions. An m-of-n expression is
satisfied when at least m of its n conditions are satisfied. For example, suppose we
have three Boolean features, a, b, and c; the m-of-n expression 2-of-{a, ¬b, c} is
logically equivalent to (a ∧ ¬b) ∨ (a ∧ c) ∨ (¬b ∧ c).
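The defining check is simple to state in code; this small illustration (our own) verifies the text's example against its disjunctive-normal-form expansion:

```python
def m_of_n(m, conditions):
    """True when at least m of the n Boolean conditions hold."""
    return sum(bool(c) for c in conditions) >= m

# 2-of-{a, not-b, c} is logically equivalent to
# (a and not b) or (a and c) or (not b and c):
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            assert m_of_n(2, [a, not b, c]) == ((a and not b) or (a and c)
                                               or (not b and c))
```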
Split Selection. Split selection involves deciding how to partition the input space
at a given internal node in the tree. A limitation of conventional tree-induction
algorithms is that the amount of training data used to select splits decreases with
the depth of the tree. Thus splits near the bottom of a tree are often poorly chosen
because these decisions are based on few training examples. In contrast, because
TREPAN has an oracle available, it is able to use as many instances as desired to
select each split. TREPAN chooses a split after considering at least Smin instances,
where Smin is a parameter of the algorithm.
When selecting a split at a given node, the oracle is given the list of all of the
previously selected splits that lie on the path from the root of the tree to that node.
These splits serve as constraints on the feature values that any instance generated
by the oracle can take, since any example must satisfy these constraints in order to
Extracting Tree-structured Representations of Trained Networks
27
reach the given node.
Like the ID2-of-3 algorithm, TREPAN uses a hill-climbing search process to construct its m-of-n splits. The search process begins by first selecting the best binary
split at the current node; as in C4.5, TREPAN uses the gain ratio criterion (Quinlan,
1993) to evaluate candidate splits. For two-valued features, a binary split separates
examples according to their values for the feature. For discrete features with more
than two values, we consider binary splits based on each allowable value of the
feature (e.g., color=red?, color=blue?, ... ). For continuous features, we consider
binary splits on thresholds, in the same manner as C4.5. The selected binary split
serves as a seed for the m-of-n search process. This greedy search uses the gain ratio
measure as its heuristic evaluation function, and uses the following two operators
(Murphy & Pazzani, 1991):
• m-of-(n+1): Add a new value to the set, and hold the threshold constant.
  For example, 2-of-{a, b} ⇒ 2-of-{a, b, c}.
• (m+1)-of-(n+1): Add a new value to the set, and increment the threshold.
  For example, 2-of-{a, b, c} ⇒ 3-of-{a, b, c, d}.
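The two operators amount to generating neighbor candidates from the current split; in TREPAN each candidate would then be scored with the gain-ratio heuristic. A minimal sketch (names ours):

```python
def expand_split(m, conds, new_cond):
    """The two hill-climbing moves on a current m-of-n split (m, conds):
    m-of-(n+1) keeps the threshold; (m+1)-of-(n+1) increments it."""
    return [(m, conds + [new_cond]),       # m-of-(n+1)
            (m + 1, conds + [new_cond])]   # (m+1)-of-(n+1)

# From the seed 2-of-{a, b}, adding condition c yields the two neighbors
# 2-of-{a, b, c} and 3-of-{a, b, c}.
```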
Unlike ID2-of-3, TREPAN constrains m-of-n splits so that the same feature is not
used in two or more disjunctive splits which lie on the same path between the root
and a leaf of the tree. Without this restriction, the oracle might have to solve
difficult satisfiability problems in order to create instances for nodes on such a path.
Stopping Criteria. TREPAN uses two separate criteria to decide when to stop
growing an extracted decision tree. First, a given node becomes a leaf in the tree if,
with high probability, the node covers only instances of a single class. To make this
decision, TREPAN determines the proportion of examples, p_c, that fall into the most
common class at a given node, and then calculates a confidence interval around this
proportion (Hogg & Tanis, 1983). The oracle is queried for additional examples
until prob(p_c < 1 − ε) < δ, where ε and δ are parameters of the algorithm.
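As a sketch (ours), the leaf test can be implemented with a one-sided normal approximation to the binomial; the paper cites Hogg & Tanis (1983) for its interval, and the exact form it uses is not given here:

```python
import math

def pure_enough(n_majority, n_total, eps=0.05, delta=0.05):
    """Stop querying once, with high confidence, the majority-class
    proportion p_c exceeds 1 - eps (normal approximation, assumed form)."""
    p = n_majority / n_total
    se = math.sqrt(p * (1 - p) / n_total) or 1e-12
    z = ((1 - eps) - p) / se                              # standardized threshold
    prob_below = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # prob(p_c < 1 - eps)
    return prob_below < delta
```

With ε = δ = 0.05, a node whose sampled instances are 99.5% one class passes the test, while one at 90% keeps querying or gets split.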
TREPAN also accepts a parameter that specifies a limit on the number of internal
nodes in an extracted tree. This parameter can be used to control the comprehensibility of extracted trees, since in some domains very large trees may be required to
describe networks to a high level of fidelity.
3 Empirical Evaluation
In our experiments, we are interested in evaluating the trees extracted by our algorithm according to three criteria: (i) their predictive accuracy; (ii) their comprehensibility; and (iii) their fidelity to the networks from which they were extracted. We
evaluate TREPAN using four real-world domains: the Congressional voting data set
(15 features, 435 examples) and the Cleveland heart-disease data set (13 features,
303 examples) from the UC-Irvine database; a promoter data set (57 features, 468
examples) which is a more complex superset of the UC-Irvine one; and a data set in
which the task is to recognize protein-coding regions in DNA (64 features, 20,000
examples) (Craven & Shavlik, 1993b). We remove the physician-fee-freeze feature from the voting data set to make the problem more difficult. We conduct our
experiments using a 10-fold cross-validation methodology, except in the protein-coding domain. Because of certain domain-specific characteristics of this data set,
we use 4-fold cross-validation for our experiments with it.
We measure accuracy and fidelity on the examples in the test sets. Whereas accuracy is defined as the percentage of test-set examples that are correctly classified,
fidelity is defined as the percentage of test-set examples on which the classification
Table 2: Test-set accuracy and fidelity.
                           accuracy                        fidelity
domain            networks   C4.5   ID2-of-3   TREPAN       TREPAN
heart              84.5%    74.6%    71.0%     81.8%        94.1%
promoters          90.6     84.4     83.5      87.6         85.7
protein coding     94.1     90.3     90.9      91.4         92.4
voting             92.2     89.2     87.8      90.8         95.9
made by a tree agrees with its neural-network counterpart. Since the comprehensibility of a decision tree is problematic to measure, we measure the syntactic
complexity of trees and take this as being representative of their comprehensibility.
Specifically, we measure the complexity of each tree in two ways: (i) the number
of internal (i.e., non-leaf) nodes in the tree, and (ii) the number of symbols used in
the splits of the tree. We count an ordinary, single-feature split as one symbol. We
count an m-of-n split as n symbols, since such a split lists n feature values.
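Both measurements are one-liners; the following sketch (helper names ours) mirrors the definitions in the text:

```python
def fidelity(tree_pred, net_pred, test_set):
    """Fraction of test examples on which the extracted tree agrees with
    the network it came from (the network's label, not the true label)."""
    return sum(tree_pred(x) == net_pred(x) for x in test_set) / len(test_set)

def symbol_count(splits):
    """One symbol per single-feature split; an m-of-n split, given here as
    an (m, conditions) pair, counts n symbols, one per listed feature value."""
    return sum(len(s[1]) if isinstance(s, tuple) else 1 for s in splits)
```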
The neural networks we use in our experiments have a single layer of hidden units.
The number of hidden units used for each network (0, 5, 10, 20 or 40) is chosen
using cross validation on the network's training set, and we use a validation set to
decide when to stop training networks. TREPAN is applied to each saved network.
The parameters of TREPAN are set as follows for all runs: at least 1000 instances
(training examples plus queries) are considered before selecting each split; we set
the ε and δ parameters, which are used for the stopping-criterion procedure, to 0.05;
and the maximum tree size is set to 15 internal nodes, which is the size of a complete
binary tree of depth four.
As baselines for comparison, we also run Quinlan's (1993) C4.5 algorithm, and
Murphy and Pazzani's (1991) ID2-of-3 algorithm on the same testbeds. Recall
that ID2-of-3 is similar to C4.5, except that it learns trees that use m-of-n splits.
We use C4.5's pruning method for both algorithms and use cross validation to select
pruning levels for each training set. The cross-validation runs evaluate unpruned
trees and trees pruned with confidence levels ranging from 10% to 90%.
Table 2 shows the test-set accuracy results for our experiments. It can be seen
that, for every data set, neural networks generalize better than the decision trees
learned by C4.5 and ID2-of-3. The decision trees extracted from the networks by
TREPAN are also more accurate than the C4.5 and ID2-of-3 trees in all domains.
The differences in accuracy between the neural networks and the two conventional
decision-tree algorithms (C4.5 and ID2-of-3) are statistically significant for all four
domains at the 0.05 level using a paired, two-tailed t-test. We also test the significance of the accuracy differences between TREPAN and the other decision-tree
algorithms. Except for the promoter domain, these differences are also statistically
significant. The results in this table indicate that, for a range of interesting tasks,
our algorithm is able to extract decision trees which are more accurate than decision
trees induced strictly from the training data.
Table 2 also shows the test-set fidelity measurements for the TREPAN trees. These
results indicate that the trees extracted by TREPAN provide close approximations
to their respective neural networks.
Table 3 shows tree-complexity measurements for C4.5, ID2-of-3, and TREPAN. For
all four data sets, the trees learned by TREPAN have fewer internal nodes than
the trees produced by C4.5 and ID2-of-3. In most cases, the trees produced by
TREPAN and ID2-of-3 use more symbols than C4.5, since their splits are more
Table 3: Tree complexity.
                     # internal nodes                # symbols
domain             C4.5   ID2-of-3   TREPAN    C4.5   ID2-of-3   TREPAN
heart              17.5     15.7      11.8     17.5     48.8      20.8
promoters          11.2     12.6       9.2     11.2     47.5      23.8
protein coding    155.0     66.0      10.0    155.0    455.3      36.0
voting             20.1     19.2      11.2     20.1     77.3      20.8
complex. However, for most of the data sets, the TREPAN trees and the C4.5 trees
are comparable in terms of their symbol complexity. For all data sets, the ID2-of-3
trees are more complex than the TREPAN trees. Based on these results, we argue
that the trees extracted by TREPAN are as comprehensible as the trees learned by
conventional decision-tree algorithms.
4 Discussion and Conclusions
In the previous section, we evaluated our algorithm along the dimensions of fidelity, syntactic complexity, and accuracy. Another advantage of our approach
is its generality. Unlike numerous other extraction methods (Hayashi, 1991;
McMillan et al., 1992; Craven & Shavlik, 1993a; Sethi et al., 1993; Tan, 1994;
Tchoumatchenko & Ganascia, 1994; Alexander & Mozer, 1995; Setiono & Liu,
1995), the TREPAN algorithm does not place any requirements on either the architecture of the network or its training method. TREPAN simply uses the network
as a black box to answer queries during the extraction process. In fact, TREPAN
could be used to extract decision-trees from other types of opaque learning systems,
such as nearest-neighbor classifiers.
There are several existing algorithms which do not require special network architectures or training procedures (Saito & Nakano, 1988; Fu, 1991; Gallant, 1993).
These algorithms, however, assume that each hidden unit in a network can be accurately approximated by a threshold unit. Additionally, these algorithms do not
extract m-of-n rules, but instead extract only conjunctive rules. In previous work
(Craven & Shavlik, 1994; Towell & Shavlik, 1993), we have shown that this type of
algorithm produces rule-sets which typically are far too complex to be comprehensible. Thrun (1995) has developed a general method for rule extraction, and has
described how his algorithm can be used to verify that an m-of-n rule is consistent
with a network, but he has not developed a rule-searching method that is able to
find concise rule sets. A strength of our algorithm, in contrast, is its scalability.
We have demonstrated that our algorithm is able to produce succinct decision-tree
descriptions of large networks in domains with large input spaces.
In summary, a significant limitation of neural networks is that their concept representations are usually not amenable to human understanding. We have presented an
algorithm that is able to produce comprehensible descriptions of trained networks
by extracting decision trees that accurately describe the networks' concept representations. We believe that our algorithm, which takes advantage of the fact that
a trained network can be queried, represents a promising advance towards the goal
of general methods for understanding the solutions encoded by trained networks.
Acknowledgements
This research was partially supported by ONR grant N00014-93-1-0998.
References
Alexander, J. A. & Mozer, M. C. (1995). Template-based algorithms for connectionist
rule extraction. In Tesauro, G., Touretzky, D., & Leen, T., editors, Advances in Neural
Information Processing Systems (volume 7). MIT Press.
Breiman, L., Friedman, J., Olshen, R., & Stone, C. (1984). Classification and Regression
Trees. Wadsworth and Brooks, Monterey, CA.
Craven, M. & Shavlik, J. (1993a). Learning symbolic rules using artificial neural networks.
In Proc. of the 10th International Conference on Machine Learning, (pp. 73-80), Amherst,
MA. Morgan Kaufmann.
Craven, M. W. & Shavlik, J. W. (1993b). Learning to predict reading frames in E.
coli DNA sequences. In Proc. of the 26th Hawaii International Conference on System
Sciences, (pp. 773-782), Wailea, HI. IEEE Press.
Craven, M. W. & Shavlik, J. W. (1994). Using sampling and queries to extract rules
from trained neural networks. In Proc. of the 11th International Conference on Machine
Learning, (pp. 37-45), New Brunswick, NJ. Morgan Kaufmann.
Fu, L. (1991). Rule learning by searching on adapted nets. In Proc. of the 9th National
Conference on Artificial Intelligence, (pp. 590-595), Anaheim, CA. AAAI/MIT Press.
Gallant, S. I. (1993). Neural Network Learning and Expert Systems. MIT Press.
Hayashi, Y. (1991). A neural expert system with automated extraction of fuzzy if-then
rules. In Lippmann, R., Moody, J., & Touretzky, D., editors, Advances in Neural
Information Processing Systems (volume 3). Morgan Kaufmann, San Mateo, CA.
Hogg, R. V. & Tanis, E. A. (1983). Probability and Statistical Inference. MacMillan.
McMillan, C., Mozer, M. C., & Smolensky, P. (1992). Rule induction through integrated
symbolic and subsymbolic processing. In Moody, J., Hanson, S., & Lippmann, R., editors,
Advances in Neural Information Processing Systems (volume 4). Morgan Kaufmann.
Murphy, P. M. & Pazzani, M. J. (1991). ID2-of-3: Constructive induction of M-of-N
concepts for discriminators in decision trees. In Proc. of the 8th International Machine
Learning Workshop, (pp. 183-187), Evanston, IL. Morgan Kaufmann.
Quinlan, J. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann.
Saito, K. & Nakano, R. (1988). Medical diagnostic expert system based on PDP model.
In Proc. of the IEEE International Conference on Neural Networks, (pp. 255-262), San
Diego, CA. IEEE Press.
Sethi, I. K., Yoo, J. H., & Brickman, C. M. (1993). Extraction of diagnostic rules
using neural networks. In Proc. of the 6th IEEE Symposium on Computer-Based Medical
Systems, (pp. 217-222), Ann Arbor, MI. IEEE Press.
Setiono, R. & Liu, H. (1995). Understanding neural networks via rule extraction. In
Proc. of the 14th International Joint Conference on Artificial Intelligence, (pp. 480-485),
Montreal, Canada.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman
and Hall.
Tan, A.-H. (1994). Rule learning and extraction with self-organizing neural networks. In
Proc. of the 1993 Connectionist Models Summer School. Erlbaum.
Tchoumatchenko, I. & Ganascia, J.-G. (1994). A Bayesian framework to integrate symbolic and neural learning. In Proc. of the 11th International Conference on Machine
Learning, (pp. 302-308), New Brunswick, NJ. Morgan Kaufmann.
Thrun, S. (1995). Extracting rules from artificial neural networks with distributed representations. In Tesauro, G., Touretzky, D., & Leen, T., editors, Advances in Neural
Information Processing Systems (volume 7). MIT Press.
Towell, G. & Shavlik, J. (1993). Extracting refined rules from knowledge-based neural
networks. Machine Learning, 13(1):71-101.
Does the Wake-sleep Algorithm
Produce Good Density Estimators?
Brendan J. Frey, Geoffrey E. Hinton
Department of Computer Science
University of Toronto
Toronto, ON M5S 1A4, Canada
{frey, hinton} @cs.toronto.edu
Peter Dayan
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
dayan@ai.mit.edu
Abstract
The wake-sleep algorithm (Hinton, Dayan, Frey and Neal 1995) is a relatively efficient method of fitting a multilayer stochastic generative
model to high-dimensional data. In addition to the top-down connections in the generative model, it makes use of bottom-up connections for
approximating the probability distribution over the hidden units given
the data, and it trains these bottom-up connections using a simple delta
rule. We use a variety of synthetic and real data sets to compare the performance of the wake-sleep algorithm with Monte Carlo and mean field
methods for fitting the same generative model and also compare it with
other models that are less powerful but easier to fit.
1 INTRODUCTION
Neural networks are often used as bottom-up recognition devices that transform input vectors into representations of those vectors in one or more hidden layers. But multilayer networks of stochastic neurons can also be used as top-down generative models that produce
patterns with complicated correlational structure in the bottom visible layer. In this paper
we consider generative models composed of layers of stochastic binary logistic units.
Given a generative model parameterized by top-down weights, there is an obvious way to
perform unsupervised learning. The generative weights are adjusted to maximize the probability that the visible vectors generated by the model would match the observed data.
Unfortunately, to compute the derivatives of the log probability of a visible vector, d, with
respect to the generative weights, θ, it is necessary to consider all possible ways in which
d could be generated. For each possible binary representation α in the hidden units, the
derivative needs to be weighted by the posterior probability of α given d and θ:

P(α|d, θ) = P(α|θ)P(d|α, θ) / Σ_β P(β|θ)P(d|β, θ).    (1)
It is intractable to compute P(α|d, θ), so instead of minimizing −log P(d|θ), we minimize
an easily computed upper bound on this quantity that depends on some additional parameters, φ:

−log P(d|θ) ≤ F(d|θ, φ) = −Σ_α Q(α|d, φ) log P(α, d|θ) + Σ_α Q(α|d, φ) log Q(α|d, φ).    (2)

F(d|θ, φ) is a Helmholtz free energy and is equal to −log P(d|θ) when the distribution
Q(·|d, φ) is the same as the posterior distribution P(·|d, θ). Otherwise, F(d|θ, φ)
exceeds −log P(d|θ) by the asymmetric divergence:

D = Σ_α Q(α|d, φ) log (Q(α|d, φ) / P(α|d, θ)).    (3)
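The relations among equations 1-3 can be checked numerically on a toy model. In this sketch (all distributions invented for illustration), the hidden representation α ranges over two binary units, and we confirm that F upper-bounds −log P(d|θ) with a gap of exactly D:

```python
import itertools
import math

def free_energy_check():
    """Tiny numeric check of equations 1-3 on made-up tables: one visible
    configuration d, hidden representations alpha over 2 binary units."""
    alphas = list(itertools.product((0, 1), repeat=2))
    p_alpha = {a: 0.25 for a in alphas}                        # P(alpha|theta)
    p_d_given = {(0, 0): 0.1, (0, 1): 0.6,
                 (1, 0): 0.3, (1, 1): 0.9}                     # P(d|alpha, theta)
    q = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.1, (1, 1): 0.3}   # Q(alpha|d, phi)

    p_d = sum(p_alpha[a] * p_d_given[a] for a in alphas)       # marginal P(d|theta)
    post = {a: p_alpha[a] * p_d_given[a] / p_d for a in alphas}    # eq. (1)
    F = sum(q[a] * (math.log(q[a]) - math.log(p_alpha[a] * p_d_given[a]))
            for a in alphas)                                       # eq. (2)
    D = sum(q[a] * math.log(q[a] / post[a]) for a in alphas)       # eq. (3)
    return -math.log(p_d), F, D
```

Because Q here differs from the true posterior, D is strictly positive and F sits strictly above −log P(d|θ).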
We restrict Q(·|d, φ) to be a product distribution within each layer that is conditional on
the binary states in the layer below, and we can therefore compute it efficiently using a bottom-up recognition network. We call a model that uses bottom-up connections to minimize the bound in equation 2 in this way a Helmholtz machine (Dayan, Hinton, Neal and
Zemel 1995). The recognition weights φ take the binary activities in one layer and stochastically produce binary activities in the layer above using a logistic function. So, for a
given visible vector, the recognition weights may produce many different representations
in the hidden layers, but we can get an unbiased sample from the distribution Q(·|d, φ) in
a single bottom-up pass through the recognition network.
The highly restricted form of Q(·|d, φ) means that even if we use the optimal recognition
weights, the gap between F(d|θ, φ) and −log P(d|θ) is large for some generative models.
However, when F(d|θ, φ) is minimized with respect to the generative weights, these models will generally be avoided.
F(d|θ, φ) can be viewed as the expected number of bits required to communicate a visible
vector to a receiver. First we use the recognition model to get a sample from the distribution Q(·|d, φ). Then, starting at the top layer, we communicate the activities in each layer
using the top-down expectations generated from the already communicated activities in
the layer above. It can be shown that the number of bits required for communicating the
state of each binary unit is s_k log(q_k/p_k) + (1 − s_k) log[(1 − q_k)/(1 − p_k)], where p_k is the
top-down probability that s_k is on and q_k is the bottom-up probability that s_k is on.
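As a sketch (ours, using logs base 2), the per-unit cost from the text, with sanity checks that it vanishes when the recognition and generative probabilities agree and that its expectation under q is a nonnegative KL term:

```python
import math

def unit_code_cost(s, p, q):
    """Bits to communicate state s of one binary unit, where p is the
    top-down probability of the unit being on and q the bottom-up one."""
    return math.log2(q / p) if s else math.log2((1 - q) / (1 - p))
```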
There is a very simple on-line algorithm that minimizes F(d|θ, φ) with respect to the generative weights. We simply use the recognition network to generate a sample from the distribution Q(·|d, φ) and then we increment each top-down weight θ_kj by εs_k(s_j − p_j), where
θ_kj connects unit k to unit j. It is much more difficult to exactly follow the gradient of
F(d|θ, φ) with respect to the recognition weights, but there is a simple approximate
method (Hinton, Dayan, Frey and Neal 1995). We generate a stochastic sample from the
generative model and then we increment each bottom-up weight φ_ij by εs_i(s_j − q_j) to
increase the log probability that the recognition weights would produce the correct activities in the layer above. This way of fitting a Helmholtz machine is called the "wake-sleep"
algorithm and the purpose of this paper is to assess how effective it is at performing high-dimensional density estimation on a variety of synthetically constructed data sets and two
real-world ones. We compare it with other methods of fitting the same type of generative
model and also with simpler models for which there are efficient fitting algorithms.
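The two delta rules can be written down directly. This toy sketch (one hidden layer, one visible layer, a flat top-level prior, all names ours) is only meant to show the wake-phase update εs_k(s_j − p_j) and the sleep-phase update εs_i(s_j − q_j), not the full machine:

```python
import math
import random

def wake_sleep_step(gen_w, rec_w, data_vec, lr=0.1, rng=None):
    """gen_w[k][j]: top-down weight, hidden k -> visible j.
    rec_w[k][i]: bottom-up weight, visible i -> hidden k."""
    rng = rng or random
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    sample = lambda prob: 1 if rng.random() < prob else 0
    n_hid, n_vis = len(rec_w), len(data_vec)

    # Wake phase: the recognition net picks hidden states for the data ...
    h = [sample(sigmoid(sum(w[i] * data_vec[i] for i in range(n_vis))))
         for w in rec_w]
    # ... and each generative weight moves by lr * s_k * (s_j - p_j).
    for j in range(n_vis):
        p_j = sigmoid(sum(gen_w[k][j] * h[k] for k in range(n_hid)))
        for k in range(n_hid):
            gen_w[k][j] += lr * h[k] * (data_vec[j] - p_j)

    # Sleep phase: dream a fantasy from the generative model ...
    h_f = [sample(0.5) for _ in range(n_hid)]   # flat prior over the top layer
    v_f = [sample(sigmoid(sum(gen_w[k][j] * h_f[k] for k in range(n_hid))))
           for j in range(n_vis)]
    # ... and each recognition weight moves by lr * s_i * (s_j - q_j).
    for k in range(n_hid):
        q_k = sigmoid(sum(rec_w[k][i] * v_f[i] for i in range(n_vis)))
        for i in range(n_vis):
            rec_w[k][i] += lr * v_f[i] * (h_f[k] - q_k)
```

Both rules are purely local: each weight change needs only the activities at its two ends, which is what makes the algorithm cheap relative to exact gradient following.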
2 COMPETITORS
We compare the wake-sleep algorithm with six other density estimation methods. All data
units are binary and can take on values d_k = 1 (on) and d_k = 0 (off).
Gzip. Gzip (Gailly, 1993) is a practical compression method based on Lempel-Ziv coding.
This sequential data compression technique encodes future segments of data by transmitting codewords that consist of a pointer into a buffer of recent past output together with
the length of the segment being coded. Gzip's performance is measured by subtracting the
length of the compressed training set from the length of the compressed training set plus a
subset of the test set. Taking all disjoint test subsets into account gives an overall test set
code cost. Since we are interested in estimating the expected performance on one test case,
to get a tight lower bound on gzip's performance, the subset size should be kept as small
as possible in order to prevent gzip from using early test data to compress later test data.
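A rough illustration (our own helper, using Python's gzip module) of the measurement scheme: compress the training set alone and with each disjoint test subset appended, and charge the difference to the subset's cases. Repetitive test data that gzip can predict from the training buffer costs almost nothing; novel data costs more.

```python
import gzip

def gzip_code_cost(train_bytes, test_subsets):
    """Approximate bits per test case: extra compressed length contributed
    by each disjoint subset, summed and divided by the number of cases
    (one case per line here)."""
    base = len(gzip.compress(train_bytes))
    extra = sum(len(gzip.compress(train_bytes + s)) - base
                for s in test_subsets)
    n_cases = sum(len(s.splitlines()) for s in test_subsets)
    return 8.0 * extra / max(n_cases, 1)
```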
Base Rate Model. Each visible unit k is assumed to be independent of the others with a
probability p_k of being on. The probability of vector d is p(d) = Π_k p_k^{d_k} (1 − p_k)^{1−d_k}. The
arithmetic mean of unit k's activity is used to estimate p_k, except that in order to avoid serious
overfitting, one extra on and one extra off case are included in the estimate.
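A direct transcription (ours) of the estimator and the product likelihood, including the one-extra-on/one-extra-off smoothing:

```python
def base_rate_fit(train):
    """p_k = (count of 1s + 1) / (n + 2): the smoothed arithmetic mean."""
    n = len(train)
    return [(sum(d[k] for d in train) + 1) / (n + 2)
            for k in range(len(train[0]))]

def base_rate_prob(p, d):
    """p(d) = prod_k p_k^{d_k} (1 - p_k)^{1 - d_k} for a binary vector d."""
    prob = 1.0
    for pk, dk in zip(p, d):
        prob *= pk if dk else 1 - pk
    return prob
```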
Binary Mixture Model. This method is a hierarchical extension of the base rate model
which uses more than one set of base rates. Each set is called a component. Component j
has probability π_j and awards each visible unit k a probability p_jk of being on. The net
probability of d is p(d) = Σ_j π_j Π_k p_jk^{d_k} (1 − p_jk)^{1−d_k}. For a given training datum, we consider the component identity to be a missing value which must be filled in before the
parameters can be adjusted. To accomplish this, we use the expectation maximization
algorithm (Dempster, Laird and Rubin 1977) to maximize the log-likelihood of the training set, using the same method as above to avoid serious overfitting.
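A compact sketch (ours) of the model and one EM iteration, treating the component identity as the missing value; the smoothing mentioned in the text is omitted for brevity. A single step never decreases the training likelihood:

```python
def component_prob(pj, d):
    """prod_k p_jk^{d_k} (1 - p_jk)^{1 - d_k} for one component."""
    prob = 1.0
    for pjk, dk in zip(pj, d):
        prob *= pjk if dk else 1 - pjk
    return prob

def mixture_prob(pi, p, d):
    """p(d) = sum_j pi_j * component_prob(p_j, d)."""
    return sum(pij * component_prob(pj, d) for pij, pj in zip(pi, p))

def em_step(pi, p, data):
    """E-step: responsibilities for the missing component identity.
    M-step: re-estimate mixing proportions and per-component base rates."""
    J, K = len(pi), len(p[0])
    resp = []
    for d in data:
        w = [pij * component_prob(pj, d) for pij, pj in zip(pi, p)]
        s = sum(w)
        resp.append([wi / s for wi in w])
    new_pi = [sum(r[j] for r in resp) / len(data) for j in range(J)]
    new_p = [[sum(r[j] * d[k] for r, d in zip(resp, data)) /
              max(sum(r[j] for r in resp), 1e-12) for k in range(K)]
             for j in range(J)]
    return new_pi, new_p
```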
Gibbs Machine (GM). This machine uses the same generative model as the Helmholtz
machine, but employs a Monte Carlo method called Gibbs sampling to find the posterior
in equation 1 (Neal, 1992). Unlike the Helmholtz machine it does not require a separate
recognition model and with sufficiently prolonged sampling it inverts the generative
model perfectly. Each hidden unit is sampled in fixed order from a probability distribution
conditional on the states of the other hidden and visible units. To reduce the time required
to approach equilibrium, the network is annealed during sampling.
Mean Field Method (MF). Instead of using a separate recognition model to approximate
the posterior in equation 1, we can assume that the distribution over hidden units is factorial for a given visible vector. Obtaining a good approximation to the posterior is then a
matter of minimizing free energy with respect to the mean activities. In our experiments,
we use the on-line mean field learning algorithm due to Saul, Jaakkola, and Jordan (1996).
Fully Visible Belief Network (FVBN). This method is a special case of the Helmholtz
machine where the top-down network is fully connected and there are no hidden units. No
recognition model is needed since there is no posterior to be approximated.
3 DATA SETS
The performances of these methods were compared on five synthetic data sets and two
real ones. The synthetic data sets had matched complexities: the generative models that
produced them had 100 visible units and between 1000 and 2500 parameters. A data set
with 100,000 examples was generated from each model and then partitioned into 10,000
for training, 10,000 for validation and 80,000 for testing. For tractable cases, each data set
entropy was approximated by the negative log-likelihood of the training set under its generative model. These entropies are approximate lower bounds on the performance.
The first synthetic data set was generated by a mixture model with 20 components. Each
component is a vector of 100 base rates for the 100 visible units. To make the data more
realistic, we arranged for there to be many different components whose base rates are all
extreme (near 0 or 1) - representing well-defined clusters - and a few components with
most base rates near 0.5 - representing much broader clusters. For component j, we
selected base rate p_jk from a beta distribution with mean μ_j and variance μ_j(1−μ_j)/40 (we
chose this variance to keep the entropy of visible units low for μ_j near 0 or 1, representing
well-defined clusters). Then, as often as not we randomly replaced each p_jk with 1−p_jk to
make each component different (without doing this, all components would favor all units
off). In order to obtain many well-defined clusters, the component means μ_j were themselves sampled from a beta distribution with mean 0.1 and variance 0.02.
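The construction is easy to reproduce. In this sketch (ours), a mean/variance pair is converted to Beta shape parameters and the random flip to 1 − p_jk is applied per unit; the numbers follow the text:

```python
import random

def sample_component(n_units=100, mean=0.1, var=0.02, rng=None):
    """Draw one component: mu_j ~ Beta with the stated mean/variance, then
    per-unit base rates p_jk ~ Beta(mean mu_j, variance mu_j(1-mu_j)/40),
    flipping each p_jk to 1 - p_jk with probability 1/2."""
    rng = rng or random.Random(0)

    def beta_ab(m, v):
        # Convert a mean/variance pair to Beta(a, b) shape parameters.
        common = m * (1 - m) / v - 1
        return m * common, (1 - m) * common

    a, b = beta_ab(mean, var)
    mu = rng.betavariate(a, b)
    a, b = beta_ab(mu, mu * (1 - mu) / 40)
    rates = []
    for _ in range(n_units):
        p = rng.betavariate(a, b)
        rates.append(1 - p if rng.random() < 0.5 else p)
    return rates
```

The inner variance μ_j(1−μ_j)/40 always yields valid shape parameters (a + b = 39), so no clipping is needed.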
The next two synthetic data sets were produced using sigmoidal belief networks (Neal
1992), which are just the generative parts of binary stochastic Helmholtz machines. These
networks had full connectivity between layers, one with a 20→100 architecture and one
with a 5→10→15→2→100 architecture. The biases were set to 0 and the weights were sampled uniformly from [-2, 2), a range chosen to keep the networks from being deterministic.
The final two synthetic data sets were produced using Markov random fields. These networks had full bidirectional connections between layers. One had a 10<=>20<=>100 architecture, and the other was a concatenation of ten independent 10<=>10 fields. The biases were
set to 0 and the weights were sampled from the set {-4, 0, 4} with probabilities {0.4, 0.4,
0.2}. To find data sets with high-order structure, versions of these networks were sampled
until data sets were found for which the base rate method performed badly.
We also compiled two versions of a data set to which the wake-sleep algorithm has previously been applied (Hinton et al. 1995). These data consist of normalized and quantized
8x8 binary images of handwritten digits made available by the US Postal Service Office of
Advanced Technology. The first version consists of a total of 13,000 images partitioned as
6000 for training, 2000 for validation and 5000 for testing. The second version consists of
pairs of 8x8 images (ie. 128 visible units) made by concatenating vectors from each of the
above data sets with those from a random reordering of the respective data set.
4 TRAINING DETAILS
The exact log-likelihoods for the base rate and mixture models can be computed, because
these methods have no or few hidden variables. For the other methods, computing the
exact log-likelihood is usually intractable. However, these methods provide an approximate upper bound on the negative log-likelihood in the form of a coding cost or Helmholtz
free energy, and results are therefore presented as coding costs in bits.
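For the base rate model, for example, the cost can be computed exactly as the negative log2-likelihood of the test set under per-unit Bernoulli rates fitted on the training set. A sketch (the smoothing pseudocount is ours, added to avoid log 0):

```python
import numpy as np

def base_rate_cost_bits(train, test):
    """Average coding cost per test vector, in bits, under the base rate
    model: each visible unit is an independent Bernoulli whose rate is
    estimated from the training set (with a small pseudocount)."""
    p = (train.sum(axis=0) + 1.0) / (train.shape[0] + 2.0)
    ll = test * np.log2(p) + (1 - test) * np.log2(1 - p)
    return -ll.sum(axis=1).mean()

rng = np.random.default_rng(0)
train = (rng.random((500, 100)) < 0.5).astype(float)
test = (rng.random((500, 100)) < 0.5).astype(float)
cost = base_rate_cost_bits(train, test)  # ~1 bit per unit for fair coins
```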
Because gzip performed poorly on the synthetic tasks, we did not break up the test and
validation sets into subsets. On the digit tasks, we broke the validation and test sets up to
make subsets of 100 visible vectors. Since the "-9" gzip option did not improve performance significantly, we used the default configuration.
To obtain fair results, we tried to automate the model selection process subject to the constraint of obtaining results in a reasonable amount of time. For the mixture model, the
Gibbs machine, the mean field method, and the Helmholtz machine, a single learning run
was performed with each of four different architectures using performance on a validation
set to avoid wasted effort. Performance on the validation set was computed every five
epochs, and if two successive validation performances were not better than the previous
one by more than 0.2%, learning was terminated. The network corresponding to the best
validation performance was selected for test set analysis. Although it would be desirable
to explore a wide range of architectures, it would be computationally ruinous. The architectures used are given in tables 3 and 4 in the appendix.
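The stopping rule above can be written down directly. One reading of it, tracking the best validation cost so far (an interpretive choice; the text compares against "the previous one"), looks like this:

```python
def train_with_validation(step_epoch, validate, max_epochs=1000):
    """Sketch of the schedule: validation performance is measured every
    five epochs, and training stops once two successive validation costs
    fail to improve on the best so far by more than 0.2%."""
    best, strikes, history = float("inf"), 0, []
    for epoch in range(1, max_epochs + 1):
        step_epoch()
        if epoch % 5:
            continue
        cost = validate()
        history.append(cost)
        if cost < best * (1.0 - 0.002):
            best, strikes = cost, 0
        else:
            strikes += 1
            if strikes >= 2:
                break
    return best, history
```

The network corresponding to the best validation cost would then be retained for test-set analysis.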
The Gibbs machine was annealed from an initial temperature of 5.0. Between each sweep
of the network, during which each hidden unit was sampled once, the temperature was
multiplied by 0.9227 so that after 20 sweeps the temperature was 1.0. Then, the generative
weights were updated using the delta rule. To bound the datum probability, the network is
annealed as above and then 40 sweeps at unity temperature are performed while summing
the probability over one-nearest-neighbor configurations, checking for overlap.
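The annealing schedule is fully specified: a geometric decay from T = 5.0 by a factor of 0.9227 per sweep reaches T = 1.0 after 20 sweeps, since 5.0 x 0.9227^20 is approximately 1.0:

```python
# Annealing schedule of the Gibbs machine: start at T = 5.0 and multiply
# by 0.9227 after each sweep of the network, reaching T = 1.0 at sweep 20.
def annealing_temperatures(t0=5.0, ratio=0.9227, n_sweeps=20):
    temps, t = [], t0
    for _ in range(n_sweeps):
        temps.append(t)
        t *= ratio
    return temps, t  # temperatures used per sweep, and the final value

temps, final_t = annealing_temperatures()
```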
A learning rate of 0.01 was used for the Gibbs machine, the mean field method, the Helmholtz machine, and the fully visible belief network. For each of these methods, this value
was found to be roughly the largest possible learning rate that safely avoided oscillations.
Does the Wake-sleep Algorithm Produce Good Density Estimators?
[Figure 1 here: bar chart of compression performance (bits) for gzip, the base rate model, the mixture model, the Gibbs machine, the mean field method, the fully visible belief network, and the data entropy, over the tasks Mixture 20->100, BN 20->100, BN 5->10->15->20->100, MRF 10<=>20<=>100, MRF 10 x (10<=>10), single digits, and digit pairs.]
Figure 1. Compression performance relative to the Helmholtz machine. Lines connecting
the data points are for visualization only, since there is no meaningful interpolant.
5 RESULTS
The learning times and the validation performances are given in tables 3 and 4 of the
appendix. Test set appraisals and total learning times are given in table 1 for the synthetic
tasks and in table 2 for the digit tasks. Because there were relatively many training cases
in each simulation, the validation procedure serves to provide timing information more
than to prevent overfitting. Gzip and the base rate model were very fast, followed by the
fully visible belief network, the mixture model, the Helmholtz machine, the mean field
method, and finally the Gibbs machine. Test set appraisals are summarized by compression performance relative to the Helmholtz machine in figure 1 above. Greater compression sizes correspond to lower test set likelihoods and imply worse density estimation.
When available, the data set entropies indicate how close to optimum each method comes.
The Helmholtz machine yields a much lower cost compared to gzip and base rates on all
tasks. Compared to the mixture model, it gives a lower cost on both BN tasks and the
MRF 10 x (10<=>10) task. The latter case shows that the Helmholtz machine was able to take
advantage of the independence of the ten concatenated input segments, whereas the mixture method was not. Simply to represent a problem where there are only two distinct clusters present in each of the ten segments, the mixture model would require 2^10 components.
Results on the two BN tasks indicate the Helmholtz machine is better able to model multiple simultaneous causes than the mixture method, which requires that only one component
(cause) is active at a time. On the other hand, compared to the mixture model, the Helmholtz machine performs poorly on the Mixture 20->100 task. It is not able to learn that only
one cause should be active at a time. This problem can be avoided by hard-wiring softmax
groups into the Helmholtz machine. On the five synthetic tasks, the Helmholtz machine
performs about the same as or better than the Gibbs machine, and runs two orders of magnitude faster. (The Gibbs machine was too slow to run on the digit tasks.) While the quality of density estimation produced by the mean field method is indistinguishable from the
Helmholtz machine, the latter runs an order of magnitude faster than the mean field algorithm we used. The fully visible belief network performs significantly better than the
Helmholtz machine on the two digit tasks and significantly worse on two of the synthetic
tasks. It is trained roughly two orders of magnitude faster than the Helmholtz machine.
Table 1. Test set cost (bits) and total training time (hrs) for the synthetic tasks.
              Model used to produce synthetic data
              Mixture      BN           BN 5->10->     MRF             MRF
              20->100      20->100      15->20->100    10<=>20<=>100   10 x (10<=>10)
Entropy        36.5         63.5        unknown         19.2            36.8
gzip           61.4    0    98.0    0    92.1    0      35.6    0       59.9    0
Base rates     96.6    0    80.7    0    69.2    0      42.2    0       68.1    0
Mixture        36.7    0    74.0    1    62.6    1      19.3    0       49.6    1
GM             44.1  131    63.9  240    58.1  251      26.1  195       40.3  145
MF             42.2   68    64.7   80    58.4   68      19.3   75       38.7   89
HM             42.7    8    65.2    4    58.5    2      19.4    3       38.6    4
FVBN           50.9    0    67.8    0    60.6    0      19.8    0       38.2    0
Table 2. Test set cost (bits) and training time (hrs) for the digit tasks.
Method         Single digits       Method         Digit pairs
gzip            44.3     0         gzip            89.2     0
Base rates      59.2     0         Base rates     118.4     0
Mixture         37.5     0         Mixture         92.7     1
MF              39.5    38         MF              80.7   104
HM              39.1     2         HM              80.4     7
FVBN            35.9     0         FVBN            72.9     0
6 CONCLUSIONS
If we were given a new data set and asked to leave our research biases aside and do efficient density estimation, how would we proceed? Evidently it would not be worth trying
gzip and the base rate model. We'd first try the fully visible belief network and the mixture
model, since these are fast and sometimes give good estimates. Hoping to extract extra
higher-order structure, we would then proceed to use the Helmholtz machine or the mean
field method (keeping in mind that our implementation of the Helmholtz machine is considerably faster than Saul et al.'s implementation of the mean field method). Because it is
so slow, we would avoid using the Gibbs machine unless the data set was very small.
Acknowledgments
We greatly appreciate the mean field software provided by Tommi Jaakkola and Lawrence
Saul. We thank members of the Neural Network Research Group at the University of Toronto for helpful advice. The financial support from ITRC, IRIS, and NSERC is appreciated.
References
Dayan, P., Hinton, G. E., Neal, R. M., and Zemel, R. S. 1995. The Helmholtz machine.
Neural Computation 7, 889-904.
Dempster, A. P., Laird, N. M., and Rubin, D. B. 1977. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society, Series B 34, 1-38.
Gailly, J. 1993. gzip program for unix.
Hinton, G. E., Dayan, P., Frey, B. J., Neal, R. M. 1995. The wake-sleep algorithm for
unsupervised neural networks. Science 268, 1158-1161.
Neal, R. M. 1992. Connectionist learning of belief networks. Artificial Intelligence 56, 71-113.
Saul, L. K., Jaakkola, T., and Jordan, M. I. 1996. Mean field theory for sigmoid belief networks. Submitted to Journal of Artificial Intelligence.
Appendix
The average validation set cost per example and the associated learning time for each simulation are listed in tables 3 and 4. Architectures judged to be optimal according to validation performance are indicated by "*" and were used to produce the test results given in
the body of this paper.
Table 3. Validation set cost (bits) and learning time (min) for the synthetic tasks.
                    Model used to produce synthetic data
                    Mixture       BN            BN 5->10->     MRF             MRF
                    20->100       20->100       15->20->100    10<=>20<=>100   10 x (10<=>10)
gzip                 61.6     0    98.1     0    92.3     0     35.6     0      60.0     0
Base rates           96.7     0    80.7     0    69.4     0     42.1     0      68.1     0
Mixture 20->100      44.6     3    75.6     4    63.9     3     19.2*    3      54.8     5
Mixture 40->100      36.8*    5    74.8     7    63.2     7     19.2     5      52.4    15
Mixture 60->100      36.8     7    74.4     7    62.9     8     19.2     8      51.0    17
Mixture 100->100     37.0    14    74.0*   12    62.7*   13     19.3    12      49.6*   22
GM 20->100           50.6  1187    63.9* 1639    58.1* 2084     26.1*  934      40.3* 1425
GM 50->100           68.8  2328    80.4  3481    76.4  5234     49.2  6472      56.5  3472
GM 10->20->100       44.1*  872    66.4  1771    59.8  3084     28.0   767      42.3  1033
GM 20->50->100       52.7  3476    91.3  7504    88.0  4647     55.3  3529      63.5  2781
MF 20->100           49.5   518    64.6   427    58.4*  497     19.4   862      39.2   471
MF 50->100           49.9  1644    64.8  1945    58.6  1465     20.4  1264      38.7* 2427
MF 10->20->100       46.0   306    64.6*  658    58.5   543     19.3*  569      38.9   882
MF 20->50->100       42.1* 1623    65.0  1798    58.6  1553     19.3  1778      38.8  1575
HM 20->100           50.0    41    65.2    28    58.8    41     19.7    15      38.6*   30
HM 50->100           50.7    81    65.5    66    59.4    78     20.2    27      38.9    46
HM 10->20->100       43.4    32    65.1*   38    58.5*   45     19.4*   21      38.9    46
HM 20->50->100       42.6*  308    67.2    64    59.2    69     19.5    93      39.4   102
FVBN                 51.0     7    67.8     7    60.7     6     19.8     8      38.3     6
Table 4. Validation set cost (bits) and learning time (min) for the digit tasks.
Method                 Single digits        Method                    Digit pairs
gzip                    44.2     0          gzip                       88.8     1
Base rates              59.0     0          Base rates                117.9     0
Mixture 16->64          43.2     1          Mixture 32->128            96.9     6
Mixture 32->64          40.0     4          Mixture 64->128            93.8     8
Mixture 64->64          38.0     5          Mixture 128->128           92.4*   14
Mixture 128->64         37.1*    6          Mixture 256->128           92.8    27
MF 16->24->64           39.9   341          MF 16->24->32->128         82.7  1335
MF 24->32->64           39.1*  845          MF 16->32->64->128         81.2  1441
MF 12->16->24->64       39.8   475          MF 12->16->24->32->128     82.8   896
MF 16->24->32->64       39.1   603          MF 12->16->32->64->128     80.1* 2586
HM 16->24->64           39.7    24          HM 16->24->32->128         83.8    76
HM 24->32->64           39.4    34          HM 16->32->64->128         80.1*  138
HM 12->16->24->64       40.4    16          HM 12->16->24->32->128     84.6    74
HM 16->24->32->64       38.9*   52          HM 12->16->32->64->128     80.1   135
FVBN                    35.8     1          FVBN                       72.5     7
PART V
IMPLEMENTATIONS
Control of Selective Visual Attention:
Modeling the "Where" Pathway
Ernst Niebur*
Computation and Neural Systems 139-74
California Institute of Technology
Christof Koch
Computation and Neural Systems 139-74
California Institute of Technology
Abstract
Intermediate and higher vision processes require selection of a subset of the available sensory information before further processing.
Usually, this selection is implemented in the form of a spatially
circumscribed region of the visual field, the so-called "focus of attention" which scans the visual scene dependent on the input and
on the attentional state of the subject. We here present a model for
the control of the focus of attention in primates, based on a saliency
map. This mechanism is not only expected to model the functionality of biological vision but also to be essential for the understanding
of complex scenes in machine vision.
1 Introduction: "What" and "Where" In Vision
It is a generally accepted fact that the computations of early vision are massively
parallel operations, i.e., applied in parallel to all parts of the visual field. This high
degree of parallelism cannot be sustained in intermediate and higher vision because
of the astronomical number of different possible combinations of features. Therefore,
it becomes necessary to select only a part of the instantaneous sensory input for
more detailed processing and to discard the rest . This is the mechanism of visual
selective attention.
* Present address: Zanvyl Krieger Mind/Brain Institute and Department of Neuroscience, 3400 N. Charles Street, The Johns Hopkins University, Baltimore, MD 21218
It is clear that similar selection mechanisms are also required in machine vision for
the analysis of all but the simplest visual scenes. Attentional mechanisms are slowly
introduced in this field; e .g. , Yamada and Cottrell (1995) used sequential scanning
by a "focus of attention" in the context of face recognition. Another model for
eye scan path generation, which is characterized by a strong top-down influence, is
presented by Rao and Ballard (this volume). Sequential scanning can be applied to
more abstract spaces, like the dynamics of complex systems in optimization problems
with large numbers of minima (Tsioutsias and Mjolsness, this volume).
Primate vision is organized along two major anatomical pathways. One of them is
concerned mainly with object recognition. For this reason , it has been called the
What- pathway; for anatomical reasons , it is also known as the ventral pathway.
The principal task of the other major pathway is the determination of the location
of objects and therefore it is called the Where pathway or, again for anatomical
reasons, the dorsal pathway.
In previous work (Niebur & Koch, 1994), we presented a model for the implementation of the What pathway. The underlying mechanism is "temporal tagging:" it
is assumed that the attended region of the visual field is distinguished from the
unattended parts by the temporal fine-structure of the neuronal spike trains. We
have shown that temporal tagging can be achieved by introducing moderate levels
of correlation 1 between those neurons which respond to attended stimuli.
How can such synchronicity be obtained? We have suggested a simple, neurally
plausible mechanism , namely common input to all cells which respond to attended
stimuli. Such (excitatory) input will increase the propensity of postsynaptic cells to
fire for a short time after receiving this input, and thereby increase the correlation
between spike trains without necessarily increasing the average firing rate .
The subject of the present study is to provide a model of the control system which
generates such modulating input. We will show that it is possible to construct
an integrated system of attentional control which is based on neurally plausible
elements and which is compatible with the anatomy and physiology of the primate
visual system . The system scans a visual Scene and identifies its most salient parts .
A possible task would be "Find all faces in this image." We are confident that this
model will not only further our understanding of the function of biological vision
but that it will also be relevant for technical applications.
2 A Simple Model of The Dorsal Pathway

2.1 Overall Structure
Figure 1 shows an overview of the model Where pathway. Input is provided in the
form of digitized images from an NTSC camera which is then analyzed in various
feature maps. These maps are organized around the known operations in early visual
cortices. They are implemented at different spatial scales and in a center-surround
structure akin to visual receptive fields . Different spatial scales are implemented as
Gaussian pyramids (Adelson , Anderson , Bergen, Burt, & Ogden, 1984). The center
1 In (Niebur, Koch, & Rosin, 1993), a similar model was developed using periodic
"40Hz" modulation. The present model can be adapted mutatis mutandis to this type
of modulation.
of the receptive field corresponds to the value of a pixel at level n in the pyramid
and the surround to the corresponding pixels at level n + 2, level 0 being the image
in normal size. The features implemented so far are the three principal components
of primate color vision (intensity, red-green, blue-yellow), four orientations, and
temporal change. Short descriptions of the different feature maps are presented in
the next section (2.2).
We then (section 2.3) address the question of the integration of the input in the
"saliency map," a topographically organized map which codes for the instantaneous
conspicuity of the different parts of the visual field .
???
Feature Maps
(multiscale)
Figure 1: Overview of the model Where pathway. Features are computed as centersurround differences at 4 different spatial scales (only 3 feature maps shown) . They
are combined and integrated in the saliency map ("SM") which provides input to an
array of integrate-and-fire neurons with global inhibition. This array ("WTA") has
the functionality of a winner-take-all network and provides the output to the ventral
pathway ("V2") as well as feedback to the saliency map (curved arrow) .
2.2 Input Features

2.2.1 Intensity
Intensity information is obtained from the chromatic information of the NTSC signal.
With R, G, and B being the red, green and blue channels, respectively, the intensity
I is obtained as I = (R + G + B)/3. The entry in the feature map is the modulus
of the contrast, i.e., |I_center - I_surround|. This corresponds roughly to the sum of
two single-opponent cells of opposite phase, i.e . bright-center - dark-surround and
vice-versa. Note, however, that the present version of the model does not reproduce
the temporal behavior of ON and OFF subfields because we update the activities in
the feature maps instantaneously with changing visual input. Therefore, we neglect
the temporal filtering properties of the input neurons.
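The center-surround construction over a Gaussian pyramid can be sketched as follows. The 5-tap binomial blur kernel and the nearest-neighbour upsampling of the surround are standard choices, not details given in the text:

```python
import numpy as np

def blur_downsample(img):
    """One Gaussian-pyramid REDUCE step: separable 5-tap binomial blur,
    then keep every second pixel in each dimension."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return img[::2, ::2]

def center_surround(img, n=1):
    """|center - surround| for intensity: the center is level n of the
    pyramid, the surround level n + 2, brought back to level n by
    nearest-neighbour repetition (the interpolation choice is ours)."""
    levels = [img]
    for _ in range(n + 2):
        levels.append(blur_downsample(levels[-1]))
    center, surround = levels[n], levels[n + 2]
    up = np.kron(surround, np.ones((4, 4)))[:center.shape[0], :center.shape[1]]
    return np.abs(center - up)

rng = np.random.default_rng(0)
feat = center_surround(rng.random((64, 64)))  # 64x64 input -> 32x32 feature map
```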
2.2.2 Chromatic Input
Red, green and blue are the pixel values of the RGB signal. Yellow is computed
as (R + G)/2. At each pixel, we compute a quantity corresponding to the double-opponency cells in primary visual cortex. For instance, for the red-green filter, we
first compute at each pixel the value of (red - green). From this, we then subtract
(green - red) of the surround. Finally, we take the absolute value of the result.
2.2.3 Orientation
The intensity image is convolved with four Gabor patches of angles 0, 45, 90, and
135 degrees, respectively. The results of these four convolutions are four arrays
scalars at every level of the pyramid. The average orientation is then computed
as a weighted vector sum. The components in this sum are the four unit vectors
n_i, i = 1, ..., 4, corresponding to the 4 orientations, each with the weight w_i. This
weight is given by the result of the convolution of the respective Gabor patch with
the image. Let c be this vector for the center pixel; then c = sum_{i=1..4} w_i n_i.
The average orientation vector for the surround, s, is computed analogously. What
enters in the SM is the center-surround difference, i.e. the scalar product c . (s - c).
This is a scalar quantity which corresponds to the center-surround difference in orientation at every location , and which also takes into account the relative "strength"
of the oriented edges.
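Taken literally, this is a weighted sum of four fixed unit vectors followed by the scalar product c . (s - c). A sketch with made-up Gabor responses:

```python
import numpy as np

def mean_orientation_vector(weights):
    """Weighted vector sum over the four orientation channels
    (0, 45, 90, 135 degrees); weights are the Gabor responses."""
    angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
    units = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # the n_i
    return weights @ units

# Center-surround difference in orientation: the scalar product c . (s - c).
c = mean_orientation_vector(np.array([1.0, 0.0, 0.0, 0.0]))  # pure 0-degree center
s = mean_orientation_vector(np.array([0.0, 0.0, 1.0, 0.0]))  # pure 90-degree surround
salience = float(c @ (s - c))
```

For a 0-degree center in a 90-degree surround this product is -1; the sign convention follows the formula as printed.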
2.2.4 Change
The appearance of an object and the segregation of an object from its background
have been shown to capture attention, even for stimuli which are equiluminant with
the background (Hillstrom & Yantis, 1994). We incorporate the attentional capture
by visual onsets and motion by adding the temporal derivative of the input image
sequence, taking into account chromatic information. More precisely, at each pixel
we compute at time t and for a time difference Delta-t = 200 ms:

    (1/3) { |R(t) - R(t - Delta-t)| + |G(t) - G(t - Delta-t)| + |B(t) - B(t - Delta-t)| }    (1)

2.2.5 Top-Down Information
Our model implements essentially bottom-up strategies for the rapid selection of
conspicuous parts of the visual field and does not pretend to be a model for higher
cognitive functions. Nevertheless, it is straightforward to incorporate some topdown influence. For instance, in a "Posner task" (Posner, 1980), the subject is
instructed to attend selectively to one part of the visual field. This instruction can
be implemented by additional input to the corresponding part of the saliency map.
2.3 The Saliency Map
The existence of a saliency map has been suggested by Koch and Ullman (1985);
see also the "master map" of Treisman (1988). The idea is that of a topographically
organized map which encodes information on where salient (conspicuous) objects
are located in the visual field, but not what these objects are.
The task of the saliency map is the computation of the salience at every location
in the visual field and the subsequent selection of the most salient areas or objects.
At any time, only one such area is selected . The feature maps provide current
input to the saliency map. The output of the saliency map consists of a spike train
from neurons corresponding to this selected area in the topographic map which
project to the ventral ("What") pathway. By this mechanism, they are "tagged" by
modulating the temporal structure of the neuronal signals corresponding to attended
stimuli (Niebur & Koch, 1994).
2.3.1 Fusion Of Information
Once all relevant features have been computed in the various feature maps, they have
to be combined to yield the salience, i.e. a scalar quantity. In our model, we solve
this task by simply adding the activities in the different feature maps, as computed
in section 2.2, with constant weights. We choose all weights identical except for the
input obtained from the temporal change. Because of the obvious great importance
changing stimuli have for the capture of attention, we select this weight five times
larger than the others.
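This weighting is a one-liner; the sketch below assumes the feature maps have already been brought to a common resolution:

```python
import numpy as np

def saliency_map(static_maps, change_map):
    """Weighted summation into the saliency map: unit weight for every
    static feature map and a five-fold weight for the temporal-change
    map, reflecting its importance for capturing attention."""
    return np.sum(static_maps, axis=0) + 5.0 * np.asarray(change_map)

maps = [np.ones((8, 8)) for _ in range(3)]  # e.g. intensity, color, orientation
sal = saliency_map(maps, np.ones((8, 8)))   # 3*1 + 5*1 = 8 everywhere
```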
2.3.2 Internal Dynamics And Trajectory Generation
By definition, the activity in a given location of the saliency map represents the
relative conspicuity of the corresponding location in the visual field. At any given
time, the maximum of this map is therefore the most salient stimulus. As a consequence, this is the stimulus to which the focus of attention should be directed next
to allow more detailed inspection by the more powerful "higher" process which are
not available to the massively parallel feature maps. This means that we have to
determine the instantaneous maximum of this map.
This maximum is selected by application of a winner-take-all mechanism. Different
mechanisms have been suggested for the implementation of neural winner-take-all
networks (e.g., Koch & Ullman, 1985; Yuille & Grzywacz, 1989). In our model, we
used a 2-dimensional layer of integrate-and-fire neurons with strong global inhibition
in which the inhibitory population is reliably activated by any neuron in the layer.
Therefore, when the first of these cells fires, it will inhibit all cells (including itself),
and the neuron with the strongest input will generate a sequence of action potentials.
All other neurons are quiescent.
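A minimal winner-take-all of this kind can be built from non-leaky integrate-and-fire units whose every spike resets the whole layer; the threshold and time step below are illustrative, not taken from the paper:

```python
import numpy as np

def wta_first_spike(inputs, threshold=1.0, n_steps=10000, dt=0.001):
    """Layer of non-leaky integrate-and-fire units with global inhibition:
    whenever one unit reaches threshold it fires and resets the whole
    layer, so only the most strongly driven unit ever spikes."""
    v = np.zeros_like(inputs, dtype=float)
    spikes = np.zeros_like(inputs, dtype=int)
    for _ in range(n_steps):
        v += inputs * dt
        if v.max() >= threshold:
            spikes[int(v.argmax())] += 1
            v[:] = 0.0  # global inhibition resets every unit, including the winner
    return spikes

spikes = wta_first_spike(np.array([0.2, 1.0, 0.7]))  # only the middle unit fires
```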
For a static image, the system would so far attend continuously the most conspicuous
stimulus . This is neither observed in biological vision nor desirable from a functional
point of view; instead, after inspection of any point, there is usually no reason to
dwell on it any longer and the next-most salient point should be attended.
We achieve this behavior by introducing feedback from the winner-take-all array.
When a spike occurs in the WTA network, the integrators in the saliency map
receive additional input with the spatial structure of an inverted Mexican hat, i.e. a
difference of Gaussians. The (inhibitory) center is at the location of the winner which
becomes thus inhibited in the saliency map and, consequently, attention switches to
the next-most conspicuous location. The function of the positive lobes of the inverted
Mexican hat is to avoid excessive jumping of the focus of attention. If two locations
are of nearly equal conspicuity and one of them is close to the present focus of
attention, the next jump will go to the close location rather than to the distant one.
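This inhibition-of-return feedback can be sketched as a difference of Gaussians with a dominant centre; the widths and gain below are illustrative, since the paper gives no numbers:

```python
import numpy as np

def inhibition_of_return(saliency, winner, sigma_c=1.5, sigma_s=4.0, gain=0.5):
    """Feedback applied to the saliency map when the WTA spikes: an
    inverted Mexican hat (difference of Gaussians) centred on the winner.
    The negative centre suppresses the attended location; the positive
    ring favours nearby runners-up over distant ones."""
    ys, xs = np.indices(saliency.shape)
    d2 = (ys - winner[0]) ** 2 + (xs - winner[1]) ** 2
    dog = (np.exp(-d2 / (2 * sigma_s ** 2))
           - 2.0 * np.exp(-d2 / (2 * sigma_c ** 2)))  # negative at the centre
    return saliency + gain * dog

sal = np.ones((9, 9))
sal2 = inhibition_of_return(sal, (4, 4))  # winner suppressed, ring boosted
```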
3 Results
We have studied the system with inputs constructed analogously to typical visual
psychophysical stimuli and obtained results in agreement with experimental data.
Space limitations prevent a detailed presentation of these results in this report.
Therefore, in Fig. 2, we only show one example of a "real-world image." We choose,
as an example, an image showing the Caltech bookstore and the trajectory of the
focus of attention follows in our model. The most salient feature in this image is the
red banner on the the wall of the building (in the center of the image). The focus of
attention is directed first to this salient feature. The system then starts to scan the
image in the order of decreasing saliency. Shown are the 3 jumps following the initial
focussing on the red banner. The jumps are driven by a strong inhibition-of-return
mechanism. Experimental evidence for such a mechanims has been obtained recently
in area 7a of rhesus monkeys (Steinmetz, Connor, Constantinidis, & McLaughlin,
1994).
Figure 2: Example image. The black line shows the trajectory of the simulated focus
of attention over a time of 140 ms which jumps from the center (red banner on wall
of building) to three different locations of decreasing saliency.
4 Conclusion And Outlook
We present in this report a prototype for an integrated system mimicking the control
of visual selective attention. Our model is compatible with the known anatomy and
physiology of the primate visual system, and its different parts communicate by
signals which are neurally plausible. The model identifies the most salient points
in a visual scenes one-by-one and scans the scene autonomously in the order of
decreasing saliency. This allows the control of a subsequently activated processor
which is specialized for detailed object recognition.
At present, saliency is determined by combining the input from a set of feature maps
with fixed weights. In future work, we will generalize our approach by introducing
plasticity in these weights and thus adapting the system to the task at hand .
Acknowledgements
Work supported by the Office of Naval Research, the Air Force Office of Scientific
Research, the National Science Foundation, the Center for Neuromorphic Systems
Engineering as a part of the National Science Foundation Engineering Research
Center Program, and by the Office of Strategic Technology of the California Trade
and Commerce Agency.
References
Adelson, E., Anderson, C., Bergen, J., Burt, P., & Ogden, J. (1984). Pyramid
methods in image processing. RCA Engineer, Nov-Dec.
Hillstrom, A. & Yantis, S. (1994). Visual motion and attentional capture. Perception
& Psychophysics, 55(4), 399-411.
Koch, C. & Ullman, S. (1985). Shifts in selective visual attention: towards the
underlying neural circuitry. Human Neurobiol., 4, 219-227.
Niebur, E. & Koch, C. (1994). A model for the neuronal implementation of selective
visual attention based on temporal correlation among neurons. Journal of
Computational Neuroscience, 1(1), 141-158.
Niebur, E., Koch, C., & Rosin, C. (1993). An oscillation-based model for the neural
basis of attention. Vision Research, 33, 2789-2802.
Posner, M. (1980). Orienting of attention. Quart. J. Exp. Psychol., 32, 3-25.
Steinmetz, M., Connor, C., Constantinidis, C., & McLaughlin, J. (1994). Covert
attention suppresses neuronal responses in area 7a of the posterior parietal
cortex. J. Neurophysiology, 72, 1020-1023.
Treisman, A. (1988). Features and objects: the fourteenth Bartlett memorial lecture.
Quart. J. Exp. Psychol., 40A, 201-237.
Tsioutsias, D. 1. & Mjolsness, E. (1996). A Multiscale Attentional Framework for
Relaxation Neural Networks. In Touretzky, D., Mozer, M. C., & Hasselmo,
M. E. (Eds.), Advances in Neural Information Processing Systems, Vol. 8.
MIT Press, Cambridge, MA.
Yamada, K. & Cottrell, G. W. (1995). A model of scan paths applied to face
recognition. In Proc. 17th Ann. Cog. Sci. Conf., Pittsburgh.
Yuille, A. & Grzywacz, N. (1989). A winner-take-all mechanism based on presynaptic inhibition feedback. Neural Computation, 2, 334-344.
Exploiting Tractable Substructures
in Intractable Networks
Lawrence K. Saul and Michael I. Jordan
{lksaul.jordan}~psyche.mit.edu
Center for Biological and Computational Learning
Massachusetts Institute of Technology
79 Amherst Street, ElO-243
Cambridge, MA 02139
Abstract
We develop a refined mean field approximation for inference and
learning in probabilistic neural networks. Our mean field theory,
unlike most, does not assume that the units behave as independent
degrees of freedom; instead, it exploits in a principled way the
existence of large substructures that are computationally tractable.
To illustrate the advantages of this framework, we show how to
incorporate weak higher order interactions into a first-order hidden
Markov model, treating the corrections (but not the first order
structure) within mean field theory.
1
INTRODUCTION
Learning the parameters in a probabilistic neural network may be viewed as a
problem in statistical estimation. In networks with sparse connectivity (e.g. trees
and chains), there exist efficient algorithms for the exact probabilistic calculations
that support inference and learning. In general, however, these calculations are
intractable, and approximations are required .
Mean field theory provides a framework for approximation in probabilistic neural
networks (Peterson & Anderson, 1987). Most applications of mean field theory,
however, have made a rather drastic probabilistic assumption-namely, that the
units in the network behave as independent degrees of freedom. In this paper we
show how to go beyond this assumption. We describe a self-consistent approximation in which tractable substructures are handled by exact computations and
only the remaining, intractable parts of the network are handled within mean field
theory. For simplicity we focus on networks with binary units; the extension to
discrete-valued (Potts) units is straightforward.
We apply these ideas to hidden Markov modeling (Rabiner & Juang, 1991). The
first order probabilistic structure of hidden Markov models (HMMs) leads to networks with chained architectures for which efficient, exact algorithms are available.
More elaborate networks are obtained by introducing couplings between multiple
HMMs (Williams & Hinton, 1990) and/or long-range couplings within a single HMM
(Stolorz, 1994). Both sorts of extensions have interesting applications; in speech,
for example, multiple HMMs can provide a distributed representation of the articulatory state, while long-range couplings can model the effects of coarticulation. In
general, however, such extensions lead to networks for which exact probabilistic calculations are not feasible. One would like to develop a mean field approximation for
these networks that exploits the tractability of first-order HMMs. This is possible
within the more sophisticated mean field theory described here.
2
MEAN FIELD THEORY
We briefly review the basic methodology of mean field theory for networks of binary
(±1) stochastic units (Parisi, 1988). For each configuration {S} = {S_1, S_2, ..., S_N},
we define an energy E{S} and a probability P{S} via the Boltzmann distribution:

    P{S} = e^{-βE{S}} / Z,                                          (1)

where β is the inverse temperature and Z is the partition function. When it is
intractable to compute averages over P{S}, we are motivated to look for an approximating distribution Q{S}. Mean field theory posits a particular parametrized
form for Q{S}, then chooses parameters to minimize the Kullback-Leibler (KL)
divergence:

    KL(Q‖P) = Σ_{S} Q{S} ln [ Q{S} / P{S} ].                        (2)
Why are mean field approximations valuable for learning? Suppose that P{S}
represents the posterior distribution over hidden variables, as in the E-step of an
EM algorithm (Dempster, Laird, & Rubin, 1977). Then we obtain a mean field
approximation to this E-step by replacing the statistics of P{S} (which may be
quite difficult to compute) with those of Q{S} (which may be much simpler). If, in
addition, Z represents the likelihood of observed data (as is the case for the example
of section 3), then the mean field approximation yields a lower bound on the log-likelihood. This can be seen by noting that for any approximating distribution
Q{S}, we can form the lower bound:
In Z
=
In
L e-.8
E {S}
(3)
{S}
e-.8 E {S} ]
L Q{S}? [ Q{S}
{S}
L Q{ S}[- {3E {S} - In Q{ S}],
In
>
(4)
(5)
{S}
where the last line follows from Jensen's inequality. The difference between the left
and right-hand side of eq. (5) is exactly KL( QIIP); thus the better the approximation
to P {S}, the tighter the bound on In Z. Once a lower bound is available, a learning
procedure can maximize the lower bound. This is useful when the true likelihood
itself cannot be efficiently computed.
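The bound of eqs. (3)-(5) can be checked numerically on a network small enough to enumerate exactly. This is a toy sketch with invented couplings; by Jensen's inequality the bound must hold for any factorized Q, and the mean field parameters only make it tight:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 4
J = np.triu(rng.normal(scale=0.5, size=(N, N)), 1)   # couplings J_ij, i < j
h = rng.normal(size=N)

def neg_beta_E(s):        # -beta*E{S} for spins s in {-1, +1}^N
    return s @ J @ s + h @ s

# Exact log partition function by enumerating all 2^N states.
states = [np.array(s) for s in itertools.product([-1, 1], repeat=N)]
logZ = np.log(sum(np.exp(neg_beta_E(s)) for s in states))

# Any factorized Q (eq. 7) gives a lower bound; here m solves the mean
# field equations by damped iteration, making the bound as tight as possible.
m = np.zeros(N)
for _ in range(2000):
    m = 0.5 * m + 0.5 * np.tanh((J + J.T) @ m + h)

entropy = -np.sum((1 + m) / 2 * np.log((1 + m) / 2)
                  + (1 - m) / 2 * np.log((1 - m) / 2))
bound = m @ J @ m + h @ m + entropy       # E_Q[-beta*E] + H(Q), eqs. (3)-(5)
print(bound <= logZ + 1e-9)   # True: Jensen's inequality guarantees it
```

A learning procedure that maximizes `bound` therefore pushes up a guaranteed floor under the intractable `logZ`.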
2.1
Complete Factorizability
The simplest mean field theory involves assuming marginal independence for the
units S_i. Consider, for example, a quadratic energy function

    -βE{S} = Σ_{i<j} J_ij S_i S_j + Σ_i h_i S_i                     (6)

and the factorized approximation:

    Q{S} = Π_i (1 + m_i S_i) / 2.                                   (7)

The expectations under this mean field approximation are ⟨S_i⟩ = m_i and
⟨S_i S_j⟩ = m_i m_j for i ≠ j. The best approximation of this form is found by minimizing the
KL-divergence,
    KL(Q‖P) = Σ_i [ (1+m_i)/2 · ln((1+m_i)/2) + (1-m_i)/2 · ln((1-m_i)/2) ]
              - Σ_{i<j} J_ij m_i m_j - Σ_i h_i m_i + ln Z,          (8)
with respect to the mean field parameters mi. Setting the gradients of eq. (8) equal
to zero, we obtain the (classical) mean field equations:
    tanh⁻¹(m_i) = Σ_j J_ij m_j + h_i.                               (9)
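As a sanity check, a damped fixed-point iteration of eq. (9) should land on a stationary point of the KL-divergence (8): the gradient arctanh(m_i) - Σ_j J_ij m_j - h_i vanishes there. The couplings below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
J = np.triu(rng.normal(scale=0.15, size=(N, N)), 1)  # couplings J_ij, i < j
h = rng.normal(scale=0.5, size=N)
Jsym = J + J.T          # since S_i S_j = S_j S_i, symmetrize the couplings

# Damped fixed-point iteration of the mean field equations (9).
m = np.zeros(N)
for _ in range(2000):
    m = 0.5 * m + 0.5 * np.tanh(Jsym @ m + h)

# Stationarity of the KL-divergence (8): its gradient with respect to m_i is
# arctanh(m_i) - sum_j J_ij m_j - h_i, which vanishes at the fixed point.
grad = np.arctanh(m) - Jsym @ m - h
print(np.max(np.abs(grad)) < 1e-6)   # True at convergence
```

The damping factor of 0.5 is a pragmatic choice to keep the iteration contractive; it does not change the fixed point.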
2.2
Partial Factorizability
We now consider a more structured model in which the network consists of interacting modules that, taken in isolation, define tractable substructures. One example
of this would be a network of weakly coupled HMMs, in which each HMM, taken
by itself, defines a chain-like substructure that supports efficient probabilistic calculations. We denote the interactions between these modules by parameters K_ij^{μν},
where the superscripts μ and ν range over modules and the subscripts i and j index
units within modules. An appropriate energy function for this network is:

    -βE{S} = Σ_μ { Σ_{i<j} J_ij^μ S_i^μ S_j^μ + Σ_i h_i^μ S_i^μ } + Σ_{μ<ν} Σ_{ij} K_ij^{μν} S_i^μ S_j^ν.   (10)
The first term in this energy function contains the intra-modular interactions; the
last term, the inter-modular ones.
We now consider a mean field approximation that maintains the first sum over
modules but dispenses with the inter-modular corrections:
    Q{S} = (1/Z_Q) exp{ Σ_μ [ Σ_{i<j} J_ij^μ S_i^μ S_j^μ + Σ_i H_i^μ S_i^μ ] }.   (11)
The parameters of this mean field approximation are H_i^μ; they will be chosen to
provide a self-consistent model of the inter-modular interactions. We easily obtain
the following expectations under the mean field approximation, where μ ≠ ν:

    ⟨S_i^μ S_j^ω⟩ = δ_{μω} ⟨S_i^μ S_j^μ⟩ + (1 - δ_{μω}) ⟨S_i^μ⟩⟨S_j^ω⟩,          (12)

    ⟨S_i^μ S_j^ν S_k^ω⟩ = δ_{μω} ⟨S_i^μ S_k^μ⟩⟨S_j^ν⟩ + δ_{νω} ⟨S_j^ν S_k^ν⟩⟨S_i^μ⟩
                        + (1 - δ_{νω})(1 - δ_{ωμ}) ⟨S_i^μ⟩⟨S_j^ν⟩⟨S_k^ω⟩.         (13)
Note that units in the same module are statistically correlated and that these correlations are assumed to be taken into account in calculating the expectations. We
assume that an efficient algorithm is available for handling these intra-modular correlations. For example, if the factorized modules are chains (e.g. obtained from
a coupled set of HMMs), then computing these expectations requires a forward-backward pass through each chain.
The best approximation of the form, eq. (11), is found by minimizing the KL-divergence,

    KL(Q‖P) = ln(Z/Z_Q) + Σ_μ Σ_i (H_i^μ - h_i^μ) ⟨S_i^μ⟩ - Σ_{μ<ν} Σ_{ij} K_ij^{μν} ⟨S_i^μ S_j^ν⟩,   (14)
with respect to the mean field parameters H_i^μ. To compute the appropriate gradients, we use the fact that derivatives of expectations under a Boltzmann distribution (e.g. ∂⟨S_i^μ⟩/∂H_k^μ) yield cumulants (e.g. ⟨S_i^μ S_k^μ⟩ - ⟨S_i^μ⟩⟨S_k^μ⟩). The conditions
for stationarity are then:

    0 = Σ_i (H_i^μ - h_i^μ) ∂⟨S_i^μ⟩/∂H_k^μ - Σ_{μ<ν} Σ_{ij} K_ij^{μν} ∂⟨S_i^μ S_j^ν⟩/∂H_k^μ.   (15)

Substituting the expectations from eqs. (12) and (13), we find that KL(Q‖P) is
minimized when

    0 = Σ_i { H_i^μ - h_i^μ - Σ_{ν≠μ} Σ_j K_ij^{μν} ⟨S_j^ν⟩ } [ ⟨S_i^μ S_k^μ⟩ - ⟨S_i^μ⟩⟨S_k^μ⟩ ].   (16)
The resulting mean field equations are:
    H_i^μ = Σ_{ν≠μ} Σ_j K_ij^{μν} ⟨S_j^ν⟩ + h_i^μ.                  (17)
These equations may be solved by iteration, in which the (assumed) tractable algorithms for averaging over Q{S} are invoked as subroutines to compute the expectations ⟨S_j^ν⟩ on the right hand side. Because these expectations depend on H_i^μ, these
equations may be viewed as a self-consistent model of the inter-modular interactions. Note that the mean field parameter H_i^μ plays a role analogous to tanh⁻¹(m_i)
in eq. (9) of the fully factorized case.
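The iteration just described can be sketched for two coupled chains, with an exact chain average standing in for the tractable subroutine. The unit-by-unit coupling layout (K_ij^{μν} = K·δ_ij) and all constants are toy choices for this example, not taken from the paper:

```python
import itertools
import numpy as np

def chain_means(Jc, H):
    """Exact <S_i> for a small Ising chain with couplings Jc and fields H.
    (Brute force for clarity; a transfer-matrix pass would scale linearly.)"""
    states = [np.array(s) for s in itertools.product([-1, 1], repeat=len(H))]
    w = np.array([np.exp(np.dot(Jc, s[:-1] * s[1:]) + np.dot(H, s)) for s in states])
    w /= w.sum()
    return np.sum(w[:, None] * np.array(states), axis=0)

# Two chains of n units each, coupled unit-by-unit with strength Kc.
n, Kc = 4, 0.2
rng = np.random.default_rng(2)
Jc = rng.normal(scale=0.3, size=n - 1)      # intra-chain couplings (shared here)
h1, h2 = rng.normal(size=n), rng.normal(size=n)

# Iterate the mean field equations (17): H_i = h_i + Kc * <S_i of other chain>,
# invoking the exact chain average as the tractable subroutine.
H1, H2 = h1.copy(), h2.copy()
for _ in range(200):
    m1, m2 = chain_means(Jc, H1), chain_means(Jc, H2)
    H1, H2 = h1 + Kc * m2, h2 + Kc * m1

resid = np.max(np.abs(H1 - h1 - Kc * chain_means(Jc, H2)))
print(resid < 1e-8)   # True: a self-consistent fixed point is reached
```

Each iteration handles the intra-chain correlations exactly and only the weak inter-chain couplings within mean field theory, which is precisely the division of labor the approximation is designed for.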
2.3
Inducing Partial Factorizability
Many interesting networks do not have strictly modular architectures and can only
be approximately decomposed into tractable core structures. Techniques are needed
in such cases to induce partial factorizability. Suppose for example that we are given
an energy function
    -βE{S} = Σ_{i<j} J_ij S_i S_j + Σ_i h_i S_i + Σ_{i<j} K_ij S_i S_j   (18)
for which the first two terms represent tractable interactions and the last term,
intractable ones. Thus the weights Jij by themselves define a tractable skeleton
network, but the weights Kij spoil this tractability. Mimicking the steps of the
previous section, we obtain the mean field equations:
    0 = Σ_i [ ⟨S_i S_k⟩ - ⟨S_i⟩⟨S_k⟩ ] (H_i - h_i) - Σ_{i<j} K_ij [ ⟨S_i S_j S_k⟩ - ⟨S_i S_j⟩⟨S_k⟩ ].   (19)
In this case, however, the weights Kij couple units in the same core structure. Because these units are not assumed to be independent, the triple correlator (SiSjSk)
does not factorize, and we no longer obtain the decoupled update rules of eq. (17).
Rather, for these mean field equations, each iteration requires computing triple
correlators and solving a large set of coupled linear equations.
To avoid this heavy computational load, we instead manipulate the energy function
into one that can be partially factorized. This is done by introducing extra hidden
variables W_ij = ±1 on the intractable links of the network. In particular, consider
the energy function

    -βE{S, W} = Σ_{i<j} J_ij S_i S_j + Σ_i h_i S_i + Σ_{i<j} [ K_ij^{(1)} S_i + K_ij^{(2)} S_j ] W_ij.   (20)
The hidden variables Wij in eq. (20) serve to decouple the units connected by
the intractable weights K_ij. However, we can always choose the new interactions,
K_ij^{(1)} and K_ij^{(2)}, so that

    e^{-βE{S}} = Σ_{W} e^{-βE{S,W}}.                                 (21)
Eq. (21) states that the marginal distribution over {S} in the new network is identical to the joint distribution over {S} in the original one. Summing both sides of
eq. (21) over {S}, it follows that both networks have the same partition function.
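For a single intractable link, the requirement (21) pins down the new interactions: summing over W_ij = ±1 must reproduce exp(K S_i S_j) up to an S-independent constant. One valid choice (for K ≥ 0) is K^{(1)} = K^{(2)} = ½·arccosh(e^{2K}), which a few lines of code can verify:

```python
import itertools
import numpy as np

K = 0.7                                  # an "intractable" coupling K_ij
K1 = 0.5 * np.arccosh(np.exp(2 * K))     # one valid choice, K1 = K2 (needs K >= 0)

# Summing out the link variable W = +-1 gives 2*cosh(K1*S_i + K1*S_j); for
# binary S this equals C * exp(K*S_i*S_j) with the S-independent constant
# C = 2*exp(K), so eq. (21) holds up to a constant absorbed into Z.
ratios = []
for Si, Sj in itertools.product([-1, 1], repeat=2):
    marg = sum(np.exp((K1 * Si + K1 * Sj) * W) for W in (-1, 1))
    ratios.append(marg / np.exp(K * Si * Sj))
print(np.allclose(ratios, 2 * np.exp(K)))   # True: constant ratio C
```

The check works because cosh(K1·(S_i + S_j)) takes only two values on binary spins, one for aligned and one for anti-aligned pairs, and the choice of K1 fixes their ratio to e^{2K}.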
The form of the energy function in eq. (20) suggests the mean field approximation:

    Q{S, W} = (1/Z_Q) exp{ Σ_{i<j} J_ij S_i S_j + Σ_i H_i S_i + Σ_{i<j} H_ij W_ij },   (22)

where the mean field parameters H_i have been augmented by a set of additional
mean field parameters H_ij that account for the extra hidden variables. In this
expression, the variables Si and Wij act as decoupled degrees of freedom and the
methods of the preceding section can be applied directly. We consider an example
of this reduction in the following section.
3
EXAMPLE
Consider a continuous-output HMM in which the probability of an output Xt at
time t is dependent not only on the state at time t, but also on the state at time
t + Δ. Such a context-sensitive HMM may serve as a flexible model of anticipatory
coarticulatory effects in speech, with Δ ≈ 50 ms representing a mean phoneme
lifetime. Incorporating these interactions into the basic HMM probability model,
we obtain the following joint probability on states and outputs:
    P{S, X} = Π_{t=1}^{T-1} a_{S_t S_{t+1}} · Π_{t=1}^{T-Δ} (2π)^{-D/2} exp{ -½ [X_t - U_{S_t} - V_{S_{t+Δ}}]² }.   (23)
Denoting the likelihood of an output sequence by Z, we have
    Z = P{X} = Σ_{S} P{S, X}.                                        (24)
We can represent this probability model using energies rather than transition probabilities (Luttrell, 1989; Saul and Jordan, 1995). For the special case of binary
Here, a++ is the probability of transitioning from the ON state to the ON state
(and similarly for the other a parameters), while U_+ and V_+ are the mean outputs
associated with the ON state at time steps t and t + Δ (and similarly for U_- and
V_-). Given these definitions, we obtain an equivalent expression for the likelihood:
    Z = Σ_{S} exp{ -g_0 + Σ_{t=1}^{T-1} J S_t S_{t+1} + Σ_{t=1}^{T} h_t S_t + Σ_{t=1}^{T-Δ} K S_t S_{t+Δ} },   (27)

where g_0 is a placeholder for the terms in ln P{S, X} that do not depend on {S}.
We can interpret Z as the partition function for the chained network of T binary
units that represents the HMM unfolded in time. The nearest neighbor connectivity of this network reflects the first order structure of the HMM; the long-range
connectivity reflects the higher order interactions that model sensitivity to context.
The exact likelihood can in principle be computed by summing over the hidden
states in eq. (27), but the required forward-backward algorithm scales much worse
than the case of first-order HMMs. Because the likelihood can be identified as a
partition function, however, we can obtain a lower bound on its value from mean
field theory. To exploit the tractable first order structure of the HMM, we induce a
partially factorizable network by introducing extra link variables on the long-range
connections, as described in section 2.3. The resulting mean field approximation
uses the chained structure as its backbone and should be accurate if the higher
order effects in the data are weak compared to the basic first-order structure.
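For the first-order backbone alone (K = 0 in eq. (27)), the partition function is computable in O(T) by a transfer-matrix (forward) pass, which is the tractable subroutine the mean field approximation leans on. A small check against brute-force enumeration, with couplings invented for the example:

```python
import itertools
import numpy as np

# First-order backbone of eq. (27), K = 0:
#   Z = sum_S exp( sum_t J*S_t*S_{t+1} + sum_t h_t*S_t )
T, J = 8, 0.6
rng = np.random.default_rng(3)
h = rng.normal(scale=0.4, size=T)

# Transfer-matrix (forward) pass: O(T) work; index 0 -> S = -1, index 1 -> S = +1.
v = np.exp(np.array([-h[0], h[0]]))
for t in range(1, T):
    M = np.array([[np.exp(J - h[t]), np.exp(-J + h[t])],
                  [np.exp(-J - h[t]), np.exp(J + h[t])]])
    v = v @ M
Z_tm = v.sum()

# Brute-force enumeration (2^T states) agrees on this small chain.
Z_bf = sum(np.exp(J * np.dot(s[:-1], s[1:]) + np.dot(h, s))
           for s in (np.array(c) for c in itertools.product([-1, 1], repeat=T)))
print(np.isclose(Z_tm, Z_bf))   # True
```

Reintroducing the long-range K terms breaks this linear-time recursion, which is exactly why the mean field treatment handles them separately.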
The above scenario was tested in numerical simulations. In actuality, we implemented a generalization of the model in eq. (23): our HMM had non-binary hidden
states and a coarticulation model that incorporated both left and right context.
This network was trained on several artificial data sets according to the following
procedure. First, we fixed the "context" weights to zero and used the Baum-Welch
algorithm to estimate the first order structure of the HMM. Then, we lifted the
zero constraints and re-estimated the parameters of the HMM by a mean field EM
algorithm. In the E-step of this algorithm, the true posterior P{SIX} was approximated by the distribution Q{SIX} obtained by solving the mean field equations;
in the M-step, the parameters of the HMM were updated to match the statistics of
Q{SIX}. Figure 1 shows the type of structure captured by a typical network .
4
CONCLUSIONS
Endowing networks with probabilistic semantics provides a unified framework for incorporating prior knowledge, handling missing data, and performing inferences under uncertainty. Probabilistic calculations, however, can quickly become intractable,
so it is important to develop techniques that both approximate probability distributions in a flexible manner and make use of exact techniques wherever possible. In
¹There are boundary corrections to h_t (not shown) for t = 1 and t > T - Δ.
Figure 1: 2D output vectors {X_t} sampled from a first-order HMM and a context-sensitive HMM, each with n = 5 hidden states. The latter's coarticulation model
used left and right context, coupling X_t to the hidden states at times t and t ± 5.
At left: the five main clusters reveal the basic first-order structure. At right: weak
modulations reveal the effects of context.
this paper we have developed a mean field approximation that meets both these objectives. As an example, we have applied our methods to context-sensitive HMMs,
but the methods are general and can be applied more widely.
Acknowledgements
The authors acknowledge support from NSF grant CDA-9404932, ONR grant
N00014-94-1-0777, ATR Research Laboratories, and Siemens Corporation.
References
A. Dempster, N. Laird, and D. Rubin. (1977) Maximum likelihood from incomplete
data via the EM algorithm. J. Roy. Stat. Soc. B39:1-38.
B. H. Juang and L. R. Rabiner. (1991) Hidden Markov models for speech recognition, Technometrics 33: 251-272.
S. Luttrell. (1989) The Gibbs machine applied to hidden Markov model problems.
Royal Signals and Radar Establishment: SP Research Note 99.
G. Parisi. (1988) Statistical field theory. Addison-Wesley: Redwood City, CA.
C. Peterson and J. R. Anderson. (1987) A mean field theory learning algorithm for
neural networks. Complex Systems 1:995-1019.
L. Saul and M. Jordan. (1994) Learning in Boltzmann trees. Neural Computation 6:
1174-1184.
L. Saul and M. Jordan. (1995) Boltzmann chains and hidden Markov models.
In G. Tesauro, D. Touretzky, and T . Leen, eds. Advances in Neural Information
Processing Systems 7. MIT Press: Cambridge, MA.
P. Stolorz. (1994) Recursive approaches to the statistical physics of lattice proteins.
In L. Hunter, ed. Proc. 27th Hawaii Intl. Conf. on System Sciences V: 316-325.
C. Williams and G. E. Hinton. (1990) Mean field networks that learn to discriminate
temporally distorted strings. Proc. Connectionist Models Summer School: 18-22.
Adaptive Back-Propagation in On-Line
Learning of Multilayer Networks
Ansgar H. L. West¹,² and David Saad²
¹Department of Physics, University of Edinburgh
Edinburgh EH9 3JZ, U.K.
²Neural Computing Research Group, University of Aston
Birmingham B4 7ET, U.K.
Abstract
An adaptive back-propagation algorithm is studied and compared
with gradient descent (standard back-propagation) for on-line
learning in two-layer neural networks with an arbitrary number
of hidden units. Within a statistical mechanics framework , both
numerical studies and a rigorous analysis show that the adaptive
back-propagation method results in faster training by breaking the
symmetry between hidden units more efficiently and by providing
faster convergence to optimal generalization than gradient descent .
1
INTRODUCTION
Multilayer feedforward perceptrons (MLPs) are widely used in classification and
regression applications due to their ability to learn a range of complicated maps [1]
from examples. When learning a map f₀ from N-dimensional inputs to scalars ζ,
the parameters {W} of the student network are adjusted according to some training
algorithm so that the map defined by these parameters f_W approximates the teacher
f₀ as closely as possible. The resulting performance is measured by the generalization
error ε_g, the average of a suitable error measure ε over all possible inputs: ε_g = ⟨ε⟩_ξ.
This error measure is normally defined as the squared distance between the output
of the network and the desired output, i.e.,

    ε(W, ξ) = ½ [f_W(ξ) - ζ]².                                       (1)
One distinguishes between two learning schemes: batch learning , where training
algorithms are generally based on minimizing the above error on the whole set of
given examples, and on-line learning , where single examples are presented serially
and the training algorithm adjusts the parameters after the presentation of each
example. We measure the efficiency of these training algorithms by how fast (or
whether at all) they converge to an "acceptable" generalization error.
This research has been motivated by recent work [2] investigating an on-line learning scenario of a general two-layer student network trained by gradient descent on a
task defined by a teacher network of similar architecture. It has been found that in
the early stages of training the student is drawn into a suboptimal symmetric phase,
characterized by each student node imitating all teacher nodes with the same degree
of success. Although the symmetry between the student nodes is eventually broken
and the student converges to the minimal achievable generalization error, the majority of the training time may be spent with the system trapped in the symmetric
regime, as one can see in Fig. 1. To investigate possible improvements we introduce
an adaptive back-propagation algorithm, which improves the ability of the student
to distinguish between hidden nodes of the teacher. We compare its efficiency with
that of gradient descent in training two-layer networks following the framework
of [2]. In this paper we present numerical studies and a rigorous analysis of both
the breaking of the symmetric phase and the convergence to optimal performance.
We find that adaptive back-propagation can significantly reduce training time in
both regimes by breaking the symmetry between hidden units more efficiently and
by providing faster exponential convergence to zero generalization error .
2
DERIVATION OF THE DYNAMICAL EQUATIONS
The student network we consider is a soft committee machine [3], consisting of
K hidden units which are connected to N-dimensional inputs by their weight
vectors W = {W_i} (i = 1, ..., K). All hidden units are connected to the linear
output unit by couplings of unit strength and the implemented mapping is therefore
f_W(ξ) = Σ_{i=1}^K g(x_i), where x_i = W_i·ξ is the activation of hidden unit i and g(·)
is a sigmoidal transfer function. The map f₀ to be learned is defined by a teacher
network of the same architecture except for a possible difference in the number of
hidden units M and is defined by the weight vectors B
{Bn} (n 1, ... , M).
Training examples are of the form (e,(~), where the components of the input
vectors
are drawn independently from a zero mean unit variance Gaussian distribution; the outputs are (~ = L~=l g(y~), where y~ =
is the activation of
teacher hidden unit n.
e
=
=
=
e
=
En?e
An on-line training algorithm $A$ is defined by the update of each weight in response to the presentation of an example $(\xi^\mu, \zeta^\mu)$, which can take the general form $W_i^{\mu+1} = W_i^\mu + A_i(\{\lambda\}, W^\mu, \xi^\mu, \zeta^\mu)$, where $\{\lambda\}$ defines parameters adjustable by the user. In the case of standard back-propagation, i.e., gradient descent on the error function defined in Eq. (1): $A_i^{\rm gd}(\eta, W^\mu, \xi^\mu, \zeta^\mu) = (\eta/N)\,\delta_i^\mu\,\xi^\mu$ with

$$\delta_i^\mu = \delta^\mu g'(x_i^\mu) = \left[\zeta^\mu - f_W(\xi^\mu)\right] g'(x_i^\mu), \qquad (2)$$

where the only user-adjustable parameter is the learning rate $\eta$, scaled by $1/N$. One can readily see that the only term that breaks the symmetry between different hidden units is $g'(x_i^\mu)$, i.e., the derivative of the transfer function $g(\cdot)$. The fact that a prolonged symmetric phase can exist indicates that this term is not significantly different over the hidden units for a typical input in the symmetric phase.
Adaptive Back-Propagation in On-line Learning of Multilayer Networks

The rationale of the adaptive back-propagation algorithm defined below is therefore to alter the $g'$-term, in order to magnify small differences in the activation between hidden units. This can easily be achieved by altering $g'(x_i)$ to $g'(\beta x_i)$, where $\beta$ plays the role of an inverse "temperature". Varying $\beta$ changes the range of hidden unit activations relevant for training, e.g., for $\beta > 1$ learning is more confined to small activations, when compared to gradient descent ($\beta = 1$). The whole adaptive back-propagation training algorithm is therefore:

$$A_i^{\rm abp}(\eta, \beta, W^\mu, \xi^\mu, \zeta^\mu) = \frac{\eta}{N}\,\delta^\mu g'(\beta x_i^\mu)\,\xi^\mu = \frac{\eta}{N}\,\delta_i^{\beta,\mu}\,\xi^\mu, \qquad (3)$$

with $\delta^\mu$ as in Eq. (2). To compare the adaptive back-propagation algorithm with normal gradient descent, we follow the statistical mechanics calculation in [2]. Here we will only outline the main ideas and present the results of the calculation.
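The two update rules can be written as a single function: one on-line step per Eqs. (2) and (3), where $\beta = 1$ recovers plain gradient descent. This is a sketch using the erf activation adopted later in the paper, for which $g'(x) = \sqrt{2/\pi}\,e^{-x^2/2}$; the sizes and random seeds are illustrative.

```python
import math
import numpy as np

def g(x):
    return np.vectorize(math.erf)(x / math.sqrt(2.0))

def g_prime(x):
    # d/dx erf(x / sqrt(2)) = sqrt(2/pi) * exp(-x**2 / 2)
    return math.sqrt(2.0 / math.pi) * np.exp(-(x ** 2) / 2.0)

def adaptive_bp_step(W, xi, zeta, eta, beta=1.0):
    """One on-line step: W_i += (eta/N) * delta * g'(beta * x_i) * xi, Eq. (3).
    beta = 1 gives standard gradient descent, Eq. (2)."""
    N = xi.shape[0]
    x = W @ xi                               # hidden activations x_i
    delta = zeta - float(np.sum(g(x)))       # output error zeta - f_W(xi)
    return W + (eta / N) * np.outer(delta * g_prime(beta * x), xi)

# one example from a teacher, one small gradient-descent step
rng = np.random.default_rng(1)
N, K = 50, 3
W = rng.normal(size=(K, N)) / math.sqrt(N)
B = rng.normal(size=(K, N))
xi = rng.normal(size=N)
zeta = float(np.sum(g(B @ xi)))
W1 = adaptive_bp_step(W, xi, zeta, eta=0.05)
```

For $\beta = 1$ this is exact gradient descent on the single-example quadratic error, so a small step reduces that error.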
As we are interested in the typical behaviour of our training algorithm, we average over all possible instances of the examples $\xi$. We rewrite the update equations (3) in $W_i$ as equations in the order parameters describing the overlaps between student nodes $Q_{ij} = W_i \cdot W_j$, student and teacher nodes $R_{in} = W_i \cdot B_n$, and teacher nodes $T_{nm} = B_n \cdot B_m$. The generalization error $\epsilon_g$, measuring the typical performance, can be expressed in these variables only [2]. The order parameters $Q_{ij}$ and $R_{in}$ are the new dynamical variables, which are self-averaging with respect to the randomness in the training data in the thermodynamic limit ($N \to \infty$). If we interpret the normalized example number $\alpha = \mu/N$ as a continuous time variable, the update equations for the order parameters become first-order coupled differential equations:

$$\frac{dR_{in}}{d\alpha} = \eta\,\langle \delta_i^\beta\, y_n \rangle, \qquad \frac{dQ_{ij}}{d\alpha} = \eta\,\langle \delta_i^\beta x_j + \delta_j^\beta x_i \rangle + \eta^2\,\langle \delta_i^\beta \delta_j^\beta \rangle, \qquad (4)$$

where $\langle\cdot\rangle$ denotes the average over the Gaussian inputs.
All the integrals in Eqs. (4) and the generalization error can be calculated explicitly if we choose $g(x) = {\rm erf}(x/\sqrt{2})$ as the sigmoidal activation function [2]. The exact form of the resulting dynamical equations for adaptive back-propagation is similar to the equations in [2] and will be presented elsewhere [4]. They can easily be integrated numerically for any number of $K$ student and $M$ teacher hidden units. For the remainder of the paper, we will however focus on the realizable case ($K = M$) and uncorrelated isotropic teachers of unit length, $T_{nm} = \delta_{nm}$.

The dynamical evolution of the overlaps $Q_{ij}$ and $R_{in}$ follows from integrating the equations of motion (4) from initial conditions determined by the random initialization of the student weights $W$. Whereas the resulting norms $Q_{ii}$ of the student vectors will be of order $O(1)$, the overlaps $Q_{ij}$ between student vectors, and the student-teacher overlaps $R_{in}$, will be only of order $O(1/\sqrt{N})$. The random initialization of the weights is therefore simulated by initializing the norms $Q_{ii}$ and the overlaps $Q_{ij}$ and $R_{in}$ from uniform distributions in the $[0, 0.5]$ and $[0, 10^{-12}]$ intervals respectively.
In Fig. 1 we show the difference of a typical evolution of the overlaps and the generalization error for $\beta = 12$ and $\beta = 1$ (gradient descent) for $K = 3$ and $\eta = 0.01$. In both cases, the student is drawn quickly into a suboptimal symmetric phase, characterized by a finite generalization error (Fig. 1e) and no differentiation between the hidden units of the student: the student norms $Q_{ii}$ and overlaps $Q_{ij}$ are similar (Figs. 1b,1d), and the overlaps of each student node with all teacher nodes $R_{in}$ are nearly identical (Figs. 1a,1c). The student trained by gradient descent (Figs. 1c,1d) is trapped in this unstable suboptimal solution for most of the training time, whereas adaptive back-propagation (Figs. 1a,1b) breaks the symmetry significantly earlier.

The convergence phase is characterized by a specialization of the different student nodes and the evolution of the overlap matrices $Q$ and $R$ to their optimal value $T$, except for the permutational symmetry due to the arbitrary labeling of the student nodes. Clearly, the choice $\beta = 12$ is suboptimal in this regime: the student trained with $\beta = 1$ converges faster to zero generalization error (Fig. 1e). In order to optimize $\beta$ separately for both the symmetric and the convergence phase, we will examine the equations of motion analytically in the following section.
Figure 1: Dynamical evolution of the student-teacher overlaps $R_{in}$ (a,c), the student-student overlaps $Q_{ij}$ (b,d), and the generalization error (e) as a function of the normalized example number $\alpha$, for a student with three hidden nodes learning an isotropic three-node teacher ($T_{nm} = \delta_{nm}$). The learning rate $\eta = 0.01$ is fixed but the value of the inverse temperature varies; (a,b): $\beta = 12$ and (c,d): $\beta = 1$ (gradient descent).

3 ANALYSIS OF THE DYNAMICAL EQUATIONS
In the case of a realizable learning scenario ($K = M$) and isotropic teachers ($T_{nm} = \delta_{nm}$), the order parameter space can be very well characterized by similar diagonal and off-diagonal elements of the overlap matrices $Q$ and $R$, i.e., $Q_{ij} = Q\,\delta_{ij} + C\,(1 - \delta_{ij})$ for the student-student overlaps and, apart from a relabeling of the student nodes, $R_{in} = R\,\delta_{in} + S\,(1 - \delta_{in})$ for the student-teacher overlaps. As one can see from Fig. 1, this approximation is particularly good in the symmetric phase and during the final convergence to perfect generalization.
3.1 SYMMETRIC PHASE AND ONSET OF SPECIALIZATION
Numerical integration of the equations of motion for a range of learning scenarios
show that the length of the symmetric phase is especially prolonged by isotropic
teachers and small learning rates 7]. We will therefore optimize the dynamics (4) in
the symmetric phase with respect to $\beta$ for isotropic teachers in the small-$\eta$ regime, where terms proportional to $\eta^2$ can be neglected. The fixed point of the truncated equations of motion,

$$Q^* = C^* = \frac{1}{2K - 1} \qquad {\rm and} \qquad R^* = S^* = \frac{1}{\sqrt{K(2K - 1)}}, \qquad (5)$$
is independent of $\beta$ and thus identical to the one obtained in [2]. However, the symmetric solution is an unstable fixed point of the dynamics, and the small perturbations introduced by the generically nonsymmetric initial conditions will eventually drive the student towards specialization.
To study the onset of specialization, we expand the truncated differential equations to first order in the deviations $q = Q - Q^*$, $c = C - C^*$, $r = R - R^*$, and $s = S - S^*$ from the fixed point values (5). The linearized equations of motion take the form $dv/d\alpha = M \cdot v$, where $v = (q, c, r, s)$ and $M$ is a $4 \times 4$ matrix whose elements are the first derivatives of the truncated update equations (4) at the fixed point with respect to $v$. Perturbations or modes which are proportional to the eigenvectors $v_i$ of $M$ will therefore decrease or increase exponentially, depending on whether the corresponding eigenvalue $\lambda_i$ is negative or positive. For the onset of specialization only those modes are relevant which are amplified by the dynamics, i.e., the ones with positive eigenvalue. For them we can identify the inverse eigenvalue as a typical escape time $\tau_i$ from the symmetric phase.
We find only one relevant perturbation, with $q = c = 0$ and $s = -r/(K - 1)$. This can be confirmed by a closer look at Fig. 1: the onset of specialization is signaled by the breaking of the symmetry between the student-teacher overlaps, whereas significant differences from the symmetric fixed point values of the student norms and overlaps occur later. The escape time $\tau$ associated with the above perturbation is

$$\tau(\beta) = \frac{\pi}{2\eta}\,\frac{\sqrt{2K - 1}\,(2K + \beta)^{3/2}}{K\beta}. \qquad (6)$$

Minimization of $\tau$ with respect to $\beta$ yields $\beta^{\rm opt} = 4K$, i.e., the optimal $\beta$ scales with the number of hidden units, and

$$\tau^{\rm opt} = \frac{9\pi\,\sqrt{2K - 1}}{2\eta\,\sqrt{6K}}. \qquad (7)$$

Trapping in the symmetric phase is therefore always inversely proportional to the learning rate $\eta$. In the large-$K$ limit it is proportional to the number of hidden nodes $K$ ($\tau \sim 2\pi K/\eta$) for gradient descent, whereas it is independent of $K$ [$\tau \sim 3\sqrt{3}\pi/(2\eta)$] for the optimized adaptive back-propagation algorithm.
3.2 CONVERGENCE TO OPTIMAL GENERALIZATION
In order to predict the optimal learning rate $\eta^{\rm opt}$ and inverse temperature $\beta^{\rm opt}$ for the convergence, we linearize the full equations of motion (4) around the zero generalization error fixed point $R^* = Q^* = 1$ and $S^* = C^* = 0$. The matrix $M$ of the resulting system of four coupled linear differential equations in $q = 1 - Q$, $c = C$, $r = 1 - R$, and $s = S$ is very complicated for arbitrary $\beta$, $K$ and $\eta$, and its eigenvalues and eigenvectors can therefore only be analysed numerically.

We illustrate the solution space with $K = 3$ and two $\beta$ values in Fig. 2a. We find that the dynamics decompose into four modes: two slow modes, associated with eigenvalues $\lambda_1$ and $\lambda_2$, and two fast modes, associated with eigenvalues $\lambda_3$ and $\lambda_4$, which are negative for all learning rates and whose magnitude is significantly larger.
Figure 2: (a) The eigenvalues $\lambda_1$, $\lambda_2$ (see text) as a function of the learning rate $\eta$ at $K = 3$ for two values of $\beta$: $\beta = 1$ and $\beta = \beta^{\rm opt} = 1.8314$. For comparison we plot $2\lambda_2$ and find that the optimal learning rate $\eta^{\rm opt}$ is given by the condition $\lambda_1 = 2\lambda_2$ for $\beta^{\rm opt}$, but by the minimum of $\lambda_1$ for $\beta = 1$. (b) The optimal inverse temperature $\beta^{\rm opt}$ as a function of the number of hidden units $K$ saturates for large $K$.
The fast modes decay quickly and their influence on the long-time dynamics is negligible; they are therefore excluded from Fig. 2a and the following discussion. The eigenvalue $\lambda_2$ is negative and linear in $\eta$. The eigenvalue $\lambda_1$ is a non-linear function of both $\beta$ and $\eta$ and negative for small $\eta$. For large $\eta$, $\lambda_1$ becomes positive and training does not converge to the optimal solution, defining the maximum learning rate $\eta^{\rm max}$ through $\lambda_1(\eta^{\rm max}) = 0$. For all $\eta < \eta^{\rm max}$ the generalization error decays exponentially to $\epsilon_g^* = 0$.

In order to identify the corresponding convergence time $\tau$, which is inversely proportional to the modulus of the eigenvalue associated with the slowest decaying mode, we expand the generalization error to second order in $q$, $c$, $r$ and $s$. We find that the mode associated with the linear eigenvalue $\lambda_2$ does not contribute to the first-order terms, and controls only a second-order term with a decay rate of $2\lambda_2$. The learning rate $\eta^{\rm opt}$ which provides the fastest asymptotic decay rate $\lambda^{\rm opt}$ of the generalization error is therefore given either by the condition $\lambda_1(\eta^{\rm opt}) = 2\lambda_2(\eta^{\rm opt})$, or alternatively by $\min_\eta(\lambda_1)$ if $\lambda_1 > 2\lambda_2$ at the minimum of $\lambda_1$ (see Fig. 2a).
We can further optimize convergence to optimal generalization by minimizing the decay rate $\lambda^{\rm opt}(\beta)$ with respect to $\beta$ (see Fig. 2b). Numerically, we find that the optimal inverse temperature $\beta^{\rm opt}$ saturates for large $K$ at $\beta^{\rm opt} \approx 2.03$. For large $K$, we find an associated optimal convergence time $\tau^{\rm opt}(\beta^{\rm opt}) \sim 2.90K$ for adaptive back-propagation optimized with respect to $\eta$ and $\beta$, which is an improvement by 17% when compared to $\tau^{\rm opt}(1) \sim 3.48K$ for gradient descent optimized with respect to $\eta$. The optimal and maximal learning rates show an asymptotic $1/K$ behaviour, with $\eta^{\rm opt}(\beta^{\rm opt}) \sim 4.78/K$, an increase by 20% compared to gradient descent. Both algorithms are quite stable, as the maximal learning rates, for which the learning process diverges, are about 30% higher than the optimal rates.
4 SUMMARY AND DISCUSSION
This research has been motivated by the dominance of the suboptimal symmetric phase in on-line learning of two-layer feedforward networks trained by gradient descent [2]. This trapping is emphasized for inappropriately small learning rates but exists in all training scenarios, affecting the learning process considerably. We
proposed an adaptive back-propagation training algorithm [Eq. (3)], parameterized by an inverse temperature $\beta$, which is designed to improve specialization of the student nodes by enhancing differences in the activation between hidden units. Its performance has been compared to gradient descent for a soft-committee student network with $K$ hidden units trying to learn a rule defined by an isotropic teacher ($T_{nm} = \delta_{nm}$) of the same architecture.

A linear analysis of the equations of motion around the symmetric fixed point for small learning rates has shown that optimized adaptive back-propagation, characterized by $\beta^{\rm opt} = 4K$, breaks the symmetry significantly faster. The effect is especially pronounced for large networks, where the trapping time for gradient descent grows as $\tau \propto K/\eta$, compared to $\tau \propto 1/\eta$ for $\beta^{\rm opt}$. With increasing network size it seems to become harder for a student node trained by gradient descent to distinguish between the many teacher nodes and to specialize on one of them. In the adaptive back-propagation algorithm this effect can be eliminated by choosing $\beta^{\rm opt} \propto K$.
An open question is how the choice of the optimal inverse temperature is affected for large learning rates, where $\eta^2$-terms cannot be neglected, as an unbounded increase of the learning rate causes uncontrolled growth of the student norms. However, the full equations of motion are very difficult to analyse in the symmetric phase. Numerical studies indicate that $\beta^{\rm opt}$ is smaller but still scales with $K$, and yields an overall decrease in training time which is still significant. We also find that the optimal learning rate $\eta^{\rm opt}$, which exhibits the shortest symmetric phase, is significantly lower in this regime than during convergence [4].
During convergence, independent of which algorithm is used, the time constant for decay to zero generalization error scales with $K$, due to the necessary rescaling of the learning rate by $1/K$ as the typical quadratic deviation between teacher and student output increases proportionally to $K$. The reduction in training time with adaptive back-propagation is 17% and independent of the number of hidden units, in contrast to the symmetric phase, where a factor of $K$ is gained. This can be explained by the fact that each student node is already specialized on one teacher node and the effect of other nodes in inhibiting further specialization is negligible. In fact, at first it seems rather surprising that anything can be gained by not changing the weights of the network according to their error gradient. The optimal setting of $\beta > 1$, together with training at a larger learning rate, speeds up learning for small activations and slows down learning for highly activated nodes. This is equivalent to favouring rotational changes of the weight vectors over pure length changes, to a degree determined by $\beta$.
We believe that the adaptive back-propagation algorithm investigated here will
be beneficial for any multilayer feedforward network and hope that this work will
motivate further theoretical research into the efficiency of training algorithms and
their systematic improvement.
References
[1] G. Cybenko, Math. Control Signals and Systems 2, 303 (1989).
[2] D. Saad and S. A. Solla, Phys. Rev. E 52, 4225 (1995).
[3] M. Biehl and H. Schwarze, J. Phys. A 28, 643 (1995).
[4] A. West and D. Saad, in preparation (1995).
Temporal coding
in the sub-millisecond range:
Model of barn owl auditory pathway
Richard Kempter*
Institut für Theoretische Physik
Physik-Department der TU München
D-85748 Garching bei München
Germany

J. Leo van Hemmen
Institut für Theoretische Physik
Physik-Department der TU München
D-85748 Garching bei München
Germany

Wulfram Gerstner
Institut für Theoretische Physik
Physik-Department der TU München
D-85748 Garching bei München
Germany

Hermann Wagner
Institut für Zoologie
Fakultät für Chemie und Biologie
D-85748 Garching bei München
Germany
Abstract
Binaural coincidence detection is essential for the localization of
external sounds and requires auditory signal processing with high
temporal precision. We present an integrate-and-fire model of spike
processing in the auditory pathway of the barn owl. It is shown that
a temporal precision in the microsecond range can be achieved with
neuronal time constants which are at least one magnitude longer.
An important feature of our model is an unsupervised Hebbian
learning rule which leads to a temporal fine tuning of the neuronal
connections.
*email: kempter.wgerst.lvh@physik.tu-muenchen.de
1 Introduction
Owls are able to locate acoustic signals based on extraction of the interaural time difference by coincidence detection [1, 2]. The spatial resolution of sound localization found in experiments corresponds to a temporal resolution of auditory signal processing well below one millisecond. It follows that both the firing of spikes and their transmission along the so-called time pathway of the auditory system must occur with high temporal precision.

Each neuron in the nucleus magnocellularis, the second processing stage in the ascending auditory pathway, responds to signals in a narrow frequency range. Its spikes are phase locked to the external signal (Fig. 1a) for frequencies up to 8 kHz [3]. Axons from the nucleus magnocellularis project to the nucleus laminaris where signals from the right and left ear converge. Owls use the interaural phase difference for azimuthal sound localization. Since barn owls can locate signals with a precision of one degree of azimuthal angle, the temporal precision of spike encoding and transmission must be at least in the range of some 10 µs.

This poses at least two severe problems. First, the neural architecture has to be adapted to operating with high temporal precision. Considering the fact that the total delay from the ear to the nucleus magnocellularis is approximately 2-3 ms [4], a temporal precision of some 10 µs requires some fine tuning, possibly based on learning. Here we suggest that Hebbian learning is an appropriate mechanism. Second, neurons must operate with the necessary temporal precision. A firing precision of some 10 µs seems truly remarkable considering the fact that the membrane time constant is probably in the millisecond range. Nevertheless, it is shown below that neuronal spikes can be transmitted with the required temporal precision.
2 Neuron model
We concentrate on a single frequency channel of the auditory pathway and model a neuron of the nucleus magnocellularis. Since synapses are directly located on the soma, the spatial structure of the neuron can be reduced to a single compartment. In order to simplify the dynamics, we take an integrate-and-fire unit. Its membrane potential changes according to

$$\frac{du}{dt} = -\frac{u}{\tau_0} + I(t), \qquad (1)$$

where $I(t)$ is some input and $\tau_0$ is the membrane time constant. The neuron fires if $u(t)$ crosses a threshold $\vartheta = 1$; this defines a firing time $t_0$. After firing, $u$ is reset to an initial value $u_0 = 0$. Since auditory neurons are known to be fast, we assume a membrane time constant of 2 ms. Note that this is shorter than in other areas of the brain, but still a factor of 4 longer than the period of a 2 kHz sound signal.
The magnocellular neuron receives input from several presynaptic neurons $1 \le k \le K$. Each input spike at time $t_k^f$ generates a current pulse which decays exponentially with a fast time constant $\tau_r = 0.02$ ms. The magnitude of the current pulse depends on the coupling strength $J_k$. The total input is

$$I(t) = \sum_{k,f} J_k \exp\!\left(-\frac{t - t_k^f}{\tau_r}\right) \Theta(t - t_k^f), \qquad (2)$$

where $\Theta(x)$ is the unit step function and the sum runs over all input spikes.
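A direct way to see how Eqs. (1) and (2) interact is to integrate them with a simple Euler scheme. The sketch below uses the parameter values quoted in the text ($\tau_0 = 2$ ms, $\tau_r = 0.02$ ms, $\vartheta = 1$, reset to $u_0 = 0$); the input spike times and coupling strength are illustrative.

```python
import numpy as np

def simulate_neuron(spike_times, J, tau0=2.0, tau_r=0.02, theta=1.0,
                    dt=0.001, t_max=10.0):
    """Euler integration of du/dt = -u/tau0 + I(t), Eq. (1), with the
    exponentially decaying current pulses of Eq. (2). Times in ms.
    Returns the list of output firing times."""
    spikes = np.asarray(spike_times, dtype=float)
    u, fired = 0.0, []
    for n in range(int(t_max / dt)):
        t = n * dt
        past = spikes[spikes <= t]
        current = J * np.sum(np.exp(-(t - past) / tau_r))   # I(t), Eq. (2)
        u += dt * (-u / tau0 + current)
        if u >= theta:
            fired.append(t)
            u = 0.0          # reset to u0 = 0 after firing
    return fired

# phase-locked input: one spike per 0.5 ms period from a single strong fiber
out = simulate_neuron([0.5 * i for i in range(20)], J=200.0)
```

With a sufficiently strong coupling the unit fires; with no input it stays silent, since the potential only decays.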
/\
foE-
h
T~
b
-I
I
I
vvt
I
I
I
o
/\
<p
I
I
I
I
I
21t
b)
t
t
Fig. 1. Principles of phase locking and learning. a) The stimulus consists of a sound wave (top). Spikes of auditory nerve fibers leading to the nucleus magnocellularis are phase-locked to the periodic wave, that is, they occur at a preferred phase in relation to the sound, but with some jitter $\sigma$. Three examples of phase-locked spike trains are indicated. b) Before learning (left), many auditory input fibers converge to a neuron of the nucleus magnocellularis. Because of axonal delays which vary between different fibers, spikes arrive incoherently even though they are generated in a phase-locked fashion. Due to averaging over several incoherent inputs, the total postsynaptic potential (bottom left) of a magnocellular neuron follows a rather smooth trajectory with no significant temporal structure. After learning (right), most connections have disappeared and only a few strong contacts remain. Input spikes now arrive coherently and the postsynaptic potential exhibits a clear oscillatory structure. Note that firing must occur during the rising phase of the oscillation. Thus output spikes will be phase locked.
All input signals belong to the same frequency channel with a carrier frequency of 2 kHz (period $T = 0.5$ ms), but the inputs arise from different presynaptic neurons ($1 \le k \le K$). Their axons have different diameters and lengths, leading to a signal transmission delay $\Delta_k$ which varies between 2 and 3 ms [4]. Note that a delay as small as 0.25 ms shifts the signal by half a period.
Each input signal consists of a periodic spike train subject to two types of noise. First, a presynaptic neuron may not fire regularly every period but, on average, every $n$th period only, where $n \approx 1/(\nu T)$ and $\nu$ is the mean firing rate of the neuron. For the sake of simplicity, we set $n = 1$. Second, the spikes may occur slightly too early or too late compared to the mean delay $\Delta$. Based on experimental results, we assume a typical shift $\sigma = \pm 0.05$ ms [3]. Specifically, we assume in our model that inputs from a presynaptic neuron $k$ arrive with the probability density
$$P(t_k^f) = \frac{1}{\sqrt{2\pi}\,\sigma} \sum_{n=-\infty}^{\infty} \exp\!\left[-\frac{(t_k^f - nT - \Delta_k)^2}{2\sigma^2}\right], \qquad (3)$$

where $\Delta_k$ is the axonal transmission delay of input $k$ (Fig. 1).
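The density (3) suggests a simple generator for the noisy, phase-locked input spike trains: one spike per period ($n = 1$, as assumed above) with Gaussian jitter of width $\sigma$ around the preferred arrival time $nT + \Delta_k$. A sketch with the values from the text ($T = 0.5$ ms, $\sigma = 0.05$ ms):

```python
import numpy as np

def phase_locked_train(delta_k, T=0.5, sigma=0.05, n_periods=10000, seed=0):
    """Spike times t_k^f = n*T + Delta_k + Gaussian jitter, one spike per
    period, i.e. samples distributed according to Eq. (3) with n = 1.
    All times in ms."""
    rng = np.random.default_rng(seed)
    n = np.arange(n_periods)
    return n * T + delta_k + rng.normal(scale=sigma, size=n_periods)

train = phase_locked_train(delta_k=2.5)
```

Averaged over many periods, the spike offsets relative to $nT$ cluster around $\Delta_k$ with standard deviation $\sigma$.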
3 Temporal tuning through learning
We assume a developmental period of unsupervised learning during which a fine tuning of the temporal characteristics of signal transmission takes place (Fig. 1b). Before learning, the magnocellular neuron receives many inputs ($K = 50$) with weak coupling ($J_k = 1$). Due to the broad distribution of delays, the total input (2) has, apart from fluctuations, no temporal structure. After learning, the magnocellular neuron receives input from two or three presynaptic neurons only. The connections to those neurons have become very effective; cf. Fig. 2.
Fig. 2. Learning. We plot the number of synaptic contacts (y-axis) for each delay $\Delta$ (x-axis). (a) At the beginning, the neuron has contacts to 50 presynaptic neurons with delays 2 ms $\le \Delta \le$ 3 ms. (b) and (c) During learning, some presynaptic neurons increase their number of contacts, while other contacts disappear. (d) After learning, contacts to three presynaptic neurons with delays 2.25, 2.28, and 2.8 ms remain. The remaining contacts are very strong.
The constant $J_k$ measures the total coupling strength between a presynaptic neuron $k$ and the postsynaptic neuron. Values of $J_k$ larger than one indicate that several synapses have been formed. It has been estimated from anatomical data that a fully developed magnocellular neuron receives inputs from as few as 1-4 presynaptic neurons, but each presynaptic axon shows multiple branching near the postsynaptic soma and makes up to one hundred synaptic contacts on the soma of the magnocellular neuron [5]. The result of our simulation study is consistent with this finding: in our model, learning leads to a final state with a few but highly effective inputs. The remaining inputs all have the same time delay modulo the period $T$ of the stimulus. Thus, learning leads to a reduction of the number of input neurons in contact with a nucleus magnocellularis neuron. This is the fine tuning of the neuronal connections necessary for precise temporal coding (see below, Section 4).
Fig. 3. (a) Time window of learning $W(x)$. Along the x-axis we plot the time difference between presynaptic and postsynaptic firing, $x = t_k^f - t_0$. The window function $W(x)$ has a positive and a negative phase. Learning is most effective if the postsynaptic spike is late by 0.08 ms (inset). (b) Postsynaptic potential $\epsilon(x)$. Each input spike evokes a postsynaptic potential which decays with a time constant of 2 ms. Since synapses are located directly at the soma, the rise time is very fast (see inset). Our learning scenario requires that the rise time of $\epsilon(x)$ should be approximately equal to the time $x$ where $W(x)$ has its maximum.
In our model, temporal tuning is achieved by a variant of Hebbian learning. In
standard Hebbian learning, synaptic weights are changed if pre- and postsynaptic
activity occurs simultaneously. In the context of temporal coding by spikes, the
concept of (simultaneous activity' has to be refined. We assume that a synapse k is
Temporal Coding in the Submillisecond Range: Model of Barn Owl Auditory Pathway
changed, if a presynaptic spike t^f and a postsynaptic spike t_0 occur within a time
window W(t^f − t_0). More precisely, each pair of presynaptic and postsynaptic spikes
changes a synapse J_k by an amount

ΔJ_k = γ · W(t^f − t_0)    (4)

with a prefactor γ = 0.2. Depending on the sign of W(x), a contact to a presynaptic
neuron is either increased or decreased. A decrease below J_k = 0 is not allowed.
In our model, we assume a function W(x) with two phases; cf. Fig. 3. For x ≈ 0,
the function W(x) is positive. This leads to a strengthening (potentiation) of
the contact with a presynaptic neuron k which is active shortly before or after a
postsynaptic spike. Synaptic contacts which become active more than 3 ms later
than the postsynaptic spike are decreased. Note that the time window spans several
cycles of length T. The combination of decrease and increase balances the average
effects of potentiation and depression and leads to a normalization of the number
and weight of synapses. Learning is stopped after 50,000 cycles of length T.
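As a concrete illustration, the pair-based update of Eq. (4) can be sketched in a few lines. The piecewise window below is an invented stand-in with the qualitative two-phase shape described above (potentiation near coincidence, depression for contacts active well after the postsynaptic spike); only the prefactor γ = 0.2 and the clipping at J_k = 0 are taken from the text, everything else is an assumption.

```python
def learning_window(x):
    """Hypothetical two-phase window W(x), x = t_pre - t_post in ms.

    Potentiation near coincidence, depression for presynaptic spikes
    arriving several ms after the postsynaptic spike; the exact shape
    used in the paper differs.
    """
    if abs(x) <= 1.5:
        return 0.2       # potentiation phase
    if 3.0 < x <= 9.0:
        return -0.05     # depression phase
    return 0.0


def update_weight(J_k, pre_spikes, post_spikes, gamma=0.2):
    """Apply Eq. (4) over all spike pairs; J_k may not drop below 0."""
    delta = sum(gamma * learning_window(tp - tq)
                for tp in pre_spikes for tq in post_spikes)
    return max(0.0, J_k + delta)
```

Repeated coincident pairs drive a weight up, while consistently late presynaptic spikes drive it toward zero, mirroring the pruning of contacts described above.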
4
Temporal coding after learning
After learning, contacts remain to only a small number of presynaptic neurons. Their
axonal transmission delays coincide or differ by multiples of the period T. Thus the
spikes arriving from the few different presynaptic neurons have approximately the
same phase and add up to an input signal (2) which retains, apart from fluctuations,
the periodicity of the external sound signal (Fig. 4a).
[Figure 4 appears here: panels (a) and (b), each plotted over the phase range 0 to 2π; see the caption below.]
Fig. 4. (a) Distribution of input phases after learning. The solid line shows the
number of instances that an input spike with phase φ has occurred (arbitrary units).
The input consists of spikes from the three presynaptic neurons which have survived
after learning; cf. Fig. 1d. Due to the different delays, the mean input phase
varies slightly between the three input channels. The dashed curves show the phase
distribution of the individual channels; the solid line is the sum of the three dashed
curves. (b) Distribution of output phases after learning. The histogram of output
phases is sharply peaked. Comparison of the position of the maxima of the solid
curves in (a) and (b) shows that the output is phase locked to the input with a
relative delay Δφ which is related to the rise time of the postsynaptic potential.
Output spikes of the magnocellular neuron are generated by the integrate-and-fire
process (1). In Fig. 4b we show a histogram of the phases of the output spikes. We
find that the phases have a narrow distribution around a peak value. Thus the
output is phase locked to the external signal. The width of the phase distribution
corresponds to a precision of 0.084 phase cycles which equals 42 μs for a 2 kHz
stimulus. Note that the temporal precision of the output has improved compared
to the input where we had three channels with slightly different mean phases and a
variation of σ = 50 μs each. The increase in the precision is due to the average over
three uncorrelated input signals.
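The gain in precision from averaging uncorrelated inputs can be checked numerically. The sketch below assumes Gaussian timing jitter of σ = 50 μs per channel and an idealized readout that simply averages one spike time per channel; the integrate-and-fire dynamics of the model are not reproduced.

```python
import random
import statistics


def averaged_timing_error(n_channels, n_events=20000, sigma=50.0, seed=1):
    """Standard deviation (in microseconds) of the per-event mean spike
    time when each of n_channels contributes one spike jittered by sigma.
    Gaussian jitter and ideal averaging are simplifying assumptions."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, sigma)
                              for _ in range(n_channels))
             for _ in range(n_events)]
    return statistics.pstdev(means)
```

With three channels the measured spread falls near 50/√3 ≈ 29 μs, illustrating the square-root reduction from averaging uncorrelated signals that underlies the improvement reported above.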
We assume that the same principles are used during the following stages along the
auditory pathway. In the nucleus laminaris several hundred signals are combined.
This improves the signal-to-noise ratio further and a temporal precision below 10 μs
could be achieved.
5
Discussion
We have demonstrated that precise temporal coding in the microsecond range is possible despite neuronal time constants in the millisecond range. Temporal refinement
has been achieved through a slow developmental learning rule. It is a correlation
based rule with a time window W which spans several milliseconds. Nevertheless
learning leads to a fine tuning of the connections supporting temporal coding with
a resolution of 42 μs. The membrane time constant was set to 2 ms. This is nearly
two orders of magnitude longer than the achieved resolution. In our model, there
is only one fast time constant, which describes the typical duration of an input current pulse evoked by a presynaptic spike. Our value of τ_r = 20 μs corresponds to
a rise time of the postsynaptic potential of 100 μs. This seems to be realistic for
auditory neurons since synaptic contacts are located directly on the soma of the
postsynaptic neuron. The basic results of our model can also be applied to other
areas of the brain and can shed new light on some aspects of temporal coding with
slow neurons.
Acknowledgments: R.K. holds a scholarship of the state of Bavaria. W.G. has been
supported by the Deutsche Forschungsgemeinschaft (DFG) under grant number He 1729/22. H.W. is a Heisenberg fellow of the DFG.
References
[1] L. A. Jeffress, J. Comp. Physiol. Psychol. 41, 35 (1948).
[2] M. Konishi, Trends Neurosci. 9, 163 (1986).
[3] C. E. Carr and M. Konishi, J. Neurosci. 10, 3227 (1990).
[4] W. E. Sullivan and M. Konishi, J. Neurosci. 4, 1787 (1984).
[5] C. E. Carr and R. E. Boudreau, J. Comp. Neurol. 314, 306 (1991).
On the Computational Power of Noisy
Spiking Neurons
Wolfgang Maass
Institute for Theoretical Computer Science, Technische Universitaet Graz
Klosterwiesgasse 32/2, A-8010 Graz, Austria, e-mail: maass@igi.tu-graz.ac.at
Abstract
It has remained unknown whether one can in principle carry out
reliable digital computations with networks of biologically realistic
models for neurons. This article presents rigorous constructions
for simulating in real-time arbitrary given boolean circuits and finite automata with arbitrarily high reliability by networks of noisy
spiking neurons.
In addition we show that with the help of "shunting inhibition"
even networks of very unreliable spiking neurons can simulate in
real-time any McCulloch-Pitts neuron (or "threshold gate"), and
therefore any multilayer perceptron (or "threshold circuit") in a
reliable manner. These constructions provide a possible explanation for the fact that biological neural systems can carry out quite
complex computations within 100 msec.
It turns out that the assumptions that these constructions require
about the shape of the EPSP's and the behaviour of the noise are
surprisingly weak.
1
Introduction
We consider networks that consist of a finite set V of neurons, a set E ⊆ V × V of
synapses, a weight w_{u,v} ≥ 0 and a response function ε_{u,v} : R+ → R for each synapse
(u, v) ∈ E (where R+ := {x ∈ R : x ≥ 0}), and a threshold function S_v : R+ → R+
for each neuron v ∈ V.

If F_u ⊆ R+ is the set of firing times of a neuron u, then the potential at the trigger
zone of neuron v at time t is given by

P_v(t) := Σ_{u : (u,v) ∈ E} Σ_{s ∈ F_u : s < t} w_{u,v} · ε_{u,v}(t − s).

The threshold function S_v(t − t') quantifies the "reluctance" of v to
fire again at time t, if its last previous firing was at time t'. We assume that
S_v(0) ∈ (0, ∞), S_v(x) = ∞ for x ∈ (0, τ_ref] (for some constant τ_ref > 0, the
"absolute refractory period"), and sup{S_v(x) : x ≥ τ} < ∞ for any τ > τ_ref.
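A direct transcription of the potential P_v(t) is straightforward. The exponential EPSP shape and the delay and time-constant values below are placeholders, not the ε_{u,v} of the paper; only the double sum over synapses and firing times s < t follows the definition above.

```python
import math


def eps(x, delay=1.0, tau=2.0):
    """Hypothetical response function: zero before the synaptic delay,
    then an exponentially decaying positive response."""
    return math.exp(-(x - delay) / tau) if x >= delay else 0.0


def potential(t, spikes, weights, response=eps):
    """P_v(t): sum over synapses u and firing times s in F_u with s < t
    of w_{u,v} * eps_{u,v}(t - s).

    spikes:  dict mapping presynaptic neuron u -> list of firing times F_u
    weights: dict mapping u -> weight w_{u,v}
    """
    return sum(weights[u] * response(t - s)
               for u, times in spikes.items()
               for s in times if s < t)
```

A deterministic neuron would then fire whenever this potential reaches the threshold function; the stochastic model described next only lets the difference govern the firing probability.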
In a deterministic model for a spiking neuron (Maass, 1995a, 1996) one can assume
that a neuron v fires exactly at those time points t when Pv(t) reaches (from below)
the value Sv(t - t'). We consider in this article a biologically more realistic model,
where as in (Gerstner, van Hemmen, 1994) the size of the difference Pv(t)-Sv(t-t')
just governs the probability that neuron v fires. The choice of the exact firing times
is left up to some unknown stochastic processes, and it may for example occur that
v does not fire in a time interval during which P_v(t) − S_v(t − t') > 0, or that v fires
"spontaneously" at a time t when P_v(t) − S_v(t − t') < 0. We assume that (apart from
their communication via potential changes) the stochastic processes for different
neurons v are independent. It turns out that the assumptions that one has to make
about this stochastic firing mechanism in order to prove our results are surprisingly
weak. We assume that there exist two arbitrary functions L, U : R × R+ → [0, 1] so
that L(Δ, ℓ) provides a lower bound (and U(Δ, ℓ) provides an upper bound) for the
probability that neuron v fires during a time interval I of length ℓ with the property
that P_v(t) − S_v(t − t') ≥ Δ (respectively P_v(t) − S_v(t − t') ≤ Δ) for all t ∈ I up to the
next firing of v (t' denotes the last firing time of v before I). We just assume about
these functions L and U that they are non-decreasing in each of their two arguments
(for any fixed value of the other argument), that lim_{Δ → −∞} U(Δ, ℓ) = 0 for any fixed
ℓ > 0, and that lim_{Δ → ∞} L(Δ, ℓ) > 0 for any fixed ℓ ≥ R/6 (where R is the assumed
length of the rising segment of an EPSP, see below). The neurons are allowed to
be "arbitrarily noisy" in the sense that the difference lim_{Δ → ∞} L(Δ, ℓ) − lim_{Δ → −∞} U(Δ, ℓ)
can be arbitrarily small. Hence our constructions also apply to neurons that exhibit
persistent firing failures, and they also allow for synapses that fail with a rather high
probability. Furthermore a detailed analysis of our constructions shows that we can
relax the somewhat dubious assumption that the noise-distributions for different
neurons are independent. Thus we are also able to deal with "systematic noise" in
the distribution of firing times of neurons in a pool (e.g. caused by changes in the
biochemical environment that simultaneously affect many neurons in a pool).
It turns out that it suffices to assume only the following rather weak properties of
the other functions involved in our model:
1) Each response function ε_{u,v} : R+ → R is either excitatory or inhibitory
(and for the sake of biological realism one may assume that each neuron u induces
only one type of response). All excitatory response functions ε_{u,v}(x) have the value
0 for x ∈ [0, Δ_{u,v}), and the value ε^E(x − Δ_{u,v}) for x ≥ Δ_{u,v}, where Δ_{u,v} ≥ 0 is
the delay for this synapse between neurons u and v, and ε^E is the common shape
of all excitatory response functions ("EPSP's"). Corresponding assumptions are
made about the inhibitory response functions ("IPSP's"), whose common shape is
described by some function ε^I : R+ → {x ∈ R : x ≤ 0}.

2) ε^E is continuous, ε^E(0) = 0, ε^E(x) = 0 for all sufficiently large x, and there
exists some parameter R > 0 such that ε^E is non-decreasing in [0, R], and some
parameter ρ > 0 such that ε^E(x + R/6) ≥ ρ + ε^E(x) for all x ∈ [0, 2R/3].

3) −ε^I satisfies the same conditions as ε^E.
4) There exists a source BN- of negative "background noise" that contributes
to the potential P_v(t) of each neuron v an additive term that deviates for an arbitrarily long time interval by an arbitrarily small percentage from its average value
w^- ≤ 0 (which we can choose). One can delete this assumption if one assumes that
the firing threshold of neurons can be shifted by some other mechanism.

In section 3 we will assume in addition the availability of a corresponding positive
background noise BN+ with average value w^+ ≥ 0.
In a biological neuron v one can interpret BN- and BN+ as the combined effect
of a continuous bombardment with a very large number of IPSP's (EPSP's) from
randomly firing neurons that arrive at remote synapses on the dendritic tree of v.
We assume that we can choose the values of delays Δ_{u,v} and weights w_{u,v}, w^+, w^-.
We refer to all assumptions specified in this section as our "weak assumptions"
about noisy spiking neurons. It is easy to see that the most frequently studied
concrete model for noisy spiking neurons, the spike response model (Gerstner and
van Hemmen, 1994) satisfies these weak assumptions, and is hence a special case.
However, not even for the more concrete spike response model (or any other model
for noisy spiking neurons) do there exist any rigorous results about computations in
these models. In fact, one may view this article as being the first that provides
results about the computational complexity of neural networks for a neuron model
that is acceptable to many neurobiologists as being reasonably realistic.
In this article we only address the problem of reliable digital computing with noisy
spiking neurons . For details of the proofs we refer to the forthcoming journal-version
of this extended abstract. For results about analog computations with noisy spiking
neurons we refer to Maass, 1995b.
2
Simulation of Boolean Circuits and Finite Automata with
Noisy Spiking Neurons
Theorem 1: For any deterministic finite automaton D one can construct a network N(D) consisting of any type of noisy spiking neurons that satisfy our weak
assumptions, so that N(D) can simulate computations of D of any given length
with arbitrarily high probability of correctness.
Idea of the proof: Since the behaviour of a single noisy spiking neuron is completely
unreliable, we use instead pools A, B, ... of neurons as the basic building blocks in
our construction, where all neurons v in the same pool receive approximately the
same "input potential" Pv(t). The intricacies of our stochastic neuron model allow
us only to employ a "weak coding" of bits, where a "1" is represented by a pool A
during a time interval I, if at least p_1·|A| neurons in A fire (at least once) during I
(where p_1 > 0 is a suitable constant), and "0" is represented if at most p_0·|A| firings
of neurons occur in A during I, where p_0 with 0 < p_0 < p_1 is another constant (that
can be chosen arbitrarily small in our construction).
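This weak coding convention can be stated as a small readout function; the particular values of p_0 and p_1 below are illustrative only, not taken from the construction.

```python
def pool_bit(n_firings, pool_size, p0=0.05, p1=0.3):
    """Weak coding readout for a pool A of |A| = pool_size neurons:
    returns 1 if at least p1*|A| firings occurred, 0 if at most p0*|A|
    occurred, and None when the activity level is ambiguous."""
    if n_firings >= p1 * pool_size:
        return 1
    if n_firings <= p0 * pool_size:
        return 0
    return None
```

The gap between p_0·|A| and p_1·|A| is exactly what makes the coding "weak": activity levels in between encode neither bit.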
The described coding scheme is weak since it provides no useful upper bound (e.g.
1.5·p_1·|A|) on the number of neurons that fire during I if A represents a "1" (nor on
the number of firings of a single neuron in A). It also does not impose constraints
on the exact timing of firings in A within I. However a "0" can be represented more
precisely in our model, by choosing po sufficiently small.
The proof of Theorem 1 shows that this weak coding of bits suffices for reliable
digital computations. The idea of these simulations is to introduce artificial negations into the computation, which allow us to exploit that "0" has a more precise
representation than "1". It is apparently impossible to simulate an AND-gate in a
straightforward fashion for a weak coding of bits, but one can simulate a NOR-gate
in a reliable manner.
□
Corollary 2: Any boolean function can be computed by a sufficiently large network
of noisy spiking neurons (that satisfy our weak assumptions) with arbitrarily high
probability of correctness.
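Corollary 2 rests on the fact that NOR is functionally complete: NOT, OR and AND can all be built from it, so reliably simulated NOR-gates suffice for arbitrary boolean functions. A quick sanity check of this standard fact, independent of the neural construction:

```python
def nor(a, b):
    """NOR gate on bits 0/1 -- the primitive the network simulates."""
    return int(not (a or b))


def not_(a):
    # NOT x = x NOR x
    return nor(a, a)


def or_(a, b):
    # x OR y = NOT (x NOR y)
    return not_(nor(a, b))


def and_(a, b):
    # x AND y = (NOT x) NOR (NOT y)  (De Morgan)
    return nor(not_(a), not_(b))
```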
3
Fast Simulation of Threshold Circuits via Shunting
Inhibition
For biologically realistic parameters, each computation step in the previously constructed network takes around 25 msec (see point b) in section 4). However it
is well-known that biological neural systems can carry out complex computations
within just 100 msec (Churchland, Sejnowski, 1992). A closer inspection of the preceding construction shows that one can simulate with the same speed also OR- and
NOR-gates with a much larger fan-in than just 2. However, well-known results from
theoretical computer science (see the results about the complexity class AC^0 in the
survey article by Johnson in (van Leeuwen, 1990)) imply that for any fixed number
of layers the computational power of circuits with gates for OR, NOR, AND, NOT
remains very weak, even if one allows any polynomial size fan-in for such gates.
In contrast to that, the construction in this section will show that by using a biologically more realistic model for a noisy spiking neuron, one can in principle simulate
within 100 msec 3 or more layers of a boolean circuit that employs substantially
more powerful boolean gates: threshold gates (i.e. "McCulloch-Pitts neurons", also
called "perceptrons"). The use of these gates provides a giant leap in computational
power for boolean circuits with a small number of layers: In spite of many years of
intensive research, one has not been able to exhibit a single concrete computational
problem in the complexity classes P or NP that can be shown to be not computable
by a polynomial size threshold circuit with 3 layers (for threshold circuits with
integer weights of unbounded size the same holds already for just 2 layers).
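For reference, a threshold gate with weights α_i and threshold Θ simply outputs 1 iff Σ_i α_i·x_i ≥ Θ. The sketch below gives the general gate and, as an example, 3-input majority realized as a single such gate; the weight and threshold values are illustrative.

```python
def threshold_gate(alphas, theta, x):
    """McCulloch-Pitts gate: 1 iff sum_i alphas[i] * x[i] >= theta."""
    return int(sum(a * xi for a, xi in zip(alphas, x)) >= theta)


def majority3(x):
    """3-input majority as a single threshold gate (weights 1, threshold 2)."""
    return threshold_gate([1, 1, 1], 2, x)
```

AND and OR are the special cases Θ = n and Θ = 1 with unit weights, which is why circuits of threshold gates are at least as powerful as AND/OR/NOT circuits of the same depth.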
In the neuron model that we have employed so far in this article, we have assumed
(as it is common in the spike response model) that the potential P_v(t) at the trigger
zone of neuron v depends linearly on all the terms w_{u,v} · ε_{u,v}(t − s). There exists
however ample biological evidence that this assumption is not appropriate for certain types of synapses. Examples are synapses that carry out shunting inhibition
(see, e.g., (Abeles, 1991) and (Shepherd, 1990)). When a synapse of this type (located on the dendritic tree of a neuron v) is activated, it basically erases (through
a short circuit mechanism) for a short time all EPSP's that pass the location of
this synapse on their way to the trigger zone of v. However in contrast to those
IPSP's that occur linearly in the formula for P_v(t), the activation of such a synapse
for shunting inhibition has no impact on those EPSP's that travel to the trigger
zone of v through another part of its dendritic tree. We model shunting inhibition
in our framework as follows. We write Γ for the subset of all neurons γ in V that
can "veto" other synapses (u, v) via shunting inhibition (we assume that the neurons in Γ have no other role apart from that). We allow in our formal model that
certain γ in Γ are assigned as labels to certain synapses (u, v) that have an excitatory
response function ε_{u,v}. If γ is a label of (u, v), then this models the situation that
γ can intercept EPSP's from u on their way to the trigger zone of v via shunting
inhibition. We then define
P_v(t) = Σ_{u ∈ V : (u,v) ∈ E} ( Σ_{s ∈ F_u : s < t} w_{u,v} · ε_{u,v}(t − s) · Π_{γ is a label of (u,v)} s_γ(t) ),

where we assume that s_γ(t) ∈ [0, 1] is arbitrarily close to 0 for a short time interval
after neuron γ has fired, and else equal to 1. The firing mechanism for neurons
γ ∈ Γ is defined like for all other neurons.
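The multiplicative effect of shunting inhibition can be transcribed directly: each synapse's contribution is scaled by the product of the veto factors s_γ(t) over its labels. The response function passed in below is a placeholder, and the dictionary-based bookkeeping is only one possible encoding of the labels.

```python
def shunting_potential(t, spikes, weights, veto_labels, s_gamma, response):
    """P_v(t) with shunting inhibition.

    spikes:      dict u -> list of firing times F_u
    weights:     dict u -> w_{u,v}
    veto_labels: dict u -> set of neurons gamma labelling synapse (u, v)
    s_gamma:     dict gamma -> current veto factor s_gamma(t) in [0, 1]
    """
    total = 0.0
    for u, times in spikes.items():
        gate = 1.0
        for g in veto_labels.get(u, ()):
            gate *= s_gamma.get(g, 1.0)  # 1.0 when gamma has not fired recently
        total += gate * sum(weights[u] * response(t - s)
                            for s in times if s < t)
    return total
```

Setting a veto factor to 0 wipes out one synapse's EPSP contribution while leaving the other branches of the dendritic tree untouched, which is exactly the non-linearity the construction of Theorem 3 exploits.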
Theorem 3: One can simulate any threshold circuit T by a sufficiently large network N(T) of noisy spiking neurons with shunting inhibition (with arbitrarily high
probability of correctness) . The computation time of N(T) does not depend on the
number of gates in each layer, and is proportional to the number of layers in the
threshold circuit T.
Idea of the proof of Theorem 3: It is already impossible to simulate in a straightforward manner an AND-gate with weak coding of bits. The same difficulties arise
in an even more drastic way if one wants to simulate a threshold gate with large
fan-in.
The left part of Figure 1 indicates that with the help of shunting inhibition one can
transform via an intermediate pool of neurons B1 the bit that is weakly encoded by
A1 into a contribution to P_v(t) for neurons v ∈ C that is throughout a time interval
J arbitrarily close to 0 if A1 encodes a "0", and arbitrarily close to some constant
p* > 0 if A1 encodes a "1" (we will call this a "strong coding" of a bit). Obviously
it is rather easy to realize a threshold gate if one can make use of such strong coding
of bits.
[Figure 1 appears here: a schematic in which pool A1 excites pool B1, which exerts shunting inhibition (SI) on pool C; C feeds subsequent pools; see the caption below and the description in the text.]
Figure 1: Realization of a threshold gate G via shunting inhibition (SI).
The task of the module in Figure 1 is to simulate with noisy spiking neurons a
given boolean threshold gate G that outputs 1 if Σ_{i=1}^n α_i·x_i ≥ Θ, and 0 else. For
simplicity Figure 1 shows only the pool A1 whose firing activity encodes (in weak
coding) the first input bit x_1. The other input bits are represented (in weak coding)
simultaneously in pools A2, ..., An parallel to A1. If x_1 = 0, then the firing activity
in pool A1 is low, hence the shunting inhibition from pool B1 intercepts those
EPSP's that are sent from BN+ to each neuron v in pool C. More precisely,
we assume that each pool B_i associated with a different input bit x_i carries out
shunting inhibition on a different subtree of the dendritic tree of such a neuron v
(where each such subtree receives EPSP's from BN+). If x_1 = 1, the higher firing
activity in pool A1 inhibits the neurons in B1 for some time period. Hence during
the relevant time interval BN+ contributes an almost constant positive summand
to the potential P_v(t) of neurons v in C. By choosing w^+ and w^- appropriately,
one can achieve that during this time interval the potential P_v(t) of neurons v in
C is arbitrarily much positive if Σ_{i=1}^n α_i·x_i ≥ Θ, and arbitrarily much negative if
Σ_{i=1}^n α_i·x_i < Θ. Hence the activity level of C encodes the output bit of the threshold
gate G (in weak coding). The purpose of the subsequent pools D and F is to
synchronize (with the help of "double-negation") the output of this module via a
pacemaker or synfire chain PM. In this way one can achieve that all input "bits" to
another module that simulates a threshold gate on the next layer of circuit T arrive
simultaneously. □
4
Conclusion
Our constructions throw new light on various experimental data, and on our attempts to understand neural computation and coding:
a) If one would record all firing times of a few arbitrarily chosen neurons in
our networks during many repetitions of the same computation, one is likely to
see that each run yields quite different seemingly random firing sequences, where
however a few firing patterns will occur more frequently than could be explained by
mere chance. This is consistent with the experimental results reported in (Abeles,
1991), and one should also note that the synfire chains of (Abeles, 1991) have many
features in common with the here constructed networks.
b) If one plugs in biologically realistic values (see (Shepherd, 1990), (Churchland, Sejnowski, 1992)) for the length of transmission delays (around 5 msec) and
the duration of EPSP's and IPSP's (around 15 msec for fast PSP's), then the computation time of our modules for NOR- and threshold gates comes out to be not
more than 25 msec. Hence in principle a multi-layer perceptron with up to 4 layers
can be simulated within 100 msec.
c) Our constructions provide new hypotheses about the computational roles
of regular and shunting inhibition, that go far beyond their usually assumed roles.
d) We provide new hypotheses regarding the computational role of randomly
firing neurons, and of EPSP's and IPSP's that arrive through synapses at distal
parts of biological neurons (see the use of BN+ and BN- in our constructions).
References:
M. Abeles. (1991) Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press.
P. S. Churchland, T. J. Sejnowski. (1992) The Computational Brain. MIT-Press.
W. Gerstner, J. L. van Hemmen. (1994) How to describe neuronal activity: spikes, rates,
or assemblies? Advances in Neural Information Processing Systems, vol. 6, Morgan
Kaufmann: 463-470.
W. Maass. (1995a) On the computational complexity of networks of spiking neurons
(extended abstract). Advances in Neural Information Processing Systems, vol. 7
(Proceedings of NIPS '94), MIT-Press, 183-190.
W. Maass. (1995b) An efficient implementation of sigmoidal neural nets in temporal coding
with noisy spiking neurons. IGI-Report 422 der Technischen Universität Graz,
submitted for publication.
W. Maass. (1996) Lower bounds for the computational power of networks of spiking
neurons. Neural Computation 8:1, to appear.
G. M. Shepherd. (1990) The Synaptic Organization of the Brain. Oxford University Press.
J. van Leeuwen, ed. (1990) Handbook of Theoretical Computer Science, vol. A: Algorithms and Complexity. MIT-Press.
| 1158 |@word cu:3 version:1 rising:1 polynomial:2 simulation:3 bn:9 carry:5 unction:1 activation:1 si:2 realize:1 additive:1 subsequent:1 realistic:6 shape:3 pacemaker:1 inspection:1 realism:1 short:3 record:1 provides:5 location:1 sigmoidal:1 unbounded:1 constructed:2 persistent:1 prove:1 introduce:1 manner:3 frequently:2 nor:5 multi:1 brain:2 decreasing:2 circuit:14 mcculloch:1 substantially:1 giant:1 temporal:1 ti:1 exactly:1 appear:1 positive:3 timing:1 era:1 oxford:1 firing:21 approximately:1 studied:1 bi:2 spontaneously:1 block:1 ib11:1 ett:1 regular:1 spite:1 close:3 impossible:2 intercept:2 deterministic:2 straightforward:2 go:1 duration:1 automaton:3 survey:1 simplicity:1 construction:12 trigger:5 exact:2 ixi:3 hypothesis:2 located:1 role:4 module:4 graz:4 remote:1 eu:3 environment:1 complexity:5 weakly:1 depend:1 segment:1 churchland:3 klosterwiesgasse:1 completely:1 po:4 represented:4 various:1 fast:2 describe:1 sejnowski:3 artificial:1 choosing:2 quite:2 whose:2 larger:1 encoded:1 relax:1 transform:1 noisy:19 seemingly:1 obviously:1 sequence:1 net:1 epsp:11 tu:1 relevant:1 realization:1 fired:1 achieve:2 double:1 transmission:1 help:3 oo:4 ac:1 strong:2 throw:1 come:1 stochastic:4 require:1 behaviour:2 suffices:2 biological:6 dendritic:4 pl:1 hold:1 sufficiently:4 around:3 pitt:2 purpose:1 travel:1 leap:1 label:3 correctness:3 iaii:1 repetition:1 mit:3 rather:3 publication:1 corollary:1 ral:1 indicates:1 contrast:2 rigorous:2 sense:1 biochemical:1 special:1 equal:1 construct:1 once:1 corticonics:1 represents:1 np:1 report:1 summand:1 employ:2 few:2 randomly:2 simultaneously:3 consisting:1 fire:8 negation:2 attempt:1 organization:1 light:1 activated:1 tj:1 chain:2 usion:1 fu:1 closer:1 tree:4 theoretical:3 delete:1 leeuwen:2 boolean:7 technische:1 subset:1 bombardment:1 delay:3 johnson:1 reported:1 sv:11 abele:4 combined:1 systematic:1 pool:14 concrete:3 again:1 choose:2 potential:7 coding:12 availability:1 satisfy:2 caused:1 igi:2 depends:1 view:1 
A Novel Channel Selection System in
Cochlear Implants Using Artificial Neural
Network
Marwan A. Jabri &
Raymond J. Wang
Systems Engineering and Design Automation Laboratory
Department of Electrical Engineering
The University of Sydney
NSW 2006, Australia
{marwan,jwwang}Osedal.usyd.edu.au
Abstract
State-of-the-art speech processors in cochlear implants perform
channel selection using a spectral maxima strategy. This strategy
can lead to confusions when high frequency features are needed
to discriminate between sounds. We present in this paper a novel
channel selection strategy based upon pattern recognition which allows "smart" channel selections to be made. The proposed strategy
is implemented using multi-layer perceptrons trained on a multi-speaker labelled speech database. The inputs to the network are the energy coefficients of N energy channels. The outputs of the system are the indices of the M selected channels.
We compare the performance of our proposed system to that of
spectral maxima strategy, and show that our strategy can produce
significantly better results.
1 INTRODUCTION
A cochlear implant is a device used to provide the sensation of sound to those who
are profoundly deaf by means of electrical stimulation of residual auditory neurons.
It generally consists of a directional microphone, a wearable speech processor, a
head-set transmitter and an implanted receiver-stimulator module with an electrode
array which all together provide an electrical representation of the speech signal to
the residual nerve fibres of the peripheral auditory system (Clark et al., 1990).
Brain
Electrode
Array
Figure 1: A simplified schematic diagram of the cochlear implant
A simplified schematic diagram of the cochlear implant is shown in Figure 1. Speech
sounds are picked up by the directional microphone and sent to the speech processor.
The speech processor amplifies, filters and digitizes these signals, and then selects
and codes the appropriate sound information. The coded signal contains information as to which electrode to stimulate and the intensity level required to generate
the appropriate sound sensations. The signal is then sent to the receiver/stimulator
via the transmitter coil. The receiver/stimulator delivers electrical impulses to the
appropriate electrodes in the cochlea. These stimulated electrodes then directly
activate the hearing nerve in the inner ear, creating the sensation of sound, which
is then forwarded to the brain for interpretation. The entire process happens in
milliseconds.
For multi-channel cochlear implants, the task of the speech processor is to compute
the spectral energy of the electrical signals it receives, and to quantise them into
different levels. The energy spectrum is commonly divided into separate bands using
a filter bank of N (typically 20) bandpass filters with centre frequencies ranging from
250 Hz to 10 kHz. The bands of energy are allocated to electrodes in the patient's
implant on a one-to-one basis. Usually the most-apical bipolar electrode pairs are
allocated to the channels in tonotopic order. The limitations of implant systems
usually require only a selected number of the quantised energy levels to be fed to
the implanted electrode array (Abbas, 1993; Schouten, 1992).
The state-of-the-art speech processor for multi-channel implants performs channel
selection using a spectral maxima strategy (McDermott et al., 1992; Seligman & McDermott, 1994). The maxima strategy selects the M (about 6) largest spectral
energy of the frequency spectrum as stimulation channels from a filter bank of
N (typically 20) bandpass filters. It is believed that, compared to other channel selection techniques (F0F2, F0F1F2, MPEAK, ...), the maxima strategy increases the
amount of spectral information and improves the speech perception and recognition
performance.
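As a concrete sketch (not from the paper; the band energies below are hypothetical), the spectral maxima rule amounts to selecting the indices of the M largest of the N band energies:

```python
def spectral_maxima(energies, m=6):
    """Maxima strategy: return the indices of the m largest band energies."""
    ranked = sorted(range(len(energies)), key=lambda i: energies[i], reverse=True)
    return sorted(ranked[:m])

# 18 hypothetical band energies; the high-frequency bands at the right end are
# weak, so the maxima rule never selects them even when they carry the features
# that distinguish two sounds.
energies = [0.9, 0.8, 0.7, 0.6, 0.5, 0.45, 0.4, 0.3, 0.2,
            0.15, 0.1, 0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03]
print(spectral_maxima(energies))  # -> [0, 1, 2, 3, 4, 5]
```

This is exactly the weakness the paper targets: the selection depends only on local amplitude, not on the overall spectrum morphology.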
However, maxima strategy relies heavily on the highest energies. This often leads
to the same levels being selected for different sounds , as the energy levels that
distinguish them are not high enough to be selected. For some speech signals,
912
M. A. JABRI, R. J. WANG
it does not cater for confusions and cannot discriminate between high frequency
features.
We present in this paper Artificial Neural Networks (ANN) techniques for implementing "smart" channel selection for cochlear implant systems. The input to the
proposed channel selection system consists of the energy coefficients (18 in our
experiments), and the outputs are the indices of the selected channels (6 in our experiments). The neural network based selection system is trained on multi-speaker labelled speech and has been evaluated on a separate multi-speaker database not
used in the training phase. The most important feature of our ANN based channel
selection system is its ability to select the channels for stimulation on the basis of
the overall morphology of the energy spectrum and not only on the basis of the
maximal energy values.
2 THE PATTERN RECOGNITION BASED CHANNEL SELECTION STRATEGY
Speech is the most natural form of human communication. The speech information
signal can be divided into phonemes, which share some common acoustic properties
with one another for a short interval of time. The phonemes are typically divided
into two broad classes: (a) vowels, which allow unrestricted airflow in the vocal
tract, and (b) consonants, which restrict airflow at some point and are weaker than
vowels. Different phonemes have different morphology in the energy spectrum.
Moreover, for different speakers and different speech sentences, the same phonemes
have different energy spectrum morphologies (Kent & Read, 1992). Therefore,
simple methods to select some of the most important channels for all the phoneme
patterns will not perform as good as the method that considers the spectrum in its
entirety.
The existing maxima strategy only refers to the spectrum amplitudes found in the
entire estimated spectrum without considering the morphology. Typically several
of the maxima results can be obtained from a single spectral peak. Therefore, for
some phoneme patterns, the selection result is good enough to represent the original phoneme. But for some others, some important features of the phoneme are
lost. This usually happens to those phonemes with important features in the high
frequency region. Due to the low amplitude of the high frequency in the spectrum
morphology, maxima methods are not capable to extract those high frequency features. The relationship between the desired M output channels and the energy
spectrum patterns is complex, and depending on the conditions, may be influenced
by many factors. As mentioned in the Introduction, channel selection methods that
make use of local information only in the energy spectrum are bound to produce
channel sub-sets where sounds may be confused. The confusions can be reduced if
"global" information of the energy spectrum is used in the selection process.
The channel selection approach we are proposing makes use of the overall energy
spectrum. This is achieved by turning the selection problem into that of a spectrum morphology pattern recognition one and hence, we call our approach Pattern
Recognition based Channel Selection (PRCS).
2.1 PRCS STRATEGY
The PRCS strategy is implemented using two cascaded neural networks shown in
Figure 2:
? Spectral morphological classifier: Its inputs are the spectrum energy amplitudes of all the channels and its outputs all the transformations of the
inputs. The transformation between input and out.put can be seen as a
recognition, emphasis, and/or decaying of the inputs. The consequence is
that some inputs are amplified and some decayed, depending on the morphology of the spectrum. The classifier performs a non-linear mapping .
? M strongest of N classifier: It receives the output of morphological classifier
and applies a M strongest selection rule.
[Figure: block diagram in which the spectral morphological classifier feeds the M-strongest-of-N classifier, producing the channel labels]
Figure 2: The pattern recognition based channel selection architecture
2.2 TRAINING AND TESTING DATA
The most difficult task in developing the proposed PRCS is to set up the labelled
training and testing data for the spectral morphological classifier.
The training and testing data sets have been constructed using the process shown
in Figure 3.
[Figure: Hamming window + 128-point FFT -> 18-channel quantisation & scaling -> channel labelling -> training & testing sets]
Figure 3: The process of generating training and testing sets
The sounds in the data sets are speech extracted from the DARPA TIMIT multispeaker speech corpus (Fisher et al., 1987) which contains a total of 6300 sentences,
10 sentences spoken by each of 630 speakers. The speech signal is sampled at 16KHz
rate with 16 bit precision. As the speech is nonstationary, to produce the energy
spectrum versus channel numbers, a short-time speech analysis method is used.
The Fast Fourier Transform with an 8 ms smooth Hamming window technique is applied to yield the energy spectrum. The Hamming window has the shape of a raised
cosine pulse:

h(n) = 0.54 - 0.46 cos(2πn/(N-1))   for 0 ≤ n ≤ N-1, and h(n) = 0 otherwise.
The time frame on which the speech analysis is performed is 4ms long and the
successive time frame windows overlap by 50% .
Using frequency allocations similar to that used in commercial cochlear implant
speech processors, the frequency range in the spectrum is divided into 18 channels
with each channel having the center frequencies of 250, 450, 650, 850 1050, 1250,
1450, 1650, 1895, 2177, 2500, 2873, 3300, 3866, 4580, 5307, 6218 and 7285Hz
respectively. Each energy spectrum from a time frame is quantised into these 18
frequency bands. The energy amplitude for each level is the sum of the amplitude
value of the energy for all the frequency components in the level.
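The analysis chain just described can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the nearest-centre bin assignment below stands in for the actual filter-bank band edges, and a direct DFT replaces the FFT for brevity.

```python
import math

def hamming(num_samples):
    """Raised-cosine (Hamming) window: h(n) = 0.54 - 0.46*cos(2*pi*n/(N-1))."""
    N = num_samples
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def band_energies(frame, sample_rate, centres):
    """Window a frame, take its DFT, and sum bin magnitudes into one energy
    per channel, assigning each bin to the nearest centre frequency."""
    w = hamming(len(frame))
    x = [s * wn for s, wn in zip(frame, w)]
    N = len(x)
    energies = [0.0] * len(centres)
    for k in range(N // 2):
        freq = k * sample_rate / N
        ch = min(range(len(centres)), key=lambda c: abs(centres[c] - freq))
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = -sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        energies[ch] += math.hypot(re, im)
    return energies

centres = [250, 450, 650, 850, 1050, 1250, 1450, 1650, 1895,
           2177, 2500, 2873, 3300, 3866, 4580, 5307, 6218, 7285]
# A pure 1 kHz tone sampled at 16 kHz lands mostly in the 1050 Hz channel.
frame = [math.sin(2 * math.pi * 1000 * n / 16000) for n in range(128)]
e = band_energies(frame, 16000, centres)
print(e.index(max(e)))  # -> 4
```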
The quantised energy spectrum is then labelled using a graphics based tool, called
LABEL, developed specially for this application. LABEL displays the spectrum
pattern including the unquantised spectrum, the signal source, speaker's name,
speech sentence, phoneme, signal pre-processing method and FFT results. All these
information assists labelling experts to allocate a score (1 to 18) to each channel.
The score reflects the importance of the information provided by each of the bands.
Hence, if six channels are only to be selected, the channels with the score 1 to 6
can be used and are highlighted. The labelling is necessary as a supervised neural
network training method is being used.
A total of 5000 energy spectrum patterns have been labelled. They are from 20
different speakers and different spoken sentences. Of the 5000 example patterns,
4000 patterns are allocated for training and 1000 patterns for testing.
3 EXPERIMENTAL RESULTS
We have implemented and tested the PRCS system as described above, and our
experiments show that it has better performance than channel selection systems
used in present cochlear implant processors.
The PRCS system is effectively constructed as a multi-module neural network using
MUME (Jabri et al., 1994). The back-propagation algorithm in an on-line mode is
used to train the MLP. The training patterns input components are the energy
amplitudes of the 18 channels and the teacher component consists of a "I" for a
channel to be selected and "0" for all others. The MLP is trained for up to 2000
epochs or when a minimum total mean squared error is reached. A learning rate η of 0.01 is used (no weight decay).
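The on-line training loop described above can be sketched with a small stdlib-only MLP. This is a hedged stand-in, not the authors' MUME setup: the 18-72-18 network and labelled speech corpus are replaced by a toy XOR task, which still shows why the hidden layer is needed.

```python
import math
import random

def train_mlp(samples, n_in, n_hid, n_out, lr, epochs, seed=0):
    """On-line back-propagation for a one-hidden-layer sigmoid MLP
    (squared-error deltas, no weight decay, one update per pattern)."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hid)]
    w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hid + 1)] for _ in range(n_out)]
    sig = lambda a: 1.0 / (1.0 + math.exp(-a))
    for _ in range(epochs):
        for x, t in samples:
            xb = x + [1.0]                                  # bias input
            h = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1]
            hb = h + [1.0]
            y = [sig(sum(w * v for w, v in zip(row, hb))) for row in w2]
            dy = [(tk - yk) * yk * (1.0 - yk) for yk, tk in zip(y, t)]
            dh = [hj * (1.0 - hj) * sum(dy[k] * w2[k][j] for k in range(n_out))
                  for j, hj in enumerate(h)]
            for k in range(n_out):                          # output-layer update
                for j in range(n_hid + 1):
                    w2[k][j] += lr * dy[k] * hb[j]
            for j in range(n_hid):                          # hidden-layer update
                for i in range(n_in + 1):
                    w1[j][i] += lr * dh[j] * xb[i]
    def predict(x):
        xb = x + [1.0]
        hb = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1] + [1.0]
        return [sig(sum(w * v for w, v in zip(row, hb))) for row in w2]
    return predict

# Toy stand-in task (XOR): solvable only because of the hidden layer.
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]
net = train_mlp(data, n_in=2, n_hid=8, n_out=1, lr=0.5, epochs=4000)
print([round(net(x)[0]) for x, _ in data])
```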
We show the average performance of our PRCS in Table 1 where we also show the
performance of a leading commercial spectral maxima strategy called SPEAK on
the same test set. In the first column of this table we show the number of channels
that matched out of the 6 desired channels. For example, the first row corresponds
to the case where all 6 channels match the desired 6 channels in the test data base,
and so on. As Table 1 shows, the PRCS produces a significantly better performance
than the commercial strategy on the speech test set.
The selection performance for different phonemes is listed in Table 2; it clearly shows that the PRCS strategy can cater for the features of all the speech spectrum patterns.

Table 1: The comparison of average performance between the commercial and PRCS systems

  Channels matched (of 6)    PRCS results    Commercial technique results
  Fully matched                  22 %                  4 %
  5 matched                      80 %                 25 %
  4 matched                      98 %                 57 %
  3 matched                     100 %                 93 %
  2 matched                     100 %                 99 %
  1 matched                     100 %                100 %

Table 2: PRCS channel selection performance on different phoneme patterns

  Phoneme                Fully matched    5 matched    4 matched    3 matched
  Stops                      19 %            69 %         96 %        100 %
  Fricatives                 18 %            66 %         92 %        100 %
  Nasals                     14 %            66 %         96 %        100 %
  Semivowels & Glides        14 %            79 %         95 %        100 %
  Vowels                     25 %            84 %         98 %        100 %
To compare the practical performance of the PRCS with the maxima strategies, we have developed a direct performance test system which allows us to play the synthesized speech of the selected channels through a speech synthesizer. Our test shows that the PRCS produces more intelligible speech to normal ears.
Sixteen different sentences spoken by sixteen people are tested using both maxima
and PRCS methods. It is found that the synthesized speech from PRCS has much
more high frequency features than that of the speech produced by the maxima
strategy. All listeners who were asked to take the test agreed that the quality of the
speech sound from PRCS is much better than those from the commercial maxima
channel selection system. The tape recording of the synthesized speech will be
available at the conference.
4 CONCLUSION
A pattern recognition based channel selection strategy for cochlear implants has been presented. The strategy is based on an 18-72-18 MLP followed by an M-strongest-of-N selector. The proposed channel selection strategy has been compared to a leading commercial technique. Our simulation and play-back results show that our machine learning based technique produces significantly better channel selections.
References

Abbas, P. J. (1993) Electrophysiology. In R. S. Tyler (ed.), Cochlear Implants: Audiological Foundations, Singular Publishing Group, pp. 317-355.

Clark, G. M., Tong, Y. C. & Patrick, J. F. (1990) Cochlear Prosthesis. Edinburgh: Churchill Livingstone.

Fisher, W. M., Zue, V., Bernstein, J. & Pallett, D. (1987) An Acoustic-Phonetic Data Base. In 113th Meeting of Acoust. Soc. Am., May 1987.

Jabri, M. A., Tinker, E. A. & Leerink, L. (1994) MUME - A Multi-Net Multi-Architecture Neural Simulation Environment. In J. Skrzypek (ed.), Neural Network Simulation Environments, Kluwer Academic Publishers.

Kent, R. D. & Read, C. (1992) The Acoustic Analysis of Speech. Whurr Publishers.

McDermott, H. J., McKay, C. M. & Vandali, A. E. (1992) A new portable sound processor for the University of Melbourne / Nucleus Limited multielectrode cochlear implant. J. Acoust. Soc. Am. 91(6), June 1992, pp. 3367-3371.

Schouten, M. E. H. (ed.) (1992) The Auditory Processing of Speech - From Sounds to Words. Speech Research 10, Mouton de Gruyter.

Seligman, P. & McDermott, H. (1994) Architecture of the SPECTRA 22 Speech Processor. International Cochlear Implant, Speech and Hearing Symposium, Melbourne, October 1994, p. 254.
CONVERGENCE AND PATTERN STABILIZATION
IN THE BOLTZMANN MACHINE
Moshe Kam
Dept. of Electrical and Computer Eng.
Drexel University, Philadelphia PA 19104
Roger Cheng
Dept. of Electrical Eng.
Princeton University, NJ 08544
ABSTRACT
The Boltzmann Machine has been introduced as a means to perform
global optimization for multimodal objective functions using the
principles of simulated annealing. In this paper we consider its utility
as a spurious-free content-addressable memory, and provide bounds on
its performance in this context. We show how to exploit the machine's
ability to escape local minima, in order to use it, at a constant
temperature, for unambiguous associative pattern-retrieval in noisy
environments. An association rule, which creates a sphere of influence
around each stored pattern, is used along with the Machine's dynamics
to match the machine's noisy input with one of the pre-stored patterns.
Spurious fixed points, whose regions of attraction are not recognized by
the rule, are skipped, due to the Machine's finite probability to escape
from any state. The results apply to the Boltzmann machine and to the
asynchronous net of binary threshold elements (Hopfield model). They
provide the network designer with worst-case and best-case bounds for
the network's performance, and allow polynomial-time tradeoff studies
of design parameters.
I. INTRODUCTION
The suggestion that artificial neural networks can be utilized for classification, pattern
recognition and associative recall has been the main theme of numerous studies which
appeared in the last decade (e.g. Rumelhart and McClelland (1986) and Grossberg (1988) and their references.) Among the most popular families of neural networks are fully
connected networks of binary threshold elements (e.g. Amari (1972), HopfIeld (1982).)
These structures, and the related family of fully connected networks of sigmOidal threshold
elements have been used as error-correcting decoders in many applications, among which
were interesting applications in optimization (Hopfield and Tank, 1985; Tank and
Hopfield, 1986; Kennedy and Chua, 1987.) A common drawback of the many studied
schemes is the abundance of 'spurious' local minima, which 'trap' the decoder in
undesirable, and often non-interpretable, states during the process of input I stored-pattern
association. It is generally accepted now that while the number of arbitrary binary patterns
that can be stored in a fully-connected network is of the order of magnitude of N (N =
number of the neurons in the network,) the number of created local attractors in the
network's state space is exponential in N.
It was proposed (Ackley et al., 1985; Hinton and Sejnowski, 1986) that fully-connected
binary neural networks, which update their states on the basis of stochastic
state-reassessment rules, could be used for global optimization when the objective
function is multi-modal. The suggested architecture, the Boltzmann machine, is based on
the principles of simulated annealing ( Kirkpatrick et al., 1983; Geman and Geman, 1984;
Aarts et al., 1985; Szu, 1986,) and has been shown to perform interesting tasks of
decision making and optimization. However, the learning algorithm that was proposed for
the Machine, along with its "cooling" procedures, do not lend themselves to real-time
operation. Most studies so far have concentrated on the properties of the Machine in
global optimization and only few studies have mentioned possible utilization of the
Machine (at constant 'temperature') as a content-addressable memory (e.g. for local optimization).
In the present paper we propose to use the Boltzmann machine for associative retrieval,
and develop bounds on its performance as a content-addressable memory. We introduce a
learning algorithm for the Machine, which locally maximizes the stabilization probability
of learned patterns. We then proceed to calculate (in polynomial time) upper and lower
bounds on the probability that a tuple at a given initial Hamming distance from a stored
pattern will get attracted to that pattern. A proposed association rule creates a sphere of
influence around each stored pattern, and is indifferent to 'spurious' attractors. Due to the
fact that the Machine has a nonzero probability of escape from any state, the 'spurious'
attractors are ignored. The obtained bounds allow the assessment of retrieval probabilities,
different learning algorithms and necessary learning periods, network 'temperatures' and
coding schemes for items to be stored.
II. THE MACHINE AS A CONTENT-ADDRESSABLE MEMORY
The Boltzmann Machine is a multi-connected network of N simple processors called
probabilistic binary neurons. The ith neuron is characterized by N-1 real numbers
representing the synaptic weights (Wij, j=1,2, ... ,i-1,i+1, ... ,N; Wii is assumed to be zero
for all i), a real threshold (t_i) and a binary activity level (u_i ∈ B = {-1,1}), which we shall also refer to as the neuron's state. The binary N-tuple U = [u_1, u_2, ..., u_N] is called the network's state. We distinguish between two phases of the network's operation:
a) The learning phase - when the network parameters w_ij and t_i are determined. This
determination could be achieved through autonomous learning of the binary pattern
environment by the network (unsupervised learning); through learning of the environment
with the help of a 'teacher' which supplies evaluative reinforcement signals (supervised
learning); or by an external fixed assignment of parameter values.
b) The production phase - when the network's state U is determined. This determination
could be performed synchronously by all neurons at the same predetermined time instants,
or asynchronously - each neuron reassesses its state independently of the other neurons at
random times. The decisions of the neurons regarding their states during reassessment can
be arrived at deterministically (the set of neuron inputs determines the neuron's state) or
stochastically (the set of neuron inputs shapes a probability distribution for the
state-selection law.)
We shall first describe the (asynchronous and stochastic) production rule which our
network employs. At random times during the production phase, asynchronously and
independently of all other neurons, the ith neuron decides upon its next state, using the
probabilistic decision rule
1
with probabilty
1
--~~
l+e-TII
(1)
u?=
J
er
II
with probabilty --~~
-1
l+e-TII
N
where Lllii =
L
WijUj-ti
j=l~
is called the ith energy gap, and Te is a predetermined real number called temperature.
The state-updating rule (1) is related to the network's energy level, which is described by

E = -(1/2) Σ_{i=1}^{N} u_i ( Σ_{j=1, j≠i}^{N} w_ij u_j - t_i ).   (2)
If the network is to find a local minimum of E in equation (2), then the ith neuron, when chosen (at random) for state updating, should choose deterministically

u_i = sgn[ Σ_{j=1, j≠i}^{N} w_ij u_j - t_i ].   (3)
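As a generic sketch (not the authors' code; the weights, thresholds, and temperature below are arbitrary), the stochastic rule (1), the energy (2), and the deterministic limit (3) can be written as:

```python
import math
import random

def energy_gap(W, t, u, i):
    """Delta E_i = sum_{j != i} w_ij * u_j - t_i (the ith energy gap)."""
    return sum(W[i][j] * u[j] for j in range(len(u)) if j != i) - t[i]

def boltzmann_update(W, t, u, i, Te, rng):
    """Rule (1): set u_i = +1 with probability 1 / (1 + exp(-dE_i / Te))."""
    p_plus = 1.0 / (1.0 + math.exp(-energy_gap(W, t, u, i) / Te))
    u[i] = 1 if rng.random() < p_plus else -1

def deterministic_update(W, t, u, i):
    """Rule (3), the Te -> 0 limit: u_i = sgn(dE_i)."""
    u[i] = 1 if energy_gap(W, t, u, i) >= 0 else -1

def energy(W, t, u):
    """Equation (2): E = -1/2 * sum_i u_i * dE_i."""
    return -0.5 * sum(u[i] * energy_gap(W, t, u, i) for i in range(len(u)))

# A tiny 3-neuron network with symmetric weights and zero thresholds.
W = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]
t = [0, 0, 0]
u = [1, -1, 1]
rng = random.Random(1)
for _ in range(200):                 # asynchronous, randomly ordered updates
    boltzmann_update(W, t, u, rng.randrange(3), Te=0.05, rng=rng)
print(u, energy(W, t, u))
```

At low temperature the stochastic rule behaves almost deterministically; raising T_e gives the nonzero escape probability that the paper exploits to skip spurious minima.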
We note that the rule in equation (3) is obtained from the rule in equation (1) as T_e → 0. This deterministic choice of u_i guarantees, under symmetry conditions on the weights (w_ij = w_ji), that the network's state will stabilize at a fixed point in the 2^N-tuple state space of the network (Hopfield, 1982), where
Definition 1: A state U_f ∈ B^N is called a fixed point in the state space of the N-neuron network if

Pr[ U(n+1) = U_f | U(n) = U_f ] = 1.   (4)
A fixed point found through iterations of equation (3) (with i chosen at random at each
iteration) may not be the global minimum of the energy in equation (2). A mechanism
which seeks the global minimum should avoid local-minimum "traps" by allowing
'uphill' climbing with respect to the value of E. The decision scheme of equation (1) is
devised for that purpose, allowing an increase in E with nonzero probability. This
provision for progress in the locally 'wrong' direction in order to reap a 'global' advantage
later, is in accordance with the principles of simulated annealing techniques, which are
used in multimodal optimization. In our case, the probabilities of choosing the locally
'right' decision (equation (3)) and the locally 'wrong' decision are determined by the ratio of the energy gap ΔE_i to the 'temperature' shaping constant T_e.
The Boltzmann Machine has been proposed for global minimization and a considerable
amount of effort has been invested in devising a good cooling scheme, namely a means to
control T e in order to guarantee the finding of a global minimum in a short time (Geman
and Geman, 1984, Szu, 1987.) However, the network may also be used as a selective
content addressable memory which does not suffer from inadvertently-installed spurious
local minima.
We consider the following application of the Boltzmann Machine as a scheme for pattern
classification under noisy conditions: let an encoder emit a sequence of N×1 binary vectors from a set of Q codewords (or 'patterns'), each having a probability of occurrence of π_m (m = 1,2,...,Q). The emitted pattern encounters noise and distortion before it
arrives at the decoder, resulting in some of its bits being reversed. The Boltzmann
Machine, which is the decoder at the receiving end, accepts this distorted pattern as its
initial state U(0), and observes the consequent time evolution of the network's state U. At a certain time instant n_0, the Machine will declare that the input pattern U(0) is to be associated with pattern B_m if U at that instant, U(n_0), is 'close enough' to B_m. For this purpose we define
Definition 2: The d_max-sphere of influence of pattern B_m, σ(B_m, d_max), is

σ(B_m, d_max) = { U ∈ B^N : HD(U, B_m) ≤ d_max },   (5)

where d_max is prespecified.

Let Σ(d_max) = ∪_m σ(B_m, d_max), and let n_0 be the smallest integer such that U(n_0) ∈ Σ(d_max). The rule of association is: associate U(0) with B_m at time n_0 if U(n_0), which has evolved from U(0), satisfies U(n_0) ∈ σ(B_m, d_max).
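The rule of association can be sketched directly (the stored patterns and d_max below are hypothetical):

```python
def hamming_distance(u, v):
    """HD(u, v): number of positions in which two binary tuples disagree."""
    return sum(1 for a, b in zip(u, v) if a != b)

def associate(state, patterns, d_max):
    """Return the index m of the first stored pattern B_m whose d_max-sphere
    of influence contains the state, or None if no sphere contains it."""
    for m, b in enumerate(patterns):
        if hamming_distance(state, b) <= d_max:
            return m
    return None

B = [[1, 1, 1, 1, -1, -1],      # B_1
     [-1, -1, 1, 1, 1, 1]]      # B_2
print(associate([1, 1, 1, -1, -1, -1], B, d_max=1))  # -> 0 (HD 1 from B_1)
print(associate([1, -1, 1, -1, 1, -1], B, d_max=1))  # -> None (outside all spheres)
```

With d_max smaller than half the minimum distance between stored patterns, the spheres of influence are disjoint and the rule is unambiguous; a network state inside no sphere simply keeps evolving.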
Due to the finite probability of escape from any minimum, the energy minima which correspond to spurious fixed points are skipped by the network on its way to the energy valleys induced by the designed fixed points (i.e. B_1, ..., B_Q).
III. ONE-STEP CONTRACTION PROBABILITIES
Using the association rule, the utility of the Boltzmann machine for error correction
involves the probabilities
Pr{ HD[U(n), B_m] ≤ d_max | HD[U(0), B_m] = d },   m = 1,2,...,Q   (6)
for predetermined n and d_max. In order to calculate (6) we shall first calculate the following one-step attraction probabilities:

P(B_m, d, δ) = Pr{ HD[U(n+1), B_m] = d+δ | HD[U(n), B_m] = d },  where δ = -1, 0, 1.   (7)

For δ = -1 we obtain the probability of convergence; for δ = +1 we obtain the probability of divergence; for δ = 0 we obtain the probability of stalemate.
An exact calculation of the attraction probabilities in equation (7) is time-exponential and
we shall therefore seek lower and upper bounds which can be calculated in polynomial
time. We shall reorder the weights of each neuron according to their contribution to the energy gap ΔE_i for each pattern, using the notation

W_i^m = { w_i1 b_m1, w_i2 b_m2, ..., w_iN b_mN }
f_i^1 = max W_i^m,
f_i^s = max{ W_i^m - {f_i^1, f_i^2, ..., f_i^(s-1)} },
i = 1,2,...,N,  s = 2,3,...,N,  m = 1,2,...,Q.   (8)

Let ΔE_min,i(d) = ΔE_i - 2 Σ_{r=1}^{d} f_i^r  and  ΔE_max,i(d) = ΔE_i - 2 Σ_{r=1}^{d} f_i^(N+1-r).   (9)
These values represent the maximum and minimum values that the ith energy gap could assume when the network is at HD of d from B_m. Using these extrema, we can find the worst-case attraction probabilities

P_wc(B_m, d, -1) = (1/N) Σ_{i=1}^{N} [ U_{-1}(b_mi) / (1 + e^(-ΔE_min,i(d)/T_e)) + U_{-1}(-b_mi) e^(-ΔE_max,i(d)/T_e) / (1 + e^(-ΔE_max,i(d)/T_e)) ]   (10a)

P_wc(B_m, d, +1) = (1/N) Σ_{i=1}^{N} [ U_{-1}(b_mi) e^(-ΔE_min,i(d)/T_e) / (1 + e^(-ΔE_min,i(d)/T_e)) + U_{-1}(-b_mi) / (1 + e^(-ΔE_max,i(d)/T_e)) ]   (10b)

and the best-case attraction probabilities

P_bc(B_m, d, -1) = (1/N) Σ_{i=1}^{N} [ U_{-1}(b_mi) / (1 + e^(-ΔE_max,i(d)/T_e)) + U_{-1}(-b_mi) e^(-ΔE_min,i(d)/T_e) / (1 + e^(-ΔE_min,i(d)/T_e)) ]   (11a)

P_bc(B_m, d, +1) = (1/N) Σ_{i=1}^{N} [ U_{-1}(b_mi) e^(-ΔE_max,i(d)/T_e) / (1 + e^(-ΔE_max,i(d)/T_e)) + U_{-1}(-b_mi) / (1 + e^(-ΔE_min,i(d)/T_e)) ]   (11b)

where U_{-1}(·) denotes the unit step function, and for both cases

P^(·)(B_m, d, 0) = 1 - P^(·)(B_m, d, -1) - P^(·)(B_m, d, +1).   (12)

For the worst- (respectively, best-) case probabilities, we have used the extreme values of ΔE_i(d) to underestimate (overestimate) convergence and overestimate (underestimate) divergence, given that there is a disagreement in d of the N positions between the network's state and B_m; we assume that errors are equally likely at each one of the bits.
IV. ESTIMATION OF RETRIEVAL PROBABILITIES
To estimate the retrieval probabilities, we shall study the Hamming distance of the
network's state from a stored pattern. The evolution of the network from state to state, as
affecting the distance from a stored pattern, can be interpreted in terms of a birth-and-death
Markov process (e.g. Howard, 1971) with the probability transition matrix
  Ψ(P_bi, P_di) =

  | 1-P_b0    P_b0                  0        ...                          0        |
  | P_d1      1-P_b1-P_d1   P_b1        0    ...                          0        |
  | 0         P_d2          1-P_b2-P_d2   P_b2   0   ...                  0        |
  |                               ...                                              |
  | 0   ...   P_dk          1-P_bk-P_dk   P_bk       ...                  0        |
  |                               ...                                              |
  | 0   ...         P_d,N-1       1-P_b,N-1-P_d,N-1        P_b,N-1                 |
  | 0   ...   0                   P_dN                     1-P_dN                  |      (13)
where the birth probability P_bi is the divergence probability of increasing the HD from i
to i+1, and the death probability P_di is the contraction probability of decreasing the HD
from i to i-1.
Given that an input pattern was at HD of d_0 from B_m, the probability that after n steps
the network will associate it with B_m is

  Pr{ U(n) -> B_m | HD[U(0), B_m] = d_0 } = Σ_{r=0}^{d_max} Pr[ HD(U(n), B_m) = r | HD(U(0), B_m) = d_0 ]      (14)
where we can use the one-step bounds found in Section III in order to calculate the
worst-case and best-case probabilities of association. Using equations (10) and (11) we
define two matrices for each pattern B_m: a worst-case matrix Ψ_m^wc and a best-case
matrix Ψ_m^bc:

  Worst-case matrix: P_bi = P^wc(B_m, i, +1),  P_di = P^wc(B_m, i, -1)
  Best-case matrix:  P_bi = P^bc(B_m, i, +1),  P_di = P^bc(B_m, i, -1).

Using these matrices, it is now possible to calculate lower and upper bounds for the
association probabilities needed in equation (14):

  [π_d0 (Ψ_m^wc)^n]_r ≤ Pr[ HD(U(n), B_m) = r | HD(U(0), B_m) = d_0 ] ≤ [π_d0 (Ψ_m^bc)^n]_r      (15a)

where [x]_i indicates the i-th element of the vector x, and π_d0 is the unit 1×(N+1) vector
  [π_d0]_i = 1 if i = d_0, and 0 otherwise.      (15b)
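The computation in (13)–(15) is mechanical enough to sketch directly: build the tridiagonal birth-and-death matrix, propagate the initial distribution for n steps, and sum the mass inside the d_max-sphere. The numerical values of pb and pd below are invented for illustration; in the paper they would come from the worst- or best-case one-step bounds of Section III.

```python
import numpy as np

def transition_matrix(pb, pd):
    """Birth-and-death transition matrix over Hamming distances 0..N (eq. 13).

    pb[i] is the birth probability of moving from HD i to i+1, and pd[i]
    the death probability of moving from i to i-1; pd[0] and pb[N] are unused.
    """
    n = len(pb) - 1
    psi = np.zeros((n + 1, n + 1))
    psi[0, 0], psi[0, 1] = 1.0 - pb[0], pb[0]
    for i in range(1, n):
        psi[i, i - 1] = pd[i]
        psi[i, i] = 1.0 - pb[i] - pd[i]
        psi[i, i + 1] = pb[i]
    psi[n, n - 1], psi[n, n] = pd[n], 1.0 - pd[n]
    return psi

def association_bound(psi, d0, d_max, n_steps):
    """Sum of the n-step probabilities of landing at HD r <= d_max when
    starting from HD d0 (the summed terms of eq. 14, using eq. 15b)."""
    pi = np.zeros(psi.shape[0])
    pi[d0] = 1.0  # the unit vector pi_{d0} of eq. (15b)
    pi = pi @ np.linalg.matrix_power(psi, n_steps)
    return float(pi[: d_max + 1].sum())

# Invented one-step probabilities for N = 5:
pb = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.0])
pd = np.array([0.0, 0.60, 0.50, 0.40, 0.30, 0.30])
psi = transition_matrix(pb, pd)
print(association_bound(psi, d0=3, d_max=1, n_steps=20))
```

Substituting the worst-case and best-case matrices of (15a) for psi yields the lower and upper association bounds.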
The bounds of equation (15a) can be used to bound the association probability in equation
(14). The upper bound of the association probability is obtained by replacing the summed
terms in (14) by their upper-bound values:

  Pr{ U(n) -> B_m | HD[U(0), B_m] = d_0 } ≤ Σ_{r=0}^{d_max} [π_d0 (Ψ_m^bc)^n]_r      (16a)
The lower bound cannot be treated similarly, since it is possible that at some instant of
time prior to the present time-step (n), the network has already associated its state U with
one of the other patterns. We shall therefore use as the lower bound on the convergence
probability in equation (14):
  Σ_{r=0}^{d_max} [π_d0 (Ψ̲_m^wc)^n]_r ≤ Pr{ U(n) -> B_m | HD[U(0), B_m] = d_0 }      (16b)
where the underlined matrix is the birth-and-death matrix (13) with
  P_bk = 1,  P_dk = 0   for k = μ_i, μ_i + 1, ..., N      (16c)

  μ_i = min_{j = 1,...,Q; j ≠ i} HD(B_i, B_j) - d_max      (16d)
Equations (16c) and (16d) assume that the network wanders into the d_max-sphere of
influence of a pattern other than B_i whenever its distance from B_i is μ_i or more. This
assumption is very conservative, since μ_i represents the shortest distance to a competing
d_max-sphere of influence, and the network could actually wander to distances larger than
μ_i and still converge eventually into the d_max-sphere of influence of B_i.
CONCLUSION
We have presented how the Boltzmann Machine can be used as a content-addressable
memory, exploiting the stochastic nature of its state-selection procedure in order to escape
undesirable minima. An association rule in terms of patterns' spheres of influence is used,
along with the Machine's dynamics, in order to match an input tuple with one of the
predetermined stored patterns. The system is therefore indifferent to 'spurious' states,
whose spheres of influence are not recognized in the retrieval process. We have detailed a
technique to calculate the upper and lower bounds on retrieval probabilities of each stored
pattern. These bounds are functions of the network's parameters (i.e. assignment or
learning rules, and the pattern sets); the initial Hamming distance from the stored pattern;
the association rule; and the number of production steps. They allow a polynomial-time
assessment of the network's capabilities as an associative memory for a given set of
patterns; a comparison of different coding schemes for patterns to be stored and retrieved;
an assessment of the length of the learning period necessary in order to guarantee a
prespecified probability of retrieval; and a comparison of different learning/assignment
rules for the network parameters. Examples and additional results are provided in a
companion paper (Kam and Cheng, 1989).
Acknowledgements
This work was supported by NSF grant IRI 8810186.
References
[1] Aarts, E.H.L., Van Laarhoven, P.J.M.: "Statistical Cooling: A General
Approach to Combinatorial Optimization Problems," Philips J. Res., Vol. 40, 1985.
[2] Ackley, D.H., Hinton, G.E., Sejnowski, T.J.: "A Learning Algorithm for
Boltzmann Machines," Cognitive Science, Vol. 9, pp. 147-169, 1985.
[3] Amari, S.-I.: "Learning Patterns and Pattern Sequences by Self-Organizing Nets of
Threshold Elements," IEEE Trans. Computers, Vol. C-21, No. 11, pp. 1197-1206, 1972.
[4] Geman, S., Geman, D.: "Stochastic Relaxation, Gibbs Distributions, and the
Bayesian Restoration of Images," IEEE Trans. Pattern Anal. Mach. Intell., pp. 721-741, 1984.
[5] Grossberg, S.: "Nonlinear Neural Networks: Principles, Mechanisms, and
Architectures," Neural Networks, Vol. 1, 1988.
[6] Hebb, D.O.: The Organization of Behavior, New York: Wiley, 1949.
[7] Hinton, G.E., Sejnowski, T.J.: "Learning and Relearning in the Boltzmann
Machine," in [14].
[8] Hopfield, J.J.: "Neural Networks and Physical Systems with Emergent
Collective Computational Abilities," Proc. Nat. Acad. Sci. USA, pp. 2554-2558, 1982.
[9] Hopfield, J.J., Tank, D.: "'Neural' Computation of Decisions in Optimization
Problems," Biological Cybernetics, Vol. 52, pp. 1-12, 1985.
[10] Howard, R.A.: Dynamic Probabilistic Systems, New York: Wiley, 1971.
[11] Kam, M., Cheng, R.: "Decision Making with the Boltzmann Machine,"
Proceedings of the 1989 American Control Conference, Vol. 1, Pittsburgh, PA, 1989.
[12] Kennedy, M.P., Chua, L.O.: "Circuit Theoretic Solutions for Neural
Networks," Proceedings of the 1st Int. Conf. on Neural Networks, San Diego, CA, 1987.
[13] Kirkpatrick, S., Gelatt, C.D., Jr., Vecchi, M.P.: "Optimization by
Simulated Annealing," Science, Vol. 220, pp. 671-680, 1983.
[14] Rumelhart, D.E., McClelland, J.L. (editors): Parallel Distributed
Processing, Volume 1: Foundations, Cambridge: MIT Press, 1986.
[15] Szu, H.: "Fast Simulated Annealing," in Denker, J.S. (editor): Neural
Networks for Computing, New York: American Inst. Phys., Vol. 151, pp. 420-425, 1986.
[16] Tank, D.W., Hopfield, J.J.: "Simple 'Neural' Optimization Networks," IEEE
Transactions on Circuits and Systems, Vol. CAS-33, No. 5, pp. 533-541, 1986.
Harmony Networks Do Not Work
Rene Gourley
School of Computing Science
Simon Fraser University
Burnaby, B.C., V5A 1S6, Canada
gourley@mprgate.mpr.ca
Abstract
Harmony networks have been proposed as a means by which connectionist models can perform symbolic computation. Indeed, proponents claim that a harmony network can be built that constructs
parse trees for strings in a context-free language. This paper shows
that harmony networks do not work in the following sense: they
construct many outputs that are not valid parse trees.
In order to show that the notion of systematicity is compatible with connectionism,
Paul Smolensky, Geraldine Legendre and Yoshiro Miyata (Smolensky, Legendre,
and Miyata 1992; Smolensky 1993; Smolensky, Legendre, and Miyata 1994) proposed a mechanism, "Harmony Theory," by which connectionist models purportedly
perform structure sensitive operations without implementing classical algorithms.
Harmony theory describes a "harmony network" which, in the course of reaching a
stable equilibrium, apparently computes parse trees that are valid according to the
rules of a particular context-free grammar.
Harmony networks consist of four major components which will be explained in
detail in Section 1. The four components are,
Tensor Representation: A means to interpret the activation vector of a connectionist system as a parse tree for a string in a context-free language.
Harmony: A function that maps all possible parse trees to the non-positive integers so that a parse tree is valid if and only if its harmony is zero.
Energy: A function that maps the set of activation vectors to the real numbers
and which is minimized by certain connectionist networks.¹
Recursive Construction: A system for determining the weight matrix of a connectionist network so that if its activation vector is interpreted as a parse
1 Smolensky, Legendre and Miyata use the term "harmony" to refer to both energy and
harmony. To distinguish between them, we will use the term that is often used to describe
the Lyapunov function of dynamic systems, "energy" (see for example Golden 1986).
R. GOURLEY
tree, then the network's energy is the negation of the harmony of that parse
tree.
Smolensky et al. contend that, in the process of minimizing their energy values,
harmony networks implicitly maximize the harmony of the parse tree represented by
their activation vector. Thus, if the harmony network reaches a stable equilibrium
where the energy is equal to zero, the parse tree that is represented by the activation
vector must be a valid parse tree:
When the lower-level description of the activation-spreading process satisfies certain mathematical properties, this process can be
analyzed on a higher level as the construction of that structure
including the given input structure which maximizes Harmony.
(Smolensky 1993, p. 848, emphasis in original)
Unfortunately, harmony networks do not work - they do not always construct
maximum-harmony parse trees. The problem is that the energy function is defined
on the values of the activation vector. By contrast, the harmony function is defined
on possible parse trees. Section 2 of this paper shows that these two domains are
not equal, that is, there are some activation vectors that do not represent any parse
tree.
The recursive construction merely guarantees that the energy function passes
through zero at the appropriate points; its minima are unrestricted . So, while
it may be the case that the energy and harmony functions are negations of one
another, it is not always the case that a local minimum of one is a local maximum
of the other. More succinctly, the harmony network will find minima that are not
even trees, let alone valid parse trees.
The reason why harmony networks do not work is straightforward. Section 3 shows
that the weight matrix must have only negative eigenvalues, for otherwise the network constructs structures which are not valid trees. Section 4 shows that if the
weight matrix has only negative eigenvalues, then the energy function admits only
a single zero - the origin. Furthermore, we show that the origin cannot be interpreted as a valid parse tree. Thus, the stable points of a harmony network are not
valid parse trees.
1 HARMONY NETWORKS

1.1 TENSOR REPRESENTATION
Harmony theory makes use of tensor products (Smolensky 1990; Smolensky, Legendre, and Miyata 1992; Legendre, Miyata, and Smolensky 1991) to convolve symbols
with their roles. The resulting products are then added to represent a labelled tree
using the harmony network's activation vector. The particular tensor product used
is very simple:
  (a_1, a_2, ..., a_n) ⊗ (b_1, b_2, ..., b_m) =
      (a_1 b_1, a_1 b_2, ..., a_1 b_m, a_2 b_1, a_2 b_2, ..., a_2 b_m, ..., a_n b_1, ..., a_n b_m)

If two tensors of differing dimensions are to be added, then they are essentially
concatenated.
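This product coincides with the Kronecker product of the two coordinate vectors, so it can be checked in one line with numpy (the vectors here are invented for illustration):

```python
import numpy as np

# Entry (i, j) of a (x) b is a_i * b_j, flattened row by row -- exactly np.kron.
a = np.array([2.0, 3.0])
b = np.array([1.0, 4.0, 5.0])
print(np.kron(a, b))
```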
Binary trees are represented with this tensor product using the following recursive
rules:
1. The tensor representation of a tree containing no vertices is 0.
Table 1: Rules for determining harmony and the weight matrix. Let G = (V, E, P, S)
be a context-free grammar of the type suggested in section 1.2. The rules for
determining the harmony of a tree labelled with V and E are shown in the second
column. The rules for determining the system of equations for recursive construction
are shown in the third column. (Smolensky, Legendre, and Miyata 1992; Smolensky
1993)

Grammar Element | Harmony Rule | Energy Equation

S | For every node labelled S add -1 to H(T). | Include
(S + 0 ⊗ r_l)^t W_root (S + 0 ⊗ r_r) = 2 in the system of equations.

x ∈ E | For every node labelled x add -1 to H(T). | Include
(x + 0 ⊗ r_l)^t W_root (x + 0 ⊗ r_l) = 2 in the system of equations.

x ∈ V \ {S} | For every node labelled x add -2 or -3 to H(T), depending on
whether or not x appears on the left of a production with two symbols on the
right. | Include (x + 0 ⊗ r_l)^t W_root (x + 0 ⊗ r_l) = 4 or 6 in the system of
equations, depending on whether or not x appears on the left of a production
with two symbols on the right.

x -> yz or x -> y ∈ P | For every edge where x is the parent and y is the left
child add 2. Similarly, add 2 every time z is the right child of x. | Include in
the system of equations:
(x + 0 ⊗ r_l)^t W_root (0 + y ⊗ r_l) = -2
(0 + y ⊗ r_l)^t W_root (x + 0 ⊗ r_l) = -2
(x + 0 ⊗ r_l)^t W_root (0 + z ⊗ r_r) = -2
(0 + z ⊗ r_r)^t W_root (x + 0 ⊗ r_l) = -2
2. If A is the root of a tree, and T_L, T_R are the tensor product representations
of its left subtree and right subtree respectively, then A + T_L ⊗ r_l + T_R ⊗ r_r
is the tensor representation of the whole tree.

The vectors r_l and r_r are called "role vectors" and indicate the roles of left child
and right child.
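The two recursive rules can be sketched in a few lines of numpy. The role vectors, the symbol dimension, and the zero-padding convention for subtrees of different depths below are all illustrative assumptions, not the paper's specific choices:

```python
import numpy as np

# Role vectors for left and right child (an arbitrary choice for illustration).
R_L = np.array([1.0, 0.0])
R_R = np.array([0.0, 1.0])

def encode(tree):
    """Tensor representation of a binary tree (rules 1 and 2 of Section 1.1).

    A tree is None (empty) or a tuple (symbol, left, right), with `symbol`
    a 2-d numpy vector in this sketch.
    """
    if tree is None:
        return np.zeros(0)  # rule 1: the empty tree is represented by 0
    symbol, left, right = tree
    tl, tr = encode(left), encode(right)
    # Pad the two subtree codes to a common length before adding them.
    n = max(len(tl), len(tr))
    tl = np.pad(tl, (0, n - len(tl)))
    tr = np.pad(tr, (0, n - len(tr)))
    # rule 2: A + T_L (x) r_l + T_R (x) r_r, with the differing-dimension
    # parts concatenated rather than added.
    return np.concatenate([symbol, np.kron(tl, R_L) + np.kron(tr, R_R)])

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
leaf = (b, None, None)
print(encode((a, leaf, leaf)))  # root labelled a with two leaves labelled b
```

Note how the dimension of the code grows with tree depth: a leaf occupies 2 components, a depth-one tree 6, and so on.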
1.2 HARMONY
Harmony (Legendre, Miyata, and Smolensky 1990; Smolensky, Legendre, and Miyata 1992) describes a way to determine the well-formedness of a potential parse tree
with respect to a particular context-free grammar. Without loss of generality, we
can assume that the right-hand side of each production has at most two symbols,
and if a production has two symbols on the right, then it is the only production for
the variable on its left side. For a given binary tree, T, we compute the harmony
of T, H(T) by first adding the negative contributions of all the nodes according to
their labels, and then adding the contributions of the edges (see first two columns
of table 1).
1.3 ENERGY
Under certain conditions, some connectionist models are known to admit the following energy or Lyapunov function (see Legendre, Miyata, and Smolensky 1991):
  E(a) = -(1/2) a^t W a
Here, W is the weight matrix of the connectionist network, and a is its activation
vector. Every non-equilibrium change in the activation vector results in a strict
decrease in the network's energy. In effect, the connectionist network serves to
minimize its energy as it moves towards equilibrium.
1.4 RECURSIVE CONSTRUCTION
Smolensky, Legendre, and Miyata (1992) proposed that the recursive structure of
their tensor representations together with the local nature of the harmony calculation could be used to construct the weight matrix for a network whose energy
function is the negation of the harmony of the tree represented by the activation
vector. First construct a matrix W_root which satisfies a system of equations. The
system of equations is found by including equations for every symbol and production in the grammar, as shown in column three of Table 1. Gourley (1995) shows
that if W is constructed from copies of W_root according to a particular formula, and
if a_T is a tensor representation for a tree T, then E(a_T) = -H(T).
2 SOME ACTIVATIONS ARE NOT TREES
As noted above, the reason why harmony networks do not work is that they seek
minima in their state space which may not coincide with parse tree representations.
One way to ameliorate this would be to make every possible activation vector
represent some parse tree. If every activation vector represents some parse tree,
then the rules that determine the weight matrix will ensure that the energy minima
agree with the valid parse trees. Unfortunately, in that case, the system of equations
used to determine W_root has no solution.
If every activation vector is to represent some parse tree, and the symbols of the
grammar are two dimensional, then there are symbols represented by each vector,
(x_1, x_1), (x_1, x_2), (x_2, x_1), and (x_2, x_2), where x_1 ≠ x_2. These symbols must satisfy
the equations given in Table 1, and so,
  x_1² (W_root,11 + W_root,12 + W_root,21 + W_root,22) = h_1
  x_1² W_root,11 + x_1 x_2 (W_root,12 + W_root,21) + x_2² W_root,22 = h_2
  x_2² W_root,11 + x_1 x_2 (W_root,12 + W_root,21) + x_1² W_root,22 = h_3
  x_2² (W_root,11 + W_root,12 + W_root,21 + W_root,22) = h_4
Because h_i ∈ {2, 4, 6}, there must be a pair h_i, h_j which are equal. In that
case, it can be shown using Gaussian elimination that there is no solution for
W_root,11, W_root,12, W_root,21, W_root,22. Similarly, if the symbols are represented by vectors of dimension three or greater, the same contradiction occurs.
Thus there are some activation vectors that do not represent any tree - valid or
invalid. The question now becomes one of determining whether all of the harmony
network's stable equilibria are valid parse trees.
Figure 1: Energy functions of two-dimensional harmony networks. In each case, the
points i and f respectively represent an initial and a final state of the network. In
a, one eigenvalue is positive and the other is negative; the hashed plane represents
the plane E = 0, which intersects the energy function and the vertical axis at the
origin. In b, one eigenvalue is negative while the other is zero; the heavy line
represents the intersection of the surface with the plane E = 0 and it intersects the
vertical axis at the origin.

3 NON-NEGATIVE EIGENVECTORS YIELD NON-TREES
If any of the eigenvalues of the weight matrix, W, is positive, then it is easy to show
that the harmony network will seek a stable equilibrium that does not represent
a parse tree at all. Let λ > 0 be a positive eigenvalue of W, and let e be an
eigenvector, corresponding to λ, that falls within the state space. Then,

  E(e) = -(1/2) e^t W e = -(1/2) λ e^t e < 0.
Because the energy drops below zero, the harmony network would have to undergo
an energy increase in order to find a zero-energy stable equilibrium. This cannot
happen, and so, the network reaches an equilibrium with energy strictly less than
zero.
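This argument is easy to check numerically. The symmetric weight matrix below is invented for illustration; it has one positive and one negative eigenvalue, and along a unit eigenvector e the energy reduces to -(1/2)λ:

```python
import numpy as np

def energy(W, a):
    """E(a) = -(1/2) a^t W a, the Lyapunov function of Section 1.3."""
    return float(-0.5 * a @ W @ a)

# An invented symmetric weight matrix with one positive eigenvalue.
W = np.array([[1.0, 0.5],
              [0.5, -2.0]])
lams, vecs = np.linalg.eigh(W)
for lam, e in zip(lams, vecs.T):
    # For a unit eigenvector e, E(e) = -(1/2) * lam, so a positive
    # eigenvalue drives the energy below zero (the argument of Section 3).
    print(lam, energy(W, e))
```

Since the network only ever decreases its energy, once it reaches such a sub-zero state it can never return to a zero-energy (valid-tree) equilibrium.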
Figure 1a illustrates the energy function of a harmony network where one eigenvalue
is positive. Because harmony is the negation of energy, in this figure all the valid
parse trees rest on the hashed plane, and all the invalid parse trees are above it. As
we can see, the harmony network with positive eigenvalues will certainly find stable
equilibria which are not valid parse tree representations.
Now, suppose W, the weight matrix, has a zero eigenvalue. If e is an eigenvector
corresponding to that eigenvalue, then for every real a, aWe = 0. Consequently,
one of the following must be true:
1. ae is not a stable equilibrium. In that case, the energy function must drop
below zero, yielding a sub-zero stable equilibrium - a stable equilibrium
that does not represent any tree.
Figure 2: The energy function of a two-dimensional harmony network where both
eigenvalues are negative. The vertical axis pierces the surface at the origin, and the
points i and f respectively represent an initial and a final state of the network.

2. ae is a stable equilibrium. Then for every a, ae must be a valid tree
representation. Such a situation is represented in Figure 1b, where the set of
all points ae is represented by the heavy
line. This implies that there is a symbol, (a_1, a_2, ..., a_n), such that
α_1(a_1, a_2, ..., a_n), α_2(a_1, a_2, ..., a_n), ..., α_{n²+1}(a_1, a_2, ..., a_n) are also all
symbols. As before, this implies that W_root must satisfy the equation,
  α_i² ((a_1, ..., a_n) + 0 ⊗ r_l)^t W_root ((a_1, ..., a_n) + 0 ⊗ r_l) = h_i/2,   h_i ∈ {2, 4, 6}

for i = 1, ..., n² + 1. Again using Gaussian elimination, it can be shown that
there is no solution to this system of equations.
In either case, the harmony network admits stable equilibria that do not represent
any tree. Thus, the eigenvalues must all be negative.
4 NEGATIVE EIGENVECTORS YIELD NON-TREES
If all the eigenvalues of the weight matrix are negative, then the energy function has
a very special shape: it is a paraboloid centered on the origin and concave in the
direction of positive energy. This is easily seen by considering the first and second
derivatives of E:
  ∂E(x)/∂x_i = - Σ_j W_ij x_j        ∂²E(x)/(∂x_i ∂x_j) = -W_ij
Clearly, all the first derivatives are zero at the origin, and so, it is a critical point.
Now the origin is a strict minimum if all the roots of the following well-known
equation are positive:
  0 = det|-W - λI|

det|-W - λI| is the characteristic polynomial of -W. If λ is a root then it is an
eigenvalue of -W, or equivalently, it is the negative of an eigenvalue of W. Because
all of W's eigenvalues are negative, the origin is a strict minimum, and indeed it is
the only minimum. Such a harmony network is illustrated in Figure 2.
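A numerical spot-check of this claim, with an invented negative-definite W: every nonzero state has strictly positive energy, so the origin is the only zero-energy point.

```python
import numpy as np

rng = np.random.default_rng(0)

# An invented symmetric W whose eigenvalues are all negative (Section 4's case).
W = np.array([[-1.0, 0.3],
              [0.3, -2.0]])
assert (np.linalg.eigvalsh(W) < 0).all()

# E(a) = -(1/2) a^t W a is then strictly positive away from the origin,
# so a = 0 is its only zero and only minimum.
for _ in range(1000):
    a = rng.normal(size=2)
    assert -0.5 * a @ W @ a > 0.0
print("origin is the unique zero of E")
```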
Thus the origin is the only stable point where the energy is zero, but it cannot
represent a parse tree which is valid for the grammar. If it does, then
  S + T_L ⊗ r_l + T_R ⊗ r_r = (0, ..., 0)

where T_L, T_R are appropriate left and right subtree representations, and S is the
start symbol of the grammar. Because each of the subtrees is multiplied by either
r_l or r_r, they are not the same dimension as S, and are consequently concatenated
instead of added. Therefore S = 0. But then, W_root must satisfy the equation

  (0 + 0 ⊗ r_l)^t W_root (0 + 0 ⊗ r_l) = -2
This is impossible, and so, the origin is not a valid tree representation.
5 CONCLUSION
This paper has shown that in every case, a harmony network will reach stable
equilibria that are not valid parse trees. This is not unexpected. Because the
energy function is a very simple function, it would be more surprising if such a
connectionist system could construct complicated structures such as parse trees for
a context free grammar.
Acknowledgements
The author thanks Dr. Robert Hadley and Dr. Arvind Gupta, both of Simon Fraser
University, for their invaluable comments on a draft of this paper.
References
Golden, R. (1986). The 'brain-state-in-a-box' neural model is a gradient descent
algorithm. Journal of Mathematical Psychology 30, 73-80.
Gourley, R. (1995) . Tensor represenations and harmony theory: A critical analysis.
Master's thesis, Simon Fraser University, Burnaby, Canada. In preparation.
Legendre, G., Y. Miyata, and P. Smolensky (1990). Harmonic grammar - a formal
multi-level connectionist theory of linguistic well-formedness: Theoretical foundations. In Proceedings of the Twelfth National Conference on Cognitive Science,
Cambridge, MA, pp . 385- 395. Lawrence Erlbaum.
Legendre, G ., Y. Miyata, and P. Smolensky (1991) . Distributedrecursive structure
processing. In B. Mayoh (Ed.), Proceedings of the 1991 Scandinavian Conference
on Artificial Intelligence , Amsterdam, pp. 47-53. lOS Press.
Smolensky, P. (1990) . Tensor product variable binding and the representation of
symbolic structures in connectionist systems. Artificial Intelligence 46, 159-216.
Smolensky, P. (1993). Harmonic grammars for formal languages. In S. Hanson,
J. Cowan, and C. Giles (Eds.), Advances in Neural Information Processing Systems
5, pp. 847-854 . San Mateo: Morgan Kauffman .
Smolensky, P., G. Legendre, and Y. Miyata (1992). Principles for an integrated
connectionist/symbolic theory of higher cognition. Technical Report CU-CS-60092, University of Colorado Computer Science Department.
Smolensky, P., G. Legendre, and Y . Miyata (1994) . Integrating connectionist and
symbolic computation for the theory of language. In V. Honavar and L. Uhr (Eds.),
Artificial Intelligence and Neural Networks : Steps Toward Principled Integration,
pp. 509-530. Boston: Academic Press.
| 1160 |@word cu:1 polynomial:1 twelfth:1 seek:2 tr:4 smolen:3 initial:2 surprising:1 activation:17 must:10 happen:1 shape:1 drop:2 alone:1 intelligence:3 plane:4 ck2:1 draft:1 node:4 mathematical:2 constructed:1 indeed:2 multi:1 brain:1 considering:1 becomes:1 awe:1 maximizes:1 interpreted:2 string:2 eigenvector:3 differing:1 guarantee:1 sky:3 every:13 golden:2 concave:1 positive:8 before:1 local:3 ure:1 emphasis:1 mateo:1 recursive:6 integrating:1 symbolic:4 cannot:3 context:6 impossible:1 map:2 straightforward:1 contradiction:1 rule:7 s6:1 notion:1 construction:5 suppose:1 colorado:1 origin:11 element:1 role:3 decrease:1 principled:1 dynamic:1 easily:1 represented:8 intersects:2 describe:1 artificial:3 whose:1 otherwise:1 grammar:11 final:2 eigenvalue:15 rr:5 product:6 ckl:1 description:1 los:1 parent:1 depending:2 school:1 c:1 indicate:1 implies:2 lyapunov:2 direction:1 uhr:1 centered:1 elimination:2 implementing:1 formedness:2 connectionism:1 strictly:1 equilibrium:15 lawrence:1 cognition:1 claim:1 major:1 a2:6 proponent:1 harmony:50 spreading:1 label:1 sensitive:1 clearly:1 always:2 gaussian:2 reaching:1 hj:1 linguistic:1 contrast:1 sense:1 burnaby:2 integrated:1 special:1 integration:1 equal:3 construct:7 represents:3 minimized:1 connectionist:13 report:1 national:1 negation:4 geraldine:1 paraboloid:1 certainly:1 analyzed:1 yielding:1 subtrees:1 edge:2 tree:50 theoretical:1 yoshiro:1 wroot:16 column:4 giles:1 vertex:1 erlbaum:1 thanks:1 together:1 again:1 thesis:1 containing:1 dr:2 admit:1 cognitive:1 derivative:2 potential:1 b2:3 satisfy:3 hannony:1 systematicity:1 root:9 apparently:1 start:1 complicated:1 simon:3 contribution:2 minimize:1 v5a:1 characteristic:1 yield:2 reach:3 ed:3 energy:31 pp:4 appears:2 higher:2 box:1 generality:1 furthermore:1 hand:1 parse:31 alb:1 effect:1 ye:1 true:1 illustrated:1 hadley:1 noted:1 invaluable:1 harmonic:2 interpret:1 refer:1 rene:1 cambridge:1 ai:1 similarly:2 language:4 stable:14 hashed:2 scandinavian:1 surface:2 
an2:1 add:5 certain:3 binary:2 seen:1 minimum:8 unrestricted:1 greater:1 morgan:1 determine:3 maximize:1 technical:1 academic:1 calculation:1 arvind:1 fraser:3 ae:4 essentially:1 represent:11 rest:1 pass:1 strict:3 comment:1 undergo:1 cowan:1 integer:1 easy:1 psychology:1 det:3 whether:3 eigenvectors:2 four:2 merely:1 master:1 hi:4 distinguish:1 represenations:1 x2:5 department:1 according:3 honavar:1 legendre:15 describes:2 explained:1 equation:15 agree:1 mechanism:1 serf:1 operation:1 multiplied:1 appropriate:2 original:1 convolve:1 include:4 ensure:1 concatenated:2 yz:1 classical:1 tensor:13 move:1 added:3 question:1 occurs:1 gradient:1 reason:2 toward:1 minimizing:1 equivalently:1 unfortunately:2 robert:1 negative:12 perform:2 contend:1 vertical:3 descent:1 situation:1 anb:1 canada:2 mpr:1 pair:1 hanson:1 suggested:1 below:2 kauffman:1 smolensky:19 built:1 including:2 critical:2 axis:3 acknowledgement:1 determining:5 loss:1 foundation:1 principle:1 heavy:2 production:6 compatible:1 course:1 succinctly:1 free:6 copy:1 side:2 formal:2 fall:1 dimension:3 valid:17 computes:1 author:1 coincide:1 san:1 bm:1 implicitly:1 xi:1 why:2 table:4 nature:1 ca:1 miyata:15 domain:1 whole:1 paul:1 child:4 fig:1 tl:4 purportedly:1 sub:1 xl:3 ib:1 third:1 formula:1 xt:2 symbol:13 admits:2 gupta:1 consist:1 adding:2 pierce:1 subtree:3 illustrates:1 boston:1 intersection:1 unexpected:1 amsterdam:1 binding:1 satisfies:2 xix2:3 ma:1 consequently:2 invalid:2 towards:1 labelled:5 change:1 called:1 la:1 preparation:1 |
An Information-theoretic Learning
Algorithm for Neural Network
Classification
David J. Miller
Department of Electrical Engineering
The Pennsylvania State University
State College, PA 16802
Ajit Rao, Kenneth Rose, and Allen Gersho
Department of Electrical and Computer Engineering
University of California
Santa Barbara, CA 93106
Abstract
A new learning algorithm is developed for the design of statistical
classifiers minimizing the rate of misclassification. The method,
which is based on ideas from information theory and analogies to
statistical physics, assigns data to classes in probability. The distributions are chosen to minimize the expected classification error
while simultaneously enforcing the classifier's structure and a level
of "randomness" measured by Shannon's entropy. Achievement of
the classifier structure is quantified by an associated cost. The constrained optimization problem is equivalent to the minimization of
a Helmholtz free energy, and the resulting optimization method
is a basic extension of the deterministic annealing algorithm that
explicitly enforces structural constraints on assignments while reducing the entropy and expected cost with temperature. In the
limit of low temperature, the error rate is minimized directly and a
hard classifier with the requisite structure is obtained. This learning algorithm can be used to design a variety of classifier structures.
The approach is compared with standard methods for radial basis
function design and is demonstrated to substantially outperform
other design methods on several benchmark examples, while often retaining design complexity comparable to, or only moderately
greater than that of strict descent-based methods.
1 Introduction
The problem of designing a statistical classifier to minimize the probability of misclassification or a more general risk measure has been a topic of continuing interest
since the 1950s. Recently, with the increase in power of serial and parallel computing
resources, a number of complex neural network classifier structures have been proposed, along with associated learning algorithms to design them. While these structures offer great potential for classification, this potential cannot be fully realized
without effective learning procedures well-matched to the minimum classification-error objective. Methods such as back propagation which approximate class targets
in a squared error sense do not directly minimize the probability of error. Rather,
it has been shown that these approaches design networks to approximate the class
a posteriori probabilities. The probability estimates can then be used to form a decision rule. While large networks can in principle accurately approximate the Bayes
discriminant, in practice the network size must be constrained to avoid overfitting
the (finite) training set. Thus, discriminative learning techniques, e.g. (Juang and
Katagiri, 1992), which seek to directly minimize classification error may achieve
better results. However, these methods may still be susceptible to finding shallow
local minima far from the global minimum.
As an alternative to strict descent-based procedures, we propose a new deterministic learning algorithm for statistical classifier design with a demonstrated potential
for avoiding local optima of the cost. Several deterministic, annealing-based techniques have been proposed for avoiding nonglobal optima in computer vision and
image processing (Yuille, 1990), (Geiger and Girosi,1991), in combinatorial optimization, and elsewhere. Our approach is derived based on ideas from information
theory and statistical physics, and builds on the probabilistic framework of the deterministic annealing (DA) approach to clustering and related problems (Rose et
al., 1990,1992,1993). In the DA approach for data clustering, the probability distributions are chosen to minimize the expected clustering cost, given a constraint
on the level of randomness, as measured by Shannon's entropy 1.
In this work, the DA approach is extended in a novel way, most significantly to
incorporate structural constraints on data assignments, but also to minimize the
probability of error as the cost. While the general approach we suggest is likely
applicable to problems of structured vector quantization and regression as well, we
focus on the classification problem here. Most design methods have been developed
for specific classifier structures. In this work, we will develop a general approach but
only demonstrate results for RBF classifiers. The design of nearest prototype and
MLP classifiers is considered in (Miller et al., 1995a,b). Our method provides substantial performance gains over conventional designs for all of these structures, while
retaining design complexity in many cases comparable to the strict descent methods. Our approach often designs small networks to achieve training set performance
that can only be obtained by a much larger network designed in a conventional way.
The design of smaller networks may translate to superior performance outside the
training set.
¹Note that in (Rose et al., 1990, 1992, 1993), the DA method was formally derived using
the maximum entropy principle. Here we emphasize the alternative, but mathematically
equivalent description that the chosen distributions minimize the expected cost given constrained entropy. This formulation may have more intuitive appeal for the optimization
problem at hand.
2 Classifier Design Formulation

2.1 Problem Statement
Let T = {(x, c)} be a training set of N labelled vectors, where x ∈ R^n is a feature
vector and c ∈ I is its class label from an index set I. A classifier is a mapping
C : R^n → I, which assigns a class label in I to each vector in R^n. Typically, the
classifier is represented by a set of model parameters Λ. The classifier specifies a
partitioning of the feature space into regions R_j = {x ∈ R^n : C(x) = j}, where
∪_j R_j = R^n and R_j ∩ R_k = ∅ for j ≠ k. It also induces a partitioning of the
training set into sets T_j ⊂ T, where T_j = {(x, c) : x ∈ R_j, (x, c) ∈ T}. A training
pair (x, c) ∈ T is misclassified if C(x) ≠ c. The performance measure of primary
interest is the empirical error fraction P_e of the classifier, i.e. the fraction of the
training set (for generalization purposes, the fraction of the test set) which is
misclassified:

    P_e = (1/N) Σ_{(x,c)∈T} δ(c, C(x)) = (1/N) Σ_{j∈I} Σ_{(x,c)∈T_j} δ(c, j),    (1)

where δ(c, j) = 1 if c ≠ j and 0 otherwise. In this work, we will assume that the
classifier produces an output F_j(x) associated with each class, and uses a
"winner-take-all" classification rule:

    R_j = {x ∈ R^n : F_j(x) ≥ F_k(x) ∀k ∈ I}.    (2)

This rule is consistent with MLP and RBF-based classification.
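The winner-take-all rule (2) and the empirical error fraction (1) can be sketched in a few lines; the outputs and labels below are made up for illustration and are not data from the paper:

```python
import numpy as np

def classify(F):
    """Winner-take-all rule (2): assign each x to the class whose output
    F_j(x) is largest.  F is an (N, |I|) array of classifier outputs."""
    return np.argmax(F, axis=1)

def empirical_error(F, labels):
    """Empirical error fraction P_e of equation (1): the fraction of
    samples whose winning class differs from the true label."""
    return np.mean(classify(F) != labels)

# Toy example: 4 samples, 3 classes; only the third sample is misclassified.
F = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4],
              [0.5, 0.4, 0.1]])
labels = np.array([0, 1, 1, 0])
print(empirical_error(F, labels))  # 0.25
```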
2.2 Randomized Classifier Partition
As in the original DA approach for clustering (Rose et al., 1990, 1992), we cast
the optimization problem in a framework in which data are assigned to classes
in probability. Accordingly, we define the probabilities of association between a
feature x and the class regions, i.e. {P[x ∈ R_j]}. As our design method, which
optimizes over these probabilities, must ultimately form a classifier that makes
"hard" decisions based on a specified network model, the distributions must be
chosen to be consistent with the decision rule of the model. In other words, we
need to introduce randomness into the classifier's partition. Clearly, there are many
ways one could define probability distributions which are consistent with the hard
partition at some limit. We use an information-theoretic approach. We measure the
randomness or uncertainty by Shannon's entropy, and determine the distribution
for a given level of entropy. At the limit of zero entropy we should recover a hard
partition. For now, suppose that the values of the model parameters Λ have been
fixed. We can then write an objective function whose maximization determines the
hard partition for a given Λ:

    F_h = (1/N) Σ_{j∈I} Σ_{(x,c)∈T_j} F_j(x).    (3)

Note specifically that maximizing (3) over all possible partitions captures the decision rule of (2). The probabilistic generalization of (3) is

    F = (1/N) Σ_{(x,c)∈T} Σ_j P[x ∈ R_j] F_j(x),    (4)

where the (randomized) partition is now represented by association probabilities,
and the corresponding entropy is

    H = -(1/N) Σ_{(x,c)∈T} Σ_j P[x ∈ R_j] log P[x ∈ R_j].    (5)

We determine the distribution at a given level of randomness as the one which
maximizes F while maintaining H at a prescribed level Ĥ:

    max_{P[x∈R_j]} F subject to H = Ĥ.    (6)

The result is the best probabilistic partition, in the sense of F, at the specified level
of randomness. For Ĥ = 0 we get back the hard partition maximizing (3). At any
Ĥ, the solution of (6) is the Gibbs distribution

    P[x ∈ R_j] ≡ P_{j|x}(Λ) = e^{γ F_j(x)} / Σ_k e^{γ F_k(x)},    (7)

where γ is the Lagrange multiplier. For γ → 0, the associations become increasingly uniform, while for γ → ∞, they revert to hard classifications, equivalent to
application of the rule in (2). Note that the probabilities depend on Λ through the
network outputs. Here we have emphasized this dependence through our choice of
concise notation.
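The Gibbs distribution (7) is a softmax over the scaled network outputs. The sketch below (with made-up output values, not from the paper) shows how γ interpolates between the uniform and the hard partition:

```python
import numpy as np

def gibbs_associations(F, gamma):
    """Association probabilities of equation (7):
    P[x in R_j] = exp(gamma * F_j(x)) / sum_k exp(gamma * F_k(x)).
    Subtracting the row maximum first keeps the exponentials stable."""
    z = gamma * (F - F.max(axis=1, keepdims=True))
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

F = np.array([[1.0, 0.5, 0.2]])
print(gibbs_associations(F, 0.0))    # gamma -> 0: uniform associations
print(gibbs_associations(F, 100.0))  # large gamma: nearly hard, rule (2)
```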
2.3 Information-Theoretic Classifier Design
Until now we have formulated a controlled way of introducing randomness into
the classifier's partition while enforcing its structural constraint. However, the
derivation assumed that the model parameters were given, and thus produced only
the form of the distribution P_{j|x}(Λ), without actually prescribing how to choose
the values of its parameter set. Moreover the derivation did not consider the ultimate
goal of minimizing the probability of error. Here we remedy both shortcomings.
The method we suggest gradually enforces formation of a hard classifier minimizing
the probability of error. We start with a highly random classifier and a high expected
misclassification cost. We then gradually reduce both the randomness and the cost
in a deterministic learning process which enforces formation of a hard classifier
with the requisite structure. As before, we need to introduce randomness into the
partition while enforcing the classifier's structure, only now we are also interested
in minimizing the expected misclassification cost. While satisfying these multiple
objectives may appear to be a formidable task, the problem is greatly simplified by
restricting the choice of random classifiers to the set of distributions {P_{j|x}(Λ)} as
given in (7) - these random classifiers naturally enforce the structural constraint
through γ. Thus, from the parametrized set {P_{j|x}(Λ)}, we seek that distribution
which minimizes the average misclassification cost while constraining the entropy:

    min_{Λ,γ} <P_e> subject to H = Ĥ.    (8)

The solution yields the best random classifier in the sense of minimum <P_e> for a
given Ĥ. At the limit of zero entropy, we should get the best hard classifier in the
sense of P_e with the desired structure, i.e. satisfying (2).
The constrained minimization (8) is equivalent to the unconstrained minimization
of the Lagrangian:
    min_{Λ,γ} L ≡ min_{Λ,γ} β<P_e> - H,    (9)
where β is the Lagrange multiplier associated with (8). For β = 0, the sole objective is entropy maximization, which is achieved by the uniform distribution. This
solution, which is the global minimum for L at β = 0, can be obtained by choosing γ = 0. At the other end of the spectrum, for β → ∞, the sole objective is
to minimize <P_e>, and is achieved by choosing a non-random (hard) classifier
(hence minimizing P_e). The hard solution satisfies the classification rule (2) and is
obtained for γ → ∞.
Motivation for minimizing the Lagrangian can be obtained from a physical perspective by noting that L is the Helmholtz free energy of a simulated system, with
<P_e> the "energy", H the system entropy, and 1/β the "temperature". Thus, from
this physical view we can suggest a deterministic annealing (DA) process which
involves minimizing L starting at the global minimum for β = 0 (high temperature)
and tracking the solution while increasing β towards infinity (zero temperature).
In this way, we obtain a sequence of solutions of decreasing entropy and average
misclassification cost. Each such solution is the best random classifier in the sense
of <P_e> for a given level of randomness. The annealing process is useful for
avoiding local optima of the cost <P_e>, and minimizes <P_e> directly at low
temperature. While this annealing process ostensibly involves the quantities H and
<P_e>, the restriction to {P_{j|x}(Λ)} from (7) ensures that the process also enforces
the structural constraint on the classifier in a controlled way. Note in particular
that γ has not lost its interpretation as a Lagrange multiplier determining F. Thus,
γ = 0 means that F is unconstrained - we are free to choose the uniform distribution. Similarly, sending γ → ∞ requires maximizing F - hence the hard solution.
Since γ is chosen to minimize L, this parameter effectively determines the level of
F - the level of structural constraint - consistent with H and <P_e> for a given
β. As β is increased, the entropy constraint is relaxed, allowing greater satisfaction
of both the minimum <P_e> and maximum F objectives. Thus, annealing in β
gradually enforces both the structural constraint (via γ) and the minimum <P_e>
objective².
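As a rough illustration of this annealing process, the sketch below minimizes the free energy L = β<P_e> - H of equation (9) for a toy linear two-class model, raising β geometrically. The data, the linear stand-in for the RBF model, the finite-difference descent, and the step sizes are all invented for illustration; this is not the paper's design procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-class problem in the plane: class 0 near (-1, 0), class 1 near (+1, 0).
X = np.vstack([rng.normal(-1.0, 0.6, (40, 2)),
               rng.normal(+1.0, 0.6, (40, 2))])
y = np.repeat([0, 1], 40)

def free_energy(W, beta, gamma):
    """Lagrangian (9): L = beta*<P_e> - H, with the Gibbs associations of (7)
    built from linear discriminants F_j(x) = w_j . x (a stand-in for the RBF)."""
    A = gamma * (X @ W.T)
    A -= A.max(axis=1, keepdims=True)                        # numerical stability
    P = np.exp(A)
    P /= P.sum(axis=1, keepdims=True)
    expected_pe = np.mean(1.0 - P[np.arange(len(y)), y])     # <P_e>
    H = -np.mean((P * np.log(P + 1e-12)).sum(axis=1))        # entropy (5)
    return beta * expected_pe - H

# Annealing: minimize L at each beta, then raise beta geometrically,
# mirroring the paper's schedule beta(n+1) = alpha*beta(n), alpha ~ 1.1.
W, beta, gamma, eps = rng.normal(0, 0.1, (2, 2)), 0.05, 1.0, 1e-4
for _ in range(40):
    for _ in range(25):                      # crude normalized-gradient descent on L
        g = np.zeros_like(W)
        for idx in np.ndindex(*W.shape):     # finite-difference gradient
            Wp = W.copy()
            Wp[idx] += eps
            g[idx] = (free_energy(Wp, beta, gamma) - free_energy(W, beta, gamma)) / eps
        W -= 0.05 * g / (np.linalg.norm(g) + 1e-12)
    beta *= 1.1
    gamma *= 1.1                             # harden the partition as beta grows

hard = np.argmax(X @ W.T, axis=1)            # zero-temperature (hard) classifier
print("training error:", np.mean(hard != y))
```

At small β the entropy term dominates and the partition stays nearly uniform; as β and γ grow, the <P_e> term takes over and the final argmax rule recovers a hard classifier on this separable toy problem.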
Our formulation clearly identifies what distinguishes the annealing approach from
direct descent procedures. Note that a descent method could be obtained by simply
neglecting the constraint on the entropy, instead choosing to directly minimize <
Pe > over the parameter set. This minimization will directly lead to a hard classifier,
and is akin to the method described in (Juang and Katagiri, 1992) as well as other
related approaches which attempt to directly minimize a smoothed probability of
error cost. However, as we will experimentally verify through simulations, our
annealing approach outperforms design based on directly minimizing <P_e>.
For conciseness, we will not derive necessary optimality conditions for minimizing
the Lagrangian at a give temperature, nor will we specialize the formulation for
individual classification structures here. The reader is referred to (Miller et al.,
1995a) for these details.
3 Experimental Comparisons
We demonstrate the performance of our design approach in comparison with other
methods for the normalized RBF structure (Moody and Darken, 1989). For the DA
method, steepest descent was used to minimize L at a sequence of exponentially
increasing β, given by β(n+1) = αβ(n), for α between 1.05 and 1.1. We have
found that much of the optimization occurs at or near a critical temperature in the
²While not shown here, the method does converge directly for β → ∞, and at this
limit enforces the classifier's structure.
    Method  |  M  | P_e (train) | P_e (test)
    --------+-----+-------------+-----------
    DA      |  4  |   0.11      |  0.13
    TR-RBF  | 10  |   0.162     |  0.165
    TR-RBF  | 30  |   0.145     |  0.168
    TR-RBF  | 50  |   0.129     |  0.179
    MD-RBF  | 10  |   0.3       |  0.37
    MD-RBF  | 50  |   0.19      |  0.18
    G-Pe    |  4  |   0.33      |  0.35
    G-Pe    | 10  |   0.18      |  0.20
    G-Pe    | 30  |   0.028     |  0.167

Table 1: A comparison of DA with known design techniques for RBF classification
on the 40-dimensional noisy waveform data from (Breiman et al., 1980).
solution process. Beyond this critical temperature, the annealing process can often
be "quenched" to zero temperature by sending I ---+ 00 without incurring significant
performance loss. Quenching the process often makes the design complexity of our
method comparable to that of descent-based methods such as back propagation or
gradient descent on <P_e>.
We have compared our RBF design approach with the method in (Moody
and Darken, 1989) (MD-RBF), with a method described in (Tarassenko and
Roberts, 1994) (TR-RBF), with the approach in (Musavi et al., 1992), and with
steepest descent on <P_e> (G-RBF). MD-RBF combines unsupervised learning
of receptive field parameters with supervised learning of the weights from the
receptive fields so as to minimize the squared distance to target class outputs. The
primary advantage of this approach is its modest design complexity. However, the
receptive fields are not optimized in a supervised fashion, which can cause performance degradation. TR-RBF optimizes all of the RBF parameters to approximate
target class outputs. This design is more complex than MD-RBF and achieves better performance for a given model size. However, as aforementioned, the TR-RBF
design objective is not equivalent to minimizing Pe , but rather to approximating
the Bayes-optimal discriminant. While direct descent on <P_e> may minimize
the "right" objective, problems of local optima may be quite severe. In fact, we
have found that the performance of all of these methods can be quite poor without
a judicious initialization. For all of these methods, we have employed the unsupervised learning phase described in (Moody and Darken, 1989) (based on Isodata
clustering and variance estimation) as model initialization. Then, steepest descent
was performed on the respective cost surface. We have found that the complexity
of our design is typically 1-5 times that of TR-RBF or G-RBF (though occasionally
our design is actually faster than G-RBF). Accordingly, we have chosen the best
results based on five random initializations for these techniques, and compared with
the single DA design run.
One example reported here is the 40D "noisy" waveform data used in (Breiman et
al., 1980) (obtained from the UC-Irvine machine learning database repository).
split the 5000 vectors into equal size training and test sets. Our results in Table
I demonstrate quite substantial performance gains over all the other methods, and
performance quite close to the estimated Bayes rate of 14%. Note in particular
that the other methods perform quite poorly for a small number of receptive fields
(M), and need to increase M to achieve training set performance comparable to
our approach. However, performance on the test set does not necessarily improve,
and may degrade for increasing M.
To further justify this claim, we compared our design with results reported in
(Musavi et al., 1992), for the two and eight dimensional mixture examples. For
the 2D example, our method achieved P_e,train = 6.0% for a 400 point training set
and P_e,test = 6.1% on a 20,000 point test set, using M = 3 units (These results
are near-optimal, based on the Bayes rate.). By contrast, the method of Musavi et
al. used 86 receptive fields and achieved P_e,test = 9.26%. For the 8D example and
M = 5, our method achieved P_e,train = 8% and P_e,test = 9.4% (again near-optimal),
while the method in (Musavi et al., 1992) achieved P_e,test = 12.0% using M = 128.
In summary, we have proposed a new, information-theoretic learning algorithm for
classifier design, demonstrated to outperform other design methods, and with general applicability to a variety of structures. Future work may investigate important
applications, such as recognition problems for speech and images. Moreover, our
extension of DA to incorporate structure is likely applicable to structured vector
quantizer design and to regression modelling. These problems will be considered in
future work.
Acknowledgements
This work was supported in part by the National Science Foundation under grant
no. NCR-9314335, the University of California MICRO program, DSP Group,
Inc., Echo Speech Corporation, Moseley Associates, National Semiconductor Corp.,
Qualcomm, Inc., Rockwell International Corporation, Speech Technology Labs, and
Texas Instruments, Inc.
References
L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. The Wadsworth Statistics/Probability Series, Belmont,CA., 1980.
D. Geiger and F. Girosi. Parallel and deterministic algorithms from MRFs: Surface
reconstruction. IEEE Trans. on Patt. Anal. and Mach. Intell., 13:401-412, 1991.
B.-H. Juang and S. Katagiri. Discriminative learning for minimum error classification. IEEE Trans. on Sig. Proc., 40:3043-3054, 1992.
D. Miller, A. Rao, K. Rose, and A. Gersho. A global optimization technique for
statistical classifier design. (Submitted for publication.), 1995.
D. Miller, A. Rao, K. Rose, and A. Gersho. A maximum entropy framework for
optimal statistical classification. In IEEE Workshop on Neural Networks for Signal
Processing, 1995.
J. Moody and C. J. Darken. Fast learning in locally-tuned processing units. Neural
Comp., 1:281-294, 1989.
M. T. Musavi, W. Ahmed, K. H. Chan, K. B. Faris, and D. M. Hummels. On the
training of radial basis function classifiers. Neural Networks, 5:595-604, 1992.
K. Rose, E. Gurewitz, and G. C. Fox. Statistical mechanics and phase transitions
in clustering. Phys. Rev. Lett., 65:945-948, 1990.
K. Rose, E. Gurewitz, and G. C. Fox. Vector quantization by deterministic annealing. IEEE Trans. on Inform. Theory, 38:1249-1258, 1992.
K. Rose, E. Gurewitz, and G. C. Fox. Constrained clustering as an optimization
method. IEEE Trans. on Patt. Anal. and Mach. Intell., 15:785-794, 1993.
L. Tarassenko and S. Roberts. Supervised and unsupervised learning in radial basis
function classifiers. IEE Proc.-Vis. Image Sig. Proc., 141:210-216, 1994.
A. L. Yuille. Generalized deformable models, statistical physics, and matching
problems. Neural Comp., 2:1-24, 1990.
Experiments with Neural Networks for Real Time Implementation of Control
P. K. Campbell, M. Dale, H. L. Ferra and A. Kowalczyk
Telstra Research Laboratories
770 Blackburn Road Clayton, Vic. 3168, Australia
{p.campbell, m.dale, h.ferra, a.kowalczyk}@trl.oz.au
Abstract
This paper describes a neural network based controller for allocating
capacity in a telecommunications network. This system was proposed in
order to overcome a "real time" response constraint. Two basic
architectures are evaluated: 1) a feedforward network-heuristic and; 2) a
feedforward network-recurrent network. These architectures are
compared against a linear programming (LP) optimiser as a benchmark.
This LP optimiser was also used as a teacher to label the data samples
for the feedforward neural network training algorithm. It is found that
the systems are able to provide a traffic throughput of 99% and 95%,
respectively, of the throughput obtained by the linear programming
solution. Once trained, the neural network based solutions are found in a
fraction of the time required by the LP optimiser.
1 Introduction
Among the many virtues of neural networks are their efficiency, in terms of both execution
time and required memory for storing a structure, and their practical ability to approximate
complex functions. A typical drawback is the usually "data hungry" training algorithm.
However, if training data can be computer generated off line, then this problem may be
overcome. In many applications the algorithm used to generate the solution may be
impractical to implement in real time. In such cases a neural network substitute can
become crucial for the feasibility of the project. This paper presents preliminary results for
a non-linear optimization problem using a neural network. The application in question is
that of capacity allocation in an optical communications network. The work in this area is
continuing and so far we have only explored a few possibilities.
2 Application: Bandwidth Allocation in SDH Networks
Synchronous Digital Hierarchy (SDH) is a new standard for digital transmission over
optical fibres [3] adopted for Australia and Europe equivalent to the SONET
(Synchronous Optical NETwork) standard in North America. The architecture of the
particular SDH network researched in this paper is shown in Figure 1 (a).
1) Nodes at the periphery of the SDH network are switches that handle individual calls.
2) Each switch concentrates traffic for another switch into a number of streams.
3)
Each stream is then transferred to a Digital Cross-Connect (DXC) for switching and
transmission to its destination by allocating to it one of several alternative virtual
paths.
The task at hand is the dynamic allocation of capacities to these virtual paths in order to
maximize SDH network throughput.
This is a non-linear optimization task since the virtual path capacities and the constraints,
i.e. the physical limit on capacity of links between DXC's, are quantized, and the objective
function (Erlang blocking) depends in a highly non-linear fashion on the allocated
capacities and demands. Such tasks can be solved 'optimally' with the use of classical
linear programming techniques [5], but such an approach is time-consuming - for large
SDH networks the task could even require hours to complete.
One of the major features of an SDH network is that it can be remotely reconfigured using
software controls. Reconfiguration of the SDH network can become necessary when
traffic demands vary, or when failures occur in the DXC's or the links connecting them.
Reconfiguration in the case of failure must be extremely fast, with a need for restoration
times under 60 ms [1].
Figure 1: (a) Example of an Inter-City SDH/SONET Network Topology used in
experiments. (b) Example of an architecture of the mask perceptron generated in
experiments.
In our particular case, there are three virtual paths allocated between any pair of switches,
each using a different set of links between DXC's of the SDH network. Calls from one
switch to another can be sent along any of the virtual paths, leading to 126 paths in total (7
switches to 6 other switches, each with 3 paths).
The path capacities are normally set to give a predefined throughput. This is known as the
"steady state". If links in the SDH network become partially damaged or completely cut,
the operation of the SDH network moves away from the steady state and the path
capacities must be reconfigured to satisfy the traffic demands subject to the following
constraints:
(i) Capacities have integer values (between 0 and 64 with each unit corresponding to a
2 Mb/s stream, or 30 Erlangs),
(ii) The total capacity of all virtual paths through any one link of the SDH network
cannot exceed the physical capacity of that link.
The neural network training data consisted of 13 link capacities and 42 traffic demand
values, representing situations in which the operation of one or more links is degraded
(completely or partially). The output data consisted of 126 integer values representing the
difference between the steady state path capacities and the final allocated path capacities.
3 Previous Work
The problem of optimal SDH network reconfiguration has been researched already. In
particular Gopal et al. proposed a heuristic greedy search algorithm [4] to solve this
non-linear integer programming problem. Herzberg [5] reformulated this non-linear integer
optimization problem as a linear programming (LP) task, Herzberg and Bye [6]
investigated application of a simplex algorithm to solve the LP problem, whilst Bye [2]
considered an application of a Hopfield neural network for this task, and finally Leckie [8]
used another set of AI-inspired heuristics to solve the optimization task.
All of these approaches have practical deficiencies; the linear programming is slow, while
the heuristic approaches are relatively inaccurate and the Hopfield neural network method
(simulated on a serial computer) suffers from both problems.
In a previous paper Campbell et al. [10] investigated application of a mask perceptron to
the problem of reconfiguration for a "toy" SDH network. The work presented here
expands on the work in that paper, with the idea of using a second stage mask perceptron
in a recurrent mode to reduce link violations/under-utilizations.
4 The Neural Controller Architecture
Instead of using the neural network to solve the optimization task directly, e.g. as a
substitute for the simplex algorithm, it is taught to replicate the optimal solution provided by the LP.
We decided to use a two stage approach in our experiments. For the first stage we
developed a feedforward network able to produce an approximate solution. More
precisely, we used a collection of 2000 random examples for which the linear
programming solution of capacity allocations had been pre-computed to develop a
feedforward neural network able to approximate these solutions.
Then, for a new example, such an "approximate" neural network solution was rounded to
the nearest integer, to satisfy constraint (i), and used to seed the second stage providing
refinement and enforcement of constraint (ii).
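The round-and-seed step can be sketched as follows, assuming (as Section 2 suggests) that the stage-1 network predicts the difference from the steady-state capacities; all names here are hypothetical stand-ins.

```python
# Minimal sketch of the stage-1 output preparation: the network predicts the
# *difference* from the steady state, which is rounded to satisfy constraint (i)
# and then handed to the refinement stage. `stage1_net` is a stand-in.

def stage1_seed(steady_state, stage1_net, inputs):
    delta = stage1_net(inputs)                     # real-valued corrections
    seed = [int(round(s + d)) for s, d in zip(steady_state, delta)]
    return [min(64, max(0, c)) for c in seed]      # clamp into [0, 64]

steady = [10, 20, 30]
fake_net = lambda x: [0.4, -1.6, 0.2]              # fake stage-1 output
print(stage1_seed(steady, fake_net, None))         # [10, 18, 30]
```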
For the second stage experiments we initially used a heuristic module based on the Gopal
et al. approach [4]. The heuristic firstly reduces the capacities assigned to all paths which
cause a physical capacity violation on any links, then subsequently increases the capacities
assigned to paths across links which are being under-utilized.
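In code, this two-phase repair might look roughly like the following sketch; the real heuristic's path ordering and tie-breaking are not specified in the paper, so this is illustrative only.

```python
# Rough sketch of the Gopal et al.-style repair heuristic: first shrink paths
# crossing over-loaded links, then grow paths over links with spare capacity,
# one unit at a time. Ordering and tie-breaking here are arbitrary choices.

def link_load(path_caps, paths, n_links):
    load = [0] * n_links
    for cap, links in zip(path_caps, paths):
        for l in links:
            load[l] += cap
    return load

def repair(path_caps, paths, link_caps):
    caps = list(path_caps)
    changed = True
    while changed:                      # phase 1: remove capacity violations
        changed = False
        load = link_load(caps, paths, len(link_caps))
        for p, links in enumerate(paths):
            if caps[p] > 0 and any(load[l] > link_caps[l] for l in links):
                caps[p] -= 1
                changed = True
                break
    changed = True
    while changed:                      # phase 2: grow under-utilized paths
        changed = False
        load = link_load(caps, paths, len(link_caps))
        for p, links in enumerate(paths):
            if caps[p] < 64 and all(load[l] < link_caps[l] for l in links):
                caps[p] += 1
                changed = True
                break
    return caps

print(repair([5, 5], [[0], [0, 1]], [8, 10]))   # [3, 5]: link 0 was overloaded
```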
We also investigated an approach for the second stage which uses another feedforward
neural network. The teaching signal for the second stage neural network is the difference
between the outputs from the first stage neural network alone and the combined first stage
neural network/heuristic solution. This time the input data consisted of 13 link usage
values (either a link violation or underutilization) and 42 values representing the amount
of traffic lost per path for the current capacity allocations. The second stage neural
network had 126 outputs representing the correction to the first stage neural network's
outputs.
The second stage neural network is run in a recurrent mode, adjusting by small steps the
currently allocated link capacities, thereby attempting to iteratively move closer to the
combined neural-heuristic solution by removing the link violations and under-utilizations
left behind by the first stage network.
The setup used during simulation is shown in Figure 2. For each particular instance tested
the network was initialised with the solution from the first stage neural network. The
offered traffic (demand) and the available maximum link capacities were used to
determine the extent of any link violations or underutilizations as well as the amount of
lost traffic (demand satisfaction). This data formed the initial input to the second stage
network. The outputs of the neural network were then used to check the quality of the
solution, and iteration continued until either no link violations occurred or a preset
maximum number of iterations had been performed.
976
P. CAMPBELL, M. DALE, H. L. FERRA, A. KOWALCZYK
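The loop of Figure 2 can be sketched as below, with `correction_net` a stand-in for the trained second-stage network and a toy violation function in place of the real constraint computation.

```python
# Sketch of the recurrent second-stage loop: starting from the stage-1 seed,
# a correction network repeatedly nudges the allocation until no link
# violation remains or an iteration budget is spent. All names are stand-ins.

def run_recurrent(seed, correction_net, violations, max_iter=10):
    sol = list(seed)
    for _ in range(max_iter):
        v = violations(sol)
        if not any(v):                  # stop once no violation remains
            break
        sol = [s + c for s, c in zip(sol, correction_net(sol, v))]
    return sol

# Toy instance: "violated" until the total allocation drops to 10.
violations = lambda sol: [max(0, sum(sol) - 10)]
correction = lambda sol, v: [-1 for _ in sol]   # crude uniform step down
print(run_recurrent([4, 4, 4], correction, violations))   # [3, 3, 3]
```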
[Figure 2 appears here: the recurrent second-stage arrangement. The offered traffic and link capacities feed a computation of constraint/demand satisfaction, whose outputs at step t-1 (42 demand-satisfaction inputs and 13 link violation/underutilization inputs) drive the second-stage network; its correction (t) is added to solution (t-1) to give solution (t), with solution (0) initialised from stage 1.]
Figure 2. Recurrent Network used for second stage experiments.
When computing the constraint satisfaction the outputs of the neural network were
combined and rounded to give integer link violations/under-utilizations. This means that
in many cases small corrections made by the network are discarded and no further
improvement is possible. In order to overcome this we introduced a scheme whereby
errors (link violations/under-utilizations) are occasionally amplified to allow the network a
chance of removing them. This scheme works as follows:
1) an instance is iterated until it has either no link violations or until 10 iterations have
been performed;
2) if any link violations are still present then the size of the errors is multiplied by an
amplification factor (> 1);
3) a further maximum of 10 iterations are performed;
4) if link violations subsequently persist then the amplification factor is increased;
the procedure repeats until either all link violations are removed or the amplification factor
reaches some fixed value.
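The four-step amplification scheme can be sketched as a loop; the step function, growth factor and cap below are illustrative choices, since the paper does not fix them all.

```python
# Sketch of the error-amplification scheme above: iterate in rounds of 10,
# and whenever violations survive a round, multiply the error signal fed
# back to the network by a growing factor. Factor values are illustrative.

def amplified_iterate(step, has_violation, sol, factor=1.5, max_factor=5.0):
    amp = 1.0
    while amp <= max_factor:
        for _ in range(10):             # steps 1) and 3): rounds of 10
            if not has_violation(sol):
                return sol, True
            sol = step(sol, amp)        # errors scaled by current factor
        amp *= factor                   # steps 2) and 4): raise the factor
    return sol, not has_violation(sol)

# Toy instance: a scalar "violation" reduced faster as amp grows.
sol, ok = amplified_iterate(lambda s, a: s - a, lambda s: s > 0, 25.0)
print(sol, ok)   # 0.0 True
```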
S Description of Neural Networks Generated
The first stage feedforward neural network is a mask perceptron [7], c.f. Figure 1 (b). Each
input is passed through a number of arbitrarily chosen binary threshold units. There were a
total of 738 thresholds for the 55 inputs. The task for the mask perceptron training
algorithm [7] is to select a set of useful thresholds and hidden units out of thousands of
possibilities and then to set weights to minimize the mean-square-error on the training set.
The mask perceptron training algorithm automatically selected 67 of these units for direct
connection to the output units and a further 110 hidden units ("AND" gates) whose
outputs are again connected to the neural network outputs, giving 22,302 connections in
all.
Such neural networks are very rapid to simulate since the only operations required are
comparisons and additions.
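A minimal forward pass in the spirit of this architecture, using only comparisons and additions, might look like the following sketch; the tiny dimensions and weights are illustrative, not the trained network.

```python
# Sketch of a mask-perceptron forward pass: inputs go through fixed binary
# thresholds; some threshold bits feed the output directly, others are
# combined by 'AND' gates. Only comparisons and additions are needed.

def mask_perceptron(x, thresholds, direct, and_gates):
    """thresholds: (input_index, level) pairs producing one bit each.
    direct: (bit_index, weight) pairs added straight to the output.
    and_gates: (bit_index_list, weight) pairs (AND of several bits)."""
    bits = [1 if x[i] >= level else 0 for i, level in thresholds]
    out = 0.0
    for b, w in direct:
        if bits[b]:
            out += w
    for group, w in and_gates:
        if all(bits[b] for b in group):
            out += w
    return out

thresholds = [(0, 0.5), (1, 0.3), (1, 0.8)]
direct = [(0, 1.0), (2, -0.5)]
and_gates = [([0, 1], 2.0)]
print(mask_perceptron([0.7, 0.4], thresholds, direct, and_gates))   # 3.0
```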
For the recurrent network used in the second stage we also used a mask perceptron. The
training algorithm used for the recurrent network was the same as for the first stage, in
particular note that no gradual adaptation was employed. The inputs to the network are
passed through 589 arbitrarily chosen binary threshold units. Of these 35 were selected by
the training algorithm for direct connection to the output units via 4410 weighted links.
6 Results
The results are presented in Table 1 and Figure 3. The values in the table represent the
traffic throughput of the SDH network, for the respective methods, as a percentage of the
throughput determined by the LP solution. Both the neural networks were trained using
2000 instances and tested against a different set of 2000 instances. However for the
recurrent network approximately 20% of these cases still had link violations after
simulation so the values in Table 1 are for the 80% of valid solutions obtained from either
the training or test set.
Solution type                    Training      Test
Feedforward Net/Heuristic        99.08%        98.90%
Feedforward Net/Recurrent Net    94.93% (*)    94.76% (*)
Gopal-S                          96.38%        96.20%
Gopal-O                          85.63%        85.43%
(*) these numbers are for the 1635 training and 1608 test instances (out of 2000) for which the
recurrent network achieved a solution with no link violations after simulation as described in
Section 3.
Table 1. Efficiency of solutions measured by average fraction of the 'optimal'
throughput of the LP solution
As a comparison we implemented two solely heuristic algorithms. We refer to these as
Gopal-S and Gopal-O. Both employ the same scheme described earlier for the Gopal et al.
heuristic. The difference between the two is that Gopal-S uses the steady state solution as
an initial starting point to determine virtual path capacities for a degraded network,
whereas Gopal-O starts from a point where all path capacities are initially set to zero.
Referring to Figure 3, link capacity ratio denotes the total link capacity of the degraded
SDH network relative to the total link capacity of the steady state SDH network. A low
value of link capacity ratio indicates a heavily degraded network. The traffic throughput
ratio denotes the ratio between the throughput obtained by the method in question, and the
throughput of the steady state solution.
Each dot in the graphs in Figure 3 represents one of the 2000 test set cases. It is clear from
the figure that the neural network/heuristic approach is able to find better solutions for
heavily degraded networks than each of the other approaches. Overall the clustering of
dots for the neural network/heuristic combination is tighter (in the y-direction) and closer
to 1.00 than for any of the other methods. The results for the recurrent network are very
encouraging, being qualitatively quite close to those for the Gopal-S algorithm.
All experiments were run on a SPARCStation 20. The neural network training took a few
minutes. During simulation the neural network took an average of 9 ms per test case with
a further 36.5 ms for the heuristic, for a total of 45.5 ms. On average the Gopal-S
algorithm required 55.3 ms and the Gopal-O algorithm required 43.7 ms per test case. The
recurrent network solution required an average of 55.9 ms per test case. The optimal
solutions calculated using the linear programming algorithm took between 2 and 60
seconds per case on a SPARCStation 10.
[Figure 3 appears here: four scatter plots, one per method (Neural Network/Heuristic, Recurrent Neural Network, Gopal-S, Gopal-O), each plotting traffic throughput ratio (roughly 0.70-1.00) against link capacity ratio (0.50-1.00), with one dot per test case.]
Figure 3. Experimental results for the Inter-City SDH network (Fig. 1) on the
independent test set of 2000 random cases. On the x axis we have the ratio
between the total link capacity of the degraded SDH network and the steady state
SDH network. On the y axis we have the ratio between the throughput obtained
by the method in question, and the throughput of the steady state solution.
Fig 3. (a) shows results for the neural network combined with the heuristic
second stage. Fig 3. (b) shows results for the recurrent neural network second
stage. Fig 3. (c) shows results for the heuristic only, initialised by the steady state
(Gopal-S) and Fig 3. (d) has the results for the heuristic initialised by zero
(Gopal-O).
7 Discussion and Conclusions
The combined neural network/heuristic approach performs very well across the whole
range of degrees of SDH network degradation tested. The results obtained in this paper are
consistent with those found in [10]. The average accuracy of approximately 99% and fast solution
generation times (around 45 ms on average) highlight this approach as a possible candidate for
implementation in a real system, especially when one considers the easily achievable
speed increase available from parallelizing the neural network. The mask perceptron used
in these experiments is well suited for simulation on a DSP (or other hardware): the
operations required are only comparisons, calculation of logical "AND" and the
summation of synaptic weights (no multiplications or any non-linear transformations are
required).
The interesting thing to note is the relatively good performance of the recurrent network,
namely that it is able to handle over 80% of cases, achieving very good performance when
compared against the neural network/heuristic solution (95% of the quality of the teacher).
One thing to bear in mind is that the heuristic approach is highly tuned to producing a
solution which satisfies the constraints, changing the capacity of one link at a time until
the desired goal is achieved. On the other hand the recurrent network is generic and does
not target the constraints in such a specific manner, making quite crude global changes in
one hit, and yet is still able to achieve a reasonable level of performance. While the speed
for the recurrent network was lower on average than for the heuristic solution in our
experiments, this is not a major problem since many improvements are still possible and
the results reported here are only preliminary, but serve to show what is possible. It is
planned to continue the SDH network experiment in the future, with more investigation on
the recurrent network for the second stage and also more complex SDH architectures.
Acknowledgments
The research and development reported here has the active support of various sections and
individuals within the Telstra Research Laboratories (TRL), especially Dr. C. Leckie and
Mr. P. Sember, who were responsible for the creation and trialling of the programs designed
to produce the testing and training data. The SDH application was possible due to the
co-operation of a number of our colleagues in TRL, in particular Dr. L. Campbell (who
suggested this particular application), Dr. M. Herzberg and Mr. A. Herschtal.
The permission of the Managing Director, Research and Information Technology, Telstra,
to publish this paper is acknowledged.
References
[1] E. Booker, Cross-connect at a Crossroads, Telephony, Vol. 215, 1988, pp. 63-65.
[2] S. Bye, A Connectionist Approach to SDH Bandwidth Management, Proceedings of the 19th International Conference on Artificial Neural Networks (ICANN-93), Brighton Conference Centre, UK, 1993, pp. 286-290.
[3] R. Gillan, Advanced Network Architectures Exploiting the Synchronous Digital Hierarchy, Telecommunications Journal of Australia 39, 1989, pp. 39-42.
[4] G. Gopal, C. Kim and A. Weinrib, Algorithms for Reconfigurable Networks, Proceedings of the 13th International Teletraffic Congress (ITC-13), Copenhagen, Denmark, 1991, pp. 341-347.
[5] M. Herzberg, Network Bandwidth Management - A New Direction in Network Management, Proceedings of the 6th Australian Teletraffic Research Seminar, Wollongong, Australia, pp. 218-225.
[6] M. Herzberg and S. Bye, Bandwidth Management in Reconfigurable Networks, Australian Telecommunications Research 27, 1993, pp. 57-70.
[7] A. Kowalczyk and H.L. Ferra, Developing Higher Order Networks with Empirically Selected Units, IEEE Transactions on Neural Networks, pp. 698-711, 1994.
[8] C. Leckie, A Connectionist Approach to Telecommunication Network Optimisation, in Complex Systems: Mechanism of Adaptation, R.J. Stonier and X.H. Yu, eds., IOS Press, Amsterdam, 1994.
[9] M. Schwartz, Telecommunications Networks, Addison-Wesley, Reading, Massachusetts, 1987.
[10] P. Campbell, H.L. Ferra, A. Kowalczyk, C. Leckie and P. Sember, Neural Networks in Real Time Decision Making, Proceedings of the International Workshop on Applications of Neural Networks to Telecommunications 2 (IWANNT-95), Ed. J. Alspector et al., Lawrence Erlbaum Associates, New Jersey, 1995, pp. 273-280.
Generalisation of A Class of Continuous
Neural Networks
John Shawe-Taylor
Dept of Computer Science,
Royal Holloway, University of London,
Egham, Surrey TW20 0EX, UK
Email: john@dcs.rhbnc.ac.uk
Jieyu Zhao*
IDSIA, Corso Elvezia 36,
6900-Lugano, Switzerland
Email: jieyu@carota.idsia.ch
Abstract
We propose a way of using boolean circuits to perform real valued
computation in a way that naturally extends their boolean functionality. The functionality of multiple fan in threshold gates in
this model is shown to mimic that of a hardware implementation
of continuous Neural Networks. A Vapnik-Chervonenkis dimension
and sample size analysis for the systems is performed giving best
known sample sizes for a real valued Neural Network. Experimental results confirm the conclusion that the sample sizes required for
the networks are significantly smaller than for sigmoidal networks.
1
Introduction
Recent developments in complexity theory have addressed the question of complexity of computation over the real numbers. More recently attempts have been
made to introduce some computational cost related to the accuracy of the computations [5]. The model proposed in this paper weakens the computational power
still further by relying on classical boolean circuits to perform the computation using a simple encoding of the real values. Using this encoding we also show that
TC0 circuits interpreted in the model correspond to a Neural Network design referred to as Bit Stream Neural Networks, which have been developed for hardware
implementation [8].
With the perspective afforded by the general approach considered here, we are also
able to analyse the Bit Stream Neural Networks (or indeed any other adaptive system based on the technique), giving VC dimension and sample size bounds for PAC
learning. The sample sizes obtained are very similar to those for threshold networks,
*Work performed while at Royal Holloway, University of London
J. SHAWE-TAYLOR, J. ZHAO
268
despite their being derived by very different techniques. They give the best bounds
for neural networks involving smooth activation functions, being significantly lower
than the bounds obtained recently for sigmoidal networks [4, 7].
We subsequently present simulation results showing that Bit Stream Neural Networks based on the technique can be used to solve a standard benchmark problem.
The results of the simulations support the theoretical finding that for the same
sample size generalisation will be better for the Bit Stream Neural Networks than
for classical sigmoidal networks. It should also be stressed that the approach is
very general - being applicable to any boolean circuit - and by its definition employs compact digital hardware. This fact motivates the introduction of the model,
though it will not play an important part in this paper.
2
Definitions and Basic Results
A boolean circuit is a directed acyclic graph whose nodes are referred to as gates,
with a single output node of out-degree zero. The nodes with in-degree zero are
termed input nodes. The nodes that are not input nodes are computational nodes.
There is a boolean function associated with each computational node of arity equal
to its in-degree. The function computed by a boolean network is determined by
assigning (input) values to its input nodes and performing the function at each
computational node once its input values are determined. The result is the value
at the output node. The class TC0 is defined to be those functions that can be
computed by a family of polynomially sized Boolean circuits with unrestricted fan-in and constant depth, where the gates are either NOT or THRESHOLD.
In order to use the boolean circuits to compute with real numbers we use the method
of stochastic computing to encode real numbers as bit streams. The encoding we
will use is to consider the stream of binary bits, for which the l's are generated
independently at random with probability p, as representing the number p. This
is referred to as a Bernoulli sequence of probability p. In this representation, the
multiplication of two independently generated streams can be achieved by a simple
AND gate, since the probability of a 1 on the output stream is equal to p1p2, where
p1 is the probability of a 1 on the first input stream and p2 is the probability of
a 1 on the second input stream. Hence, in this representation the boolean circuit
consisting of a single AND gate can compute the product of its two inputs.
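This AND-gate multiplier is easy to check empirically; the following sketch estimates p1*p2 from the 1-frequency of the ANDed streams (stream length and seed are arbitrary choices).

```python
# Sketch of stochastic computing's AND-gate multiplier: two independent
# Bernoulli bit streams with 1-probabilities p1 and p2 are ANDed, and the
# 1-frequency of the result estimates p1 * p2.

import random

def bernoulli_stream(p, n, rng):
    return [1 if rng.random() < p else 0 for _ in range(n)]

def and_multiply(p1, p2, n=100_000, seed=0):
    rng = random.Random(seed)
    a = bernoulli_stream(p1, n, rng)
    b = bernoulli_stream(p2, n, rng)
    return sum(x & y for x, y in zip(a, b)) / n

est = and_multiply(0.6, 0.5)
print(est)   # close to 0.6 * 0.5 = 0.3
```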
More background information about stochastic computing can be found in the work
of Gaines [1]. The analysis we provide is made by treating the calculations as exact
real valued computations. In a practical (hardware) implementation real bit streams
would have to be generated [3] and the question of the accuracy of a delivered result
arises.
In the applications considered here the output values are used to determine a binary
value by comparing with a threshold of 0.5. Unless the actual output is exactly 1 or 0
(which can happen), then however many bits are collected at the output there is a
slight probability that an incorrect classification will be made. Hence, the number
of bits required is a function of the difference between the actual output and 0.5
and the level of confidence required in the correctness of the classification.
Definition 1 The real function computed by a boolean circuit C, which computes
the boolean function
    f_C : {0,1}^n -> {0,1},
is the function
    g_C : [0,1]^n -> [0,1],
Generalisation of a Class of Continuous Neural Networks
269
obtained by coding each input independently as a Bernoulli sequence and interpreting
the output as a similar sequence.
Hence, by the discussion above we have for the circuit C consisting of a single AND
gate, the function g_C is given by g_C(x1, x2) = x1 x2.
We now give a proposition showing that the definition of real computation given
above is well-defined and generalises the Boolean computation performed by the
circuit.
Proposition 2 The bit stream on the output of a boolean circuit computing a real
function is a Bernoulli sequence. The real function g_C computed by an n input
boolean circuit C can be expressed in terms of the corresponding boolean function
f_C as follows:

    g_C(x) = sum over a in {0,1}^n of f_C(a) P_x(a),
    where P_x(a) = prod_{i=1}^{n} x_i^{a_i} (1 - x_i)^{1 - a_i}.

In particular, g_C restricted to {0,1}^n equals f_C.
Proof: The output bit stream is a Bernoulli sequence, since the behaviour at each
time step is independent of the behaviour at previous time sequences, assuming the
input sequences are independent. Let the probability of a 1 in the output sequence
be p. Hence, g_C(x) = p. At any given time the input to the circuit must be one
of the 2^n possible binary vectors a. P_x(a) gives the probability of the vector a
occurring. Hence, the expected value of the output of the circuit is given in the
proposition statement, but by the properties of a Bernoulli sequence this value is
also p. The final claim holds since P_a(a) = 1, while P_a(a') = 0 for a' != a.
Hence, the function computed by a circuit can be denoted by a polynomial of degree
n, though the representation given above may involve exponentially many terms.
This representation will therefore only be used for theoretical analysis.
3
Bit Stream Neural Networks
In this section we describe a neural network model based on stochastic computing
and show that it corresponds to taking TCo circuits in the framework considered
in Section 2.
A Stochastic Bit Stream Neuron is a processing unit which carries out very simple
operations on its input bit streams. All input bit streams are combined with their
corresponding weight bit streams and then the weighted bits are summed up. The
final total is compared to a threshold value. If the sum is larger than the threshold
the neuron gives an output 1, otherwise 0.
There are two different versions of the Stochastic Bit Stream Neuron corresponding
to the different data representations. The definitions are given as follows.
Definition 3 (AND-SBSN): An n-input AND version Stochastic Bit Stream Neuron has n weights in the range [-1,1] and n inputs in the range [0,1], which are all
unipolar representations of Bernoulli sequences. An extra sign bit is attached to
each weight Bernoulli sequence. The threshold θ is an integer lying between -n and n
which is randomly generated according to the threshold probability density function
φ(θ). The computations performed during each operational cycle are
(1) combining respectively the n bits from n input Bernoulli sequences with the
corresponding n bits from n weight Bernoulli sequences using the AND operation.
(2) assigning n weight sign bits to the corresponding output bits of the AND gate,
summing up all the n signed output bits and then comparing the total with the
randomly generated threshold value. If the total is not less than the threshold value,
the AND-SBSN outputs 1, otherwise it outputs 0.
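A simulation sketch of one AND-SBSN cycle and its long-run output rate follows; the function names and the two-input example are illustrative, and the threshold distribution is passed in as a dictionary phi.

```python
# Sketch of the AND-SBSN: per cycle, each input bit is ANDed with the
# corresponding weight-magnitude bit, the weight's sign is applied, the
# signed bits are summed and compared with a threshold drawn from phi.

import random

def sbsn_cycle(x_bits, w_bits, w_signs, theta):
    total = sum(s * (xb & wb) for xb, wb, s in zip(x_bits, w_bits, w_signs))
    return 1 if total >= theta else 0

def sbsn_output_rate(xs, ws, phi, cycles=50_000, seed=1):
    rng = random.Random(seed)
    thetas, probs = zip(*phi.items())
    ones = 0
    for _ in range(cycles):
        xb = [1 if rng.random() < x else 0 for x in xs]
        wb = [1 if rng.random() < abs(w) else 0 for w in ws]
        sg = [1 if w >= 0 else -1 for w in ws]
        th = rng.choices(thetas, probs)[0]
        ones += sbsn_cycle(xb, wb, sg, th)
    return ones / cycles

# With threshold fixed at 1, output is 1 iff the first (positive) product
# bit is 1 and the second (negative) product bit is 0: 0.72 * 0.9 = 0.648.
rate = sbsn_output_rate([0.9, 0.2], [0.8, -0.5], {1: 1.0})
print(rate)   # close to 0.648
```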
We can now present the main result characterising the functionality of a Stochastic
Bit Stream Neural Network as the real function of an Teo circuit.
Theorem 4 The functionality of a family of feedforward networks of Bit Stream
Neurons with constant depth organised into layers with interconnections only between adjacent layers corresponds to the function g_C for a TC0 circuit C of depth
twice that of the network. The number of input streams is equal to the number
of network inputs while the number of parameters is at most twice the number of
weights.
Proof: Consider first an individual neuron. We construct a circuit whose real
functionality matches that of the neuron. The circuit has two layers. The first
consists of a series of AND gates. Each gate links one input line of the neuron
with its corresponding weight input. The outputs of these gates are linked into a
threshold gate with fixed threshold 2d for the AND-SBSN, where d is the number
of input lines to the neuron. The threshold distribution of the AND SBSN is now
simulated by having a series of 2d additional inputs to the threshold gate. The
number of additional input streams required to simulate the threshold depends on
how general a distribution is allowed for the threshold. We consider three cases:
1. If the threshold is fixed (i.e. not programmable), then no additional inputs
are required, since the actual threshold can be suitably adapted.
2. If the threshold distribution is always focussed on one value (which can
be varied), then an additional flog2(2d)1 (rlog2(d)l) inputs are required to
specify the binary value of this number. A circuit feeding the corresponding
number of 1's to the threshold gate is not hard to construct.
3. In the fully general case any series of 2d + 1 (d + 1) numbers summing to
one can be assigned as the probabilities of the possible values
φ(0), φ(1), ..., φ(t),
where t = 2d for the AND-SBSN. We now construct a circuit which takes t
input streams and passes to the threshold gate the 1-bits of all the inputs up
to the first input stream carrying a 0. No further input is passed to the
threshold gate. In other words, the threshold gate receives s bits of input
iff input streams 1, ..., s have bit 1 and either s = t or input stream
s + 1 has input 0. We now set the probability p_s of stream s as follows:
p_1 = 1 − φ(0),
p_s = (1 − Σ_{i=0}^{s−1} φ(i)) / (1 − Σ_{i=0}^{s−2} φ(i))   for s = 2, ..., t.
With these values the probability of the threshold gate receiving s bits is
φ(s), as required.
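The construction of the auxiliary stream probabilities can be checked mechanically. The sketch below is our own code (names assumed): it computes p_1, ..., p_t from a threshold distribution φ and recovers the distribution of the number of leading 1-streams, which should equal φ:

```python
def stream_probs(phi):
    """Given a threshold distribution phi over 0..t, compute Bernoulli
    probabilities p_1..p_t for the auxiliary streams so that the number
    of 1-bits passed to the threshold gate is distributed as phi."""
    t = len(phi) - 1
    p = []
    tail = 1.0  # running value of 1 - sum_{i<s} phi(i)
    for s in range(1, t + 1):
        new_tail = tail - phi[s - 1]
        p.append(new_tail / tail if tail > 0 else 0.0)
        tail = new_tail
    return p

def bits_distribution(p):
    """Distribution of the number of leading 1-streams: the gate receives
    s bits iff streams 1..s are 1 and either s = t or stream s+1 is 0."""
    t = len(p)
    dist = []
    prefix = 1.0  # probability that streams 1..s are all 1
    for s in range(t + 1):
        stop = (1.0 - p[s]) if s < t else 1.0
        dist.append(prefix * stop)
        if s < t:
            prefix *= p[s]
    return dist
```

For example, `stream_probs([0.2, 0.5, 0.3])` yields stream probabilities whose induced bit-count distribution is again (0.2, 0.5, 0.3).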
Generalisation of a Class of Continuous Neural Networks
271
This completes the replacement of a single neuron. Clearly, we can replace all
neurons in a network in the same manner and construct a network with the required
properties provided connections do not 'shortcut' layers, since this would create
interactions between bits in different time slots. ∎
4 VC Dimension and Sample Sizes
In order to perform a VC Dimension and sample size analysis of the Bit Stream
Neural Networks described in the previous section we introduce the following general
framework .
Definition 5 For a set G of smooth functions f : R^n × R^l → R, the class F is
defined as
F = F_G = {f_w | f_w(x) = f(x, w), f ∈ G}.
The corresponding classification class obtained by taking a fixed set of s of the functions from G, thresholding the corresponding functions from F at 0 and combining them (with the same parameter vector) in some logical formula will be denoted
H_s(F). We will denote H_1(F) by H(F).
In our case we will consider a set of circuits C, each with n + l input connections, n
labelled as the input vector and l identified as parameter input connections. Note
that if circuits have too few input connections, we can pad them with dummy ones.
The set G will then be the set
G = G_C = {g_C | C ∈ C},
while F_{G_C} will be denoted by F_C.
We now quote some of the results of [7] which uses the techniques of Karpinski and
MacIntyre [4] to derive sample sizes for classes of smoothly parametrised functions .
Proposition 6 [7] Let G be the set of polynomials p of degree at most d with
p : R^n × R^l → R and
F = F_G = {p_w | p_w(x) = p(x, w), p ∈ G}.
Hence, there are l adjustable parameters and the input dimension is n. Then the
VC-dimension of the class H_s(F) is bounded above by
log_2(2(2d)^l) + 17 l log_2(s).
Corollary 7 For a set of circuits C, with n input connections and l parameter
connections, the VC-dimension of the class H_s(F_C) is bounded above by
log_2(2(2(n + l))^l) + 17 l log_2(s).
Proof: By Proposition 2 the function g_C computed by a circuit C with t input
connections has the form
g_C(x) = Σ_{a ∈ {0,1}^t} P_x(a) f_C(a),
where
P_x(a) = Π_{i=1}^{t} x_i^{a_i} (1 − x_i)^{1 − a_i}.
Hence, g_C(x) is a polynomial of degree t. In the case considered the number t of
input connections is n + l. The result follows from the proposition. ∎
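The real functionality g_C used in the proof — the probability that the circuit outputs 1 when input bit i is 1 with probability x_i — is easy to state in code. The sketch below is illustrative (the function name is ours); it enumerates all 2^t Boolean inputs, so it is only meant for small t:

```python
from itertools import product

def real_functionality(f, t):
    """Return g_C as a function of real inputs x in [0,1]^t:
    g_C(x) = sum over a in {0,1}^t of prod_i x_i^{a_i}(1-x_i)^{1-a_i} * f(a),
    i.e. the probability that the Boolean function f outputs 1 when bit i
    is independently 1 with probability x_i."""
    def g(x):
        total = 0.0
        for a in product((0, 1), repeat=t):
            p = 1.0
            for xi, ai in zip(x, a):
                p *= xi if ai else (1.0 - xi)
            total += p * f(a)
        return total
    return g

# Example: a 2-input AND gate has real functionality x1 * x2.
g_and = real_functionality(lambda a: a[0] & a[1], 2)
```

As the proof notes, g_C is a polynomial of degree t in the input probabilities, which is exactly what the enumeration above produces.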
J. SHAWE-TAYLOR, J. ZHAO
272
Proposition 8 [7] Let G be the set of polynomials p of degree at most d with
p : R^n × R^l → R and
F = F_G = {p_w | p_w(x) = p(x, w), p ∈ G}.
Hence, there are l adjustable parameters and the input dimension is n. If a function
h ∈ H_s(F) correctly computes a function on a sample of m inputs drawn independently according to a fixed probability distribution, where
m ≥ m_0(ε, δ) = (1 / (ε(1 − √ε))) [2l ln(4eds/ε) + ln(2/δ)],
then with probability at least 1 − δ the error rate of h will be less than ε on inputs
drawn according to the same distribution.
Corollary 9 For a set of circuits C, with n input connections and l parameter
connections, if a function h ∈ H_s(F_C) correctly computes a function on a sample
of m inputs drawn independently according to a fixed probability distribution, where
m ≥ m_0(ε, δ) = (1 / (ε(1 − √ε))) [2l ln(4es(n + l)/ε) + ln(2/δ)],
then with probability at least 1 − δ the error rate of h will be less than ε on inputs
drawn according to the same distribution.
Proof: As in the proof of the previous corollary, we need only observe that the
functions g_C for C ∈ C are polynomials of degree at most n + l. ∎
Note that the best known sample sizes for threshold networks are given in [6]:
m ≥ m_0(ε, δ) = (1 / (ε(1 − √ε))) [2W ln(6N/ε) + ln(1/δ)],
where W is the number of adaptable weights (parameters) and N is the number
of computational nodes in the network. Hence, the bounds given above are almost
identical to those for threshold networks, despite the underlying techniques used to
derive them being entirely different.
One surprising fact about the above results is that the VC dimension and sample
sizes are independent of the complexity of the circuit (except in as much as it must
have the required number of inputs). Hence, additional layers of fixed computation
cannot increase the sample complexity above the bound given.
5 Simulation Results
The Monk's problems, which were the basis of a first international comparison of
learning algorithms, are derived from a domain in which each training example
is represented by six discrete-valued attributes. Each problem involves learning a
binary function defined over this domain, from a sample of training examples of
this function. The 'true' concepts underlying each Monk's problem are given by:
MONK-1: (attribute1 = attribute2) or (attribute5 = 1)
MONK-2: (attributei = 1) for EXACTLY TWO i ∈ {1, 2, ..., 6}
MONK-3: (attribute5 = 3 and attribute4 = 1) or (attribute5 ≠ 4 and attribute2 ≠ 3)
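The three target concepts can be written directly as Python predicates. This is our own illustrative transcription; 0-based tuple indexing stands in for the paper's 1-based attribute numbering:

```python
def monk1(a):
    """a = (a1, ..., a6); true iff a1 = a2 or a5 = 1."""
    return a[0] == a[1] or a[4] == 1

def monk2(a):
    """True iff exactly two attributes equal 1."""
    return sum(ai == 1 for ai in a) == 2

def monk3(a):
    """True iff (a5 = 3 and a4 = 1) or (a5 != 4 and a2 != 3).
    Note the 5% label noise in the MONK-3 training set is a property of
    the data, not of this target concept."""
    return (a[4] == 3 and a[3] == 1) or (a[4] != 4 and a[1] != 3)
```

These predicates define the noise-free labels; the benchmark's training and testing sets sample the six-attribute domain as described below.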
There are 124, 169 and 122 samples in the training sets of MONK-1, MONK-2 and
MONK-3 respectively. The testing set has 432 patterns. The network had 17 input
units, 10 hidden units, 1 output unit, and was fully connected. Two networks were
used for each problem. The first was a standard multi-layer perceptron with sigmoid
activation function trained using the backpropagation algorithm (BP Network).
The second network had the same architecture, but used bit stream neurons in place
of sigmoid ones (BSN Network). The functionality of the neurons was simulated
using probability generating functions to compute the probability values of the bit
streams output at each neuron. The backpropagation algorithm was adapted to
train these networks by computing the derivative of the output probability value
with respect to the individual inputs to that neuron [8].
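One way to realise the probability-generating-function simulation mentioned above is to propagate exact probabilities instead of sampled bit streams. The sketch below is our own construction (argument names assumed): it convolves the signed AND-gate output distributions to obtain the neuron's output-bit probability, a quantity that is smooth in the input probabilities and hence differentiable for backpropagation:

```python
def sbsn_output_prob(input_probs, weight_probs, weight_signs, phi):
    """Exact probability that an AND-SBSN emits a 1-bit, given the
    bit-stream probabilities of its inputs and weight magnitudes, fixed
    weight signs, and a threshold distribution phi over 0..t."""
    # Build the distribution of the signed sum by convolution.
    dist = {0: 1.0}
    for xi, wi, si in zip(input_probs, weight_probs, weight_signs):
        q = xi * wi  # probability that this AND gate outputs 1
        new = {}
        for v, p in dist.items():
            new[v + si] = new.get(v + si, 0.0) + p * q
            new[v] = new.get(v, 0.0) + p * (1.0 - q)
        dist = new
    # The output is 1 when the signed sum is not less than the threshold.
    return sum(pv * pt
               for v, pv in dist.items()
               for thr, pt in enumerate(phi)
               if v >= thr)
```

Differentiating this expression with respect to each input probability gives the gradient signal used to adapt the backpropagation algorithm, as in [8].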
Experiments were performed with and without noise in the training examples.
There is 5% additional noise (misclassifications) in the training set of MONK-3.
The results for the Monk's problems using the moment generating function simulation are shown as follows:

            BP Network             BSN Network
            training   testing     training   testing
MONK-1      100%       86.6%       100%       97.7%
MONK-2      100%       84.2%       100%       100%
MONK-3      97.1%      83.3%       98.4%      98.6%
It can be seen that the generalisation of the BSN network is much better than
that of a general multilayer backpropagation network. The results on the MONK-3
problem are extremely good. The results reported by Hassibi and Stork [2] using a
sophisticated weight-pruning technique are only 93.4% correct for the training set
and 97.2% correct for the testing set.
References
[1] B. R. Gaines, Stochastic Computing Systems, Advances in Information Systems Science 2 (1969) pp. 37-172.
[2] B. Hassibi and D.G. Stork, Second order derivatives for network pruning: Optimal brain surgeon, Advances in Neural Information Processing Systems, Vol
5 (1993) 164-171.
[3] P. Jeavons, D.A. Cohen and J. Shawe-Taylor, Generating Binary Sequences
for Stochastic Computing, IEEE Trans on Information Theory, 40 (3) (1994)
716-720.
[4] M. Karpinski and A. MacIntyre, Bounding VC-Dimension for Neural Networks:
Progress and Prospects, Proceedings of EuroCOLT'95, 1995, pp. 337-341,
Springer Lecture Notes in Artificial Intelligence, 904.
[5] P. Koiran, A Weak Version of the Blum, Shub and Smale Model, ESPRIT
Working Group NeuroCOLT Technical Report Series, NC-TR-94-5, 1994.
[6] J. Shawe-Taylor, Threshold Network Learning in the Presence of Equivalences,
Proceedings of NIPS 4, 1991, pp. 879-886.
[7] J. Shawe-Taylor, Sample Sizes for Sigmoidal Networks, to appear in the Proceedings of Eighth Conference on Computational Learning Theory, COLT'95,
1995.
[8] John Shawe-Taylor, Peter Jeavons and Max van Daalen, "Probabilistic Bit
Stream Neural Chip: Theory", Connection Science, Vol 3, No 3, 1991.
| 1163 |@word version:3 polynomial:5 suitably:1 simulation:4 tr:1 carry:1 moment:1 series:4 chervonenkis:1 tco:3 comparing:2 surprising:1 activation:2 assigning:2 must:2 john:2 happen:1 treating:1 intelligence:1 monk:14 lr:1 node:12 sigmoidal:4 incorrect:1 consists:1 manner:1 introduce:2 expected:1 indeed:1 multi:1 brain:1 relying:1 eurocolt:1 actual:3 provided:1 bounded:2 underlying:2 circuit:29 interpreted:1 developed:1 finding:1 xd:1 exactly:2 esprit:1 uk:2 unit:4 appear:1 despite:2 encoding:3 signed:1 twice:2 equivalence:1 range:2 directed:1 practical:1 testing:4 backpropagation:3 tw20:1 significantly:2 confidence:1 word:1 unipolar:1 cannot:1 independently:5 play:1 exact:1 us:1 idsia:2 cycle:1 connected:1 prospect:1 complexity:4 trained:1 carrying:1 surgeon:1 basis:1 po:2 chip:1 represented:1 train:1 describe:1 london:2 artificial:1 whose:2 larger:1 valued:4 solve:1 otherwise:2 interconnection:1 analyse:1 delivered:1 final:2 sequence:14 propose:1 interaction:1 product:1 combining:2 generating:3 weakens:1 derive:2 ac:1 progress:1 p2:1 involves:1 switzerland:1 correct:2 functionality:6 attribute:1 subsequently:1 vc:7 stochastic:9 behaviour:2 feeding:1 proposition:7 pl:1 hold:1 lying:1 considered:4 claim:1 koiran:1 applicable:1 quote:1 teo:2 correctness:1 create:1 weighted:1 clearly:1 always:1 og:1 corollary:3 encode:1 derived:2 bernoulli:9 pad:1 hidden:1 classification:3 colt:1 denoted:3 wln:1 development:1 summed:1 equal:3 once:1 construct:4 having:1 identical:1 mimic:1 report:1 employ:1 few:1 randomly:2 individual:2 consisting:2 replacement:1 attempt:1 ejs:1 parametrised:1 unless:1 taylor:7 gaines:2 oex:1 theoretical:2 boolean:16 cost:1 too:1 reported:1 combined:1 density:1 international:1 probabilistic:1 receiving:1 zhao:4 derivative:2 coding:1 depends:1 stream:33 performed:5 h1:1 linked:1 accuracy:2 correspond:1 weak:1 email:2 definition:7 surrey:1 corso:1 pp:2 naturally:1 associated:1 proof:5 logical:1 sophisticated:1 adaptable:1 ta:1 specify:1 rand:1 
though:2 working:1 receives:1 concept:1 true:1 hence:11 assigned:1 adjacent:1 during:1 qe:1 interpreting:1 characterising:1 recently:2 sigmoid:2 rl:3 stork:2 attached:1 exponentially:1 cohen:1 slight:1 shawe:8 had:2 recent:1 perspective:1 fgc:1 termed:1 binary:6 seen:1 unrestricted:1 additional:6 bsn:3 determine:1 ii:1 multiple:1 smooth:2 generalises:1 match:1 technical:1 calculation:1 involving:1 basic:1 multilayer:1 karpinski:2 achieved:1 ion:1 background:1 addressed:1 completes:1 extra:1 pass:1 integer:1 presence:1 feedforward:1 misclassifications:1 architecture:1 identified:1 hile:1 xfi:1 six:1 rhbnc:1 passed:1 peter:1 programmable:1 involve:1 hardware:4 macintyre:2 sign:2 dummy:1 correctly:2 discrete:1 vol:2 group:1 threshold:27 blum:1 drawn:4 graph:1 sum:1 extends:1 family:2 almost:1 place:1 bit:36 entirely:1 bound:5 layer:6 fan:1 adapted:2 bp:2 afforded:1 simulate:1 extremely:1 performing:1 according:5 smaller:1 ge:5 operation:2 observe:1 egham:1 gate:17 log2:1 giving:2 classical:2 question:2 link:1 simulated:2 neurocolt:1 collected:1 assuming:1 gel:1 nc:1 fe:6 statement:1 smale:1 implementation:3 design:1 motivates:1 shub:1 adjustable:2 perform:3 neuron:15 benchmark:1 rlog2:1 gc:6 varied:1 required:9 connection:11 nip:1 trans:1 able:1 pattern:1 eighth:1 royal:2 max:1 power:1 representing:1 multiplication:1 fully:2 lecture:1 organised:1 acyclic:1 digital:1 degree:8 thresholding:1 pi:1 lo:1 perceptron:1 taking:2 focussed:1 fg:3 van:1 dimension:10 depth:3 computes:3 made:3 adaptive:1 polynomially:1 pruning:2 compact:1 confirm:1 summing:2 continuous:5 operational:1 cl:1 domain:2 main:1 bounding:1 noise:2 allowed:1 referred:3 cxi:1 hassibi:2 lugano:1 theorem:1 formula:1 cec:1 pac:1 showing:2 arity:1 jeavons:2 vapnik:1 elvezia:1 occurring:1 fanin:1 smoothly:1 fc:2 expressed:1 springer:1 ch:1 corresponds:2 slot:1 sized:1 labelled:1 replace:1 shortcut:1 hard:1 fw:1 generalisation:6 determined:2 except:1 total:3 experimental:1 holloway:2 support:1 stressed:1 dept:1 |
Learning to Predict Visibility and Invisibility from Occlusion Events
Jonathan A. Marshall
Richard K. Alley
Robert S. Hubbard
Department of Computer Science, CB 3175, Sitterson Hall
University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.
marshall@cs.unc.edu, 919-962-1887, fax 919-962-1799
Abstract
Visual occlusion events constitute a major source of depth information.
This paper presents a self-organizing neural network that learns to detect,
represent, and predict the visibility and invisibility relationships that arise
during occlusion events, after a period of exposure to motion sequences
containing occlusion and disocclusion events. The network develops two
parallel opponent channels or "chains" of lateral excitatory connections
for every resolvable motion trajectory. One channel, the "On" chain or
"visible" chain, is activated when a moving stimulus is visible. The other
channel, the "Off" chain or "invisible" chain, carries a persistent, amodal
representation that predicts the motion of a formerly visible stimulus that
becomes invisible due to occlusion. The learning rule uses disinhibition
from the On chain to trigger learning in the Off chain. The On and
Off chain neurons can learn separate associations with object depth ordering. The results are closely related to the recent discovery (Assad &
Maunsell, 1995) of neurons in macaque monkey posterior parietal cortex
that respond selectively to inferred motion of invisible stimuli.
1 INTRODUCTION: LEARNING ABOUT OCCLUSION EVENTS
Visual occlusion events constitute a major source of depth information. Yet little is known about the neural mechanisms by which visual systems use occlusion
events to infer the depth relations among visual objects. What is the structure of
such mechanisms? Some possible answers to this question are revealed through an
analysis of learning rules that can cause such mechanisms to self-organize.
Evidence from psychophysics (Kaplan, 1969; Nakayama & Shimojo, 1992;
Nakayama, Shimojo, & Silverman, 1989; Shimojo, Silverman, & Nakayama,
1988, 1989; Yonas, Craton, & Thompson, 1987) and neurophysiology (Assad &
Maunsell, 1995; Frost, 1993) suggests that the process of determining relative
depth from occlusion events operates at an early stage of visual processing. Marshall (1991) describes evidence that suggests that the same early processing mechanisms maintain a representation of temporarily occluded objects for some amount
of time after they have disappeared behind an occluder, and that these representations of invisible objects interact with other object representations, in much the
same manner as do representations of visible objects. The evidence includes the
phenomena of kinetic subjective contours (Kellman & Cohen, 1984), motion viewed
through a slit (Parks' Camel) (Parks, 1965) , illusory occlusion (Ramachandran, Inada, & Kiama, 1986) , and interocular occlusion sequencing (Shimojo, Silverman, &
Nakayama, 1988).
2 PERCEPTION OF OCCLUSION AND DISOCCLUSION EVENTS: AN ANALYSIS
The neural network model exploits the visual changes that occur at occlusion boundaries to form a mechanism for detecting and representing object visibility/invisibility
information. The set of learning rules used in this model is an extended version of
one that has been used before to describe the formation of neural mechanisms for
a variety of other visual processing functions (Hubbard & Marshall, 1994; Marshall, 1989, 1990ac, 1991, 1992; Martin & Marshall, 1993).
Our analysis is derived from the following visual predictivity principle, which
may be postulated as a fundamental principle of neural organization in visual systems: Visual systems represent the world in terms of predictions of its appearance,
and they reorganize themselves to generate better predictions. To maximize the correctness and completeness of its predictions, a visual system would need to predict
the motions and visibility/invisibility of all objects in a scene. Among other things,
it would need to predict the disappearance of an object moving behind an occluder
and the reappearance of an object emerging from behind an occluder.
A consequence of this postulate is that occluded objects must, at some level,
continue to be represented even though they are invisible. Moreover, the representation of an object must distinguish whether the object is visible or invisible;
otherwise, the visual system could not determine whether its representations predict
visibility or invisibility, which would contravene the predictivity principle. Thus,
simple single-channel prediction schemes like the one described by Marshall (1989,
1990a) are inadequate to represent occlusion and disocclusion events.
3 A MODEL FOR GROUNDED LEARNING TO PREDICT VISIBILITY AND INVISIBILITY
The initial structure of the Visible/Invisible network model is given in Figure 1A.
The network self-organizes in response to a training regime containing many input
sequences representing motion with and without occlusion and disocclusion events.
After a period of self-organization, the specific connections that a neuron receives
(Figure 1B) determine whether it responds to visible or invisible objects. A neuron
that responds to visible objects would have strong bottom-up input connections,
and it would also have strong time-delayed lateral excitatory input connections. A
neuron that responds selectively to invisible objects would not have strong bottomup connections, but it would have strong lateral excitatory input connections. These
lateral inputs would transmit to the neuron evidence that a previously visible object
existed. The neurons that respond to invisible objects must operate in a way that
allows lateral input excitation alone to activate the neurons supraliminally, in the
absence of bottom-up input excitation from actual visible objects.
4 SIMULATION OF A SIMPLIFIED NETWORK
4.1 INITIAL NETWORK STRUCTURE
The simulated network, shown in Figure 2, describes a simplified one-dimensional subnetwork (Marshall & Alley, 1993) of the more general two-dimensional network. Layer 1 is restricted to a set of motion-sensitive neurons
The L+ connections in the simulation have a signal transmission latency of
one time unit. Restricting the lateral connections to a single time delay and to a
single direction limits the simulation to representing a single speed and direction of
motion; these results are therefore preliminary. This restriction reduced the number
of connections and made the simulation much faster.
818
J. A. MARSHALL, R. K. ALLEY, R. S. HUBBARD
Figure 1: Model of a self-organized occlusion-event detector network. (A) Network is initially
organized nonspecifically, so that each neuron receives roughly homogeneous input connections:
feedforward, bottom-up excitatory ("B+" ) connections from a preprocessing stage of motion-tuned
neurons (bottom-up solid arrows), lateral inhibitory ("L-") connections (dotted arrows), and timedelayed lateral excitatory ("L+") connections (lateral solid arrows) . (B) After exposure during
a developmental period to many motion sequences containing occlusion and disocclusion events,
the network learns a highly specific connection structure. The previously homogeneous network
bifurcates into two parallel opponent channels for every resolvable motion trajectory: some neurons
keep their bottom- up connections and others lose them . The channels for one trajectory are shown .
Neurons from the two opponent channels are strongly linked by lateral inhibitory connections
(dotted arrows). Time-delayed lateral excitatory connections cause stimulus information (priming
excitation, or "prediction signals") to propagate along the channels.
Figure 2: Simulation results. (Left) Simulated network structure before training. Neurons are
wired homogeneously from the input layer. (Right) After training, some of the neurons lose their
bottom-up input connections.
4.2 USING DISINHIBITION TO CONTROL THE LEARNING OF OCCLUSION RELATIONS
This paper describes one method for learning occlusion relations. Other
methods may also work. The method involves extending the EXIN (excitatory+inhibitory) learning scheme described by Marshall (1992, 1995). The EXIN
scheme uses a variant of a Hebb rule to govern learning in the bottom-up and timedelayed lateral excitatory connections, plus an anti-Hebb rule to govern learning in
the lateral inhibitory connections.
The EXIN system was extended by letting inhibitory connections exert a disinhibitory effect under certain regulated conditions. The disinhibition rule was chosen
because it constitutes a simple way that the unexpected failure of a neuron to become activated (e.g., when an object disappears behind an occluder) can cause some
other neuron to become activated . That other neuron can then learn, becoming selective for invisible object motion. Thus, the representations of visible objects are
protected from losing their bottom-up input connections during occlusion events.
In this way, the network can learn separate representations for visible and invisible stimuli. The representations of invisible objects are allowed to develop only
to the extent that the neurons representing visible objects explicitly disclaim the
"right" to represent the objects. These properties prevent the network from losing complete grounded contact with actual bottom-up visual input, while at the
same time allowing some neurons to lose their direct bottom-up input connections.
The disinhibition produces an excitatory response at the target neurons . Disinhibition is generated according to the following rule: When a neuron has strong,
active lateral excitatory input connections and strong but inactive bottom-up input
connections, then it tends to disinhibit the neurons to which it projects inhibitory
connections. This implements a type of differencing operation between lateral and
bottom-up excitation. Because the disinhibition tends to excite the recipient neurons, it causes one (or possibly more) of the recipient neurons to become active and
thereby enables that neuron to learn.
The lateral excitation that a neuron receives can be viewed as a prediction of
the neuron's activation. If that prediction is not matched by actual bottom-up
excitation, then a shortfall (prediction failure) has occurred, probably indicating
an occlusion event.
Each neuron's disinhibition input was combined with its bottom-up excitatory
input and its lateral excitatory input to form a total excitatory input signal. Either bottom-up excitation or disinhibition alone could contribute toward a neuron's
excitation. However, lateral excitation could merely amplify the other signals and
could not alone excite the neuron. This prevented neurons from learning in response
to lateral excitation alone.
4.3 DISINHIBITION LETS THE NETWORK LEARN TO RESPOND TO INVISIBLE OBJECTS
During continuous motion sequences, without occlusion or disocclusion, the
system operates similarly to a system with the standard EXIN learning rules (Marshall, 1990b, 1995): lateral excitatory "chains" of connections are learned across
sequences of neurons along a motion trajectory. Marshall (1990a) showed that such
chains form in 2-D networks with multiple speeds and multiple directions of motion.
During occlusion events, some predictive lateral excitatory signals reach neurons that have strong but inactive bottom-up excitatory connections. The neurons
reached by this excitation pattern disinhibit, rather than inhibit, their competitor
neurons. Over the course of many occlusion events, such neurons become increasingly selective for the inferred motion of an invisible object: their bottom-up input
connections weaken, and their lateral inhibitory input connections strengthen.
More than one neuron receives L+ signals after every neuron activation; the
recipients of each neuron's L+ output connections represent the (learned) possible sequents of the neuron's activation. But at most one of those sequents actually
receives both B+ and L+ signals: the one that corresponds to the actual stimulus. This winner neuron receives the disinhibition from the other neurons receiving
L+ excitation; its competitive advantage over the other neurons is thus reinforced.
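The trained On/Off dynamics described in Sections 4.2-4.3 can be caricatured in a few lines of Python. Everything below — the gain, the decay rate, and the max-based lateral priming — is an illustrative assumption of ours, not the paper's actual activation equations:

```python
def run_chains(visible, gain=0.5, decay=0.7):
    """Toy dynamics of one trained trajectory's channel pair.
    visible[t] is 1 when the bottom-up input at position/time t is active.
    On channel: needs bottom-up input; grows toward 1 with lateral priming.
    Off channel: activated by lateral priming when bottom-up input is
    absent (disinhibition); its activity decays at successive positions."""
    on, off = [], []
    prime = 0.0  # time-delayed lateral (L+) excitation from the last step
    for v in visible:
        if v:                       # stimulus visible: On channel wins
            a_on = min(1.0, v * (gain + prime))
            a_off = 0.0
        else:                       # prediction unmatched: Off channel wins
            a_on = 0.0
            a_off = prime * decay
        on.append(a_on)
        off.append(a_off)
        prime = max(a_on, a_off)    # propagate priming along the chain
    return on, off

# Visible for three steps, occluded for two, then visible again.
on, off = run_chains([1, 1, 1, 0, 0, 1, 1])
```

Under these assumptions the sketch reproduces the qualitative behaviors reported in Section 4.6: the On activation builds toward an asymptote, the Off activation decays during occlusion, and the On activation on reappearance exceeds its initial value.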
4.4 SIMULATION TRAINING
The sequences of input training data consisted of a single visual feature moving
with constant velocity across the 1-D visual field. When this stimulus was visible,
its presence was indicated by strong activation of an input neuron in Layer 1.
While occluded, the stimulus would produce no activation in Layer 1. The stimulus
occasionally disappeared "behind" an occluder and reappeared at a later time and
spatial position farther along the same trajectory. After some duration, the stimulus
was removed and replaced by a new stimulus. The starting positions and lifetimes
of the stimuli and occluders were varied randomly within a fixed range.
The network was trained for 25,000 input pattern presentations. The stability of
the connection weights was verified by additional training for 50,000 presentations.
4.5 SIMULATION RESULTS: ARCHITECTURE
The second stage of neurons gradually underwent a self-organized bifurcation
into two distinct pools of neurons, as shown in Figure 2B. These pools consist of two
parallel opponent channels or "chains" of lateral excitatory connections for every
resolvable motion trajectory. One channel, the "On" chain or "visible" chain, was
active when a moving stimulus became visible. The other channel, the "Off" chain
or "invisible" chain, was active when a formerly visible stimulus became invisible.
The model is thus named the Visible/Invisible model. The bifurcation may be
analogous to the activity-dependent stratification of cat retinal ganglion cells into
separate On and Off layers, described by Bodnarenko and Chalupa (1993).
4.6 SIMULATION RESULTS: OPERATION
The On chain carries a predictive modal representation of the visible stimulus.
The Off chain carries a persistent, amodal representation that predicts the motion
of the invisible stimulus. The shading of the neurons in Figure 3 shows the neuron
activations of the final, trained network simulation during an occlusion-disocclusion
sequence. The following noteworthy behaviors were observed in the test.
- When the stimulus was visible, it was represented by activation in the
On channel.
- When the stimulus became invisible, its representation was carried in the
Off channel. The Off channel did not become active until the visible stimulus disappeared.
- The activations representing the visible stimulus became stronger (toward
an asymptote) at successive spatial positions, because of the propagation
of accumulating evidence for the presence of the stimulus (Martin & Marshall, 1993).
- The activation representing the invisible stimulus decayed at successive
spatial positions. Thus, representations of invisible stimuli did not remain
active indefinitely.
- When the stimulus reappeared (after a sufficiently brief occlusion), its activation in the
On channel was greater than its initial activation in the
On channel. Thus, the representation carried across the Off channel helps
Figure 3: Simulated network operation after learning. The learning procedure causes the representation of each trajectory to split into two parallel opponent channels. The Visible and Invisible
channel pair for a single trajectory are shown . The display has been arranged so that all the
Visible channel neurons are on the same row (Layer 2, lower row); likewise the Invisible channel
neurons (Layer 2, upper row). Solid arrows indicate excitatory connections. Gray arrows indicate
lateral inhibitory connections. (Left) The network's responses to an unbroken rightward motion
of the stimulus are shown. The activities of the network at successive moments in time have been
combined into a single network display; each horizontal position in the figure represents a different
moment in time as well as a different position in the network. The stimulus successively activates
motion detectors (solid circles) in Layer 1. The activation of the responding neuron in the second layer builds toward an asymptote, reaching full activation by the fourth frame . (Right) The
network's responses to a broken (occluded) rightward motion sequence are shown. When the stimulus reaches the region indicated by gray shading, it disappears behind a simulated occluder. The
network responds by successively activating neurons in the Invisible channel. When the stimulus
emerges from behind the occluder (end of gray shading) , it is again represented by activation in
the Visible channel.
5 DISCUSSION
5.1 PSYCHOPHYSICAL ISSUES AND PREDICTIONS
Several visual phenomena (Burr, 1980; Piaget, 1954; Shimojo, Silverman, &
Nakayama, 1988) support the notion that early processing mechanisms maintain a
dynamic representation of temporarily occluded objects for some amount of time
after they disappear (Marshall, 1991). In general, the duration of such representations should vary as a function of many factors, including top-down cognitive
expectations, stimulus complexity, and Gestalt grouping.
5.2 ALTERNATIVE MECHANISMS
Another model besides the Visible/Invisible model was studied extensively: a
Visible/Virtual system, which would develop some neurons that respond to visible
objects and others that respond to both visible and invisible objects (i.e., to "virtual" objects). There is a functional equivalence between such a Visible/Virtual
system and a Visible/Invisible system: the same information about visibility and
invisibility can be determined by examining the activations of the neurons. Activity
in a Virtual channel neuron, paired with inactivity in a corresponding Visible channel neuron, would indicate the presence of an invisible stimulus.
5.3 NEUROPHYSIOLOGICAL CORRELATES
Assad and Maunsell (1995) recently described their remarkable discovery of neurons in macaque monkey posterior parietal cortex that respond selectively to the
inferred motion of invisible stimuli. This type of neuron responded more strongly to
the disappearance and reappearance of a stimulus in a task where the stimulus' "inferred" trajectory would pass through the neuron's receptive field than in a task
where the stimulus would disappear and reappear in the same position. Most of
these neurons also had a strong off-response, which in the present models is closely
correlated with inferred motion. Thus, the results of Assad and Maunsell (1995)
are more directly consistent with the Visible/Virtual model than with the Visible/Invisible model. Although this paper describes only one of these models, both
models merit investigation.
5.4 LEARNING ASSOCIATIONS BETWEEN VISIBILITY AND RELATIVE DEPTH
The activation of neurons in the Off channels is highly correlated with the activation of other neurons elsewhere in the visual system, specifically neurons whose
activation indicates the presence of other objects acting as occluders. Simple associative Hebb-type learning lets such occluder-indicator neurons and the Off channel
neurons gradually establish reciprocal excitatory connections to each other.
After such reciprocal excitatory connections have been learned, activation of
occluder-indicator neurons at a given spatial position causes the network to favor
the Off channel in its predictions - i.e., to predict that a moving object will be
invisible at that position. Thus, the network learns to use occlusion information to
generate better predictions of the visibility/invisibility of objects.
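A minimal sketch of one such Hebb-type association step (variable names and the learning rate are hypothetical, not the paper's):

```python
def hebb_step(w, occluder_act, off_act, lr=0.1):
    """Strengthen the reciprocal excitatory link between an
    occluder-indicator neuron and an Off channel neuron when the
    two are co-active; the weight is left unchanged when either
    neuron is silent."""
    return w + lr * occluder_act * off_act
```

After repeated co-activation, the learned weight lets occluder activity bias predictions toward the Off channel, as described above.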
Conversely, the activation of Off channel neurons causes the occluder-indicator
neurons to receive excitation. The disappearance of an object excites the representation of an occluder at that location. If the representation of the occluder was
not previously activated, then the excitation from the Off channel may even be
strong enough to activate it alone. Thus, disappearance of moving visual objects
constitutes evidence for the presence of an inferred occluder. These results will be
described in a later paper.
5.5 LIMITATIONS AND FUTURE WORK
The Visible/Invisible model presented in this paper describes some of the processes that may be involved in detecting and representing depth from occlusion
events. There are other major issues that have not been addressed in this paper.
For example, how can the system handle real 2-D or 3-D objects, composed of many
visual features grouped together across space, instead of mere point stimuli? How
can it handle partial occlusion of objects? How can it handle nonlinear trajectories?
How exactly can the associative links between occluding and occluded objects be
formed? How can it handle transparency?
6 CONCLUSIONS
Perception of relative depth from occlusion events is a powerful, useful, but poorly understood capability of human and animal visual systems. We have presented an analysis based on predictivity: a visual system that can predict the visibility/invisibility of objects during occlusion events possesses (ipso facto) a good representation of relative depth. The analysis implies that the representations for visible
and invisible objects must be distinguishable. We have implemented a model system
in which distinct representations for visible and invisible features self-organize in response to exposure to motion sequences containing simulated occlusion and disocclusion events. When a moving feature fails to appear approximately where and when
it is predicted to appear, the mismatch between prediction and the actual image
triggers an unsupervised learning rule. Over many motions, the learning leads to a
bifurcation of a network layer into two parallel opponent channels of neurons. Prediction signals in the network are carried along motion trajectories by specific chains
of lateral excitatory connections. These chains also cause the representation of invisible features to propagate for a limited time along the features' trajectories. The
network uses shortfall (differencing) and disinhibition to maintain grounding of the
representations of invisible features.
J. A. MARSHALL, R. K. ALLEY, R. S. HUBBARD
Acknowledgements
Supported in part by ONR (N00014-93-1-0208), NEI (EY09669), a UNC-CH Junior Faculty Development Award, an ORAU Junior Faculty Enhancement Award from Oak Ridge
Associated Universities, the Univ. of Minnesota Center for Research in Learning, Perception, and Cognition, NICHHD (HD-07151), and the Minnesota Supercomputer Institute.
We thank Kevin Martin, Stephen Aylward, Eliza Graves, Albert Nigrin, Vinay Gupta,
George Kalarickal, Charles Schmitt, Viswanath Srikanth, David Van Essen, Christof Koch,
and Ennio Mingolla for valuable discussions.
References
Assad JA, Maunsell JHR (1995) Neuronal correlates of inferred motion in macaque posterior parietal cortex. Nature 373:518-521.
Bodnarenko SR, Chalupa LM (1993) Stratification of On and Off ganglion cell dendrites
depends on glutamate-mediated afferent activity in the developing retina. Nature
364:144-146.
Burr D (1980) Motion smear. Nature 284:164-165.
Frost BJ (1993) Subcortical analysis of visual motion: Relative motion, figure-ground
discrimination and induced optic flow. Visual Motion and Its Role in the Stabilization
of Gaze, Miles FA, Wallman J (Eds). Amsterdam: Elsevier Science, 159-175.
Hubbard RS, Marshall JA (1994) Self-organizing neural network model of the visual inertia phenomenon in motion perception. Technical Report 94-001, Department of
Computer Science, University of North Carolina at Chapel Hill. 26 pp.
Kaplan GA (1969) Kinetic disruption of optical texture: The perception of depth at an
edge. Perception & Psychophysics 6:193-198.
Kellman PJ, Cohen MH (1984) Kinetic subjective contours. Perception & Psychophysics 35:237-244.
Marshall JA (1989) Self-organizing neural network architectures for computing visual
depth from motion parallax. Proceedings of the International Joint Conference on
Neural Networks, Washington DC, II:227-234.
Marshall JA (1990a) Self-organizing neural networks for perception of visual motion. Neural Networks 3:45-74.
Marshall JA (1990b) A self-organizing scale-sensitive neural network. Proceedings of the
International Joint Conference on Neural Networks, San Diego, CA, III:649-654.
Marshall JA (1990c) Adaptive neural methods for multiplexing oriented edges. Intelligent Robots and Computer Vision IX: Neural, Biological, and 3-D Methods,
Casasent DP (Ed), Proceedings of the SPIE 1382, Boston, MA, 282-291.
Marshall JA (1991) Challenges of vision theory: Self-organization of neural mechanisms
for stable steering of object-grouping data in visual motion perception. Stochastic
and Neural Methods in Signal Processing, Image Processing, and Computer Vision,
Chen SS (Ed), Proceedings of the SPIE 1569, San Diego, CA, 200-215.
Marshall JA (1992) Unsupervised learning of contextual constraints in neural networks
for simultaneous visual processing of multiple objects. Neural and Stochastic Methods in Image and Signal Processing, Chen SS (Ed), Proceedings of the SPIE 1766,
San Diego, CA, 84-93.
Marshall JA (1995) Adaptive perceptual pattern recognition by self-organizing neural networks: Context, uncertainty, multiplicity, and scale. Neural Networks 8:335-362.
Marshall JA, Alley RK (1993) A self-organizing neural network that learns to detect and
represent visual depth from occlusion events. Proceedings of the AAAI Fall Symposium
on Machine Learning and Computer Vision, Bowyer K, Hall L (Eds), 70-74.
Martin KE, Marshall JA (1993) Unsmearing visual motion: Development of long-range
horizontal intrinsic connections. Advances in Neural Information Processing Systems, 5, Hanson SJ, Cowan JD, Giles CL (Eds). San Mateo, CA: Morgan Kaufmann
Publishers, 417-424.
Nakayama K, Shimojo S (1992) Experiencing and perceiving visual surfaces. Science
257:1357-1363.
Nakayama K, Shimojo S, Silverman GH (1989) Stereoscopic depth: Its relation to image
segmentation, grouping, and the recognition of occluded objects. Perception 18:55-68.
Parks T (1965) Post-retinal visual storage. American Journal of Psychology 78:145-147.
Piaget J (1954) The Construction of Reality in the Child. New York: Basic Books.
Ramachandran VS, Inada V, Kiama G (1986) Perception of illusory occlusion in apparent
motion. Vision Research 26:1741-1749.
Shimojo S, Silverman GH, Nakayama K (1989) Occlusion and the solution to the aperture
problem for motion. Vision Research 29:619-626.
Yonas A, Craton LG, Thompson WB (1987) Relative motion: Kinetic information for the
order of depth at an edge. Perception & Psychophysics 41:53-59.
Constructive Algorithms for Hierarchical
Mixtures of Experts
S.R.Waterhouse
A.J.Robinson
Cambridge University Engineering Department,
Trumpington St., Cambridge, CB2 1PZ, England.
Tel: [+44] 1223 332754, Fax: [+44] 1223 332662,
Email: srw1001.ajr@eng.cam.ac.uk
Abstract
We present two additions to the hierarchical mixture of experts
(HME) architecture. By applying a likelihood splitting criteria to
each expert in the HME we "grow" the tree adaptively during training. Secondly, by considering only the most probable path through
the tree we may "prune" branches away, either temporarily, or permanently if they become redundant. We demonstrate results for
the growing and path pruning algorithms which show significant
speed ups and more efficient use of parameters over the standard
fixed structure in discriminating between two interlocking spirals
and classifying 8-bit parity patterns.
INTRODUCTION
The HME (Jordan & Jacobs 1994) is a tree structured network whose terminal
nodes are simple function approximators in the case of regression or classifiers in the
case of classification. The outputs of the terminal nodes or experts are recursively
combined upwards towards the root node, to form the overall output of the network,
by "gates" which are situated at the non-terminal nodes.
The HME has clear similarities with tree based statistical methods such as Classification and Regression Trees (CART) (Breiman, Friedman, Olshen & Stone 1984).
We may consider the gate as replacing the set of "questions" which are asked at
each branch of CART. From this analogy, we may consider the application of the
splitting rules used to build CART. We start with a simple tree consisting of two
experts and one gate. After partially training this simple tree we apply the splitting criterion to each terminal node. This evaluates the log-likelihood increase by
splitting each expert into two experts and a gate. The split which yields the best
increase in log-likelihood is then added permanently to the tree. This process of
training followed by growing continues until the desired modelling power is reached.
Figure 1: A simple mixture of experts.
This approach is reminiscent of Cascade Correlation (Fahlman & Lebiere 1990) in
which new hidden nodes are added to a multi-layer perceptron and trained while
the rest of the network is kept fixed.
The HME also has similarities with model merging techniques such as stacked regression (Wolpert 1993), in which explicit partitions of the training set are combined. However the HME differs from model merging in that each expert considers
the whole input space in forming its output. Whilst this allows the network more
flexibility since each gate may implicitly partition the whole input space in a "soft"
manner, it leads to unnecessarily long computation in the case of near optimally
trained models. At anyone time only a few paths through a large network may
have high probability. In order to overcome this drawback, we introduce the idea
of "path pruning" which considers only those paths from the root node which have
probability greater than a certain threshold.
CLASSIFICATION USING HIERARCHICAL MIXTURES OF EXPERTS
The mixture of experts, shown in Figure 1, consists of a set of "experts" which
perform local function approximation. The expert outputs are combined by a gate to form the overall output. In the hierarchical case, the experts are themselves mixtures of further experts, thus extending the architecture in a tree structured fashion. Each terminal node or "expert" may take on a variety of forms, depending on the application. In the case of multi-way classification, each expert outputs a vector y_j in which element m is the conditional probability of class m (m = 1 ... M)
which is computed using the soft max function:
P(c_m | x^(n), W_j) = exp(w_mj^T x^(n)) / Σ_{k=1}^{M} exp(w_kj^T x^(n))

where W_j = [w_1j w_2j ... w_Mj] is the parameter matrix for expert j and c_i denotes class i.
The outputs of the experts are combined using a "gate" which sits at the nonterminal nodes. The gate outputs are estimates of the conditional probability of
selecting the daughters of the non-terminal node given the input and the path taken
to that node from the root node. This is once again computed using the softmax
function:
P(z_j | x^(n), Ξ) = exp(ξ_j^T x^(n)) / Σ_{k=1}^{J} exp(ξ_k^T x^(n))

where Ξ = [ξ_1 ξ_2 ... ξ_J] is the parameter matrix for the gate, and z_j denotes expert j.
The overall output is given by a probabilistic mixture in which the gate outputs
are the mixture weights and the expert outputs are the mixture components. The
probability of class m is then given by:
P(c_m | x^(n), Θ) = Σ_{i=1}^{J} P(z_i | x^(n), Ξ) P(c_m | x^(n), W_i).
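A hedged sketch of this gate-weighted combination at a single level of the tree (argument names are illustrative):

```python
def mixture_class_probs(gate_probs, expert_probs):
    """Overall output at one (sub)tree level:
    P(c_m | x) = sum_i g_i * P(c_m | x, expert i).

    gate_probs   -- J gate outputs summing to one
    expert_probs -- J lists, each holding M class probabilities"""
    n_classes = len(expert_probs[0])
    return [sum(g * p[m] for g, p in zip(gate_probs, expert_probs))
            for m in range(n_classes)]
```

In the hierarchical case each entry of expert_probs would itself be the output of a recursive call on a subtree.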
A straightforward extension of this model also gives us the conditional probability h_j^(n) of selecting expert j given input x^(n) and correct class c_k:

h_j^(n) = P(z_j | x^(n), Ξ) P(c_k | x^(n), W_j) / Σ_{l=1}^{J} P(z_l | x^(n), Ξ) P(c_k | x^(n), W_l).   (1)
In order to train the HME to perform classification we maximise the log likelihood L = Σ_{n=1}^{N} Σ_{m=1}^{M} t_m^(n) log P(c_m | x^(n), Θ), where the variable t_m^(n) is one if m is the correct class at exemplar (n) and zero otherwise. This is done via the expectation maximisation (EM) algorithm of Dempster, Laird & Rubin (1977), as described by Jordan & Jacobs (1994).
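The E-step posterior h_j^(n) used in this training procedure can be sketched as follows (an illustrative implementation, not the authors' code):

```python
def expert_posteriors(gate_probs, expert_probs, k):
    """E-step posterior h_j^(n): the probability that expert j was
    responsible for exemplar n, given that its correct class is k.
    Proportional to g_j * P(c_k | x, expert j), normalised over j."""
    joint = [g * p[k] for g, p in zip(gate_probs, expert_probs)]
    total = sum(joint)
    return [j / total for j in joint]
```

Experts that assign high probability to the correct class receive high posterior responsibility, and hence more weight in the M-step parameter updates.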
TREE GROWING
The standard HME differs from most tree
based statistical models in that its architecture
is fixed. By relaxing this constraint and allowing the tree to grow, we achieve a greater degree of flexibility in the network. Following the
work on CART we start with a simple tree, for
instance with two experts and one gate which
we train for a small number of cycles. Given
this semi-trained network, we then make a set
of candidate splits {S_i} of terminal nodes {z_i}. Each split involves replacing an expert z_i with a pair of new experts {z_ij} (j = 1, 2) and a gate, as shown in Figure 2.

Figure 2: Making a candidate split of a terminal node.

We wish to select eventually only the "best" split S out of these candidate splits. Let us define the best split as being that which maximises the increase in overall log-likelihood due to the split, ΔL = L^(p+1) - L^(p), where L^(p) is the likelihood at the pth generation of the tree. If
we make the constraint that all the parameters
of the tree remain fixed apart from the parameters of the new split whenever a candidate split is made, then the maximisation
is simplified into a dependency on the increases in the local likelihoods {Li} of the
nodes {Zi}. We thus constrain the tree growing process to be localised such that we
find the node which gains the most by being split.
max_i ΔL(S_i) = max_i ΔL_i = max_i ( L_i^(p+1) - L_i^(p) )
Figure 3: Growing the HME. This figure shows the addition of a pair of experts to
the partially grown tree.
where
L_i^(p+1) = Σ_n Σ_m t_m^(n) log Σ_j P(z_ij | x^(n), ξ_i, z_i) P(c_m | x^(n), z_ij, w_ij)
This splitting rule is similar in form to the CART splitting criterion which uses
maximisation of the entropy of the node split, equivalent to our local increase in
log-likelihood.
The final growing algorithm starts with a tree of generation p and firstly fixes the
parameters of all non-terminal nodes. All terminal nodes are then split into two
experts and a gate. A split is only made if the sum of posterior probabilities Σ_n h_i^(n), as described in (1), at the node is greater than a small threshold. This prevents splits
being made on nodes which have very little data assigned to them. In order to break
symmetry, the new experts of a split are initialised by adding small random noise
to the original expert parameters. The gate parameters are set to small random
weights. For each node i, we then evaluate ΔL_i by training the tree using the
standard EM method. Since all non-terminal node parameters are fixed the only
changes to the log-likelihood are due the new splits. Since the parameters of each
split are thus independent of one another, all splits can be trained at once, removing
the need to train multiple trees separately.
After each split has been evaluated, the best split is chosen. This split is kept and
all other splits are discarded. The original tree structure is then recovered except
for the additional winning split, as shown in Figure 3. The new tree, of generation
p + 1 is then trained as usual using EM. At present the decision on when to add
a new split to the tree is fairly straightforward: a candidate split is made after
training the fixed tree for a set number of iterations. An alternative scheme we
have investigated is to make a split when the overall log-likelihood of the fixed tree
has not increased for a set number of cycles. In addition, splits are rejected if they
add too little to the local log-likelihood.
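The split-selection step just described can be sketched as follows (the dictionary layout and names are invented for illustration):

```python
def best_split(gains, posterior_mass, min_mass):
    """Select the candidate split with the largest local
    log-likelihood gain, rejecting nodes whose accumulated posterior
    mass is too small to give reliable statistics.

    gains          -- {node_id: local log-likelihood increase}
    posterior_mass -- {node_id: sum over exemplars of h at that node}
    Returns the winning node_id, or None if no node qualifies."""
    eligible = {i: g for i, g in gains.items()
                if posterior_mass[i] > min_mass}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)
```

Only the winning split is kept; all other candidate splits are discarded and the previous tree structure is restored around the winner.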
Although we have not discussed the issue of over-fitting in this paper, a number
of techniques to prevent over-fitting can be used in the HME. The most simple
technique, akin to those used in CART, involves growing a large tree and successively
removing nodes from the tree until the performance on a cross validation set reaches
an optimum. Alternatively the Bayesian techniques of Waterhouse, MacKay &
Robinson (1995) could be applied.
Tree growing simulations
This algorithm was used to solve the 8-bit parity classification task. We compared
the growing algorithm to a fixed HME with depth of 4 and binary branches. As can
be seen in Figures 4(a) and (b), the factorisation enabled by the growing algorithm
significantly speeds up computation over the standard fixed structure. The final tree
shape obtained is shown in Figure 4(c). We showed in an earlier paper (Waterhouse
& Robinson 1994) that the XOR problem may be solved using at least 2 experts
and a gate. The 8 bit parity problem is therefore being solved by a series of XOR
classifiers, each gated by its parent node, which is an intuitively appealing form
with an efficient use of parameters.
Figure 4: HME GROWING ON THE 8-BIT PARITY PROBLEM: (i) growing HME with 6 generations; (ii) 4-deep binary branching HME (no growing). (a) Evolution of log-likelihood vs. time in CPU seconds. (b) Evolution of log-likelihood for (i) vs. generations of the tree. (c) Final tree structure obtained from (i), showing utilisation U_i of each node, where U_i = Σ_n P(z_i, R_i | x^(n)) / N, and R_i is the path taken from the root node to node i.
PATH PRUNING
If we consider the HME to be a good model for the data generation process, the
case for path pruning becomes clear. In a tree with sufficient depth to model the
underlying sub-processes producing each data point, we would expect the activation
of each expert to tend to binary values such that only one expert is selected at each
time exemplar.
The path pruning scheme is depicted in Figure 5. The pruning scheme utilises
the "activation" of each node at each exemplar. The activation is defined as
the product of node probabilities along a path from the root node to the current node, l^(n) = Σ_i log P(z_i | R_i, x^(n)), where R_i is the path taken to node i from the root node. If l_l^(n) for node l at exemplar n falls below a threshold value, ε, then we ignore the subtree S_l and we backtrack up to the parent node of l.
During training this involves not accumulating the statistics of the subtree S_l; during evaluation it involves setting the output of subtree S_l to zero. In addition to this path pruning scheme we can use the activation of the nodes to do more permanent pruning. If the overall utilisation U_i = Σ_n P(z_i, R_i | x^(n)) / N of a node falls below a small threshold, then the node is pruned completely from the tree. The sister subtrees of the removed node then subsume their parent nodes. This process is used solely to improve computational efficiency in this paper, although conceivably it could be used as a regularisation method, akin to the brain surgery techniques of Cun, Denker & Solla (1990). In such a scheme, however, a more useful measure of node utilisation would be the effective number of parameters (Moody 1992).

Figure 5: Path pruning in the HME.
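The path pruning rule can be sketched as a thresholded recursive tree evaluation; the dictionary layout ('gate', 'children', 'output') is an assumed illustration, not the authors' data structure:

```python
import math

def evaluate_pruned(node, log_act=0.0, threshold=-5.0):
    """Evaluate an HME output while pruning low-probability paths:
    a subtree is skipped (contributes zero) once the accumulated
    log path probability drops below `threshold`.

    Leaves carry 'output' (a scalar here, for simplicity); internal
    nodes carry 'gate' (branch probabilities) and 'children'."""
    if "output" in node:                      # terminal expert node
        return node["output"]
    total = 0.0
    for g, child in zip(node["gate"], node["children"]):
        if g <= 0.0:
            continue
        child_log_act = log_act + math.log(g)
        if child_log_act < threshold:
            continue                          # backtrack: prune path
        total += g * evaluate_pruned(child, child_log_act, threshold)
    return total
```

Setting the threshold to negative infinity recovers the unpruned evaluation; raising it trades a small output error for large savings when the gates are near-binary.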
Path pruning simulations
Figure 6 shows the application of the pruning algorithm to the task of discriminating
between two interlocking spirals. With no pruning the solution to the two-spirals
takes over 4,000 CPU seconds, whereas with pruning the solution is achieved in 155
CPU seconds.
One problem which we encountered when implementing this algorithm was in computing updates for the parameters of the tree in the case of high pruning thresholds.
If a node is visited too few times during a training pass, it will sometimes have too
little data to form reliable statistics and thus the new parameter values may be
unreliable and lead to instability. This is particularly likely when the gates are
saturated. To avoid this saturation we use a simplified version of the regularisation
scheme described in Waterhouse et al. (1995).
CONCLUSIONS
We have presented two extensions to the standard HME architecture. By pruning
branches either during training or evaluation we may significantly reduce the computational requirements of the HME. By applying tree growing we allow greater
flexibility in the HME which results in faster training and more efficient use of
parameters.
Figure 6: The effect of pruning on the two-spirals classification problem by an 8-deep binary branching HME: (a) log-likelihood vs. time (CPU seconds), with log pruning thresholds for experts and gates ε: (i) ε = -5.6, (ii) ε = -10, (iii) ε = -15, (iv) no pruning; (b) training set for the two-spirals task; the two classes are indicated by crosses and circles; (c) solution to the two-spirals problem.
References
Breiman, L., Friedman, J., Olshen, R. & Stone, C . J. (1984), Classification and
Regression Trees, Wadsworth and Brooks/Cole.
Cun, Y . L., Denker, J. S. & Solla, S. A. (1990), Optimal brain damage, in D. S.
Touretzky, ed., 'Advances in Neural Information Processing Systems 2', Morgan Kaufmann, pp. 598-605.
Dempster, A. P., Laird, N. M. & Rubin, D. B. (1977), 'Maximum likelihood from
incomplete data via the EM algorithm', Journal of the Royal Statistical Society,
Series B 39, 1-38.
Fahlman, S. E. & Lebiere, C. (1990), The Cascade-Correlation learning architecture, Technical Report CMU-CS-90-100, School of Computer Science, Carnegie
Mellon University, Pittsburgh, PA 15213.
Jordan, M. I. & Jacobs, R. A. (1994), 'Hierarchical Mixtures of Experts and the
EM algorithm', Neural Computation 6, 181-214.
Moody, J. E. (1992), The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems, in J. E. Moody, S. J.
Hanson & R. P. Lippmann, eds, 'Advances in Neural Information Processing
Systems 4', Morgan Kaufmann, San Mateo, California, pp. 847-854.
Waterhouse, S. R. & Robinson, A. J . (1994), Classification using hierarchical mixtures of experts, in 'IEEE Workshop on Neural Networks for Signal Processing',
pp. 177-186.
Waterhouse, S. R., MacKay, D. J. C. & Robinson, A. J . (1995), Bayesian methods
for mixtures of experts, in D. S. Touretzky, M. C. Mozer & M. E. Hasselmo, eds,
'Advances in Neural Information Processing Systems 8', MIT Press.
Wolpert, D. H . (1993), Stacked generalization, Technical Report LA-UR-90-3460,
The Santa Fe Institute, 1660 Old Pecos Trail, Suite A, Santa Fe, NM, 87501.
Model Matching and SFMD Computation
Steve Rehfuss and Dan Hammerstrom
Department of Computer Science and Engineering
Oregon Graduate Institute of Science and Technology
P.O.Box 91000, Portland, OR 97291-1000 USA
stever@cse.ogi.edu, strom@asi.com
Abstract
In systems that process sensory data there is frequently a model
matching stage where class hypotheses are combined to recognize a
complex entity. We introduce a new model of parallelism, the Single
Function Multiple Data (SFMD) model, appropriate to this stage.
SFMD functionality can be added with small hardware expense to
certain existing SIMD architectures, and as an incremental addition
to the programming model. Adding SFMD to an SIMD machine
will not only allow faster model matching, but also increase its
flexibility as a general purpose machine and its scope in performing
the initial stages of sensory processing.
1
INTRODUCTION
In systems that process sensory data there is frequently a post-classification stage
where several independent class hypotheses are combined into the recognition of
a more complex entity. Examples include matching word models with a string
of observation probabilities, and matching visual object models with collections
of edges or other features. Current parallel computer architectures for processing
sensory data focus on the classification and pre-classification stages (Hammerstrom
1990). This is reasonable, as those stages likely have the largest potential for speedup
through parallel execution. Nonetheless, the model-matching stage is also suitable
for parallelism, as each model may be matched independently of the others.
We introduce a new style of parallelism, Single Function Multiple Data (SFMD),
that is suitable for the model-matching stage. The handling of interprocessor synchronization distinguishes the SFMD model from the SIMD and MIMD models:
SIMD synchronizes implicitly at each instruction, SFMD synchronizes implicitly
at conditional expression or loop boundaries, and MIMD synchronizes explicitly at
arbitrary inter-processor communication points. Compared to MIMD, the use of
implicit synchronization makes SFMD easier to program and cheaper to implement.
Compared to SIMD, the larger granularity of synchronization gives SFMD increased
flexibility and power.
SFMD functionality can be added with small hardware expense to SIMD architectures already having a high degree of processor autonomy. It can be presented as an
incremental addition to programmer's picture of the machine, and applied as a compiler optimization to existing code written in an SIMD version of 'C'. Adding SFMD
to an SIMD machine will not only allow faster model matching, but also increase
its flexibility as a general purpose machine, and increase its scope in performing the
initial stages of sensory processing.
2
SIMD ARCHITECTURE AND PROGRAMMING
As background, we first review SIMD parallelism. In SIMD, multiple processing
elements, or PEs, simultaneously execute identical instruction sequences, each processing different data. The instruction stream is produced by a controller, or sequencer. Generally, each PE has a certain amount of local memory, which only it
can access directly. All PEs execute a given instruction in the stream at the same
time, so are synchronized at each instruction. Thus synchronization is implicit, the
hardware need not support it, and the programmer need (can) not manage it. SIMD
architectures differ in the functionality of their PEs. If PEs can independently address local memory at differing locations, rather than all having to access the same
address at a given step, the architecture is said to have local addressing. If PEs can
independently determine whether to execute a given instruction, rather than having
this determined by the sequencer, the architecture has local conditional execution.
Note that all PEs see the same instruction stream, yet a given PE executes only
one branch of any if-then-else, and so must idle while other PEs execute the other
branch. This is the cost of synchronizing at each instruction.
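The branch-idling cost described above can be illustrated with a toy timing model. The PE branch assignments and cycle counts below are invented for illustration and are not taken from the paper:

```python
# Toy timing model of branch divergence. Under SIMD, every PE walks both
# branch bodies of an if-then-else (idling through the one it doesn't take),
# so the block costs t_then + t_else regardless of the data. Under looser
# synchronization (SFMD, section 5), each PE runs only its own branch and the
# block finishes when the slowest PE syncs.

def simd_branch_time(t_then, t_else):
    # cost is data-independent: all PEs step through both bodies
    return t_then + t_else

def sfmd_branch_time(taken_then, t_then, t_else):
    # cost is the max over PEs of the branch each one actually takes
    return max(t_then if took else t_else for took in taken_then)

taken_then = [True, False, True, True]   # which PEs take the 'then' branch
t_then, t_else = 8, 5                    # invented cycle counts per body

print(simd_branch_time(t_then, t_else))              # 13
print(sfmd_branch_time(taken_then, t_then, t_else))  # 8
```

With equal-size branch bodies this recovers the factor-of-2 improvement quoted later in section 7.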
3
MODEL MATCHING
We view models as pieces of a priori knowledge, interrelating their components.
Models are matched against some hypothesis set of possible features. Matching
produces a correspondence between components of the model and elements of the
hypothesis set, and also aligns the model and the set ("pose estimation" in vision,
and "time-alignment" in speech). An essential fact is that, because models are
known a priori, in cases where there are many models it is usually possible and
profitable to construct an index into the set of models. Use of the index at runtime
restricts the set of models that need actually be matched to a few, high-probability
ones.
Model-matching is a common stage in sensory data processing. Phoneme, character
and word HMMs are models, where the hypothesis set is a string of observations
and the matching process is either of the usual Viterbi or trellis procedures. For
phonemes and characters, the HMMs used typically all have the same graph structure, so control flow in the matching process is not model-dependent and may be
encoded in the instruction stream. Word models have differing structure, and control flow is model-dependent. In vision, model-matching has been used in a variety
of complicated ways (cf. (Suetens, Fua & Hanson 1992)), for example, graph models
may have constraints between node attribute values, to be resolved during matching.
4
DATA AND KNOWLEDGE PARALLELISM
SIMD is a type of computer architecture. At the algorithm level, it corresponds
to data parallelism. Data parallelism, applying the same procedure in parallel to
multiple pieces of data, is the most common explicit parallelization technique. and is
the essence of the Single Program Multiple Data (SPMD) programming model. On
a distributed memory machine, SPMD can be stylized as "given a limited amount
of (algorithmic) knowledge to be applied to a large piece of data, distribute the data
and broadcast the knowledge" .
In sensory processing systems, conversely, one may have a large amount of knowledge (many models) that need to be applied to a (smallish) piece of data, for example, a speech signal frame or segment, or a restricted region of an image. In this
case, it makes sense to "distribute the knowledge and broadcast the data". Model-matching often works well on an SIMD architecture, e.g. for identical phoneme
models. However, when matching requires differing control flow between models,
an SIMD implementation can be inefficient.
Data and knowledge parallelism are asymmetrical, however, in two ways. First,
all data must normally be processed, while there are usually indexing techniques
that greatly restrict the number of models that actually must be matched. Second, processing an array element frequently requires information about neighboring
elements; when the data is partitioned among multiple processors, this may require inter-processor communication and synchronization. Conversely, models on
different processors can be matched to data in their local memories without any
inter-processor communication. The latter observation leads to the SFMD model.
5
PROGRAMMING MODEL
We view support for SFMD as functionality to be added to an existing SIMD machine to increase its flexibility, scope, and power. As such, the SFMD programming
model should be an extension of the SIMD one. Given an SIMD architecture with
the local addressing and local conditional execution, SFMD programming is made
available at the assembly language level by adding three constructs:
distribute n tells the sequencer and PEs that the next n instructions are to be
distributed for independent execution on the PEs. We call the next n
instructions an SFMD block.
sync tells the individual PEs to suspend execution and signal the controller (barrier
synchronization). This is a no-op if not within an SFMD block.
branch-local one or more local branch instruction(s), including a loop construct;
the branch target must lie within the enclosing SFMD block. This is a
no-op if not within an SFMD block.
We further require that code within an SFMD block contain only references to PElocal memory; none to global (sequencer) variables, to external memory or to the
local memory of another PE. It must also contain no inter-PE communication.
When the PEs are independently executing an SFMD block, we say that the system
is in SFMD mode, and refer to normal execution as SIMD mode.
When programming in a data-parallel 'C'-like language for an SIMD machine, use of
SFMD functionality can be an optimization performed by the compiler, completely
hidden from the user. Variable type and usage analysis can determine for any given
block of code whether the constraints on non-local references are met, and emit
code for SFMD execution if so. No new problems are introduced for debugging, as
SFMD execution is semantically equivalent to executing on each PE sequentially,
and can be executed this way during debugging.
To the programmer, SFMD ameliorates two inefficiencies o~SIMD programming: (i)
in conditionals, a PE need not be idle while other PEs execute the branch it didn't
take, and (ii) loops and recursions may execute a processor-dependent number of
times.
6
HARDWARE MODEL AND COST
We are interested in embedded, "delivery system" applications. Such systems must
have few chips; scalability to 100's or 1000's of chips is not an issue. Parallelism
is thus achieved with multiple PEs per chip. As off-chip I/O is always expensive
compared to computation!, such chips can contain only a relatively small number
of processors. Thus, as feature size decreases, area will go to local memory and
processor complexity, rather than more processors.
Adding SFMD functionality to an architecture whose PEs have local addressing
and local conditional execution is straightforward. Here we outline an example
implementation. Hardware for branch tests and decoding sequencer instructions
in the instruction register (IR) already exists. Local memory is suitable for local
addressing. A very simple "micro-sequencer" must be added, consisting essentially
of a program counter (PC) and instruction buffer (IM), and some simple decode
logic. The existing PE output path can be used for the barrier synchronization. A
1-bit path from the sequencer to each PE is added for interrupting local execution.
Execution of a distribute n instruction on a PE causes the next n instructions to
be stored sequentially in IM, starting at the current address in the PC. The (n+1)'st
instruction is executed in SFMD mode; it is typically either a branch-local to start
execution, or possibly a sync if the instructions are just being cached2.
Almost the entire cost of providing SFMD functionality is silicon area used by the
1M. The 1M contains inner loop code, or model-driven conditional code, which is
likely to be small. For a 256 4-byte instruction buffer on the current ASI CNAPS
1064, having 64 PEs with 4KB memory each, this is about 11% of the chip area;
for a hypothetical 16 PE, 16K per PE chip of the same size, it is 3%. These
numbers are large, but as feature size decreases, the incremental cost of adding
SFMD functionality to an SIMD architecture quickly becomes small.
7
PERFORMANCE
What performance improvement may be expected by adding SFMD to SIMD? There
are two basic components, improvement on branches, and improvement on nested
loops, where the inner loop count varies locally.
Unnested (equiprobable) branches speed up most when the branch bodies have the
same size, with a factor of 2 improvement. For nested branches of depth d, the
factor is 2d, but these are probably unusual. An exception would be applying a
decision tree classifier in a data-parallel way.
To examine improvement on nested loops, suppose we have a set of N models (or
any independent tasks) to be evaluated en an architecture with P processors. On
IE.g., due to limited pin count, pad area, and slower clock off-chip.
2For example, if the distributed code is a subroutine that will be encountered again.
an SFMD architecture, we partition the set into P groups, assign each group to a
processor, and have each processor evaluate all the models in its group. If evaluating
the j'th model of the i'th group takes time t_ij^(sfmd), then the total time is
T_sfmd = max_i Σ_{j=1..N_i} t_ij^(sfmd)        (1)
where N_i is the size of the i'th group, and Σ_i N_i = N. On an SIMD architecture, we
partition the set into ⌈N/P⌉ groups of size P and sequentially evaluate each group
in parallel. Each group has a model that takes the most time to evaluate; SIMD
execution forces the whole group to have this time complexity. So, evaluating a
single group, G_i, takes time max_j t_ij^(simd), where j indexes over the elements of the
group, 1 ≤ j ≤ P. The total time for SIMD execution is then
T_simd = Σ_{i=1..⌈N/P⌉} max_{1≤j≤P} t_ij^(simd)        (2)
Ignoring data-dependent branching and taking t_ij^(simd) = t_ij^(sfmd) ≡ t_ij, we see that
optimal (i, j)-indexing of the N models for either case is a bin packing problem. As
such, (i, j)-indexing will be heuristic, and we examine T_simd/T_sfmd by simulation.
It should be clear that the expected improvement due to SFMD cannot be large
unless the outer loop count is large. So, for model matching, improvement on nested
loops is likely not an important factor, as usually only a few models are matched
at once.
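Equations (1) and (2) can be checked directly on a small table of per-model times. The times and the round-robin packing below are invented for illustration; they are not the paper's experiment:

```python
# Direct transcription of equations (1) and (2) for an invented table of
# per-model evaluation times t_ij.

def t_sfmd(groups):
    # equation (1): each processor sums the times of its own group; the
    # block finishes when the slowest processor does
    return max(sum(g) for g in groups)

def t_simd(times, P):
    # equation (2): ceil(N/P) lockstep rounds, each costing the max time
    # of the P models evaluated in that round
    rounds = [times[i:i + P] for i in range(0, len(times), P)]
    return sum(max(r) for r in rounds)

times = [3, 1, 4, 1, 5, 9, 2, 6]            # invented times, N = 8
P = 4
groups = [times[i::P] for i in range(P)]    # naive round-robin packing
print(t_simd(times, P), t_sfmd(groups))     # 13 10: a speedup of 1.3
```

Better packings of the t_ij would narrow or widen the gap, which is the bin-packing observation made above.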
To examine the possible magnitude of the effect in general, we look instead at
multiplication of an input vector by a large sparse matrix. Rows are partitioned
among the PEs, and each PE computes all the row-vector inner products for its set
of rows3. T_sfmd is given by equation (1), with {t_ij | 1 ≤ j ≤ N_i} the set of all rows
for processor i. T_simd is given by equation (2), with {t_ij | 1 ≤ j ≤ P} the set of rows
executed by all processors at time i. Here tij is the time to perform a row-vector
inner product.
Under a variety of choices of matrix size (256 x 256 to 2048 x 2048), number of
processors (16,32,64), distribution of elements (uniform, clustered around the diagonal), and sparsity (fraction of nonzero elements from 0.001 to 0.4) we get that the
ratio T_simd/T_sfmd decreases from around 2.2-2.7 for sparsities near 0.001, to 1.2
for sparsities near 0.06, and to 1.1 or less for more dense matrices (Figure 1). The
effect is thus not dramatic.
As an example of the potential utility of SFMD functionality for model matching,
we consider interpretation tree search (ITS), a technique used in vision4 . ITS is
a technique for establishing a correspondence between image and model features.
It consists essentially of depth-first search (DFS) , where a node on level d of the
tree corresponds to a pairing of image features with the first d model features.
The search is limited by a variety of unary and binary geometric constraints on
the allowed pairings. Search complexity implies small models are matched to small
3We assume the assignment of rows to PEs is independent of the number of nonzero
elements in the rows. If not, then for N ≫ P, simply sorting rows by number of elements
and then assigning row i to processor i mod P is a good enough packing heuristic to make
T_sfmd ≈ T_simd.
4See (Grimson 1990) for a complete description of ITS and for the complexity results
alluded to here.
Figure 1: Sparse matrices: speedup vs. sparsity
numbers of data features, so distributing models and data to local memories is
practical.
To examine the effect of SFMD on this form of model matching, we performed some
simple simulations. To match a model with D features to a set of B data points,
we attempt to match the first model feature with each data point in order, with
some probability of success, Pmatch. If we succeed, we attempt to match the second
model feature with one of the remaining B-1 data points, and so on. If we match
all D features, we then check for global consistency of the correspondence, with
some probability of success, Pcheck. This procedure is equivalent to DFS in a tree
with branching factor B − d at level d of the tree, 1 ≤ d ≤ D, where the probability
of expanding any given node is Pmatch, and the probability of stopping the search
at any given leaf is 1 - Pcheck.
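The stochastic tree search described above can be sketched as a seeded Monte-Carlo simulation. The helper name `its_leaf_fraction` and the parameter values are illustrative, not from the paper:

```python
import random

# Monte-Carlo sketch of the ITS search: branching factor B - d at depth d,
# expansion probability p_match, and (per the text) probability 1 - p_check
# of stopping the search at any given leaf. Returns the fraction of
# traversed nodes that are leaves, i.e. the quantity p used in equation (3).

def its_leaf_fraction(B, D, p_match, p_check, rng):
    leaves = nodes = 0
    stack = [0]                        # DFS stack of node depths; root at 0
    while stack:
        d = stack.pop()
        nodes += 1
        if d == D:                     # all D model features matched: a leaf
            leaves += 1
            if rng.random() < 1.0 - p_check:
                break                  # search stops at this leaf
            continue
        for _ in range(B - d):         # B - d data points left at this level
            if rng.random() < p_match:
                stack.append(d + 1)
    return leaves / nodes

rng = random.Random(0)
p = its_leaf_fraction(B=10, D=8, p_match=0.2, p_check=1.0, rng=rng)
print(0.0 <= p <= 1.0)  # True
```

Sweeping B, D, p_match and p_check over grids like those quoted below would reproduce the kind of scatter summarised in figure 2, panel 1.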
By writing the search as an iteration managing an explicit stack, one obtains a loop
with some common code and some code conditional on whether the current node
has any child nodes left to be expanded. The bulk of the "no-child" code deals with
leaf nodes, consisting of testing for global consistency and recording solutions. The
relative performance of SIMD and SFMD thus depends mainly on the probability,
p, that the node being traversed is a leaf. If, for each iteration, the time for the
leaf code is taken to be 1, that for common code is t, and that for the non-leaf code
is k, then
T_simd/T_sfmd = (t + k + 1) / (t + (1 − p)k + p).        (3)
Panel 1 of figure 2 shows values of p from a variety of simulations of ITS, with
B, D ∈ {8, 10, 12, 14, 16}, Pmatch ∈ {0.1, 0.2, 1/B}, Pcheck ∈ {0, 1}. Grimson (1990)
reports searches on realistic data of around 5000-10000 expansions; this corresponds
to p ≈ 0.2-0.4. Panel 2 of figure 2 shows how equation 3 behaves for p in this
regime and for realistic values of k. We see speedups in the range 2-4 unless the
leaf code is very small. In fact, the code for global consistency checking is typically
larger than that for local consistency, corresponding to log2 k < 0.
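Equation (3) is easy to evaluate at a few representative points; the sample values of p and k below are illustrative:

```python
# Equation (3): speedup of SFMD over SIMD for the stack-based ITS loop,
# with leaf probability p, non-leaf cost k and common cost t, the leaf
# cost being normalised to 1.

def speedup(p, k, t=0.1):
    return (t + k + 1.0) / (t + (1.0 - p) * k + p)

print(round(speedup(p=0.3, k=1.0), 3))   # 1.909: equal leaf/non-leaf cost
print(round(speedup(p=0.3, k=0.25), 3))  # 2.348: leaf code dominates
```

As the text notes, the gain grows when the leaf (global consistency) code outweighs the non-leaf code, i.e. when k < 1.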
8
OTHER USES
There are a number of uses for SFMD, other than model matching. First, common
"subroutines" involving branching may be kept in the 1M. Analysis of code for IEEE
floating point emulation on an SIMD machine shows an expected 2x improvement by
Figure 2: DFS speedup. Panel 1 shows the probability, p, of traversing a leaf. Panel
2 plots equation 3 for realistic values of p and k, with t = 0.1.
using SFMD. Second, simple PE-local searches and sorts should show a significant,
sub-2x, improvement in expected time. Third, more speculatively, different PEs
can execute entirely different tasks by having the SFMD block consist of a single
(nested) if-then-else. This would allow a form of (highly synchronized) pipeline
parallelism by communicating results in SIMD mode after the end of the SFMD
block.
9
CONCLUSION
We have introduced the SFMD computation model as a natural way of implementing the common task of model matching, and have shown how it extends SIMD
computing, giving it greater flexibility and power. SFMD functionality can easily,
and relatively cheaply, be added to existing SIMD designs already having a high
degree of processor autonomy. The addition can be made without altering the
user's programming model or environment. We have argued that technology trends
will force multiple-processor-per-chip systems to increase processor complexity and
memory, rather than increase the number of processors per chip, and believe
that the SFMD model is a natural step in that evolution.
Acknowledgements
The first author gratefully acknowledges support under ARPA/ONR grants N00014-94-C-0130, N00014-92-J-4062, and N00014-94-1-0071.
References
Grimson, W. E. L. (1990), Object Recognition by Computer: The Role of Geometric Constraints, MIT Press.
Hammerstrom, D. (1990), A VLSI architecture for high-performance, low-cost, on-chip
learning, in 'The Proceedings of the IJCNN'.
Suetens, P., Fua, P. & Hanson, A. J. (1992), 'Computational strategies for object recognition', Computing Surveys 24(1), 5-61.
| 1166 |@word version:1 nd:1 instruction:20 simulation:3 dramatic:1 initial:2 inefficiency:1 contains:1 existing:5 current:4 com:1 yet:1 assigning:1 written:1 must:7 realistic:3 partition:2 plot:1 v:1 leaf:7 cse:1 location:1 node:7 interprocessor:1 pairing:2 consists:1 dan:1 sync:2 introduce:2 inter:4 expected:4 frequently:3 examine:4 lib:1 becomes:1 matched:7 panel:4 didn:1 what:1 string:2 tmd:2 differing:3 hypothetical:1 runtime:1 classifier:1 control:3 normally:1 grant:1 engineering:1 local:21 establishing:1 path:2 conversely:2 hmms:2 limited:3 graduate:1 range:1 practical:1 testing:1 block:9 implement:1 procedure:3 sequencer:7 area:4 asi:2 spmd:3 matching:24 word:3 pre:1 idle:2 get:1 cannot:1 applying:2 writing:1 equivalent:2 go:1 straightforward:1 starting:1 independently:4 cnaps:1 survey:1 communicating:1 array:1 profitable:1 target:1 suppose:1 user:2 decode:1 programming:9 us:2 hypothesis:5 element:9 trend:1 recognition:3 expensive:1 role:1 region:1 decrease:3 counter:1 grimson:3 environment:1 complexity:5 segment:1 completely:1 packing:2 resolved:1 stylized:1 easily:1 chip:11 tell:2 whose:1 encoded:1 larger:2 heuristic:2 say:1 sequence:1 product:2 neighboring:1 loop:10 flexibility:5 description:1 scalability:1 produce:1 incremental:3 executing:2 object:3 pose:1 op:2 implies:1 synchronized:2 differ:1 met:1 imd:2 emulation:1 dfs:3 functionality:10 attribute:1 kb:1 programmer:3 implementing:1 bin:1 require:2 argued:1 assign:1 clustered:1 traversed:1 extension:1 pl:1 around:3 normal:1 scope:3 viterbi:1 algorithmic:1 purpose:2 estimation:1 largest:1 mit:1 always:1 rather:4 focus:1 improvement:9 portland:1 check:1 mainly:1 rehfuss:3 greatly:1 sense:1 dependent:4 stopping:1 unary:1 typically:3 entire:1 pad:1 hidden:1 vlsi:1 subroutine:2 interested:1 issue:1 classification:3 among:2 priori:2 construct:3 simd:33 having:6 once:1 identical:2 synchronizing:1 look:1 others:1 report:1 micro:1 few:3 distinguishes:1 equiprobable:1 simultaneously:1 recognize:1 individual:1 
cheaper:1 maxj:1 floating:1 consisting:2 attempt:2 highly:1 alignment:1 pc:2 emit:1 edge:1 traversing:1 unless:2 tree:5 arpa:1 increased:1 altering:1 assignment:1 cost:5 addressing:4 uniform:1 stored:1 varies:1 combined:2 st:1 ie:1 off:2 decoding:1 quickly:1 again:1 speculatively:1 manage:1 broadcast:2 possibly:1 external:1 inefficient:1 style:1 potential:2 distribute:4 oregon:1 explicitly:1 register:1 depends:1 stream:4 piece:4 performed:2 view:2 compiler:2 start:1 sort:1 parallel:6 complicated:1 ir:1 ni:2 phoneme:3 produced:1 none:1 processor:21 executes:1 aligns:1 against:1 nonetheless:1 knowledge:7 actually:2 steve:1 fua:2 execute:7 box:1 evaluated:1 just:1 stage:10 implicit:2 clock:1 synchronizes:3 mode:4 believe:1 usage:1 effect:3 usa:1 contain:3 asymmetrical:1 evolution:1 nonzero:2 strom:1 deal:1 ogi:1 during:2 branching:3 essence:1 outline:1 complete:1 image:3 common:6 behaves:1 interpretation:1 refer:1 silicon:1 significant:1 consistency:4 language:2 gratefully:1 access:2 driven:1 certain:2 buffer:2 n00014:2 binary:1 success:2 onr:1 greater:1 managing:1 determine:2 signal:2 ii:1 branch:12 multiple:8 faster:2 match:4 spar:1 post:1 ameliorates:1 involving:1 basic:1 controller:2 essentially:2 vision:2 iteration:2 achieved:1 background:1 conditionals:1 addition:3 else:2 parallelization:1 probably:1 recording:1 flow:3 mod:1 call:1 near:2 granularity:1 enough:1 pmatch:3 variety:4 architecture:16 restrict:1 inner:4 whether:3 expression:1 utility:1 distributing:1 speech:2 cause:1 generally:1 tij:2 clear:1 amount:3 locally:1 hardware:5 processed:1 restricts:1 per:4 bulk:1 group:11 kept:1 graph:2 fraction:1 lng:1 almost:1 reasonable:1 extends:1 interrupting:1 delivery:1 decision:1 bit:1 entirely:1 gnu:1 correspondence:3 encountered:1 ijcnn:1 constraint:4 speed:1 performing:2 expanded:1 relatively:2 speedup:4 department:1 debugging:2 character:2 partitioned:2 restricted:1 indexing:3 taken:1 pipeline:1 equation:4 alluded:1 pin:1 count:3 end:1 unusual:1 available:1 
appropriate:1 slower:1 hammerstrom:5 remaining:1 include:1 cf:1 assembly:1 log2:1 giving:1 added:6 already:3 strategy:1 usual:1 diagonal:1 md:1 said:1 entity:2 outer:1 code:16 index:3 mimd:3 providing:1 ratio:1 executed:3 expense:2 implementation:2 enclosing:1 design:1 perform:1 observation:3 t:1 communication:4 frame:1 stack:1 arbitrary:1 introduced:2 hanson:2 address:3 parallelism:10 usually:3 regime:1 sparsity:4 program:3 including:1 memory:12 power:3 suitable:3 natural:2 force:2 recursion:1 technology:2 picture:1 acknowledges:1 byte:1 review:1 geometric:2 acknowledgement:1 checking:1 multiplication:1 relative:1 synchronization:7 embedded:1 par:1 degree:2 maxt:1 autonomy:2 row:10 allow:3 institute:1 taking:1 barrier:2 sparse:2 distributed:3 boundary:1 depth:2 evaluating:2 computes:1 sensory:7 author:1 collection:1 made:2 obtains:1 implicitly:2 logic:1 global:4 sequentially:3 search:8 expanding:1 ignoring:1 expansion:1 complex:2 dense:1 whole:1 allowed:1 child:2 fmd:1 body:1 en:1 trellis:1 sub:1 explicit:2 lie:1 pe:31 third:1 essential:1 exists:1 consist:1 adding:6 magnitude:1 execution:14 sorting:1 easier:1 simply:1 likely:3 cheaply:1 visual:1 corresponds:3 nested:5 succeed:1 conditional:6 determined:1 semantically:1 total:2 exception:1 support:3 latter:1 evaluate:3 handling:1 |
Bayesian Methods for Mixtures of Experts
Steve Waterhouse
Cambridge University
Engineering Department
Cambridge CB2 1PZ
England
Tel: [+44] 1223 332754
srw1001@eng.cam.ac.uk
David MacKay
Cavendish Laboratory
Madingley Rd.
Cambridge CB3 OHE
England
Tel: [+44] 1223 337238
mackay@mrao.cam.ac.uk
Tony Robinson
Cambridge University
Engineering Department
Cambridge CB2 1PZ
England.
Tel: [+44] 1223 332815
ajr@eng.cam.ac.uk
ABSTRACT
We present a Bayesian framework for inferring the parameters of
a mixture of experts model based on ensemble learning by variational free energy minimisation. The Bayesian approach avoids the
over-fitting and noise level under-estimation problems of traditional
maximum likelihood inference. We demonstrate these methods on
artificial problems and sunspot time series prediction.
INTRODUCTION
The task of estimating the parameters of adaptive models such as artificial neural
networks using Maximum Likelihood (ML) is well documented, e.g. Geman, Bienenstock & Doursat (1992).
a process known as "over-fitting". ML also yields over-confident predictions; in
regression problems for example, ML underestimates the noise level. This problem
is particularly dominant in models where the ratio of the number of data points in
the training set to the number of parameters in the model is low. In this paper we
consider inference of the parameters of the hierarchical mixture of experts (HME)
architecture (Jordan & Jacobs 1994). This model consists of a series of "experts,"
each modelling different processes assumed to be underlying causes of the data.
Since each expert may focus on a different subset of the data which may be arbitrarily small, the possibility of over-fitting of each process is increased. We use
Bayesian methods (MacKay 1992a) to avoid over-fitting by specifying prior belief
in various aspects of the model and marginalising over parameter uncertainty.
The use of regularisation or "weight decay" corresponds to the prior assumption
that the model should have smooth outputs. This is equivalent to a prior p(θ|α) on
the parameters θ of the model, where α are the hyperparameters of the prior. Given
a set of priors we may specify a posterior distribution of the parameters given data
D,
p(θ|D, α, H) ∝ p(D|θ, H) p(θ|α, H),        (1)
where the variable H encompasses the assumptions of model architecture, type of
regularisation used and assumed noise model. Maximising the posterior gives us
the most probable parameters θ_MP. We may then set the hyperparameters either
by cross-validation, or by finding the maximum of the posterior distribution of the
hyperparameters P(α|D), also known as the "evidence" (Gull 1989). In this paper
we describe a method, motivated by the Expectation Maximisation (EM) algorithm
of Dempster, Laird & Rubin (1977) and the principle of ensemble learning by variational free energy minimisation (Hinton & van Camp 1993, Neal & Hinton 1993)
which achieves simultaneous optimisation of the parameters and hyperparameters
of the HME. We then demonstrate this algorithm on two simulated examples and a
time series prediction task. In each task the use of the Bayesian methods prevents
over-fitting of the data and gives better prediction performance. Before we describe
this algorithm, we will specify the model and its associated priors.
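As a concrete illustration of equation (1), a Gaussian prior on a single linear weight turns the ML estimate into a shrunken MAP estimate. The data and hyperparameter values below are invented, and the closed form is for this toy one-parameter model only:

```python
# MAP estimation for y = w*x + noise with a Gaussian "weight decay" prior
# w ~ N(0, 1/alpha): maximising the log posterior of equation (1) shrinks
# the ML estimate towards zero.

def w_ml(xs, ys):
    # maximum-likelihood slope for y = w * x (no intercept)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def w_map(xs, ys, alpha, beta=1.0):
    # argmax of -(beta/2) sum (y - w x)^2 - (alpha/2) w^2
    return beta * sum(x * y for x, y in zip(xs, ys)) / (
        beta * sum(x * x for x in xs) + alpha)

xs = [1.0, 2.0, 3.0]             # invented data
ys = [2.1, 3.9, 6.2]
print(w_ml(xs, ys))              # unregularised fit
print(w_map(xs, ys, alpha=2.0))  # shrunk towards zero by the prior
```

As alpha → 0 the MAP estimate recovers the ML one; larger alpha expresses a stronger smoothness belief.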
MIXTURES OF EXPERTS
The mixture of experts architecture (Jordan & Jacobs 1994) consists of a set of
"experts" which perform local function approximation. The expert outputs are
combined by a "gate" to form the overall output. In the hierarchical case, the
experts are themselves mixtures of further experts, thus extending the network in
a tree structured fashion. The model is a generative one in which we assume that
data are generated in the domain by a series of J independent processes which
are selected in a stochastic manner. We specify a set of indicator variables Z =
{zj(n) : j = 1 ... J, n = 1 ... N}, where zj(n) is 1 if the output y(n) was generated
by expert j and zero otherwise. Consider the case of regression over a data set
D = {x(n) ∈ ℝk, y(n) ∈ ℝp, n = 1 ... N} with p = 1. We specify that the conditional
probability of the scalar output y(n) given the input vector x(n) at exemplar (n) is
p(y(n)|x(n), Θ) = Σ_{j=1..J} P(zj(n)|x(n), ξj) p(y(n)|x(n), wj, βj),        (2)
where {ξj ∈ ℝk} is the set of gate parameters, and {(wj ∈ ℝk), βj} the set of expert
parameters. In this case, p(y(n)|x(n), wj, βj) is a Gaussian:
(3)
where 1/ {3j is the variance of expert j, I and Jt) = !}(x(n), Wj) is the output of expert
j, giving a probabilistic mixture model. In this paper we restrict the expert output
to be a linear function of the input, !}(x(n>, Wj) = w"f x(n). We model the action of
selecting process j with the gate, the outputs of which are given by the softmax
function of the inner products of the input vector 2 and the gate parameter vectors.
The conditional probability of selecting expert j given input x(n) is thus:
(4)
A straightforward extension of this model also gives us the conditional probability
h_j^{(n)} of expert j having been selected, given input x^{(n)} and output y^{(n)}:

h_j^{(n)} = P(z_j^{(n)} = 1 | y^{(n)}, x^{(n)}, \Theta) = g_j^{(n)} p_j^{(n)} \Big/ \sum_{i=1}^{J} g_i^{(n)} p_i^{(n)},    (5)

where p_j^{(n)} = p(y^{(n)} | x^{(n)}, w_j, \beta_j).
(1) Although \beta_j is a parameter of expert j, in common with MacKay (1992a) we consider
it as a hyperparameter on the Gaussian noise prior.
(2) In all notation, we assume that the input vector is augmented by a constant term,
which avoids the need to specify a "bias" term in the parameter vectors.
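The generative model of equations (2)-(5) can be sketched numerically. The following NumPy fragment is ours, not from the paper (the function names are invented); it computes the softmax gate, the Gaussian expert densities, the mixture likelihood, and the posterior responsibilities h_j for a single exemplar:

```python
import numpy as np

def gate_probs(x, xi):
    """Softmax gate (eq. 4): P(z_j = 1 | x) for each expert j.
    xi is a (J, k) array of gate parameter vectors; x is length-k,
    already augmented with a constant 1 for the bias term."""
    a = xi @ x
    a -= a.max()                     # numerical stability
    e = np.exp(a)
    return e / e.sum()

def expert_density(y, x, w, beta):
    """Gaussian expert density (eq. 3) with linear mean w^T x and
    precision beta (variance 1/beta)."""
    mu = w @ x
    return np.sqrt(beta / (2 * np.pi)) * np.exp(-0.5 * beta * (y - mu) ** 2)

def mixture_likelihood_and_responsibilities(y, x, xi, W, beta):
    """Eq. (2) likelihood p(y|x) and eq. (5) posteriors h_j."""
    g = gate_probs(x, xi)
    p = np.array([expert_density(y, x, w, b) for w, b in zip(W, beta)])
    lik = np.sum(g * p)
    h = g * p / lik
    return lik, h
```

With zero gate parameters the gate is uniform, and the responsibilities simply favour whichever expert's mean lies closest to the observed output.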
Bayesian Methods for Mixtures of Experts
PRIORS
We assume a separable prior on the parameters \Theta of the model:

P(\Theta | \alpha) = \prod_{j=1}^{J} P(\xi_j | \mu) P(w_j | \alpha_j),    (6)
where {\alpha_j} and {\mu} are the hyperparameters for the parameter vectors of the experts
and the gate respectively. We assume Gaussian priors on the parameters of the
experts {w_j} and the gate {\xi_j}, for example:

P(w_j | \alpha_j) = (\alpha_j / 2\pi)^{k/2} \exp\left( -\frac{\alpha_j}{2} w_j^T w_j \right).    (7)

For simplicity of notation, we shall refer to the set of all smoothness hyperparameters as \alpha = {\mu, \alpha_j} and the set of all noise-level hyperparameters as \beta = {\beta_j}.
Finally, we assume Gamma priors on the hyperparameters {\mu, \alpha_j, \beta_j} of the priors,
for example:

P(\log \beta_j | p_\beta, u_\beta) = \frac{1}{\Gamma(p_\beta)} \left( \frac{\beta_j}{u_\beta} \right)^{p_\beta} \exp(-\beta_j / u_\beta),    (8)

where u_\beta, p_\beta are the hyper-hyperparameters which specify the range in which we
expect the noise levels \beta_j to lie.
INFERRING PARAMETERS USING ENSEMBLE LEARNING
The EM algorithm was used by Jordan & Jacobs (1994) to train the HME in a
maximum likelihood framework. In the EM algorithm we specify a complete data set
{D, Z} which includes the observed data D and the set of indicator variables Z. Given
\Theta^{(m-1)}, the E step of the EM algorithm computes a distribution P(Z | D, \Theta^{(m-1)}) over
Z. The M step then maximises the expected value of the complete data likelihood
P(D, Z | \Theta) over this distribution. In the case of the HME, the indicator variables
Z = {z_j^{(n)}} specify which expert was responsible for generating the data at
each time.
We now outline an algorithm for the simultaneous optimisation of the parameters
\Theta and hyperparameters \alpha and \beta, using the framework of ensemble learning by
variational free energy minimisation (Hinton & van Camp 1993). Rather than
optimising a point estimate of \Theta, \alpha and \beta, we optimise a distribution over these
parameters. This builds on Neal & Hinton's (1993) description of the EM algorithm
in terms of variational free energy minimisation.
We first specify an approximating ensemble Q(w, \xi, \alpha, \beta, Z) which we optimise so that
it approximates the posterior distribution P(w, \xi, \alpha, \beta, Z | D, H) well. The objective
function chosen to measure the quality of the approximation is the variational free
energy,
F(Q) = \int dw \, d\xi \, d\alpha \, d\beta \, dZ \; Q(w, \xi, \alpha, \beta, Z) \log \frac{Q(w, \xi, \alpha, \beta, Z)}{P(w, \xi, \alpha, \beta, Z, D | H)},    (9)

where the joint probability of parameters {w, \xi}, hyperparameters {\alpha, \beta}, missing
data Z and observed data D is given by,
P(w, \xi, \alpha, \beta, Z, D | H) = P(\mu) \prod_{j=1}^{J} P(\xi_j | \mu) P(\alpha_j) P(w_j | \alpha_j) P(\beta_j | p_\beta, u_\beta) \prod_{n=1}^{N} \prod_{j=1}^{J} \left[ P(z_j^{(n)} = 1 | x^{(n)}, \xi_j) \, p(y^{(n)} | x^{(n)}, w_j, \beta_j) \right]^{z_j^{(n)}}.    (10)

The free energy can be viewed as the sum of the negative log evidence -\log P(D | H)
and the Kullback-Leibler divergence between Q and P(w, \xi, \alpha, \beta, Z | D, H). F is
bounded below by -\log P(D | H), with equality when Q = P(w, \xi, \alpha, \beta, Z | D, H).
We constrain the approximating ensemble Q to be separable in the form
Q(w, \xi, \alpha, \beta, Z) = Q(w) Q(\xi) Q(\alpha) Q(\beta) Q(Z). We find the optimal separable distribution Q by considering separately the optimisation of F over each separate ensemble
component Q(.) with all other components fixed.

Optimising Q_w(w) and Q_\xi(\xi)
As a functional of Q_w(w), F is

F = \int dw \, Q_w(w) \left[ \sum_{j=1}^{J} \left( \frac{\bar{\alpha}_j}{2} w_j^T w_j + \sum_{n=1}^{N} \bar{h}_j^{(n)} \frac{\bar{\beta}_j}{2} (y^{(n)} - \hat{y}_j^{(n)})^2 \right) + \log Q_w(w) \right] + \mathrm{const},    (11)

where for any variable a, \bar{a} denotes \int da \, Q(a) \, a. Noting that the w-dependent terms
are the log of a posterior distribution, and that a divergence \int Q \log(Q/P) is minimised
by setting Q = P, we can write down the distribution Q_w(w) that minimises this
expression. For given data and Q_\alpha, Q_\beta, Q_Z, Q_\xi, the optimising distribution Q_w^{opt}(w) is

Q_w^{opt}(w_j) \propto \exp\left( -\frac{\bar{\alpha}_j}{2} w_j^T w_j - \frac{\bar{\beta}_j}{2} \sum_{n=1}^{N} \bar{h}_j^{(n)} (y^{(n)} - w_j^T x^{(n)})^2 \right).    (12)

This is a set of J Gaussian distributions with means {\bar{w}_j}, which can be found
exactly by quadratic optimisation. We denote the variance-covariance matrices of
Q_w^{opt}(w_j) by {\Sigma_{w_j}}. The analogous expression for the gates, Q_\xi^{opt}(\xi), is obtained in a
similar fashion. We approximate each Q_\xi^{opt}(\xi_j) by a Gaussian distribution fitted at its maximum
\bar{\xi}_j = \hat{\xi}_j, with variance-covariance matrix \Sigma_{\xi_j}.
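The quadratic optimisation behind Q_w^{opt} amounts, for linear experts, to a responsibility-weighted ridge regression for each expert. The sketch below is ours, not the paper's code; it treats the expectations \bar{\alpha}_j, \bar{\beta}_j and the responsibilities \bar{h}_j^{(n)} as given:

```python
import numpy as np

def expert_posterior(X, y, h, alpha, beta):
    """Mean and covariance of the Gaussian Q_w(w_j) that minimises the
    w-dependent terms of eq. (11): a responsibility-weighted ridge
    regression. X is (N, k) inputs (with constant column), y is (N,)
    targets, h holds the responsibilities h_j^(n), and alpha, beta are
    the current hyperparameter expectations for this expert."""
    A = alpha * np.eye(X.shape[1]) + beta * (X.T * h) @ X   # precision of Q_w
    Sigma = np.linalg.inv(A)                                # covariance matrix
    w_bar = Sigma @ (beta * X.T @ (h * y))                  # posterior mean
    return w_bar, Sigma
```

With a vanishing prior (alpha near zero) and unit responsibilities this reduces to ordinary least squares; a large alpha shrinks the weights toward zero, which is the regularising effect of the Gaussian prior (7).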
Optimising Q_Z(Z)

By a similar procedure, the optimal distribution Q_Z^{opt}(Z) is given by

Q_Z^{opt}(Z) = \prod_{n=1}^{N} \prod_{j=1}^{J} \left( \bar{h}_j^{(n)} \right)^{z_j^{(n)}},    (14)

where

\bar{h}_j^{(n)} = g_j^{(n)} \hat{P}_j^{(n)} \Big/ \sum_{i=1}^{J} g_i^{(n)} \hat{P}_i^{(n)}, \qquad \hat{P}_j^{(n)} = \exp \left\langle \log p(y^{(n)} | x^{(n)}, w_j, \beta_j) \right\rangle,    (15)
and \hat{\xi}_j is the value of \xi_j computed above. The standard E-step gives us a distribution
of Z given a fixed value of parameters and the data, as shown in equation (5). In
this case, by finding the optimal Q_Z(Z) we obtain the alternative expression (15),
with dependencies on the uncertainty of the experts' predictions. Ideally (had we
not made the assumption of a separable distribution Q), Q_Z might be expected
to contain an additional effect of the uncertainty in the gate parameters. We can
introduce this by the method of MacKay (1992b) for marginalising classifiers, in the
case of binary gates.
Optimising Q_\alpha(\alpha) and Q_\beta(\beta)

Finally, for the hyperparameter distributions, the optimal ensemble functions give the values of \bar{\alpha}_j and \bar{\beta}_j; for example,

\frac{1}{\bar{\alpha}_j} = \frac{ \bar{w}_j^T \bar{w}_j + 2/u_\alpha + \mathrm{Trace}\, \Sigma_{w_j} }{ k + 2 p_\alpha },    (16)

and similarly for \bar{\beta}_j. An analogous procedure is used to set the hyperparameters {\mu} of the gate.
MAKING PREDICTIONS
In order to make predictions using the model, we must marginalise over the parameters and hyperparameters to get the predictive distribution. We use the optimal
distributions Q^{opt}(.) to approximate the posterior distribution.

For the experts, the marginalised outputs are given by \hat{y}_j^{(N+1)} = \hat{y}(x^{(N+1)}, \bar{w}_j), with
variance \sigma^2_{y | \alpha_j, \beta_j} = x^{(N+1)T} \Sigma_{w_j} x^{(N+1)} + \sigma_j^2, where \sigma_j^2 = 1/\bar{\beta}_j. We may also marginalise
over the gate parameters (MacKay 1992b) to give marginalised outputs for the gates.
The predictive distribution is then a mixture of Gaussians, with mean and variance
given by its first and second moments,

\hat{y}^{(N+1)} = \sum_{j=1}^{J} g_j^{(N+1)} \hat{y}_j^{(N+1)}; \qquad \sigma^2 = \sum_{j=1}^{J} g_j^{(N+1)} \left( \sigma^2_{y | \alpha_j, \beta_j} + (\hat{y}_j^{(N+1)})^2 \right) - (\hat{y}^{(N+1)})^2.    (17)
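The mixture moments of equation (17) are straightforward to compute; a small sketch in our own notation (gate weights g, component means y_hat, component variances var):

```python
import numpy as np

def predictive_moments(g, y_hat, var):
    """Mean and variance of the Gaussian-mixture predictive
    distribution, eq. (17): the first and second moments of a mixture
    with component means y_hat, variances var, and gate weights g."""
    mean = np.sum(g * y_hat)
    second = np.sum(g * (var + y_hat ** 2))   # E[y^2] of the mixture
    return mean, second - mean ** 2
```

Note that the mixture variance exceeds the average component variance whenever the component means disagree, which is how the gate's uncertainty about which expert applies shows up in the error bars.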
SIMULATIONS
Artificial Data
In order to test the performance of the Bayesian method, we constructed two artificial data sets. Both data sets consist of a known function corrupted by additive
zero mean Gaussian noise. The first data set, shown in Figure (1a) consists of
100 points from a piecewise linear function in which the leftmost portion is corrupted with noise of variance 3 times greater than the rightmost portion. The
second data set, shown in Figure (1b), consists of 100 points from the function
g(t) = 4.26 (e^{-t} - 4 e^{-2t} + 3 e^{-3t}), corrupted by Gaussian noise of constant variance
0.44. We trained a number of models on these data sets, and they provide a typical
set of results for the maximum likelihood and Bayesian methods , together with the
error bars on the Bayesian solutions. The model architecture used was a 6 deep
binary hierarchy of linear experts. In both cases, the ML solutions tend to overfit
the noise in the data set. The Bayesian solutions, on the other hand, are both
smooth functions which are better approximations to the underlying functions.
Time Series Prediction
The Bayesian method was also evaluated on a time series prediction problem. This
consists of yearly readings of sunspot activity from 1700 to 1979, and was first
[Figure 1: two panels, (a) and (b), each plotting the original function, the original plus noise, the ML solution, the Bayesian solution, and error bars.]
Figure 1: The effect of regularisation on fitting known functions corrupted with noise.
considered in the connectionist community by Weigend, Huberman & Rumelhart
(1990), who used an MLP with 8 hidden tanh units, to predict the coming year's
activity based on the activities of the previous 12 years. This data set was chosen
since it consists of a relatively small number of examples and thus the probability
of over-fitting sizeable models is large. In previous work, we considered the use of a
mixture of 7 experts on this problem. Due to the problems of over-fitting inherent
in ML however, we were constrained to using cross validation to stop the training
early. This also constrained the selection of the model order, since the branches of
deep networks tend to become "pinched off" during ML training , resulting in local
minima during training. The Bayesian method avoids this over-fitting of the gates
and allows us to use very large models.
Table 1: Single step prediction on the Sunspots data set using a lag vector of 12 years.
NMSE is the mean squared prediction error normalised by the variance of the entire
record from 1700 to 1979. The models used were; WHR: Weigend et al's MLP result;
1HME7_CV: mixture of 7 experts trained via maximum likelihood and using a 10%
cross validation scheme; 8HME2_ML & 8HME2_Bayes: 8 deep binary HME, trained via
maximum likelihood (ML) and Bayesian method (Bayes).
MODEL          Train NMSE    Test NMSE    Test NMSE
               1700-1920     1921-1955    1956-1979
WHR            0.082         0.086        0.35
1HME7_CV       0.061         0.089        0.27
8HME2_ML       0.052         0.162        0.41
8HME2_Bayes    0.079         0.089        0.26
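The NMSE figure used in the table is simply the prediction mean squared error divided by the variance of the full record; a one-line sketch (ours):

```python
import numpy as np

def nmse(y_true, y_pred, record):
    """Mean squared prediction error normalised by the variance of the
    full record (here, the 1700-1979 sunspot series), as in Table 1."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(record)
```

A value of 1.0 corresponds to a predictor no better than guessing the record's mean, so all the entries in the table represent substantial predictive skill.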
Table 1 shows the results obtained using a variety of methods on the sunspots
task. The Bayesian method performs significantly better on the test sets than the
maximum likelihood method (8HME2_ML), and is competitive with the MLP of
Weigend et al (WHR). It should be noted that even though the number of parameters in the 8 deep binary HME (4992) used is much larger than the number of
training examples (209), the Bayesian method still avoids over-fitting of the data.
This allows us to specify large models and avoids the need for prior architecture
selection, although in some cases such selection may be advantageous, for example
if the number of processes inherent in the data is known a-priori.
In our experience with linear experts, the smoothness prior on the output function
of the expert does not have an important effect; the prior on the gates and the
Bayesian inference of the noise level are the important factors. We expect that the
smoothness prior would become more important if the experts used more complex
basis functions.
DISCUSSION
The EM algorithm is a special case of the ensemble learning algorithm presented
here: the EM algorithm is obtained if we constrain Q_\Theta(\Theta) and Q_\beta(\beta) to be delta
functions and fix \alpha = 0. The Bayesian ensemble works better because it includes
regularization and because the uncertainty of the parameters is taken into account
when predictions are made. It could be of interest in future work to investigate how
other models trained by EM, such as hidden Markov models, could benefit from the
ensemble learning approach.
The Bayesian method of avoiding over-fitting has been shown to lend itself naturally
to the mixture of experts architecture. The Bayesian approach can be implemented
practically with only a small computational overhead and gives significantly better
performance than the ML model.
References

Dempster, A. P., Laird, N. M. & Rubin, D. B. (1977), 'Maximum likelihood from
incomplete data via the EM algorithm', Journal of the Royal Statistical Society,
Series B 39, 1-38.

Geman, S., Bienenstock, E. & Doursat, R. (1992), 'Neural networks and the bias
/ variance dilemma', Neural Computation 5, 1-58.

Gull, S. F. (1989), Developments in maximum entropy data analysis, in J. Skilling,
ed., 'Maximum Entropy and Bayesian Methods, Cambridge 1988', Kluwer,
Dordrecht, pp. 53-71.

Hinton, G. E. & van Camp, D. (1993), Keeping neural networks simple by minimizing the description length of the weights, To appear in: Proceedings of
COLT-93.

Jordan, M. I. & Jacobs, R. A. (1994), 'Hierarchical Mixtures of Experts and the
EM algorithm', Neural Computation 6, 181-214.

MacKay, D. J. C. (1992a), 'Bayesian interpolation', Neural Computation 4(3), 415-447.

MacKay, D. J. C. (1992b), 'The evidence framework applied to classification networks', Neural Computation 4(5), 698-714.

Neal, R. M. & Hinton, G. E. (1993), 'A new view of the EM algorithm that justifies incremental and other variants'. Submitted to Biometrika. Available at
URL: ftp://ftp.cs.toronto.edu/pub/radford/www.

Weigend, A. S., Huberman, B. A. & Rumelhart, D. E. (1990), 'Predicting the future:
a connectionist approach', International Journal of Neural Systems 1, 193-209.
Human Face Detection in Visual Scenes
Henry A. Rowley
Shumeet Baluja
Takeo Kanade
har@cs.cmu.edu
baluja@cs.cmu.edu
tk@cs.cmu.edu
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Abstract
We present a neural network-based face detection system. A retinally
connected neural network examines small windows of an image, and
decides whether each window contains a face. The system arbitrates
between multiple networks to improve performance over a single network.
We use a bootstrap algorithm for training, which adds false detections
into the training set as training progresses. This eliminates the difficult
task of manually selecting non-face training examples, which must be
chosen to span the entire space of non-face images. Comparisons with
another state-of-the-art face detection system are presented; our system
has better performance in terms of detection and false-positive rates.
1 INTRODUCTION
In this paper, we present a neural network-based algorithm to detect frontal views of faces
in gray-scale images. The algorithms and training methods are general, and can be applied
to other views of faces, as well as to similar object and pattern recognition problems.
Training a neural network for the face detection task is challenging because of the difficulty
in characterizing prototypical "non-face" images. Unlike in face recognition, where the
classes to be discriminated are different faces, in face detection, the two classes to be
discriminated are "images containing faces" and "images not containing faces". It is easy
to get a representative sample of images which contain faces, but much harder to get a
representative sample of those which do not. The size of the training set for the second
class can grow very quickly.
We avoid the problem of using a huge training set of non-faces by selectively adding images
to the training set as training progresses [Sung and Poggio, 1994]. This "bootstrapping"
method reduces the size of the training set needed. Detailed descriptions of this training
method, along with the network architecture are given in Section 2. In Section 3 the
performance of the system is examined. We find that the system is able to detect 92.9% of
faces with an acceptable number of false positives. Section 4 compares this system with a
similar system. Conclusions and directions for future research are presented in Section 5.
2
DESCRIPTION OF THE SYSTEM
Our system consists of two major parts: a set of neural network-based filters, and a system
to combine the filter outputs. Below, we describe the design and training of the filters,
which scan the input image for faces. This is followed by descriptions of algorithms for
arbitrating among multiple networks and for merging multiple overlapping detections.
2.1
STAGE ONE: A NEURAL -NETWORK-BASED FILTER
The first component of our system is a filter that receives as input a small square region of
the image, and generates an output ranging from 1 to -1, signifying the presence or absence
of a face, respectively. To detect faces anywhere in the input, the filter must be applied at
every location in the image. To allow detection of faces larger than the window size, the
input image is repeatedly reduced in size (by subsampling), and the filter is applied at each
size. The set of scaled input images is known as an "image pyramid", and is illustrated in
Figure 1. The filter itself must have some invariance to position and scale. The amount
of invariance in the filter determines the number of scales and positions at which the filter
must be applied.
With these points in mind, we can give the filtering algorithm (see Figure I). It consists
of two main steps: a preprocessing step, followed by a forward pass through a neural
network. The preprocessing consists of lighting correction, which equalizes the intensity
values across the window, followed by histogram equalization, which expands the range of
intensities in the window [Sung and Poggio, 1994]. The preprocessed window is used as
the input to the neural network. The network has retinal connections to its input layer; the
receptive fields of each hidden unit are shown in the figure. Although the figure shows a
single hidden unit for each subregion of the input, these units can be replicated. Similar
architectures are commonly used in speech and character recognition tasks [Waibel et al.,
1989, Le Cun et al., 1989].
[Figure 1 diagram: input image pyramid -> extracted window -> lighting correction -> histogram equalization (preprocessing) -> neural network.]
Figure 1: The basic algorithm used for face detection.
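The pyramid-and-preprocessing pipeline of Figure 1 can be sketched as follows. The scale factor of 1.2 and the nearest-neighbour subsampling are our assumptions for illustration; the paper only specifies repeated size reduction and a 20-by-20 window:

```python
import numpy as np

def histogram_equalize(win):
    """Spread the window's intensities across the full 0-255 range by
    mapping each pixel to its rank among the window's pixels."""
    flat = win.ravel()
    ranks = np.argsort(np.argsort(flat))          # rank of each pixel
    return (ranks.reshape(win.shape) * 255.0 / (flat.size - 1)).astype(np.uint8)

def pyramid(image, scale=1.2, min_size=20):
    """Repeatedly subsample the image so a fixed 20x20 filter can
    detect faces of many sizes. Subsampling here is plain
    nearest-neighbour index striding."""
    levels = [image]
    while min(levels[-1].shape) / scale >= min_size:
        h, w = levels[-1].shape
        rows = np.linspace(0, h - 1, int(h / scale)).astype(int)
        cols = np.linspace(0, w - 1, int(w / scale)).astype(int)
        levels.append(levels[-1][np.ix_(rows, cols)])
    return levels
```

The detector would then slide a 20x20 window over every level, preprocess it, and feed it to the network; the level at which a window fires determines the detected face's size.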
Examples of output from a single filter are shown in Figure 2. In the figure, each box
represents the position and size of a window to which the neural network gave a positive
response. The network has some invariance to position and scale, which results in multiple
boxes around some faces. Note that there are some false detections; we present methods to
eliminate them in Section 2.2. We next describe the training of the network which generated
this output.
2.1.1
Training Stage One
To train a neural network to serve as an accurate filter, a large number of face and non-face
images are needed. Nearly 1050 face examples were gathered from face databases at CMU
and Harvard. The images contained faces of various sizes, orientations, positions, and
intensities. The eyes and upper lip of each face were located manually, and these points
were used to normalize each face to the same scale, orientation, and position. A 20-by-20
pixel region containing the face is extracted and preprocessed (by applying lighting correction
and histogram equalization). In the training set, 15 faces were created from each original
image, by slightly rotating (up to 10°), scaling (90%-110%), translating (up to half a pixel),
Figure 2: Images with all
the above threshold detections indicated by boxes.
Figure 3: Example face images, randomly mirrored, rotated, translated. and scaled
by small amounts.
and mirroring each face. A few example images are shown in Figure 3.
It is difficult to collect a representative set of non-faces. Instead of collecting the images
before training is started, the images are collected during training, as follows [Sung and
Poggio, 1994]:
1. Create 1000 non-face images using random pixel intensities.
2. Train a neural network to produce an output of 1 for the face examples, and -1 for
the non-face examples.
3. Run the system on an image of scenery which contains no faces. Collect subimages
in which the network incorrectly identifies a face (an output activation> 0).
4. Select up to 250 of these subimages at random, and add them into the training set.
Go to step 2.
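One round of the bootstrap loop above can be sketched as follows. This is our illustration, not the paper's code: `detect` stands in for the trained network (any callable returning an activation, with values above 0 meaning "face"), and the sampling cap of 250 follows step 4:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_negatives(train, detect, scenery_windows, max_new=250):
    """One bootstrap round: run the current detector over face-free
    scenery windows and add a random sample of its false alarms to the
    non-face training set. Returns the number of false alarms found."""
    false_alarms = [w for w in scenery_windows if detect(w) > 0]
    rng.shuffle(false_alarms)          # pick the added negatives at random
    train.extend(false_alarms[:max_new])
    return len(false_alarms)
```

Because the scenery contains no faces, every window the detector fires on is by construction a false positive, so the negative set grows exactly where the current network is weakest.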
Some examples of non-faces that are collected during training are shown in Figure 4. We
used 120 images for collecting negative examples in this bootstrapping manner. A typical
training run selects approximately 8000 non-face images from the 146,212,178 subimages
that are available at all locations and scales in the scenery images.
Figure 4: Some non-face examples which are collected during training.
2.2 STAGE TWO: ARBITRATION AND MERGING OVERLAPPING
DETECTIONS
The examples in Figure 2 showed that just one network cannot eliminate all false detections.
To reduce the number of false positives, we apply two networks, and use arbitration to
produce the final decision. Each network is trained in a similar manner, with random
initial weights, random initial non-face images, and random permutations of the order of
presentation of the scenery images. The detection and false positive rates of the individual
networks are quite close. However, because of different training conditions and because
of self-selection of negative training examples, the networks will have different biases and
will make different errors.
For the work presented here, we used very simple arbitration strategies. Each detection
by a filter at a particular position and scale is recorded in an image pyramid. One way to
combine two such pyramids is by ANDing. This strategy signals a detection only if both
networks detect a face at precisely the same scale and position. This ensures that, if a
particular false detection is made by only one network, the combined output will not have
that error. The disadvantage is that if an actual face is detected by only one network, it will
be lost in the combination. Similar heuristics, such as ORing the outputs, were also tried.
Further heuristics (applied either before or after the arbitration step) can be used to improve
the performance of the system. Note that in Figure 2, most faces are detected at multiple
nearby positions or scales, while false detections often occur at single locations. At each
location in an image pyramid representing detections, the number of detections within a
specified neighborhood of that location can be counted. If the number is above a threshold,
then that location is classified as a face. These detections are then collapsed down to a
single point, located at their centroid. When this is done before arbitration, the centroid
locations rather than the actual outputs from the networks are ANDed together.
If we further assume that a position is correctly identified as a face, then all other detections
which overlap it are likely to be errors, and can therefore be eliminated. There are relatively
few cases in which this heuristic fails; however, one such case is illustrated in the left two
faces in Figure 2B, in which one face partially occludes another. Together, the steps of
combining multiple detections and eliminating overlapping detections will be referred to as
merging detections. In the next section, we show that by merging detections and arbitrating
among multiple networks, we can reduce the false detection rate significantly.
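The merging and ANDing heuristics can be sketched on boolean detection maps (one per pyramid level). This is our illustration; the neighbourhood radius and threshold values are placeholders, not values from the paper:

```python
import numpy as np

def merge_detections(det_map, radius=1, threshold=2):
    """Count detections within a neighbourhood of each detection in one
    pyramid level; locations reaching the threshold are collapsed to a
    single centroid detection. det_map is a boolean grid."""
    pts = np.argwhere(det_map)
    kept = []
    for p in pts:
        near = pts[np.all(np.abs(pts - p) <= radius, axis=1)]
        if len(near) >= threshold:
            kept.append(tuple(np.round(near.mean(axis=0)).astype(int)))
    return sorted(set(kept))

def arbitrate_and(map1, map2):
    """ANDing arbitration: keep a detection only if both networks fire
    at the same position and scale."""
    return np.logical_and(map1, map2)
```

Real faces tend to produce clusters of hits across nearby positions and scales, so the threshold removes isolated false detects while the centroid collapse leaves one detection per face.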
3 EMPIRICAL RESULTS
A large number of experiments were performed to evaluate the system. Because of space
restrictions only a few results are reported here; further results are presented in [Rowley et
al., 1995]. We first show an analysis of which features the neural network is using to detect
faces, and then present the error rates of the system over two large test sets.
3.1
SENSITIVITY ANALYSIS
In order to determine which part of the input image the network uses to decide whether
the input is a face, we performed a sensitivity analysis using the method of [Baluja and
Pomerleau, 1995]. We collected a test set of face images (based on the training database, but
with different randomized scales, translations, and rotations than were used for training),
and used a set of negative examples collected during the training of an earlier version of
the system. Each of the 20-by-20 pixel input images was divided into 100 two-by-two
pixel subimages. For each subimage in turn, we went through the test set, replacing that
subimage with random noise, and tested the neural network. The sum of squared errors
made by the network is an indication of how important that portion of the image is for the
detection task. Plots of the error rates for two networks we developed are shown in Figure 5.
Figure 5: Sum of squared errors (z-axis) on a small test set resulting from
adding noise to various portions of
the input image (horizontal plane),
for two networks. Network 1 uses
two sets of the hidden units illustrated in Figure 1, while network 2
uses three sets.
The networks rely most heavily on the eyes, then on the nose, and then on the mouth
(Figure 5). Anecdotally, we have seen this behavior on several real test images: the
network's accuracy decreases more when an eye is occluded than when the mouth is
occluded. Further, when both eyes of a face are occluded, it is rarely detected.
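The occlusion procedure behind Figure 5 can be sketched as follows; this is our illustration, with `net_error` standing in for the trained network's summed squared error over the test set:

```python
import numpy as np

def sensitivity_map(net_error, images, patch=2, size=20, seed=0):
    """Replace each patch x patch subimage with random noise across the
    whole test set and record the resulting error, following the method
    of Baluja and Pomerleau. `net_error` maps a batch of images to a
    summed squared error; large entries mark regions the network
    relies on most."""
    rng = np.random.default_rng(seed)
    grid = size // patch
    errs = np.empty((grid, grid))
    for r in range(grid):
        for c in range(grid):
            noisy = images.copy()
            block = rng.random((len(images), patch, patch))
            noisy[:, r*patch:(r+1)*patch, c*patch:(c+1)*patch] = block
            errs[r, c] = net_error(noisy)
    return errs
```

For the 20-by-20 window this yields the 10-by-10 error surface plotted in Figure 5, with peaks over the eye and nose regions.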
3.2 TESTING
The system was tested on two large sets of images. Test Set A was collected at CMU, and
consists of 42 scanned photographs, newspaper pictures, images collected from the World
Wide Web, and digitized television pictures. Test set B consists of 23 images provided
by Sung and Poggio; it was used in [Sung and Poggio, 1994] to measure the accuracy
of their system. These test sets require the system to analyze 22,053,124 and 9,678,084
windows, respectively. Table 1 shows the performance for the two networks working alone,
the effect of overlap elimination and collapsing multiple detections, and the results of using
ANDing and ~Ring for arbitration. Each system has a better false positive rate (but a worse
detection rate) on Test Set A than on Test Set B, because of differences in the types of
images in the two sets. Note that for systems using arbitration, the ratio of false detections
to windows examined is extremely low, ranging from 1 in 146,638 to 1 in 5,513,281,
depending on the type of arbitration used. Figure 6 shows some example output images
from the system, produced by merging the detections from networks 1 and 2, and ANDing
the results. Using another neural network to arbitrate among the two networks gives about
the same performance as the simpler schemes presented above [Rowley et al., 1995].
Table 1: Detection and Error Rates

                                            Test Set A (169 faces,           Test Set B (155 faces,
                                            22,053,124 windows)              9,678,084 windows)
System                                      Miss  Detect  False  False       Miss  Detect  False  False
                                                  rate    dets   rate              rate    dets   rate
0) Ideal System                                0  100.0%      0  0              0  100.0%      0  0
1) Network 1 (52 hidden units,
   2905 connections)                          17   89.9%    507  1/43497       11   92.9%    353  1/27417
2) Network 2 (78 hidden units,
   4357 connections)                          20   88.2%    385  1/57281       10   93.5%    347  1/27891
3) Network 1 -> merge detections              24   85.8%    222  1/99338       12   92.3%    126  1/76810
4) Network 2 -> merge detections              27   84.0%    179  1/123202      13   91.6%    123  1/78684
5) Networks 1 and 2 -> AND
   -> merge detections                        52   69.2%      4  1/5513281     34   78.1%      3  1/3226028
6) Networks 1 and 2 -> merge
   detections -> AND                          36   78.7%     15  1/1470208     20   87.1%     15  1/645206
7) Networks 1 and 2 -> merge
   -> OR -> merge                             26   84.6%     90  1/245035      11   92.9%     64  1/151220

(Rows 1-2: single network, no heuristics; rows 3-4: single network with heuristics; rows 5-7: arbitrating among two networks.)
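The rates in Table 1 follow directly from the miss and false-detect counts: for example, 17 misses out of 169 faces gives the 89.9% detection rate, and 507 false detects over 22,053,124 windows gives one false detect per 43,497 windows. A small sketch of this arithmetic:

```python
def rates(misses, faces, false_detects, windows):
    """Recompute Table 1's two rates: the fraction of faces found, and
    the number of windows examined per false detection."""
    detect_rate = (faces - misses) / faces
    windows_per_false = windows / false_detects
    return detect_rate, windows_per_false
```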
4 COMPARISON TO OTHER SYSTEMS
[Sung and Poggio, 1994] reports a face-detection system based on clustering techniques.
Their system, like ours, passes a small window over all portions of the image, and determines
whether a face exists in each window. Their system uses a supervised clustering method
with six "face" and six "non-face" clusters. Two distance metrics measure the distance of
an input image to the prototype clusters. The first metric measures the "partial" distance
between the test pattern and the cluster's 75 most significant eigenvectors. The second
distance metric is the Euclidean distance between the test pattern and its projection in
the 75 dimensional subspace. These distance measures have close ties with Principal
Components Analysis (PCA), as described in [Sung and Poggio, 1994]. The last step in
their system is to use either a perceptron or a neural network with a hidden layer, trained
to classify points using the two distances to each of the clusters (a total of 24 inputs).
Their system is trained with 4000 positive examples, and nearly 47500 negative examples
collected in the "bootstrap" manner. In comparison, our system uses approximately 16000
positive examples and 8000 negative examples.
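The two distance measures can be sketched as follows (a toy 3-D example with a 2-D subspace standing in for the 75 eigenvectors; the exact normalization used in their system may differ):

```python
# Toy sketch of the two per-cluster distances described above. The "partial"
# distance d1 is measured inside the subspace spanned by the cluster's top
# eigenvectors (normalized by the eigenvalues); d2 is the Euclidean distance
# from the test pattern to its projection into that subspace. The 3-D example
# and the normalization are illustrative assumptions, not the paper's code.

def cluster_distances(x, mu, eigvecs, eigvals):
    diff = [xi - mi for xi, mi in zip(x, mu)]
    coeffs = [sum(ei * di for ei, di in zip(e, diff)) for e in eigvecs]
    d1 = sum(c * c / v for c, v in zip(coeffs, eigvals))           # within-subspace
    proj = [sum(c * e[k] for c, e in zip(coeffs, eigvecs)) for k in range(len(x))]
    d2 = sum((di - pi) ** 2 for di, pi in zip(diff, proj)) ** 0.5  # to projection
    return d1, d2

mu = [0.0, 0.0, 0.0]
eigvecs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # orthonormal subspace basis
eigvals = [4.0, 1.0]                           # variances along those directions
d1, d2 = cluster_distances([2.0, 1.0, 3.0], mu, eigvecs, eigvals)
print(d1, d2)   # 2.0 3.0
```

Both distances for each of the twelve clusters (a total of 24 numbers) then feed the final perceptron or hidden-layer classifier.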
Table 2 shows the accuracy of their system on Test Set B, along with the results of our
880
H. A. ROWLEY, S. BALUJA, T. KANADE
Figure 6: Output produced by System 6 in Table 1. For each image, three numbers are shown:
the number of faces in the image, the number of faces detected correctly, and the number of false
detections. Some notes on specific images: Although the system was not trained on hand-drawn
faces, it detects them in K and R. One false detect is present in both D and R. Faces are missed in D
(removed because a false detect overlapped it), B (one due to occlusion, and one due to large angle),
and in N (babies with fingers in their mouths are not well represented in training data). Images B,
D, F, K, L, and M were provided by Sung and Poggio at MIT. Images A, G, O, and P were scanned
from photographs, image R was obtained with a CCD camera, images J and N were scanned from
newspapers, images H, I, and Q were scanned from printed photographs, and image C was obtained
off of the World Wide Web. Images P and B correspond to Figures 2A and 2B.
881
Human Face Detection in Visual Scenes
system using a variety of arbitration heuristics. In [Sung and Poggio, 1994], only 149 faces
were labelled in the test set, while we labelled 155 (some are difficult for either system to
detect). The number of missed faces is therefore six more than the values listed in their
paper. Also note that [Sung and Poggio, 1994] check a slightly smaller number of windows
over the entire test set; this is taken into account when computing the false detection rates.
The table shows that we can achieve higher detection rates with fewer false detections.
Table 2: Comparison of [Sung and Poggio, 1994] and Our System on Test Set B

System                                          Missed faces  Detect rate  False detects  Rate
5) Networks 1 and 2 → AND → merge               34            78.1%        3              1/3226028
6) Networks 1 and 2 → merge → AND               20            87.1%        15             1/645206
7) Networks 1 and 2 → merge → OR → merge        11            92.9%        64             1/151220
[Sung and Poggio, 1994] (Multi-layer network)   36            76.8%        5              1/1929655
[Sung and Poggio, 1994] (Perceptron)            28            81.9%        13             1/742175
CONCLUSIONS AND FUTURE RESEARCH
Our algorithm can detect up to 92.9% of faces in a set of test images with an acceptable
number of false positives. This is a higher detection rate than [Sung and Poggio, 1994]. The
system can be made more conservative by varying the arbitration heuristics or thresholds.
Currently, the system does not use temporal coherence to focus attention on particular
portions of the image. In motion sequences, the location of a face in one frame is a strong
predictor of the location of a face in the next frame. Standard tracking methods can be applied to
focus the detector's attention. The system's accuracy might be improved with more positive
examples for training, by using separate networks to recognize different head orientations,
or by applying more sophisticated image preprocessing and normalization techniques.
Acknowledgements
The authors thank Kah-Kay Sung and Dr. Tomaso Poggio (at MIT), Dr. Woodward Yang (at Harvard),
and Michael Smith (at CMU) for providing training and testing images. We also thank Eugene Fink,
Xue-Mei Wang, and Hao-Chi Wong for comments on drafts of this paper.
This work was partially supported by a grant from Siemens Corporate Research, Inc., and by the
Department of the Army, Army Research Office under grant number DAAH04-94-G-0006. Shumeet
Baluja was supported by a National Science Foundation Graduate Fellowship. The views and
conclusions in this document are those of the authors, and should not be interpreted as necessarily
representing official policies or endorsements, either expressed or implied, of the sponsoring agencies.
References
[Baluja and Pomerleau, 1995] Shumeet Baluja and Dean Pomerleau. Encouraging distributed input
reliance in spatially constrained artificial neural networks: Applications to visual scene analysis
and control. Submitted, 1995.
[Le Cun et al., 1989] Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural
Computation, 1:541-551, 1989.
[Rowley et al., 1995] Henry A. Rowley, Shumeet Baluja, and Takeo Kanade. Human face detection
in visual scenes. CMU-CS-95-158R, Carnegie Mellon University, November 1995. Also available
at http://www.cs.cmu.edu/~har/faces.html.
[Sung and Poggio, 1994] Kah-Kay Sung and Tomaso Poggio. Example-based learning for view-based human face detection. A.I. Memo 1521, CBCL Paper 112, MIT, December 1994.
[Waibel et al., 1989] Alex Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, and
Kevin J. Lang. Phoneme recognition using time-delay neural networks. Readings in Speech
Recognition, pages 393-404, 1989.
Continuous Time and Space
Kenji Doya
doya@hip.atr.co.jp
ATR Human Information Processing Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan
Abstract
A continuous-time, continuous-state version of the temporal difference (TD) algorithm is derived in order to facilitate the application
of reinforcement learning to real-world control tasks and neurobiological modeling. An optimal nonlinear feedback control law was
also derived using the derivatives of the value function. The performance of the algorithms was tested in a task of swinging up a
pendulum with limited torque. Both the "critic" that specifies the
paths to the upright position and the "actor" that works as a nonlinear feedback controller were successfully implemented by radial
basis function (RBF) networks.
1
INTRODUCTION
The temporal-difference (TD) algorithm (Sutton, 1988) for delayed reinforcement
learning has been applied to a variety of tasks, such as robot navigation, board
games, and biological modeling (Houk et al., 1994). Elucidation of the relationship
between TD learning and dynamic programming (DP) has provided good theoretical
insights (Barto et al., 1995). However, conventional TD algorithms were based on
discrete-time, discrete-state formulations. In applying these algorithms to control
problems, time, space and action had to be appropriately discretized using a priori
knowledge or by trial and error. Furthermore, when a TD algorithm is used for
neurobiological modeling, discrete-time operation is often very unnatural.
There have been several attempts to extend TD-like algorithms to continuous cases.
Bradtke et al. (1994) showed convergence results for DP-based algorithms for a
discrete-time, continuous-state linear system with a quadratic cost. Bradtke and
Duff (1995) derived TD-like algorithms for continuous-time, discrete-state systems
(semi-Markov decision problems). Baird (1993) proposed the "advantage updating"
algorithm by modifying Q-learning so that it works with arbitrarily small time steps.
K.DOYA
1074
In this paper, we derive a TD learning algorithm for continuous-time, continuous-state, nonlinear control problems. The correspondence of the continuous-time version to the conventional discrete-time version is also shown. The performance of
the algorithm was tested in a nonlinear control task of swinging up a pendulum
with limited torque.
2
CONTINUOUS-TIME TD LEARNING
We consider a continuous-time dynamical system (plant)
    ẋ(t) = f(x(t), u(t))   (1)
where x ∈ X ⊂ R^n is the state and u ∈ U ⊂ R^m is the control input (action). We
denote the immediate reinforcement (evaluation) for the state and the action as
r(t) = r(x(t), u(t)).
(2)
Our goal is to find a feedback control law (policy)
    u(t) = μ(x(t))   (3)
that maximizes the expected reinforcement for a certain period in the future. To
be specific, for a given control law μ, we define the "value" of the state x(t) as
    V^μ(x(t)) = ∫_t^∞ (1/τ) e^{−(s−t)/τ} r(x(s), u(s)) ds,   (4)
where x(s) and u(s) (t < s < ∞) follow the system dynamics (1) and the control
law (3). Our problem now is to find an optimal control law μ* that maximizes
V^μ(x) for any state x ∈ X. Note that τ is the time scale of "imminence-weighting"
and the scaling factor 1/τ is used for normalization, i.e., ∫_t^∞ (1/τ) e^{−(s−t)/τ} ds = 1.
2.1
TD ERROR
The basic idea in TD learning is to predict future reinforcement in an on-line manner. We first derive a local consistency condition for the value function V^μ(x). By
differentiating (4) by t, we have
    τ (d/dt) V^μ(x(t)) = V^μ(x(t)) − r(t).   (5)
Let P(t) be the prediction of the value function V^μ(x(t)) from x(t) (output of the
"critic"). If the prediction is perfect, it should satisfy τṖ(t) = P(t) − r(t). If this
is not satisfied, the prediction should be adjusted to decrease the inconsistency
    r̂(t) = r(t) − P(t) + τṖ(t).   (6)
This is a continuous version of the temporal difference error.
2.2
EULER DIFFERENTIATION: TD(0)
The relationship between the above continuous-time TD error and the discrete-time
TD error (Sutton, 1988)
    r̂(t) = r(t) + γP(t) − P(t − Δt)   (7)
can be easily seen by a backward Euler approximation of Ṗ(t). By substituting
Ṗ(t) = (P(t) − P(t − Δt))/Δt into (6), we have

    r̂ = r(t) + (τ/Δt) [ (1 − Δt/τ) P(t) − P(t − Δt) ] .
This coincides with (7) if we make the "discount factor" γ = 1 − Δt/τ ≈ e^{−Δt/τ}, except
for the scaling factor τ/Δt.
Now let us consider a case when the prediction of the value function is given by

    P(t) = Σ_i v_i b_i(x(t))   (8)

where b_i(·) are basis functions (e.g., sigmoid, Gaussian, etc.) and v_i are the weights.
The gradient descent of the squared TD error is given by

    Δv_i ∝ − ∂(r̂²(t)/2)/∂v_i ∝ − r̂(t) [ (1 − Δt/τ) ∂P(t)/∂v_i − ∂P(t − Δt)/∂v_i ] .
In order to "back-up" the information about the future reinforcement to correct the
prediction in the past, we should modify P(t − Δt) rather than P(t) in the above
formula. This results in the learning rule

    Δv_i ∝ r̂(t) ∂P(t − Δt)/∂v_i = r̂(t) b_i(x(t − Δt)) .   (9)
This is equivalent to the TD(0) algorithm that uses the "eligibility trace" from the
previous time step.
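As a deliberately tiny illustration of the Euler-discretized TD error (6) and learning rule (9) — the one-basis task, reward, and constants below are assumptions for illustration, not from the paper — the prediction converges to the value of a constant-reward state, which under (4) is exactly r:

```python
# Minimal sketch of the continuous TD(0) update: the TD error uses a backward
# Euler estimate of dP/dt, and the weight update uses the basis activation
# from the previous time step (eq. 9). Task and constants are illustrative.

def td0_step(v, b_prev, b_now, r, tau, dt, alpha):
    P_prev = v * b_prev                                  # P(t - dt)
    P_now = v * b_now                                    # P(t)
    td_err = r - P_now + tau * (P_now - P_prev) / dt     # eq. (6), Euler form
    return v + alpha * td_err * b_prev, td_err           # eq. (9)

v, tau, dt, alpha = 0.0, 1.0, 0.1, 0.1
for _ in range(500):
    v, td_err = td0_step(v, 1.0, 1.0, r=2.0, tau=tau, dt=dt, alpha=alpha)
print(round(v, 3))   # 2.0 — the prediction has converged to the true value
```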
2.3
SMOOTH DIFFERENTIATION: TD(λ)
The Euler approximation of a time derivative is susceptible to noise (e.g., when
we use stochastic control for exploration). Alternatively, we can use a "smooth"
differentiation algorithm that uses a weighted average of the past input, such as
    Ṗ(t) ≈ (P(t) − P̄(t)) / τ_c ,

where

    τ_c (d/dt) P̄(t) = P(t) − P̄(t)
and τ_c is the time constant of the differentiation. The corresponding gradient descent algorithm is
    Δv_i ∝ − ∂(r̂²(t)/2)/∂v_i ∝ r̂(t) ∂P̄(t)/∂v_i = r̂(t) b̄_i(t) ,   (10)

where b̄_i is the eligibility trace for the weight:

    τ_c (d/dt) b̄_i(t) = b_i(x(t)) − b̄_i(t) .   (11)
Note that this is equivalent to the TD(λ) algorithm (Sutton, 1988) with λ = 1 − Δt/τ_c if we discretize the above equation with time step Δt.
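A minimal sketch of the smoothed update (10)–(11): each weight keeps a low-pass-filtered eligibility trace of its basis activation and is moved along that trace by the TD error. The two-basis setting and all constants are illustrative assumptions:

```python
# Sketch of the eligibility-trace update of Section 2.3: eq. (11) discretized
# with step dt, then eq. (10) for the weights. Inputs here are illustrative.

def td_lambda_step(v, trace, b, td_err, tau_c, dt, alpha):
    new_trace = [tr + dt / tau_c * (bi - tr) for tr, bi in zip(trace, b)]  # eq. (11)
    new_v = [vi + alpha * td_err * tr for vi, tr in zip(v, new_trace)]     # eq. (10)
    return new_v, new_trace

v, trace = [0.0, 0.0], [0.0, 0.0]
v, trace = td_lambda_step(v, trace, b=[1.0, 0.0], td_err=0.5,
                          tau_c=0.5, dt=0.1, alpha=1.0)
print(v, trace)   # only the active basis accumulates trace and credit
```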
3
OPTIMAL CONTROL BY VALUE GRADIENT
3.1
HJB EQUATION
The value function V* for an optimal control μ* is defined as
    V*(x(t)) = max_{u[t,∞)} [ ∫_t^∞ (1/τ) e^{−(s−t)/τ} r(x(s), u(s)) ds ] .   (12)
According to the principle of dynamic programming (Bryson and Ho, 1975), we
consider optimization in two phases, [t, t + Δt] and [t + Δt, ∞), resulting in the
expression

    V*(x(t)) = max_{u[t,t+Δt)} [ ∫_t^{t+Δt} (1/τ) e^{−(s−t)/τ} r(x(s), u(s)) ds + e^{−Δt/τ} V*(x(t + Δt)) ] .
By Taylor expanding the value at t + Δt as

    V*(x(t + Δt)) = V*(x(t)) + (∂V*/∂x) f(x(t), u(t)) Δt + o(Δt)
and then taking Δt to zero, we have a differential constraint for the optimal value
function

    V*(x(t)) = max_{u(t)∈U} [ r(x(t), u(t)) + τ (∂V*/∂x) f(x(t), u(t)) ] .   (13)
This is a variant of the Hamilton-Jacobi-Bellman equation (Bryson and Ho, 1975)
for a discounted case.
3.2
OPTIMAL NONLINEAR FEEDBACK CONTROL
When the reinforcement r(x, u) is convex with respect to the control u, and the
vector field f(x, u) is linear with respect to u, the optimization problem in (13) has
a unique solution. The condition for the optimal control is
    ∂r(x, u)/∂u + τ (∂V*/∂x) ∂f(x, u)/∂u = 0 .   (14)
Now we consider the case when the cost for control is given by a convex potential
function G_j(·) for each control input:

    r(x, u) = r_x(x) − Σ_j G_j(u_j) ,

where the reinforcement for the state r_x(x) is still unknown. We also assume that the
input gain of the system
    b_j(x) = ∂f(x, u)/∂u_j
is available. In this case, the optimal condition (14) for u_j is given by

    −G′_j(u_j) + τ (∂V*/∂x) b_j(x) = 0 .
Noting that the derivative G′(·) is a monotonic function since G(·) is convex, we have
the optimal feedback control law

    u_j = (G′)^{−1}( τ (∂V*/∂x) b_j(x) ) .   (15)

Particularly, when the amplitude of control is bounded as |u_j| < u_j^max, we can
enforce this constraint using a control cost
    G_j(u_j) = c_j ∫_0^{u_j/u_j^max} g^{−1}(s) ds ,   (16)
where g^{−1}(·) is an inverse sigmoid function that diverges at ±1 (Hopfield, 1984). In
this case, the optimal feedback control law is given by

    u_j = u_j^max g( (u_j^max τ / c_j) (∂V*/∂x) b_j(x) ) .   (17)
In the limit of c_j → 0, this results in the "bang-bang" control law

    u_j = u_j^max sign[ (∂V*/∂x) b_j(x) ] .   (18)
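A sketch of the two laws (the arctangent sigmoid g is the one used in Section 5.1; the gradient value fed in below is an illustrative assumption):

```python
# Sketch of the smooth value-gradient law (17) and its bang-bang limit (18).
# dvdx_b stands for (dV*/dx)·b(x); its value here is illustrative.
import math

def g(x):
    # saturating sigmoid mapping R onto (-1, 1), as in Section 5.1
    return (2.0 / math.pi) * math.atan((math.pi / 2.0) * x)

def u_smooth(dvdx_b, u_max, tau, c):
    return u_max * g((u_max * tau / c) * dvdx_b)     # eq. (17)

def u_bang_bang(dvdx_b, u_max):
    return u_max * math.copysign(1.0, dvdx_b)        # eq. (18)

print(u_smooth(0.5, u_max=5.0, tau=1.0, c=0.1), u_bang_bang(0.5, u_max=5.0))
```

As the control cost c shrinks, the gain inside g grows and the smooth law approaches the bang-bang one.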
Figure 1: A pendulum with limited torque. The dynamics is given by ml²θ̈ =
−μθ̇ + mgl sin θ + T. Parameters were m = l = 1, g = 9.8, and μ = 0.01.
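The dynamics in Figure 1 can be stepped numerically; the forward-Euler integrator and step size below are assumptions for illustration (the paper does not specify its integration scheme):

```python
# Sketch of one integration step of the Figure 1 pendulum,
# ml²·θ'' = −μ·θ' + m·g·l·sin(θ) + T, with the paper's parameters
# m = l = 1, g = 9.8, μ = 0.01. Forward Euler with dt = 0.02 is assumed.
import math

M, L, G, MU = 1.0, 1.0, 9.8, 0.01

def pendulum_step(theta, omega, torque, dt=0.02):
    alpha = (-MU * omega + M * G * L * math.sin(theta) + torque) / (M * L ** 2)
    return theta + dt * omega, omega + dt * alpha

# hanging straight down (θ = π) with no torque is an equilibrium
theta, omega = pendulum_step(math.pi, 0.0, 0.0)
print(theta, omega)
```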
Figure 2: Left: The learning curves for (a) optimal control and (c) actor-critic.
L_up: time during which |θ| < 90°. Right: (b) The predicted value function P after
100 trials of optimal control. (d) The output of the controller after 100 trials with
actor-critic learning. The thick gray line shows the trajectory of the pendulum. th:
θ (degrees), om: θ̇ (degrees/sec).
4
ACTOR-CRITIC
When the information about the control cost, the input gain of the system, or the
gradient of the value function is not available, we cannot use the above optimal
control law. However, the TD error (6) can be used as "internal reinforcement" for
training a stochastic controller, or an "actor" (Barto et al., 1983).
In the simulation below, we combined our TD algorithm for the critic with a reinforcement learning algorithm for real-valued output (Gullapalli, 1990). The output
of the controller was given by
    u_j(t) = u_j^max g( Σ_i w_{ji} b_i(x(t)) + σ n_j(t) ) ,   (19)
where n_j(t) is normalized Gaussian noise and w_{ji} is a weight. The size of this perturbation was changed based on the predicted performance by σ = σ₀ exp(−P(t)).
The connection weights were changed by
    Δw_{ji} ∝ r̂(t) n_j(t) b_i(x(t)) .   (20)

5
SIMULATION
The performance of the above continuous-time TD algorithm was tested on a task
of swinging up a pendulum with limited torque (Figure 1). Control of this one-degree-of-freedom system is trivial near the upright equilibrium. However, bringing
the pendulum near the upright position is not if we set the maximal torque Tmax
smaller than mgl. The controller has to swing the pendulum several times to
build up enough momentum to bring it upright. Furthermore, the controller has to
decelerate the pendulum early enough to avoid falling over.
We used a radial basis function (RBF) network to approximate the value function
for the state of the pendulum x = (θ, θ̇). We prepared a fixed set of 12 × 12 Gaussian
basis functions. This is a natural extension of the "boxes" approach previously used
to control inverted pendulums (Barto et al., 1983). The immediate reinforcement
was given by the height of the tip of the pendulum, i.e., r_x = cos θ.
5.1
OPTIMAL CONTROL
First, we used the optimal control law (17) with the predicted value function P
instead of V*. We added noise to the control command to enhance exploration.
The torque was given by

    T = T^max g( (T^max τ / c) (∂P/∂x) b + σ n(t) ) ,

where g(x) = (2/π) tan^{−1}((π/2) x) (Hopfield, 1984). Note that the input gain b =
(0, 1/ml²)^T was constant. Parameters were T^max = 5, c = 0.1, σ₀ = 0.01, τ = 1.0,
and τ_c = 0.1.
Each run was started from a random θ and was continued for 20 seconds. Within
ten trials, the value function P became accurate enough to be able to swing up and
hold the pendulum (Figure 2a). An example of the predicted value function P after
100 trials is shown in Figure 2b. The paths toward the upright position, which were
implicitly determined by the dynamical properties of the system, can be seen as the
ridges of the value function. We also had successful results when the reinforcement
was given only near the goal: r_x = 1 if |θ| < 30°, −1 otherwise.
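The critic's representation used here — P(x) = Σ_i v_i b_i(x) over a fixed 12 × 12 grid of Gaussian basis functions on the (θ, θ̇) plane — can be sketched as follows (grid ranges, widths, and the unnormalized Gaussians are illustrative assumptions):

```python
# Sketch of a 12 x 12 Gaussian RBF critic over (θ, θ'). Centers span assumed
# ranges θ in [-π, π] and θ' in [-10, 10]; widths equal the grid spacing.
import math

N = 12
centers = [(-math.pi + 2 * math.pi * i / (N - 1),
            -10.0 + 20.0 * j / (N - 1))
           for i in range(N) for j in range(N)]
WIDTH_TH, WIDTH_OM = 2 * math.pi / (N - 1), 20.0 / (N - 1)

def basis(x):
    th, om = x
    return [math.exp(-((th - c1) / WIDTH_TH) ** 2 - ((om - c2) / WIDTH_OM) ** 2)
            for (c1, c2) in centers]

def P(v, x):
    # value prediction P(x) = Σ_i v_i b_i(x), eq. (8)
    return sum(vi * bi for vi, bi in zip(v, basis(x)))

v = [0.0] * (N * N)
print(len(basis((0.0, 0.0))), P(v, (0.0, 0.0)))   # 144 basis activations, P = 0
```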
5.2
ACTOR-CRITIC
Next, we tested the actor-critic learning scheme as described above. The controller
was also implemented by an RBF network with the same 12 × 12 basis functions as
the critic network. It took about one hundred trials to achieve reliable performance
(Figure 2c). Figure 2d shows an example of the output of the controller after 100
trials. We can see nearly linear feedback in the neighborhood of the upright position
and a non-linear torque field away from the equilibrium.
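One step of the actor update exercised in these runs — eq. (19) with exploration noise scaled by σ = σ₀ exp(−P(t)), and eq. (20) for the weights — can be sketched as follows (the scalar TD error, basis values, and constants are illustrative assumptions):

```python
# Sketch of a single actor step: eq. (19) for the noisy control output and
# eq. (20) for the weight change, driven by the TD error as internal
# reinforcement. All inputs below are illustrative.
import math
import random

def actor_step(w, b, p_pred, td_err, u_max, sigma0, alpha, rng):
    sigma = sigma0 * math.exp(-p_pred)     # perturbation shrinks as P grows
    n = rng.gauss(0.0, 1.0)                # exploration noise n_j(t)
    g = lambda x: (2.0 / math.pi) * math.atan((math.pi / 2.0) * x)
    u = u_max * g(sum(wi * bi for wi, bi in zip(w, b)) + sigma * n)   # eq. (19)
    new_w = [wi + alpha * td_err * n * bi for wi, bi in zip(w, b)]    # eq. (20)
    return u, new_w

rng = random.Random(0)
u, w = actor_step(w=[0.0, 0.0], b=[1.0, 0.5], p_pred=0.0, td_err=0.2,
                  u_max=5.0, sigma0=0.01, alpha=0.5, rng=rng)
print(abs(u) < 5.0)   # True: the sigmoid keeps the torque within the limit
```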
6
CONCLUSION
We derived a continuous-time, continuous-state version of the TD algorithm and
showed its applicability to a nonlinear control task. One advantage of continuous
formulation is that we can derive an explicit form of optimal control law as in (17)
using derivative information, whereas a one-ply search for the best action is usually
required in discrete formulations.
References
Baird III, L. C. (1993). Advantage updating. Technical Report WL-TR-93-1146,
Wright Laboratory, Wright-Patterson Air Force Base, OH 45433-7301, USA.
Barto, A. G. , Bradtke, S. J., and Singh, S. P. (1995). Learning to act using real-time
dynamic programming. Artificial Intelligence, 72:81-138.
Barto, A. G., Sutton, R. S., and Anderson, C. W. (1983). Neuronlike adaptive
elements that can solve difficult learning control problems. IEEE Transactions
on System, Man, and Cybernetics, SMC-13:834-846.
Bradtke, S. J. and Duff, M. O. (1995). Reinforcement learning methods for
continuous-time Markov decision problems. In Tesauro, G., Touretzky, D. S.,
and Leen, T. K., editors, Advances in Neural Information Processing Systems
7, pages 393-400. MIT Press, Cambridge, MA.
Bradtke, S. J., Ydstie, B. E., and Barto, A. G. (1994). Adaptive linear quadratic
control using policy iteration. CMPSCI Technical Report 94-49, University of
Massachusetts, Amherst, MA.
Bryson, Jr., A. E., and Ho, Y.-C. (1975). Applied Optimal Control. Hemisphere
Publishing, New York, 2nd edition.
Gullapalli, V. (1990). A stochastic reinforcement learning algorithm for learning
real-valued functions. Neural Networks, 3:671-692.
Hopfield, J. J. (1984). Neurons with graded response have collective computational
properties like those of two-state neurons. Proceedings of National Academy of
Science, 81:3088-3092.
Houk, J . C., Adams, J. L., and Barto, A. G. (1994). A model of how the basal
ganglia generate and use neural signals that predict reinforcement. In Houk,
J. C., Davis, J. L., and Beiser, D. G. , editors, Models of Information Processing
in the Basal Ganglia, pages 249-270. MIT Press, Cambridge, MA.
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences.
Machine Learning, 3:9-44.
the 11000 OCR Chip
John C. Platt and Timothy P. Allen
Synaptics, Inc.
2698 Orchard Parkway
San Jose, CA 95134
platt@synaptics.com, tpa@synaptics.com
Abstract
This paper describes a neural network classifier for the 11000 chip, which
optically reads the E13B font characters at the bottom of checks. The
first layer of the neural network is a hardware linear classifier which
recognizes the characters in this font. A second software neural layer
is implemented on an inexpensive microprocessor to clean up the results of the first layer. The hardware linear classifier is mathematically
specified using constraints and an optimization principle. The weights
of the classifier are found using the active set method, similar to Vapnik's separating hyperplane algorithm. In 7.5 minutes of SPARC 2 time,
the method solves for 1523 Lagrange multipliers, which is equivalent to
training on a data set of approximately 128,000 examples. The resulting network performs quite well: when tested on a test set of 1500 real
checks, it has a 99.995% character accuracy rate.
1
A BRIEF OVERVIEW OF THE 11000 CHIP
At Synaptics, we have created the 11000, an analog VLSI chip that, when combined
with associated software, optically reads the E13B font from the bottom of checks.
This E13B font is shown in figure 1. The overall architecture of the 11000 chip
is shown in figure 2. The 11000 recognizes checks hand-swiped through a slot. A
lens focuses the image of the bottom of the check onto the retina. The retina has
circuitry which locates the vertical position of the characters on the check . The
retina then sends an image vertically centered around a possible character to the
classifier.
The classifier in the 11000 has a tough job. It must be very accurate and immune
to noise and ink scribbles in the input . Therefore , we decided to use an integrated
segmentation and recognition approach (Martin & Pittman, 1992)(Platt, et al.,
1992). When the classifier produces a strong response, we know that a character is
horizontally centered in the retina.
939
A Neural Network Classifier for the 11000 OCR Chip
Figure 1: The E13B font, as seen by the 11000 chip
[Figure 2 diagram: a check swiped through the slot is imaged onto the retina; the retina sends an 18 by 24 image, vertically positioned, to the linear classifier; the classifier's 42 confidences feed a winner-take-all stage; a microprocessor reports the best character hypothesis. The retina, linear classifier, and winner-take-all are on the 11000 chip.]
Figure 2: The overall architecture of the 11000 chip
We decided to use analog VLSI to minimize the silicon area of the classifier. Because of the analog implementation, we decided to use a linear template classifier ,
with fixed weights in silicon to minimize area. The weights are encoded as lengths
of transistors acting as current sources. We trained the classifier using only the
specification of the font, because we did not have the real E13B data at the time of
classifier design. The design of the classifier is described in the next section.
As shown in figure 2, the input to the classifier is an 18 by 24 pixel image taken
from the retina at a rate of 20 thousand frames per second. The templates in the
classifier are 18 by 22 pixels. Each template is evaluated in three different vertical
positions , to allow the retina to send a slightly vertically mis-aligned character. The
output of the classifier is a set of 42 confidences, one for each of the 14 characters in
the font in three different vertical positions. These confidences are fed to a winner-take-all circuit (Lazzaro, et al., 1989), which finds the confidence and the identity
of the best character hypothesis.
2
SPECIFYING THE BEHAVIOR OF THE CLASSIFIER
Let us consider the training of one template corresponding to one of the characters
in the font. The template takes a vector of pixels as input . For ease of analog
implementation, the template is a linear neuron with no bias input:

o = Σ_i w_i I_i    (1)

where o is the output of the template, w_i are the weights of the template, and I_i
are the input pixels of the template.
We will now mathematically express the training of the templates as three types
of constraints on the weights of the template . The input vectors used by these
constraints are the ideal characters taken from the specification of the font .
The first type of constraint on the template is that the output of the template
should be above 1 when the character that corresponds to the template is centered
J. C. PLATT, T. P. ALLEN
Figure 3: Examples of images from the bad set for the templates trained to detect
the zero character. These images are E13B characters that have been horizontally
and vertically offset from the center of the image. The black border around each of
the characters shows the boundary of the input field. Notice the variety of horizontal
and vertical shifts of the different characters.
in the horizontal field. Call the vector of pixels of this centered character G. This
constraint is stated as:

Σ_i w_i G_i ≥ 1    (2)
The second type of constraint on the template is to have an output much lower than
1 when incorrect or offset characters are applied to the template. We collect these
incorrect and offset characters into a set of pixel vectors B^j, which we call the "bad
set." The constraint that the output of the template be lower than a constant c for
all of the vectors in the bad set is expressed as:

Σ_i w_i B_i^j ≤ c    for all j    (3)
Together, constraints (2) and (3) permit use of a simple threshold to distinguish
between a positive classifier response and a negative one.
The bad set contains examples of the correct character for the template that are
horizontally offset by at least two pixels and vertically offset by up to one pixel. In
addition, examples of all other characters are added to the bad set at every horizontal offset and with vertical offsets of up to one pixel (see figure 3). Vertically offset
examples are added to make the classifier resistant to characters whose baselines
are slightly mismatched.
The third type of constraint on the template requires that the output be invariant
to the addition of a constant to all of the input pixels. This constraint makes the
classifier immune to any changes in the background lighting level, k. This constraint
is equivalent to requiring the sum of the weights to be zero:
Σ_i w_i = 0    (4)
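This invariance is easy to check numerically: with zero-sum weights, adding any background level k to every pixel leaves the template output unchanged. The weights and pixels below are toy values chosen for illustration, not trained E13B templates.

```python
# Toy demonstration: a zero-sum weight vector (constraint 4) makes the
# template output invariant to a uniform background shift of the input.
def template_output(weights, pixels):
    return sum(w * p for w, p in zip(weights, pixels))

weights = [0.5, -0.25, 0.75, -1.0]      # sums to zero (constraint 4)
pixels = [0.2, 0.9, 0.4, 0.1]

base = template_output(weights, pixels)
for k in (0.0, 0.3, -1.7):              # different lighting levels
    shifted = [p + k for p in pixels]
    assert abs(template_output(weights, shifted) - base) < 1e-12
```

The shift contributes k·Σw_i to the output, which vanishes exactly when the weights sum to zero.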
Finally, an optimization principle is necessary to choose between all possible weight
vectors that fulfill constraints (2), (3), and (4). We minimize the perturbation of
the output of the template given uncorrelated random noise on the input. This
optimization principle is similar to training on a large data set, instead of simply
the ideal characters described by the specification. This optimization principle is
equivalent to minimizing the sum of the square of the weights:
min Σ_i w_i²    (5)
Expressing the training of the classifier as a combination of constraints and an
optimization principle allows us to compactly define its behavior. For example,
the combination of constraints (3) and (4) allows the classifier to be immune to
situations when two partial characters appear in the image at the same time. The
confluence of two characters in the image can be described as:
I_i^overlap = k + B_i^l + B_i^r    (6)
where k is a background value and B^l and B^r are partial characters from the bad
set that appear on the left side and right side of the image, respectively. The
output of the template is then:
O^overlap = Σ_i w_i (k + B_i^l + B_i^r) = Σ_i w_i k + Σ_i w_i B_i^l + Σ_i w_i B_i^r < 2c    (7)
Constraints (3) and (4) thus limit the output of the neuron to less than 2c when
two partial characters appear in the input. Therefore, we want c to be less than
0.5. In order to get a 2:1 margin, we choose c = 0.25.
The classifier is trained only on individual partial characters instead of all possible
combinations of partial characters. Therefore, we can specify the classifier using
only 1523 constraints, instead of creating a training set of approximately 128,000
possible combinations of partial characters. Applying these constraints is therefore
much faster than back-propagation on the entire data set.
Equations (2), (3) and (5) describe the optimization problem solved by Vapnik
(Vapnik, 1982) for constructing a hyperplane that separates two classes. Vapnik
solves this optimization problem by converting it into a dual space, where the inequality constraints become much simpler. However, we add the equality constraint
(4), which does not allow us to directly use Vapnik's dual space method. To overcome this limitation, we use the active set method, which can fulfill any extra linear
equality or inequality constraints. The active set method is described in the next
section.
3 THE ACTIVE SET METHOD
Notice that constraints (2), (3), and (4) are all linear in w_i. Therefore, minimizing (5) with these constraints is simply quadratic programming with a mixture of
equality and inequality constraints. This problem can be solved using the active set
method from optimization theory (Gill, et al., 1981).
When the quadratic programming problem is solved, some of the inequality constraints and all of the equality constraints will be "active." In other words, the active constraints affect the solution as equality constraints. The system has "bumped
into" these constraints. All other constraints will be inactive; they will not affect
the solution.
Once we know which constraints are active, we can easily solve the quadratic minimization problem with equality constraints via Lagrange multipliers. The solution
is a saddle point of the function:
½ Σ_i w_i² + Σ_k λ_k (Σ_i A_ki w_i − c_k)    (8)
where λ_k is the Lagrange multiplier of the kth active constraint, and A_ki and c_k
are the linear and constant coefficients of the kth active constraint. For example,
if constraint (2) is the kth active constraint, then A_k = G and c_k = 1. The saddle
point can be found via the set of linear equations:
point can be found via the set of linear equations:
Wi
2: AkAki
- 2)2: AjiAki)-lCj
-
(9)
k
(10)
j
The active set method determines which inequality constraints belong in the active
set by iteratively solving equation (10) above. At every step, one inequality constraint is either made active, or inactive. A constraint can be moved to the active
[Figure 4 diagram: a step in Lagrange-multiplier space toward the solution from equation (10); along the step, one multiplier reaches zero (making its constraint inactive, e.g. λ13 = 0) while another constraint becomes violated.]
Figure 4: The position along the step where the constraints become violated or the
Lagrange multipliers become zero can be computed analytically. The algorithm then
takes the largest possible step without violating constraints or having the Lagrange
multipliers become zero.
set if the inequality constraint is violated. A constraint can be moved off the active
set if its Lagrange multiplier has changed sign¹.
Each step of the active set method attempts to adjust the vector of Lagrange multipliers to the values provided by equation (10). Let us parameterize the step from
the old to the new Lagrange multipliers via a parameter α:

λ = λ⁰ + α δλ    (11)

where λ⁰ is the vector of Lagrange multipliers before the step, δλ is the step, and
when α = 1, the step is completed. Now, the amount of constraint violation and the
Lagrange multipliers are linear functions of this α. Therefore, we can analytically
derive the α at which a constraint is violated or a Lagrange multiplier changes sign
(see figure 4). For currently inactive constraints, the α for constraint violation is:
α_k = − (c_k + Σ_j λ_j Σ_i A_ji A_ki) / (Σ_j δλ_j Σ_i A_ji A_ki)    (12)
For a currently active constraint, the α for a Lagrange multiplier sign change is
simply:

α_k = −λ⁰_k / δλ_k    (13)
We choose the constraint that has the smallest positive α_k. If the smallest α_k is
greater than 1, then the system has found the solution, and the final weights are
computed from the Lagrange multipliers at the end of the step. Otherwise, if the kth
constraint is active, we make it inactive, and vice versa. We then set the Lagrange
multipliers to be the interpolated values from equation (11) with α = α_k. We finally
re-evaluate equation (10) with the updated active set².
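As a concrete toy illustration of equations (9) and (10), the sketch below computes the minimum-norm weights under two equality constraints in pure Python: it forms the Gram matrix Σ_i A_ji A_ki, solves for the multipliers as in (10), and recovers the weights as in (9). The constraint matrix and targets are invented for the example; a real template has 1523 mixed constraints and needs the full active set machinery.

```python
# Minimum-norm solution of A w = c via equations (9) and (10):
#   w_i      = -sum_k lambda_k A_ki
#   lambda   = -(A A^T)^{-1} c
def solve_2x2(M, b):
    """Solve M x = b for a 2x2 system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (b[1] * M[0][0] - b[0] * M[1][0]) / det]

A = [[1.0, 1.0, 0.0],   # constraint 1: w0 + w1      = 1
     [0.0, 1.0, 1.0]]   # constraint 2:      w1 + w2 = 2
c = [1.0, 2.0]

# Gram matrix G_jk = sum_i A_ji * A_ki
G = [[sum(A[j][i] * A[k][i] for i in range(3)) for k in range(2)]
     for j in range(2)]
lam = [-x for x in solve_2x2(G, c)]              # equation (10)
w = [-sum(lam[k] * A[k][i] for k in range(2))    # equation (9)
     for i in range(3)]

# The recovered weights satisfy both equality constraints.
assert all(abs(sum(A[j][i] * w[i] for i in range(3)) - c[j]) < 1e-9
           for j in range(2))
```

For this example the minimum-norm weights come out to (0, 1, 1), the shortest vector satisfying both constraints.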
When this optimization algorithm is applied to the E13B font, the templates that
result are shown in figure 5. When applied to characters that obey the specification,
the classifier is guaranteed to give a 2:1 margin between the correct peak and any
false peak caused by the confluence of two partial characters. Each template has
1523 constraints and takes 7.5 minutes on a SPARC 2 to train. Back-propagation on
the 128,000 training examples that are equivalent to the constraints would obviously
require much more computation time.
¹The sign of the Lagrange multiplier indicates on which side of the inequality constraint
the constrained minimum lies.
²For more details on active set methods, such as how to recognize infeasible constraints,
consult (Gill, et al., 1981).
Figure 5: The weights for the fourteen E13B templates. The light pixels correspond
to positive weights, while the dark pixels correspond to negative weights.
[Figure 6 diagram: a spatial window of 15 pixels over a history of 11000 outputs (each vertical column of the 14 outputs contains 13 zeros) feeds 2 hidden neurons, which drive 14 output neurons and a pinger neuron.]
Figure 6: The software second layer
4 THE SOFTWARE SECOND LAYER
As a test of the linear classifier, we fabricated the 11000 and tested it with E13B
characters on real checks. The system worked when the printing on the check obeyed
the contrast specification of the font. However, some check printing companies use
very light or very dark printing. Therefore, there was no single threshold that could
consistently read the lightly printed checks without hallucinating characters on the
dark checks. The retina shown in figure 2 does not have automatic gain control
(AGC). One solution would have been to refabricate the chip using an AGC retina.
However, we opted for a simpler solution.
The output of the 11000 chip is a 2-bit confidence level and a character code that
is sent to an inexpensive microprocessor every 50 microseconds. Because this output bandwidth is low, it is feasible to put a small software second layer into this
microprocessor to post-process and clean up the output of the 11000.
The architecture of this software second layer is shown in figure 6. The input to
the second layer is a linearly time-warped history of the output of the 11000 chip.
The time warping makes the second layer immune to changes in the velocity of the
check in the slot. There is one output neuron that is a "pinger." That is, it is
trained to turn on when the input to the 11000 chip is centered over any character
(Platt, et al. , 1992) (Martin & Pittman, 1992). There are fourteen other neurons
that each correspond to a character in the font. These neurons are trained to turn
on when the appropriate character is centered in the field, and otherwise turn off.
The classification output is the output of the fourteen neurons only when the pinger
neuron is on. Thus, the pinger neuron aids in segmentation.
Considering the entire network spanning both the hardware first layer and software
second layer, we have constructed a non-standard TDNN (Waibel, et. al., 1989)
which recognizes characters.
We trained the second layer using standard back-propagation, with a training set
gathered from real checks. Because the 11000 output bandwidth is quite low, collecting the data and training the network was not onerous. The second layer was
trained on a data set of approximately 1000 real checks.
5 OVERALL PERFORMANCE
When the hardware first layer in the 11000 is combined with the software second
layer, the performance of the system on real checks is quite impressive. We gathered
a test set of 1500 real checks from across the country. This test set contained a
variety of light and dark checks with unusual backgrounds. We swiped this test set
through one system. Out of the 1500 test checks, the system only failed to read 2,
due to staple holes in important locations of certain characters. As such, this test
yielded a 99.995% character accuracy on real data.
6 CONCLUSIONS
For the 11000 analog VLSI OCR chip, we have created an effective hardware linear
classifier that recognizes the E13B font. The behavior of this classifier was specified
using constrained optimization. The classifier was designed to have a predictable
margin of classification, be immune to lighting variations, and be resistant to random input noise. The classifier was trained using the active set method, which is an
enhancement of Vapnik's separating hyperplane algorithm. We used the active set
method to find the weights of a template in 7.5 minutes of SPARC 2 time, instead of
training on a data set with 128,000 examples. To make the overall system resistant
to contrast variation, we separately trained a software second layer on top of this
first hardware layer, thereby constructing a non-standard TDNN.
The application discussed in this paper shows the utility of using the active set
method to very rapidly create either a stand-alone linear classifier or a first layer of
a multi-layer network.
References
P. Gill, W. Murray, M. Wright (1981), Practical Optimization, Section 5.2, Academic Press.
J. Lazzaro, S. Ryckebusch, M. Mahowald, C. Mead (1989), "Winner-Take-All Networks of O(N) Complexity," Advances in Neural Information Processing Systems,
1, D. Touretzky, ed., Morgan-Kaufmann, San Mateo, CA.
G. Martin, M. Rashid (1992), "Recognizing Overlapping Hand-Printed Characters
by Centered-Object Integrated Segmentation and Recognition," Advances in Neural
Information Processing Systems, 4, Moody, J., Hanson, S., Lippmann, R., eds.,
Morgan-Kaufmann, San Mateo, CA.
J. Platt, J. Decker, and J. LeMoncheck (1992), Convolutional Neural Networks for
the Combined Segmentation and Recognition of Machine Printed Characters, USPS
5th Advanced Technology Conference, 2, 701-713.
V. Vapnik (1982), Estimation of Dependencies Based on Empirical Data, Addendum I, Section 2, Springer-Verlag.
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, K. Lang (1989), "Phoneme
Recognition Using Time-Delay Neural Networks," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, pp. 328-339.
Using Pairs of Data-Points to Define
Splits for Decision Trees
Geoffrey E. Hinton
Department of Computer Science
University of Toronto
Toronto, Ontario, M5S lA4, Canada
hinton@cs.toronto.edu
Michael Revow
Department of Computer Science
University of Toronto
Toronto, Ontario, M5S lA4, Canada
revow@cs.toronto.edu
Abstract
Conventional binary classification trees such as CART either split
the data using axis-aligned hyperplanes or they perform a computationally expensive search in the continuous space of hyperplanes
with unrestricted orientations. We show that the limitations of the
former can be overcome without resorting to the latter. For every
pair of training data-points, there is one hyperplane that is orthogonal to the line joining the data-points and bisects this line. Such
hyperplanes are plausible candidates for splits. In a comparison
on a suite of 12 datasets we found that this method of generating
candidate splits outperformed the standard methods, particularly
when the training sets were small.
1 Introduction
Binary decision trees come in many flavours, but they all rely on splitting the set of
k-dimensional data-points at each internal node into two disjoint sets. Each split is
usually performed by projecting the data onto some direction in the k-dimensional
space and then thresholding the scalar value of the projection. There are two
commonly used methods of picking a projection direction. The simplest method is
to restrict the allowable directions to the k axes defined by the data. This is the
default method used in CART [1]. If this set of directions is too restrictive, the
usual alternative is to search general directions in the full k-dimensional space or
general directions in a space defined by a subset of the k axes.
Projections onto one of the k axes defined by the the data have many advantages
G. E. HINTON, M. REVOW
over projections onto a more general direction:
1. It is very efficient to perform the projection for each of the data-points. We
simply ignore the values of the data-point on the other axes.
2. For N data-points, it is feasible to consider all possible axis-aligned projections and thresholds because there are only k possible projections and
for each of these there are at most N - 1 threshold values that yield different splits. Selecting from a fixed set of projections and thresholds is
simpler than searching the k-dimensional continuous space of hyperplanes
that correspond to unrestricted projections and thresholds.
3. Since a split is selected from only about N k candidates, it takes only about
log2 N + log2 k bits to define the split. So it should be possible to use many
more of these axis-aligned splits before overfitting occurs than if we use more
general hyperplanes. If the data-points are in general position, each subset
of size k defines a different hyperplane so there are N!/k!(N - k)! distinctly
different hyperplanes and if k < < N it takes approximately k log2 N bits
to specify one of them.
For some datasets, the restriction to axis-aligned projections is too limiting. This
is especially true for high-dimensional data, like images , in which there are strong
correlations between the intensities of neighbouring pixels. In such cases, many
axis-aligned boundaries may be required to approximate a planar boundary that
is not axis-aligned, so it is natural to consider unrestricted projections and some
versions of the CART program allow this. Unfortunately this greatly increases the
computational burden and the search may get trapped in local minima. Also significant care must be exercised to avoid overfitting. There is, however, an intermediate
approach which allows the projections to be non-axis-aligned but preserves all three
of the attractive properties of axis-aligned projections: It is trivial to decide which
side of the resulting hyperplane a given data-point lies on; the hyperplanes can be
selected from a modest-sized set of sensible candidates; and hence many splits can
be used before overfitting occurs because only a few bits are required to specify each
split.
2 Using two data-points to define a projection
Each pair of data-points defines a direction in the data space. This direction is a
plausible candidate for a projection to be used in splitting the data, especially if
it is a classification task and the two data-points are in different classes. For each
such direction, we could consider all of the N - 1 possible thresholds that would
give different splits, or, to save time and reduce complexity, we could only consider
the threshold value that is halfway between the two data-points that define the
projection. If we use this threshold value, each pair of data-points defines exactly
one hyperplane and we call the two data-points the "poles" of this hyperplane.
For a general k-dimensional hyperplane it requires O( k) operations to decide
whether a data-point, C, is on one side or the other. But we can save a factor
of k by using hyperplanes defined by pairs of data-points. If we already know the
distances of C from each of the two poles, A, B then we only need to compare
Figure 1: A hyperplane orthogonal to the line joining points A and B. We can
quickly determine on which side a test point, C, lies by comparing the distances
AC and BC.
these two distances (see figure 1).¹ So if we are willing to do O(kN²) operations to
compute all the pairwise distances between the data-points, we can then decide in
constant time which side of the hyperplane a point lies on.
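The distance comparison in figure 1 can be sketched in a few lines (the points here are illustrative): the hyperplane bisecting segment AB separates points closer to A from points closer to B, so one comparison of precomputed distances replaces a k-dimensional projection.

```python
# Which side of the pole-pair hyperplane is point C on?
# The hyperplane bisects segment AB, so C falls on A's side
# exactly when C is closer to A than to B.
def squared_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def side(A, B, C):
    """Return 'A' if C is on pole A's side of the bisecting hyperplane."""
    return 'A' if squared_dist(C, A) < squared_dist(C, B) else 'B'

A, B = (0.0, 0.0), (4.0, 0.0)
assert side(A, B, (1.0, 3.0)) == 'A'   # left of the bisector x = 2
assert side(A, B, (3.0, -2.0)) == 'B'  # right of the bisector
```

Since only the ordering of the two distances matters, squared distances suffice and no square roots are needed.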
As we are building the decision tree, we need to compute the gain in performance
from using each possible split at each existing terminal node. Since all the terminal
nodes combined contain N data-points and there are N(N − 1)/2 possible splits²,
this takes time O(N³) instead of O(kN³). So the work in computing all the pairwise
distances is trivial compared with the savings.
Using the Minimum Description Length framework, it is clear that pole-pair splits
can be described very cheaply, so a lot of them can be used before overfitting occurs.
When applying MDL to a supervised learning task we can assume that the receiver
gets to see the input vectors for free. It is only the output vectors that need to be
communicated. So if splits are selected from a set of N (N -1) /2 possibilities that is
determined by the input vectors, it takes only about 210g2 N bits to communicate
a split to a receiver. Even if we allow all N - 1 possible threshold values along
the projection defined by two data-points, it takes only about 310g2 N bits. So the
number of these splits that can be used before overfitting occurs should be greater by
a factor of about k/2 or k/3 than for general hyperplanes. Assuming that k ? N,
the same line of argument suggests that even more axis-aligned planes can be used,
but only by a factor of about 2 or 3.
To summarize, the hyperplanes planes defined by pairs of data-points are computationally convenient and seem like natural candidates for good splits. They overcome
the major weakness of axis-aligned splits and, because they can be specified in a
modest number of bits, they may be more effective than fully general hyperplanes
when the training set is small.
¹If the threshold value is not midway between the poles, we can still save a factor of k
but we need to compute (d²_AC − d²_BC)/2d_AB instead of just the sign of this expression.
²Since we only consider splits in which the poles are in different classes, this number
ignores a factor that is independent of N.
3 Building the decision tree
We want to compare the "pole-pair" method of generating candidate hyperplanes
with the standard axis-aligned method and the method that uses unrestricted hyperplanes. We can see no reason to expect strong interactions between the method
of building the tree and the method of generating the candidate hyperplanes, but
to minimize confounding effects we always use exactly the same method of building
the decision tree.
We faithfully followed the method described in [1], except for a small modification
where the code that was kindly supplied by Leo Breiman used a slightly different
method for determining the amount of pruning.
Training a decision tree involves two distinct stages. In the first stage, nodes are
repeatedly split until each terminal node is "pure" which means that all of its datapoints belong to the same class. The pure tree therefore fits the training data
perfectly. A node is split by considering all candidate decision planes and choosing
the one that maximizes the decrease in impurity. Breiman et al. recommend using
the Gini index to measure impurity.³ If p(j|t) is the probability of class j at node
t, then the Gini index is 1 − Σ_j p²(j|t).
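A minimal sketch of the impurity computation (pure Python; the labels and helper names are illustrative, not from the CART code used in the experiments):

```python
from collections import Counter

def gini(labels):
    """Gini index 1 - sum_j p(j|t)^2 for the labels at a node."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def impurity_decrease(parent, left, right):
    """Decrease in size-weighted Gini impurity achieved by a split."""
    n = len(parent)
    return (gini(parent)
            - (len(left) / n) * gini(left)
            - (len(right) / n) * gini(right))

labels = ['a', 'a', 'b', 'b']
assert gini(labels) == 0.5                                       # two equal classes
assert impurity_decrease(labels, ['a', 'a'], ['b', 'b']) == 0.5  # pure split
```

Each candidate hyperplane is scored by `impurity_decrease` over the data-points at the node, and the split with the largest decrease is chosen.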
Clearly the tree obtained at the end of the first stage will overfit the data and so in
the second stage the tree is pruned by recombining nodes. For a tree, T_i, with |T_i|
terminal nodes we consider the regularized cost:

E(T_i) + α|T_i|    (1)

where E is the classification error and α is a pruning parameter. In "weakest-link"
pruning the terminal nodes are eliminated in the order which keeps (1) minimal as
α increases. This leads to a particular sequence, T = {T_1, T_2, ... T_k} of subtrees,
in which |T_1| > |T_2| > ... > |T_k|. We call this the "main" sequence of subtrees because
they are trained on all of the training data.
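The weakest-link order can be pictured with a toy calculation: for each candidate subtree, the critical α at which collapsing it to a leaf stops increasing the cost in (1) is the error increase divided by the terminal nodes saved, and the subtree with the smallest critical α is pruned first. The error values below are invented for illustration.

```python
def critical_alpha(err_leaf, err_subtree, n_terminals):
    """Alpha at which collapsing a subtree to a single leaf stops
    raising the cost E + alpha * |T| of equation (1)."""
    return (err_leaf - err_subtree) / (n_terminals - 1)

# Invented candidates: (error as a collapsed leaf, subtree error, terminals).
candidates = {'t1': (0.30, 0.10, 5), 't2': (0.12, 0.10, 3)}
weakest = min(candidates, key=lambda t: critical_alpha(*candidates[t]))
assert weakest == 't2'   # cheapest error increase per terminal node removed
```

Repeatedly pruning the subtree with the smallest critical α produces the nested main sequence of subtrees described above.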
The last remaining issue to be resolved is which tree in the main sequence to use.
The simplest method is to use a separate validation set and choose the tree size
that gives best classification on it. Unfortunately, many of the datasets we used
were too small to hold back a reserved validation set. So we always used 10-fold
cross validation to pick the size of the tree. We first grew 10 different subsidiary
trees until their terminal nodes were pure, using 9/10 of the data for training each of
them. Then we pruned back each of these pure subsidiary trees, as above, producing
10 sequences of subsidiary subtrees. These subsidiary sequences could then be used
for estimating the performance of each subtree in the main sequence. For each of
the main subtrees, T_i, we found the largest tree in each subsidiary sequence that
was no larger than T_i and estimated the performance of T_i to be the average of the
performance achieved by each subsidiary subtree on the 1/10 of the data that was
not used for training that subsidiary tree. We then chose the T_i that achieved the
best performance estimate and used it on the test set.⁴ Results are expressed as
³Impurity is not an information measure but, like an information measure, it is minimized when all the nodes are pure and maximized when all classes at each node have equal
probability.
⁴This differs from the conventional application of cross validation, where it is used to
                 IR   TR   LV   DB   BC   GL   VW   WN   VH    WV   IS   SN
Size (N)        150  215  345  768  683  163  990  178  846  2100  351  208
Classes (c)       3    3    2    2    2    2   11    3    4     3    2    2
Attributes (k)    4    5    6    8    9    9   10   13   18    21   34   60
Table 1: Summary of the datasets used.
the ratio of the test error rate to the baseline rate, which is the error rate of a tree
with only a single terminal node.
4 The Datasets
Eleven datasets were selected from the database of machine learning tasks maintained by the University of California at Irvine (see the appendix for a list of the
datasets used). Except as noted in the appendix, the datasets were used exactly
in the form of the distribution as of June 1993. All datasets have only continuous
attributes and there are no missing values. 5 The synthetic "waves" example [1] was
added as a twelfth dataset.
Table 1 gives a brief description of the datasets. Datasets are identified by a two
letter abbreviation along the top. The rows in the table give the total number of
instances, number of classes and number of attributes for each dataset.
A few datasets in the original distribution have designated training and testing
subsets while others do not. To ensure regularity among datasets, we pooled all
usable examples in a given dataset, randomized the order in the pool and then
divided the pool into training and testing sets. Two divisions were considered. The
large training division had ~ of the pooled examples allocated to the training set
and ~ to the test set. The small training division had ~ of the data in the training
set and ~ in the test set.
5 Results
Table 2 gives the error rates for both the large and small divisions of the data,
expressed as a percentage of the error rate obtained by guessing the dominant
class.
In both the small and large training divisions of the datasets, the pole-pair method
had lower error rates than axis-aligned or linear cart in the majority of datasets
tested. While these results are interesting, they do not provide any measure of confidence that one method performs better or worse than another. Since all methods
were trained and tested on the same data, we can perform a two-tailed McNemar
test [2] on the predictions for pairs of methods. The resulting P-values are given
in table 3. On most of the tasks, the pole-pair method is significantly better than
at least one of the standard methods for at least one of the training set sizes and
there are only 2 tasks for which either of the other methods is significantly better
on either training set size.
determine the best value of α rather than the tree size
5In the BC dataset we removed the case identification number attribute and had to
delete 16 cases with missing values.
G. E. HINTON, M. REVOW

Database    Small Train                Large Train
            cart    linear   pole     cart    linear   pole
IR          14.3    14.3     4.3      5.6     5.6      5.6
TR          36.6    26.8     14.6     33.3    33.3     20.8
LV          88.9    100.0    100.0    108.7   87.0     97.8
DB          85.8    82.2     87.0     69.7    69.7     59.6
BC          12.8    14.1     8.3      15.7    12.0     9.6
GL          62.5    81.3     89.6     46.4    46.4     35.7
VW          31.8    37.7     30.0     21.4    26.2     19.2
WN          17.8    13.7     11.0     14.7    11.8     14.7
VH          42.5    46.5     44.2     36.2    43.9     40.7
WV          28.9    25.8     24.3     30.6    24.8     26.6
IS          44.0    41.7     31.0     21.4    23.8     42.9
SN          65.2    71.2     48.5     48.4    45.2     48.4

Table 2: Relative error rates expressed as a percentage of the baseline rate on the
small and large training sets.
6
Discussion
We only considered hyperplanes whose poles were in different classes, since these
seemed more plausible candidates. An alternative strategy is to disregard class
membership, and consider all possible pole-pairs. Another variant of the method
arises depending on whether the inputs are scaled. We transformed all inputs so
that the training data has zero mean and unit variance. However, using unscaled
inputs and/or allowing both poles to have the same class makes little difference to
the overall advantage of the pole-pair method.
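The zero-mean, unit-variance transform described above amounts to the usual standardization, fit on the training data only. A generic sketch (not the authors' code):

```python
from statistics import fmean, pstdev

def standardize(train, test):
    """Scale each input dimension to zero mean and unit variance,
    using statistics estimated on the training data only, so the
    test set sees exactly the same transform."""
    dims = len(train[0])
    means = [fmean(row[d] for row in train) for d in range(dims)]
    # guard against constant columns, whose deviation is zero
    sds = [pstdev([row[d] for row in train]) or 1.0 for d in range(dims)]
    def scale(data):
        return [[(row[d] - means[d]) / sds[d] for d in range(dims)]
                for row in data]
    return scale(train), scale(test)
```

Estimating the mean and deviation on the training set alone keeps the test data from leaking into the transform.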
To summarize, we have demonstrated that the pole-pair method is a simple, effective
method for generating projection directions at binary tree nodes. The same idea of
minimizing complexity by selecting among a sensible fixed set of possibilities rather
than searching a continuous space can also be applied to the choice of input-to-hidden weights in a neural network.
A
Databases used in the study
IR - Iris plant database.
TR - Thyroid gland data.
LV - BUPA liver disorders.
DB - Pima Indians Diabetes.
BC - Breast cancer database from the University of Wisconsin Hospitals.
GL - Glass identification database. In these experiments we only considered the
classification into float/nonfloat processed glass, ignoring other types of glass.
VW - Vowel recognition.
WN - Wine recognition.
VH - Vehicle silhouettes.
WV - Waveform example, the synthetic example from [1].
IS - Johns Hopkins University Ionosphere database.
SN - Sonar - mines versus rocks discrimination. We did not control for aspect-angle.
Using Pairs of Data Points to Define Splits for Decision Trees
Small Training - Large Test

             IR    TR    LV    DB    BC    GL    VW    WN    VH    WV    IS    SN
Axis-Pole    .02   ~     .18   .46   .06   .02   .24   .15   .33   .00   .44   .07
Linear-Pole  ~     .13   1.0   .26   ~     .30   .00   .41   .27   .17   .09   ~
Axis-Linear  1.0   .06   .18   .30   .40   J>O   J>O   .31   .08   .03   ~     .32

Large Training - Small Test

             IR    TR    LV    DB    BC    GL    VW    WN    VH    WV    IS    SN
Axis-Pole    .75   .23   .29   .01   .11   .29   .26   .69   .14   .08   .02   .60
Linear-Pole  .75   .23   .26   .01   .25   .30   J!!.  .50   .25   .26   .05   .50
Axis-Linear  1.0   1.0   .07   1.0   .29   .69   .06   .50   F3"   .01   .50   .50

Table 3: P-values using a two-tailed McNemar test on the small (top) and large
(bottom) training sets. Each row gives P-values when the methods in the leftmost
column are compared. A significant difference at the P = 0.05 level is indicated with
a line above (below) the P-value depending on whether the first (second) mentioned
method in the first column had superior performance. For example, in the topmost
row, the pole-pair method was significantly better than the axis-aligned method on
the TR dataset.
Acknowledgments
We thank Leo Breiman for kindly making his CART code available to us. This
research was funded by the Institute for Robotics and Intelligent Systems and by
NSERC. Hinton is a fellow of the Canadian Institute for Advanced Research.
References
[1] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and
regression trees. Wadsworth International Group, Belmont, California, 1984.
[2] J. L. Fleiss. Statistical methods for rates and proportions. Second edition. Wiley,
1981.
Stochastic Hillclimbing as a Baseline
Method for Evaluating Genetic
Algorithms
Ari Juels
Department of Computer Science
University of California at Berkeley*
Martin Wattenberg
Department of Mathematics
University of California at Berkeley†
Abstract
We investigate the effectiveness of stochastic hillclimbing as a baseline for
evaluating the performance of genetic algorithms (GAs) as combinatorial function optimizers. In particular, we address two problems to which
GAs have been applied in the literature: Koza's 11-multiplexer problem
and the jobshop problem. We demonstrate that simple stochastic hillclimbing methods are able to achieve results comparable or superior to
those obtained by the GAs designed to address these two problems. We
further illustrate, in the case of the jobshop problem, how insights obtained in the formulation of a stochastic hillclimbing algorithm can lead
to improvements in the encoding used by a GA.
1
Introduction
Genetic algorithms (GAs) are a class of randomized optimization heuristics based
loosely on the biological paradigm of natural selection. Among other proposed applications, they have been widely advocated in recent years as a general method
for obtaining approximate solutions to hard combinatorial optimization problems
using a minimum of information about the mathematical structure of these problems. By means of a general "evolutionary" strategy, GAs aim to maximize an
objective or fitness function 1 : 5 --t R over a combinatorial space 5, i.e., to find
some state s E 5 for which 1(s) is as large as possible. (The case in which 1 is to
be minimized is clearly symmetrical.) For a detailed description of the algorithm
see, for example, [7], which constitutes a standard text on the subject.
In this paper, we investigate the effectiveness of the GA in comparison with that
of stochastic hillclimbing (SH), a probabilistic variant of hillclimbing. As the term
*Supported in part by NSF Grant CCR-9505448. E-mail: juels@cs.berkeley.edu
†E-mail: wattenbe@math.berkeley.edu
"hillclimbing" suggests, if we view an optimization problem as a "landscape" in
which each point corresponds to a solution s and the "height" of the point corresponds to the fitness of the solution, f(s), then hillclimbing aims to ascend to a
peak by repeatedly moving to an adjacent state with a higher fitness.
A number of researchers in the GA community have already addressed the issue
of how various versions of hillclimbing on the space of bitstrings, {0, 1}^n, compare
with GAs [1] [4] [9] [18] [15]. Our investigations in this paper differ in two important
respects from these previous ones. First, we address more sophisticated problems
than the majority of these studies, which make use of test functions developed for
the purpose of exploring certain landscape characteristics. Second, we consider hillclimbing algorithms based on operators in some way "natural" to the combinatorial
structures of the problems to which we are seeking solutions, very much as GA designers attempt to do. In one of the two problems in this paper, our SH algorithm
employs an encoding exactly identical to that in the proposed GA . Consequently,
the hillclimbing algorithms we consider operate on structures other than bitstrings.
Constraints in space have required the omission of a great deal of material found
in the full version of this paper. This material includes the treatment of two additional problems: the NP-complete Maximum Cut Problem [11] and an NP-complete
problem known as the multiprocessor document allocation problem (MDAP). Also
in the full version of this paper is a substantially more thorough exposition of the
material presented here. The reader is encouraged to refer to [10], available on the
World Wide Web at http://www.cs.berkeley.edu/,,-,juelsj.
2
Stochastic Hillclimbing
The SH algorithm employed in this paper searches a discrete space S with the aim
of finding a state whose fitness is as high (or as low) as possible. The algorithm
does this by making successive improvements to some current state σ ∈ S. As is
the case with genetic algorithms, the form of the states in S depends upon how the
designer of the SH algorithm chooses to encode the solutions to the problems to be
solved: as bitstrings, permutations, or in some other form. The local improvements
effected by the SH algorithm are determined by the neighborhood structure and the
fitness function f imposed on S in the design of the algorithm. We can consider the
neighborhood structure as an undirected graph G on vertex set S. The algorithm
attempts to improve its current state σ by making a transition to one of the neighbors of σ in G. In particular, the algorithm chooses a state τ according to some
suitable probability distribution on the neighbors of σ. If the fitness of τ is at least
as good as that of σ, then τ becomes the new current state; otherwise σ is retained.
This process is then repeated.
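This acceptance loop can be sketched in a few lines. The neighbor-sampling distribution and fitness function are left to the caller, since they depend on the encoding; the sketch below is a minimal illustration, not the paper's code:

```python
import random

def stochastic_hillclimb(initial, random_neighbor, fitness, iterations):
    """Generic stochastic hillclimbing: draw a random neighbor tau of
    the current state sigma and accept it whenever its fitness is at
    least as good as the current fitness, as described above."""
    current = initial
    for _ in range(iterations):
        candidate = random_neighbor(current)
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current
```

Note that moves to equally fit neighbors are accepted, which lets the search drift across fitness plateaus instead of stalling on them.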
3
GP and Jobshop
3.1
The Experiments
In this section, we compare the performance of SH algorithms with that of GAs
proposed for two problems: the jobshop problem and Koza's 11-multiplexer problem. We gauge the performance of the GA and SH algorithms according to the
fitness of the best solution achieved after a fixed number of function evaluations,
rather than the running time of the algorithms. This is because evaluation of the
fitness function generally constitutes the most substantial portion of the execution
time of the optimization algorithm, and accords with standard practice in the GA
community.
3.2
Genetic Programming
"Genetic programming" (GP) is a method of enabling a genetic algorithm to search
a potentially infinite space of computer programs, rather than a space of fixed-length solutions to a combinatorial optimization problem. These programs take the
form of Lisp symbolic expressions, called S-expressions. The S-expressions in GP
correspond to programs which a user seeks to adapt to perform some pre-specified
task. Details on GP, an increasingly common GA application, and on the 11multiplexer problem which we address in this section, may be found, for example,
in [13J [12J [14J.
The boolean 11-multiplexer problem entails the generation of a program to perform the following task. A set of 11 distinct inputs is provided, with labels a0, a1, a2, d0, d1, ..., d7, where a stands for "address" and d for "data". Each
input takes the value 0 or 1. The task is to output the value dm, where m =
a0 + 2a1 + 4a2. In other words, for any 11-bit string, the input to the "address"
variables is to be interpreted as an index to a specific "data" variable, which the
program then yields as output. For example, on input a1 = 1, a0 = a2 = 0,
and d2 = 1, d0 = d1 = d3 = ... = d7 = 0, a correct program will output a '1', since
the input to the 'a' variables specifies address 2, and variable d2 is given input 1.
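The target behaviour can be written down directly, which also gives the fitness measure used later (agreement on all 2048 inputs). Names such as `multiplexer11` are ours, introduced for illustration:

```python
from itertools import product

NAMES = ["a0", "a1", "a2"] + [f"d{i}" for i in range(8)]

def multiplexer11(bits):
    """Ground-truth 11-multiplexer: return the data bit d_m selected
    by the 3-bit address m = a0 + 2*a1 + 4*a2."""
    m = bits["a0"] + 2 * bits["a1"] + 4 * bits["a2"]
    return bits[f"d{m}"]

def fitness(program):
    """Number of the 2048 possible inputs on which `program` agrees
    with the ground-truth multiplexer."""
    total = 0
    for vals in product((0, 1), repeat=11):
        env = dict(zip(NAMES, vals))
        total += program(env) == multiplexer11(env)
    return total
```

A perfect program scores 2048; the constant-0 program scores 1024, since the selected data bit is 0 on exactly half of all inputs.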
The GA Koza's GP involves the use of a GA to generate an S-expression corresponding to a correct 11-multiplexer program. An S-expression comprises a tree of
LISP operators and operands, operands being the set of data to be processed - the
leaves of the tree - and operators being the functions applied to these data and
internally in the tree. The nature of the operators and operands will depend on the
problem at hand, since different problems will involve different sets of inputs and
will require different functions to be applied to these inputs. For the ll-multiplexer
problem in particular, where the goal is to create a specific boolean function, the
operands are the input bits ao, al, a2, do, d l , ... , d7 , and the operators are AND,
OR, NOT, and IF. These operators behave as expected: the subtree (AND al a2),
for instance, yields the value al A a2. The subtree (IF al d4 d3 ) yields the value d4
if al = 0 and d3 if al = 1 (and thus can be regarded as a "3-multiplexer"). NOT
and OR work similarly. An S-expression constitutes a tree of such operators, with
operands at the leaves. Given an assignment to the operands, this tree is evaluated
from bottom to top in the obvious way, yielding a 0 or 1 output at the root.
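Such a tree can be evaluated directly. Below, S-expressions are represented as nested tuples with string leaves (our representation, not Koza's Lisp), and IF follows the convention stated above, taking the first data branch when the test bit is 0:

```python
def evaluate(expr, env):
    """Evaluate an S-expression bottom-up. Leaves are input names
    (a0..a2, d0..d7); internal nodes are operator tuples such as
    ("AND", "a1", "a2") or ("IF", "a1", "d4", "d3")."""
    if isinstance(expr, str):
        return env[expr]
    op = expr[0]
    if op == "AND":
        return evaluate(expr[1], env) & evaluate(expr[2], env)
    if op == "OR":
        return evaluate(expr[1], env) | evaluate(expr[2], env)
    if op == "NOT":
        return 1 - evaluate(expr[1], env)
    if op == "IF":
        # first branch when the test bit is 0, second when it is 1,
        # matching the 3-multiplexer reading given in the text
        branch = 2 if evaluate(expr[1], env) == 0 else 3
        return evaluate(expr[branch], env)
    raise ValueError(f"unknown operator: {op}")
```

Wrapping this evaluator in a closure over a fixed tree turns any S-expression into a candidate program whose fitness is its count of correct outputs over the 2048 inputs.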
Koza makes use of a "mating" operation in his GA which swaps subexpressions
between two such S-expressions. The subexpressions to be swapped are chosen
uniformly at random from the set of all subexpressions in the tree. For details
on selection in this GA, see [13]. The fitness of an S-expression is computed by
evaluating it on all 2048 possible inputs, and counting the number of correct outputs.
Koza does not employ a mutation operator in his GA .
The SH Algorithm For this problem, the initial state in the SH algorithm is
an S-expression consisting of a single operand chosen uniformly at random from
{a0, a1, a2, d0, ..., d7}. A transition in the search space involves the random replacement of an arbitrary node in the S-expression. In particular, to select a neighboring
state, we choose a node uniformly at random from the current tree and replace it
with a node selected randomly from the set of all possible operands and operators.
With probability ~ the replacement node is drawn uniformly at random from the
set of operands {a0, a1, a2, d0, ..., d7}; otherwise it is drawn uniformly at random
from the set of operators, {AND, OR, NOT, IF}. In modifying the nodes of the
S-expression in this way, we may change the number of inputs they require. By
changing an AND node to a NOT node, for instance, we reduce the number of inputs taken by the node from 2 to 1. In order to accommodate such changes, we do
Stochastic Hillclimbing as a Baseline Method for Evaluating Genetic Algorithms
433
the following. Where a replacement reduces the number of inputs taken by a node,
we remove the required number of children from that node uniformly at random.
Where, on the other hand, a replacement increases the number of inputs taken by a
node, we add the required number of children chosen uniformly at random from the
set of operands {a0, a1, a2, d0, ..., d7}. A similar, though somewhat more involved
approach of this kind, with additional experimentation using simulated annealing,
may be found in [17].
Experimental Results In the implementation described in [14], Koza performs
experiments with a GA on a pool of 4000 expressions. He records the results
of 54 runs. These results are listed in the table below. The average number of
function evaluations required to obtain a correct program is not given in [14]. In
[12], however, where Koza performs a series of 21 runs with a slightly different
selection scheme, he finds that the average number of function evaluations required
to find a correct S-expression is 46,667.
In 100 runs of the SH algorithm, we found that the average time required to obtain a
correct S-expression was 19,234.90 function evaluations, with a standard deviation
of 5179.45. The minimum time to find a correct expression in these runs was
3733, and the maximum, 73,651. The average number of nodes in the correct S-expression found by the SH algorithm was 88.14; the low was 42, the high, 242, and
the standard deviation, 29.16.
The following table compares the results presented in [14], indicated by the heading
"GP", with those obtained using stochastic hillclimbing, indicated by "SH". We
give the fraction of runs in which a correct program was found after a given number
of function evaluations. (As this fraction was not provided for the 20000 iteration
mark in [14], we omit the corresponding entry.)
Function evaluations    GP      SH
20000                   -       61%
40000                   28%     98%
60000                   78%     99%
80000                   90%     100%
We observe that the performance of the SH is substantially better than that of the
GA. It is interesting to note - perhaps partly in explanation of the SH algorithm's
success on this problem - that the SH algorithm formulated here defines a neighborhood structure in which there are no strict local minima. Remarkably, this is
true for any boolean formula. For details, as well as an elementary proof, see the
full version of this paper [10].
3.3
Jobshop
Jobshop is a notoriously difficult NP-complete problem [6] that is hard to solve
even for small instances. In this problem, a collection of J jobs are to be scheduled
on M machines (or processors), each of which can process only one task at a time.
Each job is a list of M tasks which must be performed in order. Each task must
be performed on a specific machine, and no two tasks in a given job are assigned to
the same machine. Every task has a fixed (integer) processing time. The problem is
to schedule the jobs on the machines so that all jobs are completed in the shortest
overall time. This time is referred to as the makespan.
Three instances formulated in [16] constitute a standard benchmark for this problem: a 6 job, 6 machine instance, a 10 job, 10 machine instance, and a 20 job, 5
machine instance. The 6x6 instance is now known to have an optimal makespan of
55. This is very easy to achieve. While the optimum value for the 10x10 problem
is known to be 930, this is a difficult problem which remained unsolved for over 20
years [2]. A great deal of research has also been invested in the similarly challenging
20x5 problem, for which an optimal value of 1165 has been achieved, and a lower
bound of 1164 [3].
A number of papers have considered the application of GAs to scheduling problems.
We compare our results with those obtained in Fang et al. [5], one of the more recent
of these articles.
The GA Fang et al. encode a jobshop schedule in the form of a string of integers, to which their GA applies a conventional crossover operator. This string
contains JM integers a1, a2, ..., aJM in the range 1..J. A circular list C of jobs,
initialized to (1, 2, ..., J), is maintained. For i = 1, 2, ..., JM, the first uncompleted
task in the (ai mod |C|)th job in C is scheduled in the earliest plausible timeslot.
A plausible timeslot is one which comes after the last scheduled task in the current
job, and which is at least as long as the processing time of the task to be scheduled.
When a job is complete, it is removed from C. Fang et al. also develop a highly
specialized GA for this problem in which they use a scheme of increasing mutation
rates and a technique known as GVOT (Gene-Variance based Operator Targeting).
For the details see [5].
The SH Algorithm In our SH algorithm for this problem, a schedule is encoded
in the form of an ordering σ1, σ2, ..., σJM of JM markers. These markers have
colors associated with them: there are exactly M markers of each of the colors 1, ..., J.
To construct a schedule, σ is read from left to right. Whenever a marker with color
k is encountered, the next uncompleted task in job k is scheduled in the earliest
plausible timeslot. Since there are exactly M markers of each color, and since every
job contains exactly M tasks, this decoding of σ yields a complete schedule. Observe
that since markers of the same color are interchangeable, many different orderings σ
will correspond to the same scheduling of tasks.
To generate a neighbor of σ in this algorithm, a marker σi is selected uniformly at
random and moved to a new position j chosen uniformly at random. To achieve
this, it is necessary to shift the subsequence of markers between σi and σj (including
σj) one position in the appropriate direction. If i < j, then σi+1, σi+2, ..., σj are
shifted one position to the left in σ. If i > j, then σj, σj+1, ..., σi−1 are shifted one
position to the right. (If i = j, then the generated neighbor is of course identical to
σ.) For an example, see the full version of this paper [10].
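The shift described above is exactly a remove-and-reinsert on a list, which makes the neighbor move easy to sketch (the function names are ours):

```python
import random

def move_marker(sigma, i, j):
    """Move the marker at position i to position j, shifting the
    markers between the two positions over by one, as described above."""
    tau = list(sigma)
    marker = tau.pop(i)     # pop-then-insert covers both the i < j
    tau.insert(j, marker)   # and i > j shift directions
    return tau

def random_neighbor(sigma):
    """Draw a neighbor: uniform random marker, uniform random position."""
    n = len(sigma)
    return move_marker(sigma, random.randrange(n), random.randrange(n))
```

Since every move permutes the same multiset of markers, each neighbor still decodes to a complete schedule.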
Fang et al. consider the makespan achieved after 300 iterations of their GVOTbased GA on a population of size 500. We compare this with an SH for which each
experiment involves 150,000 iterations. In both cases therefore, a single execution
of the algorithm involves a total of 150,000 function evaluations. Fang et al. present
their average results over 10 trials, but do not indicate how they obtain their "best".
We present the statistics resulting from 100 executions of the SH algorithm.
                 10x10 Jobshop         20x5 Jobshop
                 GA      SH            GA      SH
Mean             977     966.96        1215    1202.40
SD               -       13.15         -       12.92
High             -       997           -       1288
Low              949     938           1189    1173
Best Known           930                   1165
As can be seen from the above table, the performance of the SH algorithm appears
to be as good as or superior to that of the GA.
3.4
A New Jobshop GA
In this section, we reconsider the jobshop problem in an attempt to formulate a
new GA encoding. We use the same encoding as in the SH algorithm described
above: σ is an ordering σ1, σ2, ..., σJM of the JM markers, which can be used to
construct a schedule as before. We treated markers of the same color as effectively
equivalent in the SH algorithm. Now, however, the label of a marker (a unique
integer in {1, ..., JM}) will play a role.
The basic step in the crossover operator for this GA as applied to a pair (σ, τ)
of orderings is as follows. A label i is chosen uniformly at random from the
set {1, 2, ..., JM}. In σ, the marker with label i is moved to the position occupied by i in τ; conversely, the marker with label i in τ is moved to the position
occupied by that marker in σ. In both cases, the necessary shifting is performed
as before. Hence the idea is to move a single marker in σ (and in τ) to a new position as in the SH algorithm; instead of moving the marker to a random position,
though, we move it to the position occupied by that marker in τ (and σ, respectively). The full crossover operator picks two labels j ≤ k uniformly at random
from {1, 2, ..., JM}, and performs this basic operation first for label j, then j + 1,
and so forth, through k. The mutation operator in our GA performs exactly the
same operation as that used to generate a neighbor in the SH algorithm. A marker
σi is chosen uniformly at random and moved to a new position j, chosen uniformly
at random. The usual shifting operation is then performed. Observe how closely
the crossover and mutation operators in this GA for the jobshop problem are based
on those in the corresponding SH algorithm.
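The basic step and the full crossover can be sketched directly from this description. Target positions are read from the mates' original orderings, and the function names are ours:

```python
def basic_step(sigma, tau, label):
    """Swap positional information for one label: in each parent, move
    the marker with this label to the position it occupies in the other
    parent, using the same shifting move as the mutation operator."""
    def move_to(perm, target_pos):
        perm = list(perm)
        marker = perm.pop(perm.index(label))
        perm.insert(target_pos, marker)
        return perm
    pos_in_sigma, pos_in_tau = sigma.index(label), tau.index(label)
    return move_to(sigma, pos_in_tau), move_to(tau, pos_in_sigma)

def crossover(sigma, tau, j, k):
    """Apply the basic step for labels j, j+1, ..., k in order."""
    for label in range(j, k + 1):
        sigma, tau = basic_step(sigma, tau, label)
    return sigma, tau
```

Both children remain permutations of the original markers, so every offspring still decodes to a valid schedule.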
Our GA includes, in order, the following phases: evaluation, elitist replacement,
selection, crossover, and mutation. In the evaluation phase, the fitnesses of all
members of the population are computed. Elitist replacement substitutes the fittest
permutation from the evaluation phase of the previous iteration for the least fit
permutation in the current population (except, of course, in the first iteration,
in which there is no replacement). Because of its simplicity and its effectiveness
in practice, we chose to use binary stochastic tournament selection (see [8] for
details). The crossover step in our GA selects pairs uniformly at random without
replacement from the population and applies the mating operator to each of these
pairs independently with probability 0.6. The number of mutations performed on
a given permutation in a single iteration is binomial with parameter p. The
population in our GA is initialized by selecting every individual uniformly at random
from Sn.
We execute this GA for 300 iterations on a population of size 500. Results of 100
experiments performed with this GA are indicated in the following table by "new
GA". For comparison, we again give the results obtained by the GA of Fang et al.
and the SH algorithm described in this paper.
             10x10 Jobshop                  20x5 Jobshop
             new GA    GA      SH           new GA    GA      SH
Mean         956.22    977     965.64       1193.21   1215    1204.89
SD           8.69      -       10.56        7.38      -       12.92
High         976       -       996          1211      -       1241
Low          937       949     949          1174      1189    1183
Best Known        930                            1165
4
Conclusion
As black-box algorithms, GAs are principally of interest in solving problems whose
combinatorial structure is not understood well enough for more direct, problemspecific techniques to be applied. As we have seen in regard to the two problems
presented in this paper, stochastic hill climbing can offer a useful gauge of the performance of the GA. In some cases it shows that a GA-based approach may not
be competitive with simpler methods; at others it offers insight into possible design
decisions for the G A such as the choice of encoding and the formulation of mating
and mutation operators. In light of the results presented in this paper, we hope that
designers of black-box algorithms will be encouraged to experiment with stochastic
hillclimbing in the initial stages of the development of their algorithms.
References
[1] D. Ackley. A Connectionist Machine for Genetic Hillclimbing. Kluwer Academic
Publishers, 1987.
[2] D. Applegate and W. Cook. A computational study of the job-shop problem. ORSA
Journal of Computing, 3(2), 1991.
[3] J. Carlier and E. Pinson. An algorithm for solving the jobshop problem. Mngmnt.
Sci., 35:(2):164-176, 1989.
[4] L. Davis. Bit-climbing, representational bias, and test suite design. In Belew and
Booker, editors, ICGA-4, pages 18-23, 1991.
[5] H. Fang, P. Ross, and D. Corne. A promising GA approach to job-shop scheduling,
rescheduling, and open-shop scheduling problems. In Forrest, editor, ICGA-5, 1993.
[6] M. Garey and D. Johnson. Computers and Intractability. W .H. Freeman and Co.,
1979.
[7] D. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning.
Addison Wesley, 1989.
[8] D. Goldberg and K. Deb. A comparative analysis of selection schemes used in GAs.
In FOGA-2, pages 69-93, 1991.
[9] K. De Jong. An Analysis of the Behavior of a Class of Genetic Adaptive Systems.
PhD thesis, University of Michigan, 1975.
[10] A. Juels and M. Wattenberg. Stochastic hillclimbing as a baseline method for evaluating genetic algorithms. Technical Report CSD-94-834, UC Berkeley, CS Division,
1994.
[11] S. Khuri, T. Back, and J. Heitk6tter. An evolutionary approach to combinatorial
optimization problems. In Procs. of CSC 1994, 1994.
[12] J. Koza. FOGA, chapter A Hierarchical Approach to Learning the Boolean Multiplexer Function, pages 171-192. 1991.
[13] J. Koza. Genetic Programming. MIT Press, Cambridge, MA, 1991.
[14] J. Koza. The GP paradigm: Breeding computer programs. In Branko Soucek and
the IRIS Group, editors, Dynamic, Genetic, and Chaotic Prog., pages 203-221. John
Wiley and Sons, Inc., 1992.
[15] M. Mitchell, J. Holland, and S. Forrest. When will a GA outperform hill-climbing? In
J.D. Cowen, G. Tesauro, and J. Alspector, editors, Advances in Neural Inf. Processing
Systems 6, 1994.
[16] J. Muth and G. Thompson. Industrial Scheduling. Prentice Hall, 1963.
[17] U. O'Reilly and F. Oppacher. Program search with a hierarchical variable length
representation: Genetic programming, simulated annealing and hill climbing. In PPSN3, 1994.
[18] S. Wilson. GA-easy does not imply steepest-ascent optimizable. In Belew and Booker,
editors, ICGA-4, pages 85-89, 1991.
190 | 1,173 | Improved Silicon Cochlea Using
Compatible Lateral Bipolar Transistors
Andre van Schaik, Eric Fragniere, Eric Vittoz
MANTRA Center for Neuromimetic Systems
Swiss Federal Institute of Technology
CH-1015 Lausanne
email: vschaik@di.epfl.ch
Abstract
Analog electronic cochlear models need exponentially scaled filters.
CMOS Compatible Lateral Bipolar Transistors (CLBTs) can create
exponentially scaled currents when biased using a resistive line with a
voltage difference between both ends of the line. Since these CLBTs
are independent of the CMOS threshold voltage, current sources
implemented with CLBTs are much better matched than current
sources created with MOS transistors operated in weak inversion.
Measurements from integrated test chips are shown to verify the
improved matching.
1. INTRODUCTION
Since the original publication of the "analog electronic cochlea" by Lyon and Mead in
1988 [1], several other analog VLSI models have been proposed which try to capture
more of the details of the biological cochlear function [2],[3],[4]. In spite of the
differences in their design, all these models use filters with exponentially decreasing cutoff frequencies. This exponential dependency is generally obtained using a linear
decreasing voltage on the gates of MOS transistors operating in weak-inversion. In
weak-inversion, the drain current of a saturated MOS transistor depends exponentially
on its gate voltage. The linear decreasing voltage is easily created using a resistive
poly silicon line; if there is a voltage difference between the two ends of the line, the
voltage on the line will decrease linearly all along its length.
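The exponential scaling produced by such a linearly decreasing gate voltage can be sketched numerically. All values below (tap count, end voltages, slope factor, reference current) are illustrative assumptions, not figures from the paper:

```python
import numpy as np

# Weak-inversion current sources biased from a resistive line with a linear
# voltage drop: the exponential I-V law turns the linear ramp into
# exponentially scaled bias currents (all values are assumed).
n, UT = 1.5, 0.0256                   # slope factor, thermal voltage kT/q [V]
N = 100                               # taps along the line (assumed)
V_gate = np.linspace(0.70, 0.58, N)   # linear drop between the line's ends
I0 = 1e-9                             # current at the first tap (assumed)

I_bias = I0 * np.exp((V_gate - V_gate[0]) / (n * UT))

# Consecutive taps differ by a constant ratio, i.e. the spacing is exponential
ratios = I_bias[1:] / I_bias[:-1]
print(f"per-tap ratio {ratios[0]:.4f}, first/last span {I_bias[0] / I_bias[-1]:.1f}x")
```

On a log scale these currents fall on a straight line versus tap number, which is exactly the ideal behavior measured later in fig. 6.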
The problem of using MOS transistors in weak-inversion as current sources is that their
drain currents are badly matched. An RMS mismatch of 12% in the drain current of two
identical transistors with equal gate and source voltages is not exceptional [5], even
when sufficient precautions, such as a good layout, are taken. The main cause of this
mismatch is a variation of the threshold voltage between the two transistors. Since the
threshold voltage and its variance are technology parameters, there is no good way to
reduce the mismatch once the chip has been fabricated.
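The sensitivity quoted from [5] can be reproduced with a quick Monte Carlo sketch; the threshold spread below is an assumed value chosen to illustrate the effect, not measured data:

```python
import numpy as np

# Why a few millivolts of threshold-voltage (VT) spread ruin weak-inversion
# current matching: I ~ exp((VG - VT)/(n*UT)), so the relative current
# spread is roughly sigma_VT/(n*UT).  sigma_VT here is an assumption.
rng = np.random.default_rng(0)
n, UT = 1.5, 0.0256
sigma_VT = 4.6e-3                      # ~4.6 mV RMS threshold spread (assumed)

dVT = rng.normal(0.0, sigma_VT, size=100_000)
I_rel = np.exp(-dVT / (n * UT))        # drain current relative to nominal

rms_mismatch = np.std(I_rel) / np.mean(I_rel)
print(f"RMS current mismatch: {rms_mismatch:.1%}")   # on the order of 12%
```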
One can avoid this problem using Compatible Lateral Bipolar Transistors (CLBTs) [6]
for the current sources. They can be readily made in a CMOS substrate, and their
collector current also depends exponentially on their base voltage, while this current is
completely independent of the CMOS technology's threshold Voltage. The remaining
mismatch is due to geometry mismatch of the devices, a parameter which is much better
controlled than the variance of the threshold voltage. Therefore, the use of CLBTs can
yield a large improvement in the regularity of the spacing of the cochlear filters. This
regularity is especially important in a cascade of filters like the cochlea, since one filter
can distort the input signal of all the following filters.
We have integrated an analog electronic cochlea as a cascade of second-order low-pass
filters, using CLBTs as exponentially scaled current sources. The design of this cochlea
is based on the silicon cochlea described in [7], since a number of important design
issues, such as stability, dynamic range, device mismatch and compactness, have already
been addressed in this design. In this paper, the design of [7] is briefly presented and
some remaining possible improvements are identified. These improvements, notably the
use of Compatible Lateral Bipolar Transistors as current sources, a differentiation that
does not need gain correction, and temperature-independent biasing of the cut-off
frequency, are then discussed in more detail. Finally, measurement results of a test chip
will be presented and compared to the design without CLBTs.
2. THE ANALOG ELECTRONIC COCHLEA
The basic building block for the filters in all analog electronic cochlear models is the
transconductance amplifier, operated in weak inversion. For input voltages smaller than
about 60 mVpp, the amplifier can be approximated as a linear transconductance:
Iout = gm (V+ - V-)    (1)
with transconductance gm given by:
gm = Io / (2 n UT)    (2)
where Io is the bias current, n is the slope factor, and the thermal voltage UT = kT/q = 25.6 mV at room temperature.
This linear range is usually the input range used in the cochlear filters, yielding linear
filters. In [7], a transconductance amplifier having a wider linear input range is
proposed. This allows larger input signals to be used, up to about 140 mVpp.
Furthermore, the wide range transconductance amplifier can be used to eliminate the
large-signal instability shown to be present in the original second-order section [7]. This
second-order section will be discussed in more detail in section 3.2.
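Plugging representative numbers into eq. (2) shows the orders of magnitude involved; the bias current and load capacitance below are assumptions, not chip values:

```python
import numpy as np

# Transconductance of the weak-inversion amplifier (eq. (2)) and the
# cut-off frequency of a gm-C stage built from it.  Io and C are assumed.
n, UT = 1.5, 0.0256           # slope factor, thermal voltage kT/q [V]
Io = 10e-9                    # bias current, 10 nA (assumed)
C = 1e-12                     # load capacitance, 1 pF (assumed)

gm = Io / (2 * n * UT)        # eq. (2)
fc = gm / (2 * np.pi * C)     # first-order gm-C cut-off frequency

print(f"gm = {gm:.3e} A/V, fc = {fc / 1e3:.1f} kHz")
```

With these assumed values the cut-off lands near 20 kHz, the order of magnitude mentioned for the cochlear filters in section 3.1.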
The traditional techniques to improve matching [5], as for instance larger device sizes
for critical devices and placing identical devices close together with identical
orientation, are also discussed in [7] with respect to the implementation of the cochlear
filter cascade. The transistors generating the bias current 10 of the transconductance
amplifiers in the second-order sections were identified as the most critical devices, since
they have the largest effect on the cut-off frequency and the quality factor of each
section. Therefore, extra area had to be devoted to these bias transistors. A further
improvement is obtained in [7] by using a single resistive line to bias both the
transconductance amplifiers controlling the cut-off frequency and the transconductance
amplifier controlling the quality factor. The quality factor Q is then changed by varying
the source of the transistor which biases the Q control amplifier. Instead of using two
tilted resistive lines, this scheme uses only one tilted resistive line and a non-tilted Q
control line, and therefore doesn't need to rely on an identical tilt on both resistive lines.
3. IMPROVED ANALOG ELECTRONIC COCHLEA
The design discussed in the previous section already showed a substantial improvement
over the first analog electronic cochlea by Lyon and Mead. However, several
improvements remain possible.
3.1 VT VARIATION
The bias transistors have been identified as the major source of mismatch of the
cochlea's parameters. This mismatch is mainly due to variation of the threshold voltage
VT of the MOS transistors. Since the drain current of a saturated MOS transistor in
weak-inversion depends exponentially on the difference between its gate-source voltage
and its threshold voltage, small variations in VT introduce large variations in the drain
current of these transistors, and since both the cut-off frequency and the quality factor of
the filters are proportional to these drain currents, large parameter variations are
generated by small V T variations. This problem can be circumvented by the use of
CMOS Compatible Lateral Bipolar transistors as bias transistors.
A CMOS Compatible Lateral Bipolar Transistor is obtained if the drain or source
junction of a MOS transistor is forward-biased in order to inject minority carriers into
the local substrate. If the gate voltage is negative enough (for an n-channel device), then
no current can flow at the surface and the operation is purely bipolar [6]. Fig. 1 shows
the major flows of current carriers in this mode of operation, with the source, drain and
well terminals renamed emitter E, collector C and base B.
Fig. 1: Bipolar operation of the MOS transistor: carrier flows and symbol.
Since there is no p+ buried layer to prevent injection to the substrate, this lateral npn
bipolar transistor is combined with a vertical npn. The emitter current IE is thus split
into a base current IB, a lateral collector current Ic and a substrate collector current Isub.
Therefore, the common-base current gain α = -Ic/IE cannot be close to 1. However, due
to the very small rate of recombination inside the well and to the high emitter efficiency,
the common-emitter current gain β = Ic/IB can be large. Maximum values of α and β are
obtained in concentric structures using a minimum-size emitter surrounded by the
collector and a minimum lateral base width.
For VCE = VBE - VBC larger than a few hundred millivolts, this transistor is in active mode
and the collector current is given, as for a normal bipolar transistor, by
Ic = Isb exp(VBE / UT)    (3)
where ISb is the specific current in bipolar mode, proportional to the cross-section of the
emitter to collector flow of carriers. Since Ic is independent of the MOS transistor
threshold voltage VT, the main source of mismatch of distributed MOS current sources is
suppressed when CLBTs are used to create the current sources.
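The matching argument can be illustrated with a toy Monte Carlo comparison; both spread figures below are assumptions (a few millivolts of VT spread for the MOS source, about 1% geometric spread of the specific current Isb for the CLBT):

```python
import numpy as np

# MOS weak-inversion source: the current inherits the exponential of the VT
# spread.  CLBT source (Ic = Isb*exp(VBE/UT)): no VT term, only the
# geometric spread of Isb remains.  Sigma values are assumptions.
rng = np.random.default_rng(1)
n, UT = 1.5, 0.0256
N = 100_000

I_mos = np.exp(-rng.normal(0.0, 4.6e-3, N) / (n * UT))   # ~4.6 mV VT spread
I_clbt = 1.0 + rng.normal(0.0, 0.01, N)                  # ~1% Isb spread

spread_mos = np.std(I_mos) / np.mean(I_mos)
spread_clbt = np.std(I_clbt) / np.mean(I_clbt)
print(f"MOS spread {spread_mos:.1%}, CLBT spread {spread_clbt:.1%}")
```

Under these assumptions the CLBT source is an order of magnitude better matched, which is the qualitative point of this section.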
Fig. 2: CLBT cascode circuit (a) and its layout (b).
A disadvantage of the CLBT is its low Early voltage, i.e., the device has a low output
resistance. Therefore, it is preferable to use a cascode circuit as shown in fig. 2. This
yields an output resistance several hundred times larger than that of the single CLBT,
whereas the area penalty, in a layout as shown in fig. 2b, is acceptable.
Another disadvantage of the CLBTs, when biased using a resistive line, is their base
current, which introduces an additional voltage drop on the resistive line. However,
since the cut-off frequencies in the cochlea are controlled by the output current of the
CLBTs and since these cut-off frequencies are relatively small, typically 20 kHz, the
output current of the CLBTs will be small. If the common-emitter current gain β is
much larger than 1, the base current of these CLBTs will be very small, and the voltage
error introduced by the small base currents will be negligible. Furthermore, since the
cut-off frequencies of the cochlea will typically span 2 decades with an exponentially
decreasing cut-off frequency from the beginning to the end, only the first few filters will
have any noticeable influence on the current drawn from the resistive line.
3.2 DIFFERENTIATION
The stabilized second-order section of [7] uses two wide range transconductance
amplifiers (A1 and A2 in fig. 3) with equal bias current and equal capacitive load, to
control the cut-off frequency. A basic transconductance amplifier (A3) is used in a
feedback path to control the quality factor of the filter. The voltage Vout at the output of
each second-order stage represents the basilar membrane displacement. Since the output
of the biological cochlea is proportional to the velocity of the basilar membrane, the
output of each second-order stage has to be differentiated. In [7] this is done by creating
a copy of the output current Idiff of amplifier A2 at every stage. Since the voltage on a
capacitor is proportional to the integral of the current onto the capacitor, Idiff is
effectively proportional to the basilar membrane velocity. Yet, with equal displacement
amplitudes, velocity will be much larger for high frequencies than for low frequencies,
yielding output signals with an amplitude that decreases from the beginning of the
cochlea to the end. This can be corrected by normalizing Idiff to give equal amplitude at
every output. A second resistive line with identical tilt controlling the gain of the current
mirrors that create the copies of Idiff at each stage is used for this purpose in [7].
However, if using a single resistive line for the control of the cut-off frequencies and the
quality factor improves the performance of the chip, the same is true for the control of
the current mirror gain.
Fig. 3: One section of the cochlear cascade, with differentiator.
An alternative solution, which does not need normalization, is to take the difference
between Vout and V1 (see fig. 3). This can be shown to be equivalent to differentiating
Vout, with 0 dB gain at the cut-off frequency for all stages. This can be easily done with a
combination of 2 transconductance amplifiers. These amplifiers can have a large bias
current, so they can also be used to buffer the cascade voltages before connecting them
to the output pins of the chip, to avoid charging the cochlear cascade with the extra
capacitance introduced by the output pins.
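The equivalence can be checked numerically. For the second amplifier of a stage, C·dVout/dt = gm·(V1 − Vout), so in the frequency domain V1 = (1 + jωτ)·Vout with τ = C/gm, and hence Vout − V1 = −jωτ·Vout: a differentiator with unity gain at ω = 1/τ. The 20 kHz stage frequency below is an illustrative assumption:

```python
import numpy as np

# Vout - V1 = -j*w*tau*Vout: the magnitude rises at +20 dB/decade and
# crosses 0 dB exactly at the stage cut-off w = 1/tau, for every stage,
# so no gain normalization is needed.  The stage frequency f0 is assumed.
f0 = 20e3
tau = 1.0 / (2 * np.pi * f0)
f = np.logspace(2, 5, 301)            # sweep 100 Hz .. 100 kHz
w = 2 * np.pi * f

gain = np.abs(-1j * w * tau)          # |Vout - V1| / |Vout|
i0 = np.argmin(np.abs(f - f0))
print(f"gain at the cut-off: {20 * np.log10(gain[i0]):+.2f} dB")
```

Because the 0 dB crossing tracks each stage's own cut-off, no second resistive line is needed to correct the gain.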
3.3 TEMPERATURE SENSITIVITY
The cut-off frequency of the first and the last low-pass filter in the cascade can be set by
applying voltages to both ends of the resistive line, and the intermediate filters will have
a cut-off frequency decreasing exponentially from the beginning to the end. Yet, if we
apply directly a voltage to the ends of the resistive line, the actual cut-off frequency
obtained will depend on the temperature, since the current depends exponentially on the
applied voltage normalized to the thermal voltage UT (see (3)). It is therefore better to
create the voltages at both ends of the resistive line on-chip using a current biasing a
CLBT with its base connected to its collector (or its drain connected to its gate if a MOS
transistor is used). If this gate voltage is buffered, so that the current through the
resistive line is not drawn from the input current, the bias currents of the first and last
filter, and thus the cut-off frequency of all filters can be set, independent of temperature.
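The biasing argument can be sketched as follows; the specific current, reference current and the 0.6 V external voltage are illustrative assumptions:

```python
import numpy as np

# Fixed external voltage: I = Is*exp(V/(n*UT)) drifts with temperature
# through UT = kT/q.  Voltage generated on-chip from a reference current
# through a diode-connected device: V = n*UT*ln(Iref/Is), so the biased
# current equals Iref at any temperature (Is cancels as well, assuming a
# matched device pair at the same temperature).
n = 1.5
Is = 1e-15                    # specific current (assumed)
Iref = 10e-9                  # on-chip reference current (assumed)
k_q = 8.617e-5                # Boltzmann constant / electron charge [V/K]

for T in (273.0, 300.0, 330.0):
    UT = k_q * T
    I_direct = Is * np.exp(0.6 / (n * UT))       # fixed 0.6 V applied
    V = n * UT * np.log(Iref / Is)               # derived from Iref
    I_biased = Is * np.exp(V / (n * UT))         # recovers Iref exactly
    print(f"T = {T:3.0f} K: direct {I_direct:.2e} A, current-biased {I_biased:.2e} A")
```

The directly applied voltage gives a current that changes by more than an order of magnitude over this temperature range, while the current-derived bias stays at Iref.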
3.4 THE IMPROVED SILICON COCHLEA
The improved silicon cochlea is shown in figure 4. It uses the cochlear sections shown
in figure 3, CLBTs as the bias transistors of each filter, and one resistive line to bias all
CLBTs. The resistive line is biased using two bipolar current mirror structures and two
voltage buffers, which allow temperature independent biasing of the cut-off frequencies
of the cochlea. A similar structure is used to create the voltage source Vq to control,
independent of temperature, the actual quality factor of each section. The actual bipolar
current mirror implemented uses the cascode structure shown in figure 2a, however this
is not shown in figure 4 for clarity.
Fig. 4: The improved silicon cochlea.
4. TEST RESULTS
The proposed silicon cochlea has been integrated using the ECPD15 technology at ES2
(Grenoble, France), containing 104 second-order stages, on a 4.77 mm × 3.21 mm die.
Every second stage is connected to a pin, so its output voltage can be measured. In fig. 5,
the frequency response curves after on-chip derivation are shown for the output taps of
both the cochlea described in [7] (left), and our version (right). This clearly shows the
improved regularity of the cut-off frequencies and the gain obtained using CLBTs. The
drop-off in gain for the higher frequency stages (right) is a border effect, since at the
beginning of the cochlea no accumulation of gain has yet taken place. In the figure on
the left this is not visible, since the first nine outputs are not presented.
Fig. 5: Measured frequency responses at the different taps.
In fig. 6 we show the cut-off frequency versus tap number of both chips. Ideally, this
should be a straight line on a log-linear scale, since the cut-off frequency decreases
exponentially with tap number. This also clearly shows the improved regularity using
CLBTs as current sources.
Fig. 6: Cut-off frequency (Hz) versus tap number for both silicon cochleae.
5. CONCLUSIONS
Since the biological cochlea functions as a distributed filter, where the natural frequency
decreases exponentially with the position along the basilar membrane, analog electronic
cochlear models need exponentially scaled filters. The output current of a Compatible
Lateral Bipolar Transistor depends exponentially on the base-emitter voltage. It is
therefore easy to create exponentially scaled current sources using CLBTs biased with a
resistive polysilicon line. Because the CLBTs are insensitive to variations of the CMOS
threshold voltage VT, current sources implemented with CLBTs are much better
matched than current sources using MOS transistors in weak inversion.
Regularity is further improved using an on-chip differentiation that does not need a
second resistive line to correct its gain, and therefore doesn't depend on identical tilt on
both resistive lines. Better independence of temperature can be obtained by fixing the
frequency domain of the cochlea using bias currents instead of voltages.
Acknowledgments
The authors would like to thank Felix Lustenberger for simulation and layout of the
chip. We are also indebted to Lloyd Watts for allowing us to use his measurement data.
References
[1] R.F. Lyon and C.A. Mead, "An analog electronic cochlea," IEEE Trans. Acoust.,
Speech, Signal Processing, vol. 36, pp. 1119-1134, July 1988.
[2] R.F. Lyon, "Analog implementations of auditory models," Proc. DARPA Workshop
Speech and Natural Language, San Mateo, CA: Morgan Kaufmann, 1991.
[3] W. Liu, et al., "Analog VLSI implementation of an auditory periphery model,"
Advances Res. VLSI, Proc. 1991 Santa Cruz Conf., MIT Press, 1991, pp. 153-163.
[4] L. Watts, "Cochlear Mechanics: Analysis and Analog VLSI," Ph.D. thesis,
California Institute of Technology, Pasadena, 1992.
[5] E. Vittoz, "The design of high-performance analog circuits on digital CMOS
chips," IEEE J. Solid-State Circuits, vol. SC-20, pp. 657-665, June 1985.
[6] E. Vittoz, "MaS transistors operated in the lateral bipolar mode and their
application in CMOS technology," IEEE 1. Solid-State Circuits, vol. SC-24, pp.
273-279, June 1983.
[7] L. Watts, et al., "Improved implementation of the silicon cochlea," IEEE J. Solid-State Circuits, vol. SC-27, pp. 692-700, May 1992.
191 | 1,174 |
p
P V; A P.rg;Z mRs>t(s A Puv w^x P=@[ y 7R9 s A A@7 AW q Ss Sz[ G{ P=,; ZH|}
???J???:?p?????@?
? ?F?
? ?n?
~??????3??????F??? ?
?&?>?(?
?
?J[ b = I = IK;?;!? ??;>M>= W G V;??gfX?H? ;A 7.?p? P A P ??;>=?; A S%? ;*[ G 9Xm ? ?????? =??>?#???3?:? ? ?2?????,? P A#S ; 9 P 9?Z
? P 9nS s 9??:? ?
N(; ? ;!C S![ L P 9:? ???([ w S=@Nn;%S;*M>? 9 Z?ZX; A [ G V P =@[ w VR;%??P.Q A E ??? 7? = I ;`= A P.[ w 9 E G 9Xm ; A@A 7RA
o
w
? v G =@N A ; ??
S!? ? ; M>=?= 7?=#N(;??F;>E ? m N =?S?P 9 Z =#N A ;!C IX7 u ZS?? I E Gj ;?? E G S?Q@Nn; A ; m f j P A E L^x ; Z??J;!S#S v G P 9
? ?J?? ? ?
??
9?PC#?X? ? Q 7 = W w M P jj ?? 9 | E ?eP S;Z?; S=@E G { P.= 7gA37.? = IX; 9X7 E ??S!;J??; V ; j E L S\?nA7.VDE bd ; Z ?g? L?c?
L? ???
??? ?*?
??????@? ? ?k?g?
? 9 S ; A =@E G 9 m ? ?2; m ;>=
?
G??
?? ?
?????
?J???
?
?
? ?@??
???
??????
? ? ????? ? ?J?
??
?
??
?
#
?
?
?
?
?
?
?
???
???
?O?
??
?
?
[b ;
?P 9 d
HP A | PkC;>Z 79?;!C=?E b P.=#;!S 7k?2=@IK;%M!InP 9(m ;%[ G 9 ? ? ???@?? ? ?F; Cs ; =@I P =\[ c 9
7RA Zg; A = 7 7 ? =@P.[ w t S>P j W G ; 9 ME L ; S = I(Pk=2; S =?E L ? P =?;?=@N(;M I P 9(mg; ?9`mD; 9 ; A P j E ? x P=,[ w 79H? ;J? f S= m ; 9i; A
P j?j?? Q!h B ; =@Nn; ?nA ; ??PRM>= 7A [ G 9 = 7 PkM M 7 f 9 =
;? 7 =@;= I P=a[ L ? = I ; 9n;>Q?? 7A@B E ^S
A ; m f j P A E y x ; Z
???
?
w b
?
?
=
?
?
?
?
p
?
?
A
?
?
=
?
A
(
?
E
t
[
L
\
?
I
M
?
I
M
?
s
@
=
n
N
;
;
.
P
*
M
=
[
G
?
?
h ? f 9 M*= v L 7 ?
?
?(A ?
7.A S%? 9
?
7R?= I ; Q 7 =Pkj 9gfn? | ; A 7k?? ; [ L r I =?S
L
9 Q IXE C?M ; A P 9nB E L 9 m PM*M 7.A ZXE G 9(m = 7 = A PkE 9 E 9nm ; AA@7DA
SP j E b ; M ? E w S ; .fnE c V.P u ; 9 =a= 7?A P 9XB [ w t m G P? M>M 7A ZDE b 9 m = 7`m ; 9 ; A P E ? x P=?E b 79 ; A A 7 A
b
7 ??;>Vg; A ? [ 9 =@IX; m ; 9 ; A [ y M\M ; 7k? P A ; m f j P A [ G ; d 9 ;>=,? 7gA#B = I [ G S W w S 9(7 ? 7RA ; Q A f(; ? ? ? ?
? ?
P 9 Z ?F; 9 ;>;Z Q 7 ; V.P j fiPR=@;?=,N(; MN(P 9 m ;E y 9 = I ; ?nA ; ? P M Q 7 A G ; &_ 9 =#N(;%; M*Q@E V ; 9 fn? ? ; A\7k?
? W = I?? A f 9 E 9(m P ? ;>E mgI = ; 9(7 =,l E L 9(m = I ; m ; 9 ; A P E x PR=?E c 7R9 ? P S!; Z
?<P A PR? ; =@;>A?S 0P.C#S 7 M>[ beP = ;Z ?a
??? ??
G
S [ 9 M ? 7.?p? [ b mDI =
P S
? ??; 9 Z
?
? ?
?
? ???
?
???
? ??? ? ?
?
?
??? ??@?T?
? ???#???
?
?
?
??
Ni; A ; = N ; f(? ?i; Aa7.?3? P A P.? ;z; A S?P ? Q@; ?(A f 9 v w 9 m 7
s [ b m Ig= E q S ?
??? ? [ c S
? P 9 Z
= I ; = A P.E ? 9 E y 9 m ; A@A7A ?iP S ;Z S P j [ L ; 9 M!}
0
-.
1
,/
"!
$#% '& )(*
,+
-+
32
,+
,8:9 ;<8
54 76
'=?> 1
A
B
DCE FG>
> IHKJ
M J L
UT J WV
N ]\ JKO N QP ^ J RS
YX[Z
_
a`
cbed f
k J N hgi ]l j
[m
7 ? A7 M*;>; Z ? ; 7 f =n [ c 9 ;3=?? 7Io Y ? ? j ;!? ; 9 QPq@p v ? 7R9 S ? = I ;3??P-r s 7A ZgE y =:; A ; 9ut ;F?<;>E 1 9 m = I ;FM 7 { ? f z Pv
oQ#E ? 7 hqw M ? j ;!?K[ x [ b
y ;? ?
9
7 ?
z } 9XV 7 j V; Z
9 = I ; L A,S!= ? ? NXE b M N [ l^SpP 9 ;an P.? 7A PR=@[ x 79 79 =?Ni;]
C!M I s*?z??
=,Ni;2S = 7A P m ;?M 7 ?H? j ;? v G=?? W ] S ?Kc A7? ? 7.A =,_ 7.9 P j = 7 = I ; 9 ?K???<; A:7k?0?F? ;*[ L m Ig=,S h 9 ZQ IXA ;!CN 7 j ZS
?
??I E G j ;\[ G ? =IX;aC;>M 79 Z?CM I ;aH
{ ;J= I ; M7D??? j ;?XEG = ? SM>P.??; S\? [w = I ? 8 h 9K| Eb S?P m ; 9 ; A P E} x P= vb 7 9?? ?
= I(;~
7 ;>? ?(I P.C Wb x ;%z N<h =??F;fXS!; = I ; m ; 9 ; A h? [ L x PR=?E G?? 9 ; A@A7AJ? 7A\A P 9(B E w 9 r 7.? ?F;>E ? mgI =?S
c:?
? ; fnS!; =#N(;? A ;?n
L ?7? GG ?
h 9 Z??u]
L
? ?????????????? ?5?a????[???7?????? ?Q?????:???
f A?? ? ? ? C@E ?Of j P= 7A E ? C | PS!; Z 79??h???Y????????c? 8 S; M 7k9 Z 7A ZK; A U Ss>f<Z 7q?? P.f S@S?? ;>??= 79
7R? Q@E B ??E w x P=@E ? 7k9 ??I:? ? M I v G SZgs!SM A E L ? ; Z E + 9 ?
V.P A ; A ;*=%Pk? ? ?@? ?
;%S>M I ;!??;.?(?<;>[ ? 9 r | P S ; Z 79
= I(;FZK[ ? P mR79 P j P ?n?K? ? ? WG? P = [ b 7Rt ??7RA = I ; ? ; S@S [ L P 9?8 A s.? f ? W w A ;S:S!= b37Ro A P.rD; 7.? P 9 fX? ? ; A?7? VkP A W ] P ? |Xj ;!C
S M P j<? 9(m j E ] 9 ;*P A j? ?\E b =@N?=? ;Ot f(?O? ; A 7.??? P A P.?H;>Q@; A,S ?
S _ 9?? ? ;??? 9 ; =OP j ????? ?\;
P ?(? A#7? v G? P=,sJQ I ; S ;M 79 Z ZK; A E w V P Q@E ? Vk; ? P= A E b ? | ? = I ; U 7 G S E ? = [ L V ;C ; {HE ? Z ; L 9 [ w =@; ;? ? c A ; S CE 1 79 L
?? c?
? ? ? ? ? ???? ? ?~? ?e?K??? ???? ?Y? ? ? ???
? ? H ? ? ? ? ? ??? ? ?K?$? ?
G
?
? ; ZK[y P m.7R9 P j P ?n? A@7 ?KEw ??P=?Eb 7R9 ??;?L 9 Z
9 =n
?
?$?
?
?
? ? ??Q??? ?
?
?Y??Z
?
?
? ?
?G?3??? ? ? ?????U? ?
@8
!
"
#
$
&%'
$_^`
(*)+,-./ 0213465 78:9<;>=?@ ACBDEF 7 GHJIK2LM N2O LQPR FTSVUWX L OZY N-\[ZO] MCa Ob Xced X Lf&ghO M O L
ikj
l] m X&noqpX&r
}
h
~
?
?
^
O r OgsO p M i out Y N O LO K2v X L ] wyx X Y
z ops{ f&M|L ]
?*? +*????? { O Y N o a d L od o i O a?? c j ? O:?ZK p O M X v????&??? [\??i K bb O? i
? K r?r c?X&d2d2? ] ??O aM
? L Oa? b O
? X L n O??O M [ o L
? i? o L L O b o ^ p ] $ M
? ? op?o ?Z) X p a[kL ] $ M M O p a ] _^ ] M?i ? N O ? X i?? ? b ] ? aO X ] i M o O i? ] {?X M
O
}
^
$^
^
H ^
?) O ? p b LO ? Os? p M ) Oh? ??& ? @ ? ? ? ???? ???&? [k) O p $ ^ a O r OY
] p ^ [ O] N Y
??h
? dX&? a ] p M
N O M
L X ] ? p ] p
O , L o L Y o i O b op a o L lO L\? p Y ) O d L K p O a?[ O] NM g X p ? ? Y
K a O*] ?yYZ] i\? o K p a ?
Nf M
R ????? D 7 ? 9<;>= ?@ A??
?
?? 9 ;>=?? ?&? ? ?
? ? J
E?
??
4
?W
?:? 5 E ?7
?u?
? N ? i O? M
] ?y? X YO Y?? +i ] ? p M o X bb o?? M?M
) X M M ) + [ O] ^ NMa + b X c Y O L g i?? o L bO M ) O [ O] $ n ) M ?
^
_?
y?
$_?
M o l
O
d X ^ , Y ? L o{
Y N O?gs] m p z Kg o ? Y ) O M L X ? ? p z ? p i
+ Y + L , o L?
? ? +?? L ?M?lO L z X M ? - o&? M ? O
? L X ] p ? p +, L o L ] ? p2op? x O L o ? N O p? O&? M|N O??2L iM?M O L g ] ? p?? ?
? o g d K M X M
] m op2X r v c ? [ O ? o M ? N X Y M
N O a ]
X ^ opX?? O i ?]
X p M O L g i X L O L OK i O a ? L o{ Y
?2O d ?OK a o?? X K? i
?? O[ M op Y L X ] p ? ? p ^
i b ) OghO
? i ] ? pn j|? ? X p l M
N?O a?? X ^ o? X ? ? o L? o ??\??? ? ( O?? p l M
) O ? o rV? o [ ] ? p?nsX d2d L o } ?
g X M ] ? ? O?O }?
??
$
? L O? i ] ? op ? o L n O p +?L X r ] x X M ] o p ? X ? ] - p? c j?????Z?
?
.
?
?
? 8*9 ?
? 9?
?? 9<; ?? AZ? ?
?
?? ?
8 <= ? A
?
. ? ? ?U W
}
^
^
^
$
$
I L ^ o{ Y ) ] ? iZ+ d L + i i ? op [ O r O X L p Y ) ? ? o ? Y [ o ( + ] ) Y i ] p a K ? p ?] ? ? $ X L ? ? Xup
O i $ ] p ^ M L X ? p
? p ? O L
L o Lh[ + ? N o K2v a
^ a + ? O M O M ) O
/o. p O?[\) ] ! #b " ^ N?? M N + r %X $
n O i ML X M ] o o ? M L X z ? ] p O , 'L &L
p 89 p
p 2?
3
o , O] ) Y a O b Xuc ??z + $ ? M ? O ( + ? ? ) Y?(k?z ? ? ) ? 1
? :
+ (+
. * Mb K L
X YK L )
p o ? ? $ 0 ++ X ? 5? o 476 ] X&p O o b O a ;
?
?
X
=
c
?
<
>
o
X
X
o
2
p
X
o
n
o
o
r
[ O] M\a + b
[ZO O L ? ?>L g
g @K M M ] ?
@ ] Y ? ^ ] O (
( O i [ Y M L Oa K b O
b A
Y ) O E^ p D K { 0 + L o ? d X L X g O M
O L < ? ? X i ^ ? f L X i do i$ i ] ? v?O ?C
? ? o [ZO?g?< ] ? ) M ] p ? f ? Y X bb O d Y Y o
a ^ O v -QM +CB
?
[ + ] ? Y ? [k] M i ? } X r
v do i ] M
]
O ? + p? O L X .r ] x X D M ] o p i X
r ] O p ?c ? ] p d2X LM ] b K ? X L b o p i ] a- L ] p M
N O
X g o K2?Y o ? X&d2d , o ? g X Y
] ? op ? p Eo F O a ] p Y O O i M|] g X #Y G i ?
H
IJLKNM
OP
QNR
JTSVUWYX[Z=\^]`_ba)c1d RTefQ e Z JNg
^
$
$
$
h p
M N j
O i d Y#
k g X r ?
L ^X ? pl K L O op ?#m X ? i ] 0 ? O M X ron ? ? Ep M
) O ] w p b L-u$ ? O ] q p M
L X ] p ] p n OL
L o L ] ?
O i YsVr g X ^ M O a ] p but K a^ ] p M
) O?Osv O b Y i ou?w K ^ X a L X $ ?#x b L | OY L X ? y p ] ? pn ? )? i?f v?v o [
i\^ ? o L d L
K $ p ] ? p2n?o ?
g $ o L O O p O L X ?{z O L
O O i o ? ? L O Oa o{ ? O}|
?
] $ M ^ K X Y ? o&p i [ ? + ? L w O?Y ) O M
L X ] m p ? p . OLL o L] p a 9 b O ?
?
~
? ] p + X L b o p i M , X ] ? p M i X g opn Y [ oo LJg o L O\[ZO] ) M i < ? ? O d , k b + M o ?O d X ] ? a ] ?kY ) X Y [ZO p O O a
?#? od + L X Y
O [ ] $ Y )?M ) O ? K r?r???? ? m O i i ] ? X&p g X Y L ] ? } o ? i O ? op a a O L ] ? X Y
]
? O? .?? NO ?j
? 7 *
^
$
$
? |
i z ? { K rP X Y o LP <) O p b + ? ] i H ?? O ^ a
o&p ? K ? t ? X K i ? ? O (:M o&p od M
] ? z x X Y ?? op ??? ) O??O r ] ? gs] pX Y] p
? ? + 2 Y ? [ + ] n ? Y LO M|L X ] p ] ? p ] ? a+ Y
O L
gs] ? p + a? c
?u?
?
?
?
?5? ?? f?
ju? ?
????? ?
4
?
?
?
?
???
?
^
$
?
[\N O L O ??? i
y^ Y ) O 2?? ? ) K p ] Y ^ O b M o
L ? O p OO a M o g o a ? ? c M
) O ?? l ? X r ] O p
? ] + ? $ [\) O p [ o ^ L ?
? ] p
? L o g X [ O ? ) Y a O b XuH c L O ? ? v X L ] xO a b o i M ? K p ? Y
] ? op $? ? Og o a ] ? O a i X r ] O p b ] O? [ G L O ] O p
] ? p ? ?*X&p i + p X&? a ? ? M
) ? ?%?
?=?
?s?
??
?
?
? ?
?
E
?
?
? ? ???5? ? 7 ? ??
7?
? ???
?u? ?/?u??? 0 ?+T? ? ?
?
?
?
?
?
?
5
?
?
4 ??? ? ??? ? ?#? ?
?? j? ???7?#???
j
? ? ? ? ?y?
?W
? ? + Y ) + L K i ]
pn M ? O ^ + ? + L ? ? ] wyx X Y z
o ? 0 X i O a?? ???C? o , ? Y X&p a X La ??? l (??*[ X&p M M o do ]
p C
M Mo
X p ] ? ? @ & , Y X p Y X ? d + b Y o ???
? l M
) X Y i O + g i po Y M o 0? ^ + p + L X ????? X&d2d L ? O b ?? X M O a ? p?X { O r c Y )?+
? ?u?C?E? ???V?E? ?u????? ? ?
??? ??? ? ???7?7???C?7???u?????s? $?? ? ??? ???7?u? ??? s? ?
?u?V?=? ???u?????{? ??? ?7? ?f? ?? ? ?;?
E? ?
? ?
???V?
?
~
?5?%? ? ? ?C? ? ? ?5? ? ?s? ? ? ?%?E???7? ??? ?V? .
"!#%$ & ')(+*,"-.#/10 2 .3 4" &65798;:=<
>?8
$@0
A1BCA
D1EFGIHKJL FMON N PRQIS T"UVWP1XZY[ [ D VWEV9L\J]^JE`_)acbed S 4 ]`J9fhg9ijkml^n V#E`UJPofpiqj 4 bsr J t+JuKS 4 L\v 4 PpV]`v 4 PIwVWP
FQ1x DIy x{z|JS }~R?R]??m? k V9HKHI? r J%z|JS , ~R? ]?U?]^F?]`??JXF E6EJ _ D FPm??v ??? w r1? , ?I?RJZP+Q1PIv T ]
VCE`J ? }???J??JX]%V9HK?6?
D E`QIP J ??V9z?V ? ??R? X??? _ v ?]?QmV9xk ? v FP?S } _ z?J??H????R? FCz???? }?P?]?RJ?UZ]? ] S ? UZ] ? 4 X U H ? ??? JE ? ] y1? J??C???+?1??JH
_ JH?JX] ? , FCP z ? JZEJ _ ? XZ? N N w? FWU ] S , P?D ? x z J ? 4 wC???^U{V9EJ??RP1F z ??V U{?RQ1v } _ V#PmXJ)DpV ? V9L\J]`YE`?#?@?^] ? ? ?
v , L DpFRE]V?P1]@]`F?EJZL FW?J{] r J _ J D V9EV9L J]JE^U|??E???]`? J)P Y ]??|FCE`??cQ?PmXxv F? G J? FEJ J U ] ? ? ???W? ? ? P w
x??J{UZV9HKS , JZPmXZ???#?{??Z?? ? ? ? V9Pm??]? J?EJZU6Q1H?]^S ??P?w?YZ? J X]^S T ?WJ)PRQI? G JZE
FC? D1VWE V L+J]`YE _??%?^?
k V#U?] r J?
k?
??
?
z
FRQIH???F?] r J Ez?v _ J?wCS ??J N N U D ? ? S F ? ? XZFCP?]^E`S ?? Q ] v FP?U?]^F ? ??JUZJ?J U ]^S ?\Vxc? U
D1DI? ? v w???? ?
?
?
?
z?S , ] r FQ ]@] V ??v P?wO] r S U ?^V X x S , P ] FXZF9P?US ? ??JEV ]`S FRP?FW??]`JZP?E`J U? H?] _ S , P UQm????J?? 4 Q1L+D?U?S } P?]??J{?KJ?JH
FW??x r J?PpJ] z?F9E??JE`E6FE??QmJ?]^F D E ? PIS P?~+F9??V9P?v L D FE]6?#P1]{z|JZS w r ] G V U J ??FP?V?XZF?? ? Q D ]`J ?
_ V9HKS JP=X?\J UZ]6S L?VC]`J @? J??F9??S ?"Pmw ] ? J _ Q1DpJZE^?pQ1F9Q U?z?JS ~ r ] _ ??E`FL?] r J z JS wR??] ? J X]^F?E???V9P=?
]??JOXZFE`EJ UDpFPm??S ??P1w?EF z _ V#Pp? X FH ? L?P?U?S , P???]`F ??FE6??x6??J)EJ?RQpXZJ ? l EJ w Q1??V#E ?,? J#??j???J U6U v ? V#P
?@? S _ U ]?EZV S w??R]??cFE z V9EZ?hk ? QI]?v 4 ]?S 4 U+XZFL D Q?xZV9x`S F?mV ? u ? J??DhJZP U S ? J?]^F?S ? P??JEZ]?J.VX r ?#??] r J
EJ UQI?K]`S }?PIw?aU ? ?? j^L?VWx E v } XJZ_ ??? ??FE|Q?UZJ?S } P?a?C?j V#Pm???Z?#?W?
??S 4 U XZF U ])XVWP?GhJ X FP?U ? ?RYEV#
G ??
?
EJ#? QpXZJ#? G?? EJ V?EE V9PIwS }?
P ?]`??JE6Fz
UV9Pp? XFCuKQ1L+P?U?F9? ??V _
? ? ?
? ? ? a ? ? j
!#"%$
?
? &
?
? +, ' a
*
?(' ?)
? - j)
?
?
? r J EZJ??.
k ? ' WV P ?\? ) 9V E`Y?] ? J?E`Fz?UV ?\XZFHKQ ??P?U XZFCE`E`JZU/1FPh?RS ? P?w?]?F\] r J Q1S ? UZVWP1XJ D V10
E V9L J ]`YE6U
w V _ ] VWP ? VWE? H?JZL?L\V???FE D V9E`x6S ? ] v , FCP1J ??L?VWx E S 4 XJU#fIz ? F%4 ? VWS 4 P
U S 4 PI?
?32
? ? j!57698: ?; , j ?=< ?c?> ? j
#? ?@.A j )!B 5 A ? - 6j '
? ?DC
?
z{??S ?X??F P1H ? XV HKH _ ?FCE
S P??CJE^US , F9P?F9?m] r J ? U L?VWHKH?jp_ ? G L\V9x6E`S E"?
? F )HG ?^P a I V?U`US 4?S J ]@V k9iq
? ??]?K
z J VWE`wQIJ ??] r ?9]?FP J L?S 4 Mw L?x _ V ?WY)FP\? XFCL?DIQ1]`V]6v , FP GR? Q?US , PIw?VWP?S ??]6JZEZVC]`v , ?9Y?!N r JZL?} J{M?FE
X.V9??X ? H?V]6v O?' FP?F9?]? J)S P??9YE`UQ
J P)JU _ S 4 V#P\? P F z J?JE f U S P1XZJ%U ] V?P=??VWSE R?
L\V]6E`S ? v P ? JE _ vT?FCP?ST?U
?
,
V
U
V?P?? ? ? F D JEV]`S O FWP z ? ? J x r Jv ]6JEV9x ? ? Y UZX ? JZL?J _ XV?u?JUOV _O? aXW ? j foV???J]6V ? }YKJ.? NF y P1]
_ r F ??U ]6? V]?] ? VW]?S Z?]?S _ FCP1H ??G JZPIJS[=XS ? V9H?]`F?Q U J?] r JS ?]`JZEV9?`v ? J UZX ??YL+J ? , P\]`??JV9?c? D ? , N]\ U ND\ _ J
? ^ W` b
_
? a
c
dfehgdikjmlndohp
b ?
J z?S ? HKH{S 4 H???Q?UZ]^EV9]J\] r J??Rv ? V Grq HKS Z?] ? FW?] r J\D1E`F / F _ J.? L?J]??F1? U FPs? _ ] V9Pp??V9E? ?
D EF 4?H J ?
?
,
d J _ JE`S } Y _ S _
FW??PIFRP1HKS T?P1J VWE?? ? PmVWL?? X U ?S ]6??Jts VCX`?JZ?%uv?Hxw U X?1VWF]`v X?]^S ??J U J ? S ? J HU y
4
4
w JZPmJEZV9x`J?? G ? S P?] ? w EZV]^S FP?FW?
] ? J???S ?=JEJZP?x6S z?V9HhJ]{ ? V ]^S FCP
??
?????? ????
X
?
?
??
|}~ ? 8?
? ??? ~
? ~ ? ???? j
~? ?
??} ????
???
8 ?#? ,
?r???
4? V P ?
#? ?M?
z{?1JZEJ?x r J\XZFP _ xV9P ]^U ??EJ
?IJ?UJEv 4 JU ? ?"U ? J??9? D HKJ#?
z{S ?x?+UZ???
V9L?r/??(H ?S Pm??
w ??D JZEv , F1??? VXXZF9E? ? , P1w?]^F?U ] VWPm??V E ? D ? VX] ? XZJ ??Y PIJ ] z FE?NZFCPb[mw%? ? EZ? ? ? F P
_ ??
S ?
?9x
v ? D H?? JZL+JZPI]|V?US ?"?\UZ]`JZ? D V ? J V ? / ? J ? v } X ? ? F
?? ? ?????V9Pp ??z J ]6 EV9v P?]^F\???
??
? ??? j B V9?m?
? ? ??? ? ? ? ?^P?? ? w?@??z???
r ?F z
S U r
k ?.?S??j
??? ?
?
jZf ? ??? ? ? ? 4 ?
J
?
?
? r
DIE`Q1PIS 4 P?w U XJ?1V9ES F U GpV U J??FCP?] r J ]6z F?~ ??S T ?=JEJZP x?? T L?/ ??J ??J ]? ] S F P U
J xE? ? ? ? }P w JE`EFE`U ?
z
]6JZUZ]OJE`E`F?E U ?#Pm?
? ??? ? J E`EF E U V9E6J D HKF ]`]6J? ? FEV?]^EV9v ?1v ? ? w?UJ xO_ S ? ? J?F ? ?M?M? JS?m?W?/IHKJU k ] r J
xJ UZ] _ J ] XFL?D?EZv ? _ JZU g ? ??JZ?IV#L / HKJZU ??P\] r JHKY? ]?D V9P?YH z|JUZ??FWz ]6??J ? JZU ? K]^UF? /?^Q1PIS P1w
? ? ? V9PI? U S L?v ? H?V#E? ? S T P ] r JE? , w ?R] D V9P1?J ? z|J{U ? Fz ] d JEJ _ ? H?] U F9???E`Q1PIS P w
VCXXZFCEZ??S 4 P w ]??
F ?m? 9
V?U?S x?F?XXQ1EE6J ??Q?UZv w ? ? ? ? , ??P?]`??S U J?IV#L D H J{z J ? F?P?FR=
] [1Pm? _ v , ~ P1? ??[pXV9??]?S L+D1E`FW?JZL+JP?]
,? P D JE ? FM?L?VWP1X J GR? UJ?F9M ? ???
?
?=
y
y
F S H HKQ U ]6EV]6J?]`??J V G ?, H ?, ] ? F??)]`??J?JUx`S L\V ] F E _ ??FE DIEJ? S X ]^S ? P?w?] ?1J?J?=JX]^U?F9? D E`Q1PIS P?w?FP
?] r J?]6J _ ]?JZEE6F9E?z?J D HKFR]OS ? ?
P ?mwQ?EJ??\x??J?J U ]^S ? L\V9xJ? ]^J _ x?JE`E`FCE U ?CJZE`U`Q?U?] r J\VX]^QpV ?]`Y U x
_
? ? XZV U J?]??S , U L ? V#P?U?] r J?]`JUZ]+JZE`E6FE+EJ _ QI?K]`S P?w ? E FL
JE`E`FRE ????]`JE D E`QIP1S P w , ??P ]??J ??=
,
D ? Q1PIS P1w ] ??J D VWEV9L+J ]^JE _ z S ] r F9Qm]?EJ ]?EV#S ??PIS zP wpf?z ??v ?? J?v , P ]`??J?? ? ? X.V _ J?S ]?L?J VW?RU x? J
] JZUZ]+JE`E`FCE??cF9HKHKFCz?S PIw?D1E6Q1P1v 4 P1w ? Pm??EJ ]?EV9v PIS TP ~ S , Ps] r ?
? ?9Q VR?REV]^S 4 X V D?D E`F#??S , ?\V]`v FC(
? ? b J
4
PIFR]`J?x?1VC]?x?R?
Y ?p??=
? ? JUZ] S L?V]?Y_?F ? x r J?] Y UZ] J E`EF E\V D?D E`F#??S ? L?VC]^JH ? DJ ?9QpV9H]??J V X] ? V9H
*,+.-
0 132 4
0 13B
C
DE 0 13F
R
0 165 7
:8 9:; 8
<>=@? 9
GIH.JK O S
TU@V:W
01t
" !
*,+./
# $ % &(' )
8 9:; 8
A @= ? 9
0 16B
C
DE 0 P F
R
XYZ ababcdfegh aiabjb[ kbl"mbn g"obo
L.M.N Q
\:]^ _I` rp q
bv w abxbyzij{| af{ }b~ db a j s
0?P u
0 13? ? 1
0 16? ? 1 ? P ? 1????f??? ? ?31 ? ?
?:? ? ?@???? ????? 1 ??1 ?
? 1 ??1??bP?????? ??1 ??? ?@?.? ??1?.??:??????? 1 ?31 ?
?
??? ?@????
????????????? ?i???r??? ? ? ?? ??,? ?b?? ? ? ? ? ??? ??? ?3? ? ?b? ???b??
? ?.? ? ??"? ???? ? ? ? ? ? ? ? ?I? ? ? ? ????????b??
@
?
??? ? ??? ? ? ? ? ?? ?f??? ? ? ??? ??? ? ? ? ? ? ???b? ????? ? ? ????? ? ? ? ? ???r? ??????? ? ? ? ? ? ??? ??? ??? ?? ??? ? ?
? ??? ????????? ? ? ? ? ? ??? ? ? ? ?? ? ? ? ? ? ? ???? ? ??? ? ? ? ? ? ? ? ???? ? ? ? ??? ? ?? ? ? ? ? ? ? ? ? ?
? ? ? ?f? ? ?b? ? ? ? ? ? ? ? ? ?? ??? ? ? ??
? ???.? ?
? ?i? ? ?i? ? ??? ?? ? ? ? ? ?
%$ '&
9;: =< ?> @
5H IJ
K
#"
!
#(
-687
) +*-,/.10 32
54
A
=G
C
B
5
D
E
F
#L M
N
O9 P
UT VW: YX
Q R S
=
Z
]
F
[
\
S
( 8&
^
_
a`bTdc >
e
gf h @i
j k l
\X
A
Gnm
yx{z |} \6 G ~ :
Q $
9 wv
B'qrts uG
i G
Pw? po ?G
w?
E
?
D X
? ? ?
QD ?
/T?? ? ?
k o
?? ??
@?
[ <
B
? ?i?? ? ? ?? ? ? ? ??? ? ? ? ? ? ? ? ? ? ? ? ?b? ? ? ??? ? ? ? ? ?
?? ??
? ? ? ? ?? ? ? ? ?? ? ? ? ? ? ?
? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ???? ?? ? ?? ? ? ???? ? ? ? ? ? ? ? ? ?? ? ? ??? ? ? ? ? ? ? ??? ?
? ??? ??
? ? ? ?? ? ? ? f? ? ?i?? ? ??
? ? ? ? ? ? ?"? ? ? ? ? ?
? ?"? ? ?? ? ? ??? ? ? ? ? ? ? ? ? ? ? ? ? ?? ?? ? ??? ? ?? ? ? ?
? ? ? ? ? ??? ?f? ? ? ? ? ? ? ? ?b?
? ? ? ?? ? ? ? ? ? ? ? ? ? ? ?? ???"? ? ? ?? ? ? ??b? ? ??? ? ? ? ? ? ? ? ? ?
? ? ???? ? ?? ? ? ?f? ? ? ?
? ??"? ? ? ? ? ? ? ? ? ?
? ?? ??? ? ?b? ?.? ??? ? ?i? ? 3? ? ? ? ? ? ? ? ? ? ?? ? ? ??? ?? ? ? ? ?
?? ? ?? ? ??? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?f? ? ? ? ? ? ?b?"? ?r??? ?b? ? ? ? ? ? ? ? ????
? ? ? ? ?b? ?? ? ?? ? ?? ? ?
?I? ? ? ?
? ? ? ? ? ? ?
?? ?:?? ? ? ? ? ? ?b? ???i? r? ? ? ???? ? ? ? ? ? ? ? ?? ? ? ? ? ? ?
? ? ? ??f? ? ? ? ? ? ? ? ??? ? ???? ? ??
?? ? ?b? ? ? ? ?
? b? ?f? ? ? ?
? ? ? ? ? ?? ? ?? ? ? ??? ?f? ? ? ?
?? ?
? ? ? ? ? ? ? ?"? ? ? ? ?
?
?
?? ?
? ? ? ? ? ? ? ?.? ?? ? ?"???? ? ? ? ? ? ??? ? ? ? ?f? ? ? ? ? ? ? ? ? ??"? ? ? ? ? ? ???b? ? ? ? ? ? ? ?
? ??
?"? ? ?b? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?? ? ? ? ? ? ?f?? ? ? ? ? ? ? ? ? ? ? ? ?? ? ???
? ? ? ? ?f? ?
? ??? ???? ? ? ?"? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?
? ? ? ?? ? ? ?? ? ? ?? ? ? ? ? ? ? ? ? i? ? ? @???b? ? ?b? ? ? ? ? ? ? ? ???b? ? ? ?b?
?? ?
?? ??
0 1??
0 1??
0 1??
0 ??? 7
01
C
??
CEC
?
??????
R
???
? * + ?
? ? ?
? ? ?? ? ? ?
?
5
?
#
?
?? ?W?y???? ?
? ? ? ?
?
?
? ???
?1
?
?
* +/
?
???
C
?R
CEC
????
? ?
?
? ? ? ? ? ? ?? ??
?
?
? ?{?C?3?W? ? ? ??? ? ? ? ?
?
?
??
?? ??
?
?
?
01F
? ?
? 1?? 0 1 t
0 ? ?? 1 u
0 16? ?
0 1 ? ?? ? ??0 ? 13? 7? ???? ?.? 0 1??
0 1W?
? 1W?
0 1W? ?
0 13B ??>? ? 0??P??r? ??@?b? ? ??? ? 0 1W?
0 ?#?
0 1W?
?
? ??6????
? P ?E ? ? ??? ? ? ? ? ? ? . ? ? ?A ? ? ? ?n? ? ? ? ? ?5<? ? ? ? ?I? ? ? ? ? ? ? ? ? ??? ?? H ?? ? ? ? ? ? ? ??? ? ? ? ? ? ??? ?
? ? ?
e ? ? ????? ? ? ? ? ? ?i? ? ?f? ? ? ? ? ?i? ? ? ? ? ???? ? ? ? ??? ? ? ? ? ? ? ?? ? ? ????.? ? ? ?b??3? ?? ? ??? ? ? ? ? ? ? ? ?
??G ? ?i? ? ?f? ? ? ? ?? ? ??a?'? ??? ? G ? ? ?b? ? ? ? ? ? ? G ? {? ? ? ?? B
? ? ? ^ . ? ? ? ? ? ? ? ? ? ? x ?r? ? ? ? ? ( ? ? ? ??
? ? ?.??? ???? ? ? ? ? ? ? ??? T 1> ? ? ? ?? J ? ? ???b? ? ? ?? ? ? ?I? ? ? ?
? ? ? ? ? ? ? ? ? ? ? ? ?f? ? ? ???=? ???b? ?? ? ? $?
? ?
?? ?"? ? ? ???? ??? ? ??? ?? ? ?i?8? ? ? ? ? ? ? ?
? ? ? ? ? ~ f? ?? ? ? ? ? ? ? ?? ? ??
?
01B
!#"$% '&#( *)+ -, /. 10 % 2436587:9 3 5<;
=?>A@
B CEDGF CIHKJMLON DGP
qhXQSnXRTnUWw_ d VYn X[d??AZ]Zd:\ X??mZ4? XR^s U[?*q?g _SvX?`bd?w a ??yc Xs _ V? ? de?XefU?ghU XiVv?ec _k_kX w?jm? skl?dnn?rX?o ef U-yOXwpc ?}Urd e? qt\?*q?sh?s u dd?qkX1_?-??n X
e^? dnV v?_hs\]\?n n wR?zd?4v?exz??y|XO_b{{ _}?rX d?we^d \}?_ X ~?nnnn w __V n:gks ?n? ss ?Sn {X? ? ??oX ?nUWn Vskd?d? RRcc s ?1_kU?sU d?/?cs _?v?~ U?X?~?d'??:n_!ZkK???x?xUry\?o-ec U ?ks? ??
d??<RT?O_bU<??? ? T?R??? ??X? ~?R?n?_kv gSVR?*? s!X1w X???RT??\ X?~ v-? Xs ??gkX XU sKd snsd?R??{Y_bv U n
??Xw d e?z\s R? ?~-w ? ??X?_ dn?X s w d#yVX ??n'Xn \?_ n ??X _kw j d'e^? \R?U?s? skdX?n?? ? _?? ??s!w ? !??tX??_hR? UId?? ???/?W? ??s R? ?O? X?K? w ?
????v?RR?*? V/V/?[? T!R^?R? w y?d{?-?rX X q q?XXYU?U-X Xnn/s s ? R?? ?RT ?s s d{Rc _ke? _ UKU _k?_ j ?#???m???A?? ? o-s nUW{/? v?X s n \ w V_b???YXk\t?k? XKX _k~???n ?b_koWg?Rcsh????X n??s d?s Rc V ? ? Z1_k?K\ d?_???? y/X/R? ?-q ?fU-d ???c??A?? s ??U ? d u Q?
n??X1U ? o shV ~-??~ {/? v?Rc V ?#s V/{'_?RT _k? U?~ ??o ?d d s ? dR?Rc U?_hU ds Rc ?}? _ X?g X?w nX??R??!X Zw???~-R?nUSX? ? _ R^?V gkd?!Rc _b??U?R?UiwvSd'v_bX??OX?? ? s U?d ??? ?*X R?U g q?Rcsk_!`??RcU?? Rc??{ R?]? ws _kU???VdX ?-~8X s wn o s qb\ qbX X d'w X dnX!y ? f
s4~-~ n _ s V/? f
?}?!?8?:??????W?$?W?S???????k?
?~ _ Xin dd'X? ??s ?U?? ? {? v X d?X n s U-?sec w? ?bU ?o?s w
d nbs n? q Q?sbVsR??nX?U V???X _ ns U8gh? Z ? o?Xs V? ??? UrX R^ ?V sR^ w u V w X yw R?zX _bsU?n w V cv Ov?_k?orfzwi?-nVXYec uyy?X1Zd/vn
n v _ ? R?zqbw??]w odv-~ X ?
_b\ ~Wo d s d?R ^ _ s ? X ornsk? ?Xd??:_ n ? XU-d'X n ? "T !$# s V? U-_ ?&% X1? q X?d'v?X s ? e' _ ~?skn {y
( _ o U?? s {'R ^ _bU j _ *n ) U s UWVRc sk?Wwo ~ ~ _ n { T
+}-? ,/? .!? ??? ?r?
0 ?21&3546c 38759c;:< = >@?@< A BDCEFD?HGDIKJ C IKJL
L < M N JPO GRQSJUTVLXW G IY I J QS< Z [R?\< ] GDBkc_^a`` xcb `-dfe fhgSi 4Reckjl45m Tonqp ?
rtsDuv-rtwDxzy"{f|R}~R}z? T
???? 7??X? ` 2? ? c g-/f ?X7 ` 3R7? y 4 `D? g-? ^ $T gS?5?$4 c???? ?@< ? ???z???IK?D< ? B? ?D? ? C J *f bK? ^ ?-? 4 `D? 7 d?? ? ?_?????-?U??
b `? ? ?f??4 i 6? ? `? ?f? ? 7 d
d 6c `$? g? d m?7U? d r ? j ?z? ? 4 `? 4R? ? ??4 ? ? ??R?? ~5? ?!? ? | }8}z?z? T
c
? 0
? ? ??? ? ? B`D? ?I 7 Gz? ??7? J
?z?c? 7 ?X? ? ` GS???????7? ??UW ?U? ? ?X? 7L?i?? ? ?D?5Q ?J 3 d ? ? G g-? ?5If?U? ????D7 ?@? < ?m GS???h? b\??i 4R????@? ? ??
?"c? ? ? ?c ? 7?-4 6c ``d?z7 d ? ? ? 45?Sb ?*? ^j ???a??;? }tw6 if? b\?i? 7 7 ? i ` 7 4 ? i d ? ?7 ? ` `$c?
|t?5}5? f*? ? dc? c$j 4 ? 6T ` 4 ?f? 4 ?D? ? ^V???j ?z? 4 dfd ? ? x ?5x?? ?5? ? ? { |t?5?w ?
?
???? 4 df? 6?? ? 6^ a? ? T ?P? g m ?z? 3 ? 4 `D? ??2c ? f ??? ?V? '
? ?` mK? 7 >Hn < ? `??4 ?8?26? ??` I??D? ? ?B ?q` FD? I C J G? B ?DBQ ?*J B$? ? ?z???&?R?@?GDI d
Y I B?? = B C ? ? ^ `c ? ?{ ? 77 ? 6 f ? ? ? ? m ? 7 ? ?5c }
b
?
? 7??f7 ? 7 ? ?X7 ? 4t??X7?m ? ? 3
g 4 ` ? 4 ` ? 6 d ?U? ?$d ???T f ? d 6 ? 7m?4R? ? ? r? Dr8}z} ? { }z} ?
? ? ? c 4 ? df? c ? ? J?? ? < M CDB&GfW?? J I\|R? } ? } 5J ? ? ? G I < = ??> J I L c ? ?rV^ ??^ ? 7 d 6 d ? ? 7 ??if?K? ` 6 f ? d b `-d i 6? m ?-i 7 ? ? ? ?S6 ? 4 ?
` 6 ? 7 ? d 6 i??? ? 7 ` ? ? 3
T
? ? 4 ? d 7 ` 4 `-? ? T 4 `d 7 ` ? ?XJ ? B$J IK???< M ?S?@< M GzB Y2J I W GDI???? ?DB [ J G W J CzFz? ?DIU< ?
Q??&?RFDI??z???&??\?GDI
O GRQDJ ? L ?a7 ? 7 i??2?z? ??3 ?? z6 ? ` ? ???z? 7 d?d 6 ? ` ? ? ? b ? ? ? ?D? 7U7 ? ? 6? ` s?r ?d ? ? | ? ? i {K| 7 } ?Rb w ? ? ??? ? 3 d ?S? ?
? ?$d f ?!c ??? ` m ? 7 m c ?2? g 7 ? ? 6 ? 7 ?X7 ` if7 ? 6 d ? 4 i 4 ? 4 ? ? ?
T
? ^ -c ? ?D? ? ? ?? ?aGD?HJXG-B aJ B JIK? ? ? ? ??\< ? G-B 2J C F ? ?SI ?@< GDB Q?E?I [ $< M >HJ
[R? SIf??/J ? ?f[8? < G B ? ?&? ?? ?
J ?SI?? tL ? J ? ? L { ` ?*7 ? 0 4 ?*? if?"? ? ? 3 d ? ?z? g 6T ?8`$? ? ? ?K?S? 7 dfd 6 ` ?? ? ?K?S? ? 77 ? 6c `-?d ? ? m ?' 7 | } ? b | g
?| }?z}? d ? ? ? ? ?-d T ?f ?c ? ? 4 ` ? ?^ T ?? ` 4 ? ? ? ? ? 4??? b ? g 7 ? ? 7 ? 7 ` mf7
T ??
? f g ? 4 ? 7 ? ? ^ ? ? ' 4 `-d 7? ` ? 4 `-? ? ^ ? |R?5?? d 7 ` c ? B ?UL < CzBl^ ?DB$Q ?8? FD?S?@< ? G-BPG ?z? ? ? ?? < ] B$?
J ?\??GSI L ` ?K?D? 77 ? 6 `-?5d ? ? i?? 7
` i 7 ? ` 4i?6 ? ` 4 ? ? ? `D? 7 ? 7 ` ? 7 ?z? ?X7 ? 4R? 7 i?? ? ?3 d ?
?
4 ` ? 4 ` ? 6T d ?? { ? ? d ? ? ?? ? d 6c ` 6c 7 i 4 ? f s ? | ? |t}5} T
^ ? g ? ?7U6 ? 7 `$? ^ ^ ? ? ? 7 ? ??4 ` ? 4 `-? ? f ? ? ?|t}7 ? Kr4 ?f? i } ? ? |RI ?5J } Q ? M [R? ? < ? ? B ? J W FD? If? ?E ?"? ? ? JU?
?\< = GB$< M L ?E??S?qI ? ?z[ ?b ` m/cq? ?S? ????7? ? g? d i 7?? d
T
0 D^ ? ? 6c i 7 Tc J ?DIUB < ] BDC?< Z BE&I
?\< [5? ] ? ? J FSIK? ?? J ?\??GDI L E ?? ? ? < ] L ? < M ? ?z? Y2J I L\? ?f[t?\< Z R?/c 7?? ? 4 ?
mU4 i 6? ? ` p ? szr ? s~Rw ? {f|R} ? }D? f
576
'&)(
3
4
8
YX
U
lk
u
0#
!
L
U
KJ
[Z
n
'?K?
d
?
?
BA
]\ A
_^ C @a`
;
!
`
c
q
b
U
?*
d
Tw
R?
r
%
U (
f
1
2
R
?
h?
?
e
!
?
? ?
f hg i
g i j
# Rr>s
t
z {}|a~ z
Tx
y
?[? `
??K?
]??
r??
=
f?
C@
Q
po
0/
P
WV
@
?=
$ %
.
@
4
#
-
ON
ML
L
??
?
?
,+
=
?
*
m
pv
*
"
?
:9 <; >=
IH
TS
M
DFE G
P
!
(
m
$?
??
w
]?
d
e
a? ?
?
H
`]?B?
H
?
n>?
???
| 1174 |@word h:1 pw:1 rno:1 nd:1 d2:1 hu:2 r:2 nks:1 pg:1 q1:1 gnm:1 sah:1 dff:1 zij:1 skd:3 ghj:1 q1e:1 od:1 si:2 bd:2 neq:1 fn:1 xyu:1 rts:1 xk:1 d2d:3 ron:1 gx:1 rc:10 ik:5 adk:1 qij:1 ra:5 p1:4 xz:1 uz:12 gjk:1 ry:1 fcz:1 jm:1 kg:1 xkx:1 adc:1 ag:3 acbed:1 qm:2 k2:1 uk:1 t1:1 diu:1 xv:1 sd:1 io:1 xiv:1 kml:1 au:4 eb:2 k:1 fov:1 uy:1 vu:1 z7:1 oqp:1 gkx:1 inp:1 dfh:1 ga:4 nb:1 k2v:2 ke:1 bep:1 q:1 x6s:2 y2j:2 oh:1 u6:1 dw:1 s6:1 fx:1 au4:1 p1w:5 jk:2 dcj:1 u4:13 ep:2 wj:1 aqa:1 eu:1 hkq:2 q5:1 ov:1 uh:1 gu:2 po:2 sif:1 tx:2 zo:5 mon:2 wg:1 p1j:1 ip:1 mg:2 rr:2 gq:3 mb:1 fr:1 pke:2 bpg:1 kh:1 kv:1 az:2 ky:1 p:3 r1:1 dpf:1 vsd:1 ac:1 v10:1 ij:2 qt:1 op:2 sa:3 wzy:1 c:1 qd:1 f4:5 vc:3 sgn:1 orn:1 pkj:1 f1:2 d_:1 c_:1 ezj:1 cb:1 prm:1 mo:1 jx:2 fh:2 f7:2 hgi:1 gihkj:1 iw:1 ppj:1 nmu:1 hj:1 dgp:1 dkn:1 vk:1 jz_:1 u_:1 hk:1 hkf:1 nn:6 sb:1 kc:1 fhg:1 xeg:1 kw:1 r7:1 xzj:1 fiz:1 fd:5 sh:4 beu:1 hg:2 xb:2 fu:2 xy:5 lh:1 iv:2 kjl:1 re:2 pmx:2 mk:1 wb:1 gn:4 lqp:1 gr:2 acb:2 ju:3 st:1 bu:2 diy:1 na:3 jts:1 nm:8 hky:1 hn:1 tu1:1 de:7 skl:1 sec:1 cedgfih:1 jc:1 mv:1 cdb:1 ni:3 jfa:1 v9:6 gdi:4 mgi:2 mc:1 bdcfe:1 rx:3 zx:2 ah:2 za:1 ed:1 pp:1 di:1 fre:1 ut:3 cj:1 ou:1 x6:1 ox:2 jzp:3 su:1 o:2 a7:1 v2w:1 aj:3 gdb:1 ixa:1 eg:1 ue:2 die:2 gg:1 ay:1 fj:3 jlknm:1 ef:3 dbj:1 wpc:1 ug:1 jpg:1 qp:2 ji:1 osv:1 he:1 frp:1 rd:2 u7:1 pm:7 hp:4 nhk:1 dj:1 pq:2 gj:1 gt:1 feb:1 j:1 pkc:1 wv:4 vt:1 uku:1 neg:1 ced:1 mca:1 mr:1 eo:1 rv:3 d7:1 rj:1 d0:1 af:2 qi:4 df:3 dfehg:1 ot:1 vwe:5 sr:1 mdk:1 pkm:1 nv:2 db:4 oq:1 mw:3 vw:3 p1h:1 m6:1 b7:1 xj:2 fm:2 cn:2 gb:1 j_:3 k9:2 ul:1 wo:2 f:1 gks:1 s4:1 ph:1 gih:1 rw:1 wr:2 rb:1 hji:1 zd:1 iz:1 uid:1 ht:1 uw:1 vb:1 hi:3 s7n:1 g:6 bv:1 bp:1 ri:1 x7:6 u1:1 mnb:1 wc:1 qb:1 px:1 bdc:2 zgs:1 dfd:2 cbed:1 qu:1 tw:1 b:2 ikj:2 pr:4 ene:1 fns:1 k2l:1 gkd:1 wvu:1 b_:1 ozy:1 jane:1 rp:2 jn:2 cf:1 mdi:1 xc:1 yx:5 dfe:3 yz:1 rt:6 md:1 nx:3 ru:1 o1:1 cq:2 dsg:1 xzf:5 fe:3 shv:1 ba:2 u6u:1 jzl:6 t:1 efg:1 dc:3 y1:1 ww:1 wyx:3 rrc:1 bk:1 kl:1 sgj:1 c1d:1 wy:1 
ev:1 xm:2 spp:1 yc:1 tb:1 oj:1 ppv:1 hr:1 oue:1 hks:1 ne:6 lk:2 xg:1 dkh:2 gz:1 gf:3 fxs:1 kj:4 sn:2 l2:1 kf:1 zh:2 fce:4 vg:1 qnr:1 pij:1 xp:3 pi:4 lo:1 kbl:1 l_:1 d2x:1 fg:2 xn:1 ig:2 ec:2 agd:1 vce:1 hkh:2 kfx:1 ml:1 a_:1 nef:1 p2n:1 sk:2 z6:1 k_:1 jz:3 zk:6 ku:2 f_:1 fcp:4 gfn:1 mlm:1 qip:1 uwv:1 t8:1 da:1 pk:2 uzj:2 rh:1 x1:1 f9:7 tl:1 vr:3 n:2 pv:1 ihkj:2 ix:3 hw:1 xuc:1 cec:2 kew:1 dk:4 gzb:1 ih:1 deno:1 o9:1 vwx:2 jnk:1 nk:1 v9v:3 rg:1 tc:1 fc:2 ez:2 cdf:1 jf:5 fw:7 hkj:1 r9:5 xin:1 f4g:1 zg:1 e6:2 gsi:1 lu1:1 ub:1 qts:1 skn:1 ex:1 |
Generating Accurate and Diverse
Members of a Neural-Network Ensemble
David W. Opitz
Computer Science Department
University of Minnesota
Duluth, MN 55812
opitz@d.umn.edu
Jude W. Shavlik
Computer Sciences Department
University of Wisconsin
Madison, WI 53706
shavlik@cs.wisc.edu
Abstract
Neural-network ensembles have been shown to be very accurate
classification techniques. Previous work has shown that an effective ensemble should consist of networks that are not only highly
correct, but ones that make their errors on different parts of the
input space as well. Most existing techniques, however, only indirectly address the problem of creating such a set of networks.
In this paper we present a technique called ADDEMUP that uses
genetic algorithms to directly search for an accurate and diverse
set of trained networks. ADDEMUP works by first creating an initial population, then uses genetic operators to continually create
new networks, keeping the set of networks that are as accurate as
possible while disagreeing with each other as much as possible. Experiments on three DNA problems show that ADDEMUP is able to
generate a set of trained networks that is more accurate than several existing approaches. Experiments also show that ADDEMUP
is able to effectively incorporate prior knowledge, if available, to
improve the quality of its ensemble.
1   Introduction
Many researchers have shown that simply combining the output of many classifiers
can generate more accurate predictions than that of any of the individual classifiers (Clemen, 1989; Wolpert, 1992). In particular, combining separately trained
neural networks (commonly referred to as a neural-network ensemble) has been
demonstrated to be particularly successful (Alpaydin, 1993; Drucker et al., 1994;
Hansen and Salamon, 1990; Hashem et al., 1994; Krogh and Vedelsby, 1995;
Maclin and Shavlik, 1995; Perrone, 1992). Both theoretical (Hansen and Salamon, 1990; Krogh and Vedelsby, 1995) and empirical (Hashem et al., 1994;
Maclin and Shavlik, 1995) work has shown that a good ensemble is one where
the individual networks are both accurate and make their errors on different parts
of the input space; however, most previous work has either focussed on combining
the output of multiple trained networks or only indirectly addressed how we should
generate a good set of networks. We present an algorithm, ADDEMUP (Accurate
anD Diverse Ensemble-Maker giving United Predictions), that uses genetic algorithms to generate a population of neural networks that are highly accurate, while
at the same time having minimal overlap on where they make their error.
Traditional ensemble techniques generate their networks by randomly trying different topologies, initial weight settings, or parameter settings, or by using only part of the
training set in the hopes of producing networks that disagree on where they make
their errors (we henceforth refer to diversity as the measure of this disagreement).
We propose instead to actively search for a good set of networks. The key idea behind our approach is to consider many networks and keep a subset of the networks
that minimizes our objective function consisting of both an accuracy and a diversity
term. In many domains we care more about generalization performance than we
do about generating a solution quickly. This, coupled with the fact that computing
power is rapidly growing, motivates us to effectively utilize available CPU cycles by
continually considering networks to possibly place in our ensemble.
ADDEMUP proceeds by first creating an initial set of networks, then continually
produces new individuals by using the genetic operators of crossover and mutation.
It defines the overall fitness of an individual to be a combination of accuracy and
diversity. Thus ADDEMUP keeps as its population a set of highly fit individuals that
will be highly accurate, while making their mistakes in a different part of the input
space. Also, it actively tries to generate good candidates by emphasizing the current
population's erroneous examples during backpropagation training. Experiments
reported herein demonstrate that ADDEMUP is able to generate an effective set of
networks for an ensemble.
2   The Importance of an Accurate and Diverse Ensemble
Figure 1 illustrates the basic framework of a neural-network ensemble. Each network
in the ensemble (network 1 through network N in this case) is first trained using
the training instances. Then, for each example, the predicted output of each of
these networks (Oi in Figure 1) is combined to produce the output of the ensemble
(ō in Figure 1). Many researchers (Alpaydin, 1993; Hashem et al., 1994; Krogh
and Vedelsby, 1995; Mani, 1991) have demonstrated the effectiveness of combining
schemes that are simply the weighted average of the networks (i.e., ō = Σ_{i∈N} w_i·o_i
and Σ_{i∈N} w_i = 1), and this is the type of ensemble we focus on in this paper.
Hansen and Salamon (1990) proved that for a neural-network ensemble, if the average error rate for a pattern is less than 50% and the networks in the ensemble are
independent in the production of their errors, the expected error for that pattern
can be reduced to zero as the number of networks combined goes to infinity; however, such assumptions rarely hold in practice. Krogh and Vedelsby (1995) later
proved that if the diversity¹ D_i of network i is measured by

D_i = Σ_x [o_i(x) − ō(x)]²,    (1)

then the ensemble generalization error (E) consists of two distinct portions:

E = Ē − D̄,    (2)

¹Krogh and Vedelsby referred to this term as ambiguity.
Generating Accurate and Diverse Members of a Neural-network Ensemble
Figure 1: A neural-network ensemble.
where D̄ = Σ_i w_i D_i and Ē = Σ_i w_i E_i (E_i is the error rate of network i and the
w_i's sum to 1). What the equation shows, then, is that we want our ensemble to
consist of highly correct networks that disagree as much as possible. Creating such
a set of networks is the focus of this paper.
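The decomposition can be checked numerically; the following snippet (ours, illustrative) verifies on one input that the ensemble's squared error equals the weighted member error minus the weighted diversity:

```python
# Numeric check of the Krogh-Vedelsby decomposition on a single input x:
# e(x) = E_bar(x) - D_bar(x) for any convex weights.
import random

random.seed(0)
t = 1.0                                              # target for input x
outs = [random.uniform(0, 2) for _ in range(5)]      # member outputs o_i(x)
w = [0.2] * 5                                        # convex weights, sum to 1

o_bar = sum(wi * oi for wi, oi in zip(w, outs))                       # ensemble output
ensemble_err = (o_bar - t) ** 2                                       # e(x)
avg_member_err = sum(wi * (oi - t) ** 2 for wi, oi in zip(w, outs))   # E_bar(x)
diversity = sum(wi * (oi - o_bar) ** 2 for wi, oi in zip(w, outs))    # D_bar(x)

assert abs(ensemble_err - (avg_member_err - diversity)) < 1e-12
print("e = E_bar - D_bar holds")
```

The identity follows by expanding Σ_i w_i (o_i − t)² around ō, so it holds for any outputs and any convex weights.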
3
The ADDEMUP Algorithm
Table 1 summarizes our new algorithm, ADDEMUP, that uses genetic algorithms
to generate a set of neural networks that are accurate and diverse in their classifications. (Although ADDEMUP currently uses neural networks, it could be easily
extended to incorporate other types of learning algorithms as well.) ADDEMUP
starts by creating and training its initial population of networks. It then creates
new networks by using standard genetic operators, such as crossover and mutation.
ADDEMUP trains these new individuals, emphasizing examples that are misclassified
by the current population, as explained below. ADDEMUP adds these new networks
to the population then scores each population members with the fitness function :
Fitness_i = Accuracy_i + λ Diversity_i = (1 − E_i) + λ D_i,    (3)

where λ defines the tradeoff between accuracy and diversity. Finally, ADDEMUP
prunes the population to the N most-fit members, which it defines to be its current
ensemble, then repeats this process.
We define our accuracy term, 1 − E_i, to be network i's validation-set accuracy (or
training-set accuracy if a validation set is not used), and we use Equation 1 over
this validation set to calculate our diversity term D_i. We then separately normalize
each term so that the values range from 0 to 1. Normalizing both terms allows λ to
have the same meaning across domains. Since it is not always clear at what value
one should set λ, we have therefore developed some rules for automatically setting
λ. First, we never change λ if the ensemble error E is decreasing while we consider
new networks; otherwise we change λ if one of the following two things happens: (1)
population error Ē is not increasing and the population diversity D̄ is decreasing;
diversity seems to be under-emphasized and we increase λ, or (2) Ē is increasing
and D̄ is not decreasing; diversity seems to be over-emphasized and we decrease λ.
(We started λ at 0.1 for the results in this paper.)
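The λ-adjustment rules above can be sketched as follows (function and argument names are ours, not the paper's):

```python
# Sketch of the lambda-adjustment heuristic: leave lambda alone while the
# ensemble error falls; otherwise nudge it up when diversity looks
# under-emphasized and down when it looks over-emphasized.
def adjust_lambda(lam, e_ens_decreasing, e_pop_increasing, d_pop_decreasing,
                  step=0.1):
    if e_ens_decreasing:
        return lam                   # ensemble improving: no change
    if not e_pop_increasing and d_pop_decreasing:
        return lam + step            # case (1): increase lambda
    if e_pop_increasing and not d_pop_decreasing:
        return max(0.0, lam - step)  # case (2): decrease lambda
    return lam

print(round(adjust_lambda(0.1, False, False, True), 10))  # 0.2
```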
A useful network to add to an ensemble is one that correctly classifies as many
examples as possible while making its mistakes primarily on examples that most
D. W. Opitz, J. W. Shavlik
Table 1: The ADDEMUP algorithm.
GOAL: Genetically create an accurate and diverse ensemble of networks.
1. Create and train the initial population of networks.
2. Until a stopping criterion is reached:
(a) Use genetic operators to create new networks.
(b) Train the new networks using Equation 4 and add them to the population.
(c) Measure the diversity of each network with respect to the current population (see Equation 1).
(d) Normalize the accuracy scores and the diversity scores of the individual
networks.
(e) Calculate the fitness of each population member (see Equation 3).
(f) Prune the population to the N fittest networks.
(g) Adjust λ (see the text for an explanation).
(h) Report the current population of networks as the ensemble. Combine
the output of the networks according to Equation 5.
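The control flow of Table 1 can be rendered schematically; in this toy sketch (ours), "networks" are stubbed out as accuracy/diversity score pairs so only the generate-score-prune loop is visible:

```python
# Schematic of the ADDEMUP loop: create offspring, score every member with
# accuracy + lambda * diversity, prune to the N fittest. Real networks and
# genetic operators are replaced by random score stubs.
import random

random.seed(1)

def fitness(net, lam):
    return net["acc"] + lam * net["div"]

def addemup_step(population, lam, n_keep=4):
    # steps (a)-(b): create and "train" two offspring (stubbed)
    offspring = [{"acc": random.random(), "div": random.random()} for _ in range(2)]
    population = population + offspring
    # steps (c)-(f): score members and prune to the n_keep fittest
    population.sort(key=lambda net: fitness(net, lam), reverse=True)
    return population[:n_keep]

pop = [{"acc": random.random(), "div": random.random()} for _ in range(4)]
for _ in range(5):
    pop = addemup_step(pop, lam=0.1)
print(len(pop))  # 4
```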
of the current population members correctly classify. We address this during backpropagation training by multiplying the usual cost function by a term that measures
the combined population error on that example:

Cost = Σ_{k∈T} (|t(k) − ō(k)| / Ē)^{λ/(λ+1)} [t(k) − a(k)]²,    (4)
where t(k) is the target and a(k) is the network activation for example k in the
training set T. Notice that since our network is not yet a member of the ensemble,
ō(k) and Ē are not dependent on our network; our new term is thus a constant when
calculating the derivatives during backpropagation. We normalize t(k) − ō(k) by the
ensemble error Ē so that the average value of our new term is around 1 regardless of
the correctness of the ensemble. This is especially important with highly accurate
populations, since t(k) − ō(k) will be close to 0 for most examples, and the network
would only get trained on a few examples. The exponent λ/(λ+1) represents the ratio
of importance of the diversity term in the fitness function. For instance, if λ is close
to 0, diversity is not considered important and the network is trained with the usual
cost function; however, if λ is large, diversity is considered important and our new
term in the cost function takes on more importance.
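The emphasis term can be illustrated as a per-example factor multiplying the usual squared error; this sketch (ours) assumes the factor has the form (|t(k) − ō(k)|/Ē)^{λ/(λ+1)} described in the surrounding text:

```python
# Per-example emphasis factor: examples the current ensemble gets badly wrong
# are weighted up, and more strongly as lambda grows. With lambda = 0 the
# exponent is 0, so the factor is 1 and the usual cost function is recovered.
def diversity_factor(t_k, o_bar_k, e_bar, lam):
    return (abs(t_k - o_bar_k) / e_bar) ** (lam / (lam + 1.0))

print(diversity_factor(1.0, 0.2, 0.4, 0.0))              # 1.0 (usual cost)
print(round(diversity_factor(1.0, 0.2, 0.4, 1.0), 3))    # 1.414
```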
We combine the predictions of the networks by taking a weighted sum of the output
of each network, where each weight is based on the validation-set accuracy of the
network. Thus we define our weights for combining the networks as follows:
w_i = (1 − E_i) / Σ_j (1 − E_j).    (5)
While simply averaging the outputs generates a good composite model (Clemen,
1989), we include the predicted accuracy in our weights since one should believe
accurate models more than inaccurate ones.
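One natural reading of accuracy-based weighting (our assumption, not necessarily the paper's exact Equation 5) is to normalize the validation accuracies 1 − E_i so the weights sum to 1:

```python
# Combination weights from validation-set errors: each network's weight is its
# accuracy 1 - E_i, normalized so the weights form a convex combination.
def combination_weights(val_errors):
    accs = [1.0 - e for e in val_errors]
    total = sum(accs)
    return [a / total for a in accs]

w = combination_weights([0.1, 0.2, 0.3])
print([round(x, 3) for x in w])  # [0.375, 0.333, 0.292]
```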
4
Experimental Study
The genetic algorithm we use for generating new network topologies is the REGENT algorithm (Opitz and Shavlik, 1994). REGENT uses genetic algorithms
to search through the space of knowledge-based neural network (KNN) topologies. KNNs are networks whose topologies are determined as a result of the
direct mapping of a set of background rules that represent what we currently
know about our task. KBANN (Towell and Shavlik, 1994), for instance, translates a set of propositional rules into a neural network, then refines the resulting network's weights using backpropagation. Trained KNNs, such as KBANN's
networks, have been shown to frequently generalize better than many other
inductive-learning techniques such as standard neural networks (Opitz, 1995;
Towell and Shavlik, 1994). Using KNNs allows us to have highly correct networks
in our ensemble; however, since each network in our ensemble is initialized with the
same set of domain-specific rules, we do not expect there to be much disagreement
among the networks. An alternative we consider in our experiments is to randomly
generate our initial population of network topologies, since domain-specific rules
are sometimes not available.
We ran ADDEMUP on NYNEX's MAX problem set and on three problems from the
Human Genome Project that aid in locating genes in DNA sequences (recognizing
promoters, splice-junctions, and ribosome-binding sites - RBS). Each of these domains is accompanied by a set of approximately correct rules describing what is
currently known about the task (see Opitz, 1995 or Opitz and Shavlik, 1994 for
more details). Our experiments measure the test-set error of ADDEMUP on these
tasks. Each ensemble consists of 20 networks, and the REGENT and ADDEMUP
algorithms considered 250 networks during their genetic search.
Table 2a presents the results from the case where the learners randomly create
the topology of their networks (i.e., they do not use the domain-specific knowledge). Table 2a's first row, best-network, results from a single-layer neural network where, for each fold, we trained 20 networks containing between 0 and 100
(uniformly) hidden nodes and used a validation set to choose the best network. The
next row, bagging, contains the results of running Breiman's (1994) bagging algorithm on standard, single-hidden-layer networks, where the number of hidden nodes
is randomly set between 0 and 100 for each network.² Bagging is a "bootstrap"
ensemble method that trains each network in the ensemble with a different partition
of the training set. It generates each partition by randomly drawing, with replacement, N examples from the training set, where N is the size of the training set.
Breiman (1994) showed that bagging is effective on "unstable" learning algorithms,
such as neural networks, where small changes in the training set result in large
changes in predictions. The bottom row of Table 2a, ADDEMUP, contains the results
of a run of ADDEMUP where its initial population (of size 20) is randomly generated.
The results show that on these domains combining the output of multiple trained
networks generalizes better than trying to pick the single-best network.
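Bootstrap resampling as used by bagging can be sketched as follows (illustrative, ours):

```python
# Bagging's bootstrap step: each ensemble member is trained on N examples
# drawn with replacement from the N-example training set, so each replicate
# omits some examples and repeats others.
import random

def bootstrap_sample(examples, rng):
    n = len(examples)
    return [examples[rng.randrange(n)] for _ in range(n)]

rng = random.Random(42)
data = list(range(10))
replicate = bootstrap_sample(data, rng)
print(len(replicate))  # 10
```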
While the top table shows the power of neural-network ensembles, Table 2b demonstrates ADDEMUP'S ability to utilize prior knowledge. The first row of Table 2b
contains the generalization results of the KBANN algorithm, while the next row,
KBANN-bagging, contains the results of the ensemble where each individual network in the ensemble is the KBANN network trained on a different partition of the
training set. Even though each of these networks start with the same topology and
²We also tried other ensemble approaches, such as randomly creating varying multilayer network topologies and initial weight settings, but bagging did significantly better
on all datasets (by 15-25% on all three DNA domains).
Table 2: Test-set error from a ten-fold cross validation. Table (a) shows the results
from running three learners without the domain-specific knowledge; Table (b) shows
the results of running three learners with this knowledge. Pairwise, one-tailed t-tests
indicate that ADDEMUP in Table (b) differs from the other algorithms in both tables
at the 95% confidence level, except with REGENT in the splice-junction domain.
Standard neural networks (no domain-specific knowledge used)

                  Promoters   Splice Junction   RBS     MAX
  best-network    6.6%        7.8%              10.7%   37.0%
  bagging         4.6%        4.5%              9.5%    35.7%
  ADDEMUP         4.6%        4.9%              9.0%    34.9%

(a)

Knowledge-based neural networks (domain-specific knowledge used)

                    Promoters   Splice Junction   RBS    MAX
  KBANN             6.2%        5.3%              9.4%   35.8%
  KBANN-bagging     4.2%        4.5%              8.5%   35.6%
  REGENT-Combined   3.9%        3.9%              8.2%   35.6%
  ADDEMUP           2.9%        3.6%              7.5%   34.7%

(b)

"large" initial weight settings (i.e., the weights resulting from the domain-specific
knowledge), small changes in the training set still produce significant changes in
predictions. Also notice that on all datasets, KBANN-bagging is as good as or better
than running bagging on randomly generated networks (i.e., bagging in Table 2a).
The next row, REGENT-Combined, contains the results of simply combining, using
Equation 5, the networks in REGENT's final population. ADDEMUP, the final row of
Table 2b, mainly differs from REGENT-Combined in two ways: (a) its fitness function
(i.e., Equation 3) takes into account diversity rather than just network accuracy, and
(b) it trains new networks by emphasizing the erroneous examples of the current
ensemble. Therefore, comparing ADDEMUP with REGENT-Combined helps directly
test ADDEMUP'S diversity-achieving heuristics, though additional results reported in
Opitz (1995) show ADDEMUP gets most of its improvement from its fitness function.
There are two main reasons why we think the results of ADDEMUP in Table 2b are
especially encouraging: (a) by comparing ADDEMUP with REGENT-Combined, we
explicitly test the quality of our heuristics and demonstrate their effectiveness, and
(b) ADDEMUP is able to effectively utilize background knowledge to decrease the
error of the individual networks in its ensemble, while still being able to create
enough diversity among them so as to improve the overall quality of the ensemble.
5
Conclusions
Previous work with neural-network ensembles has shown them to be an effective
technique if the classifiers in the ensemble are both highly correct and disagree
with each other as much as possible. Our new algorithm, ADDEMUP, uses genetic
algorithms to search for a correct and diverse population of neural networks to be
used in the ensemble. It does this by collecting the set of networks that best fits an
objective function that measures both the accuracy of the network and the disagreement of that network with respect to the other members of the set. ADDEMUP tries
to actively generate quality networks during its search by emphasizing the current
ensemble's erroneous examples during backpropagation training.
Experiments demonstrate that our method is able to find an effective set of networks for our ensemble. Experiments also show that ADDEMUP is able to effectively
incorporate prior knowledge, if available, to improve the quality of this ensemble.
In fact, when using domain-specific rules, our algorithm showed statistically significant improvements over (a) the single best network seen during the search, (b) a
previously proposed ensemble method called bagging (Breiman, 1994), and (c) a
similar algorithm whose objective function is simply the validation-set correctness
of the network. In summary, ADDEMUP is successful in generating a set of neural
networks that work well together in producing an accurate prediction.
Acknowledgements
This work was supported by Office of Naval Research grant N00014-93-1-0998.
References
Alpaydin, E. (1993). Multiple networks for function learning. In Proceedings of the 1993
IEEE International Conference on Neural Networks, vol I, pages 27-32, San Fransisco.
Breiman, L. (1994). Bagging predictors. Technical Report 421, Department of Statistics,
University of California, Berkeley.
Clemen, R. (1989). Combining forecasts: A review and annotated bibliography. International Journal of Forecasting, 5:559-583.
Drucker, H., Cortes, C., Jackel, L., LeCun, Y., and Vapnik, V. (1994). Boosting and other
machine learning algorithms. In Proceedings of the Eleventh International Conference on
Machine Learning, pages 53-61, New Brunswick, NJ. Morgan Kaufmann.
Hansen, L. and Salamon, P. (1990). Neural network ensembles. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 12:993-1001.
Hashem, S., Schmeiser, B., and Yih, Y. (1994). Optimal linear combinations of neural
networks: An overview. In Proceedings of the 1994 IEEE International Conference on
Neural Networks, Orlando, FL.
Krogh, A. and Vedelsby, J. (1995). Neural network ensembles, cross validation, and
active learning. In Tesauro, G., Touretzky, D., and Leen, T., editors, Advances in Neural
Information Processing Systems, vol 7, Cambridge, MA. MIT Press.
Maclin, R. and Shavlik, J. (1995). Combining the predictions of multiple classifiers:
Using competitive learning to initialize neural networks. In Proceedings of the Fourteenth
International Joint Conference on Artificial Intelligence, Montreal, Canada.
Mani, G. (1991). Lowering variance of decisions by using artificial neural network portfolios. Neural Computation, 3:484-486.
Opitz, D. (1995). An Anytime Approach to Connectionist Theory Refinement: Refining
the Topologies of Knowledge-Based Neural Networks. PhD thesis, Computer Sciences
Department, University of Wisconsin, Madison, WI.
Opitz, D. and Shavlik, J. (1994). Using genetic search to refine knowledge-based neural
networks. In Proceedings of the Eleventh International Conference on Machine Learning,
pages 208-216, New Brunswick, NJ. Morgan Kaufmann.
Perrone, M. (1992). A soft-competitive splitting rule for adaptive tree-structured neural
networks. In Proceedings of the International Joint Conference on Neural Networks, pages
689-693, Baltimore, MD.
Towell, G. and Shavlik, J. (1994). Knowledge-based artificial neural networks. Artificial
Intelligence, 70(1-2):119-165.
Wolpert, D. (1992). Stacked generalization. Neural Networks, 5:241-259.
Statistical Mechanics of the Mixture of Experts
Kukjin Kang and Jong-Hoon Oh
Department of Physics
Pohang University of Science and Technology
Hyoja San 31, Pohang, Kyongbuk 790-784, Korea
E-mail: kkj.jhoh@galaxy.postech.ac.kr
Abstract
We study generalization capability of the mixture of experts learning from examples generated by another network with the same
architecture. When the number of examples is smaller than a critical value, the network shows a symmetric phase where the role
of the experts is not specialized. Upon crossing the critical point,
the system undergoes a continuous phase transition to a symmetry breaking phase where the gating network partitions the input
space effectively and each expert is assigned to an appropriate subspace. We also find that the mixture of experts with multiple level
of hierarchy shows multiple phase transitions.
1
Introduction
Recently there has been considerable interest in the neural network community in
techniques that integrate the collective predictions of a set of networks[l, 2, 3, 4].
The mixture of experts [1, 2] is a well-known example which implements the philosophy of divide-and-conquer elegantly. While this model is gaining more
popularity in various applications, there have been little efforts to evaluate generalization capability of these modular approaches theoretically. Here we present the
first analytic study of generalization in the mixture of experts from the statistical
physics perspective. Use of the statistical mechanics formulation has been focused
on the study of feedforward neural network architectures close to the multilayer
perceptron[5, 6], together with the VC theory[8]. We expect that the statistical
mechanics approach can also be effectively used to evaluate more advanced architectures including mixture models.
In this letter we study generalization in the mixture of experts[l] and its variety
with two-level hierarchy[2]. The network is trained by examples given by a teacher
network with the same architecture. We find an interesting phase transition driven
by symmetry breaking among the experts. This phase transition is closely related
to the 'divide-and-conquer' mechanism which this mixture model was originally
designed to accomplish.
2
Statistical Mechanics Formulation for the Mixture of
Experts
The mixture of experts [2] is a tree consisting of expert networks and gating networks
which assign weights to the outputs of the experts. The expert networks sit at the
leaves of the tree and the gating networks sit at the branching points of the tree.
For the sake of simplicity, we consider a network with one gating network and two
experts. Each expert produces its output μ_j as a generalized linear function of the
N-dimensional input x:

μ_j = f(W_j · x),   j = 1, 2,    (1)
where W_j is a weight vector of the j-th expert with a spherical constraint [5]. We
consider a transfer function f(x) = sgn(x), which produces binary outputs. The
principle of divide-and-conquer is implemented by assigning each expert to a subspace of the input space with different local rules. A gating network makes partitions
in the input space and assigns each expert a weighting factor:

g_j(x) = Θ(V_j · x),   j = 1, 2,    (2)
where the gating function Θ(x) is the Heaviside step function. For two experts,
this gating function defines a sharp boundary between the two subspaces which is
perpendicular to the vector V_1 = −V_2 = V, whereas the softmax function used in
the original literature [2] yields a soft boundary. Now the weighted output from the
mixture of experts is written:
μ(V, W; x) = Σ_{j=1}^{2} g_j(x) μ_j(x).    (3)
The whole network, like the individual experts, generates binary outputs.
Therefore, it can learn only dichotomy rules. The training examples are generated
by a teacher with the same architecture as:
σ⁰(x) = Σ_{j=1}^{2} Θ(V_j⁰ · x) sgn(W_j⁰ · x),    (4)
where V_j⁰ and W_j⁰ are the weights of the j-th gating network and the j-th expert of the
teacher.
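The two-expert architecture of Eqs. (1)-(3) can be rendered as a toy forward pass (ours, not the authors' code): a hard gate Θ(V · x) selects one sgn expert, so the machine's output is ±1 everywhere:

```python
# Toy mixture-of-experts forward pass with a hard gate: g1 = Theta(V . x),
# g2 = Theta(-V . x), and each expert outputs sgn(W_j . x).
import random

def sgn(z):
    return 1.0 if z >= 0 else -1.0

def moe_output(V, W1, W2, x):
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    g1 = 1.0 if dot(V, x) > 0 else 0.0   # gate selects expert 1 or expert 2
    g2 = 1.0 - g1
    return g1 * sgn(dot(W1, x)) + g2 * sgn(dot(W2, x))

random.seed(0)
N = 8
V, W1, W2 = ([random.gauss(0, 1) for _ in range(N)] for _ in range(3))
x = [random.gauss(0, 1) for _ in range(N)]
print(moe_output(V, W1, W2, x) in (-1.0, 1.0))  # True
```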
The learning of the mixture of experts is usually interpreted probabilistically, hence
the learning algorithm is considered as a maximum likelihood estimation. Learning
algorithms originated from statistical methods such as the EM algorithm are often
used. Here we consider the Gibbs algorithm with noise level T (= 1/β), which leads to a
Gibbs distribution of the weights after a long time:
P(V, W_j) = Z⁻¹ exp(−βE(V, W_j)),    (5)

where Z = ∫ dV dW exp(−βE(V, W_j)) is the partition function. Training both the
experts and the gating network is necessary for a good generalization performance.
The energy E of the system is defined as a sum of errors over P examples:
E = Σ_{l=1}^{P} ε(V, W_j; x^l),    (6)

ε(V, W_j; x) = Θ(−μ(V, W; x) σ⁰(x)).    (7)
The performance of the network is measured by the generalization function
ε(V, W_j) = ∫ dx ε(V, W_j; x), where ∫ dx represents an average over the whole
input space. The generalization error ε_g is defined by ε_g = ⟨⟨⟨ε(W)⟩_T⟩⟩, where ⟨⟨···⟩⟩
denotes the quenched average over the examples and ⟨···⟩_T denotes the thermal
average over the probability distribution of Eq. (5).
J
Since the replica calculation turns out to be intractable, we use the annealed approximation:
((log Z))
~
log((Z)) .
(8)
The annealed approximation is exact only in the high temperature limit, but it is
known that the approximation usually gives qualitatively good results for the case
of learning realizable rules[5, 6] .
3
Generalization Curve and the Phase Transition
The generalization function f(V, W j) is can be written as a function of overlaps
between the weight vectors of the teacher and the student:
2
2
LLPijfij
(9)
i=l j=l
where
(10)
(11)
K. Kang and J. Oh
186
and
Rij
Rij
1
0
-V??V ?
N'
J'
1
0
N Wi ?Wj .
(12)
(13)
is the overlap order parameters. Here, Pij is a probability that the i th expert of
the student learns from examples generated by the j th expert of the teacher . It
is a volume fraction in the input space where Vi . x and VJ . x are both positive.
For that particular examples, the ith expert of the student gives wrong answer with
probability fij with respect to the j th expert of the teacher. We assume that
the weight vectors of the teacher, V 0, W~ and W~, are orthogonal to each other,
then the overlap order parameters other than the oneS shown above vanish. We
use the symmetry properties of the network such as Rv = RYI = R~2 = - RY2,
R = Rll = R 22 , and r = R12 = R 21 .
The free energy also can be written as a function of three order parameters Rv, R,
and r . Now we consider a thermodynamic limit where the dimension of the input
space N and the number of examples P goes to infinity, keeping the ratio eY = PIN
finite. By minimizing the free energy with respect to the order parameters, we find
the most probable values ofthe order parameters as well as the generalization error.
Fig 1.(a) plots the overlap order parameters Rv, Rand r versus eY at temperature
T = 5. Examining the plot, we find an interesting phase transition driven by
symmetry breaking among the experts. Below the phase transition point eYe
51.5,
the overlap between the gating networks of the teacher and the student is zero
(Rv
0) and the overlaps between the experts are symmetric (R
r). In the
symmetric phase, the gating network does not have enough examples to learn proper
partitioning, so its performance is not much better than a random partitioning.
Consequently each expert of the student can not specialize for the subspaces with
a particular local rule given by an expert of the teacher. Each expert has to learn
multiple linear rules with linear structure, which leads to a poor generalization
performance. Unless more than a critical amount of examples is provided, the
divide-and-conquer strategy does not work.
=
=
=
Upon crossing the critical point α_c, the system undergoes a continuous phase transition to the symmetry breaking phase. The order parameter R_V, related to the
goodness of the partition, begins to increase abruptly and approaches 1 with increasing
α. The gating network now provides a better partition which is close to that of the
teacher. The plots of the order parameters R and r, the overlaps between the experts of
the teacher and the student, branch at α_c and approach 1 and 0 respectively. It means
that each expert specializes its role by making an appropriate pair with a particular
expert of the teacher. Fig. 1(b) plots the generalization curve (ε_g versus α) on the
same scale. Though the generalization curve is continuous, the slope of the curve
changes discontinuously at the transition point so that the generalization curve has
Figure 1: (a) The overlap order parameters R_V, R, r versus α at T = 5. For
α < α_c = 51.5, we find R_V = 0 (solid line that follows the x axis) and R = r
(dashed line). At the transition point, R_V begins to increase abruptly, and R (dotted
line) and r (dashed line) branch, approaching 1 and 0 respectively. (b) The
generalization curve (ε_g versus α) for the mixture of experts on the same scale. A
cusp at the transition point α_c is shown.
Figure 2: A typical generalization error curve for the HME network with continuous
weights. T = 5.
a cusp. The asymptotic behavior of ε_g at large α is given by:

ε_g ≈ 1 / [(1 − e^{−β}) α],    (14)

where the 1/α decay is often observed in the learning of other feedforward networks.
4
The Mixture of Experts with Two-Level Hierarchy
We also study generalization in the hierarchical mixture of experts [2]. Consider
a two-level hierarchical mixture of experts consisting of three gating networks and
four experts. At the top level the tree is divided into two branches, and they are in
turn divided into two branches at the lower level. The experts sit at the four leaves
of the tree, and the three gating networks sit at the top and lower-level branching
points. The network also learns from the training examples drawn from a teacher
network with the same architecture.
Fig. 2 shows the corresponding learning curve, which has two cusps related to
the phase transitions. For α < α_c1, the system is in the fully symmetric phase.
The gating networks do not provide a correct partition for the experts at either level
of the hierarchy and the experts cannot specialize at all. All the overlaps with the
weights of the teacher experts have the same value. The first phase transition at
the smaller α_c1 is related to the symmetry breaking by the top-level gating network.
For α_c1 < α < α_c2, the top-level gating network partitions the input space into two
parts, but the lower-level gating network is not functioning properly. The overlap
between the gating networks at the lower level of the tree and that of the teacher
is still zero. The experts partially specialize into two groups . Specialization among
the same group is not accomplished yet. The overlap order parameter Rij can
Statistical Mechanics of the Mixture of Experts
189
have two distinct values. The bigger one is the overlap with the two experts of the
teacher for which the group is specializing, and the smaller is with the experts of
the teacher which belong to the other group. At the second transition point α_c2, the
symmetry related to the lower-level hierarchy breaks. For α > α_c2, all the gating
networks work properly and the input space is divided into four. Each expert makes
an appropriate pair with an expert of the teacher. Now the overlap order parameters
can have three distinct values. The largest is the overlap with matching expert of
teacher. The next largest is the overlap with the neighboring teacher expert in the
tree hierarchy. The smallest is with the experts of the other group. The two phase
transition result in the two cusps of the learning curve.
5 Conclusion
Whereas the phase transition of the mixture of experts can be interpreted as a symmetry breaking phenomenon similar to the one already observed in the committee machine and the multi-layer perceptron [6, 7], the transition is novel in that it is continuous. This means that symmetry breaking is easier for the mixture of experts than in the multi-layer perceptron. This can be a big advantage in learning of highly nonlinear rules, as we do not have to worry about the existence of local minima. We find that the hierarchical mixture of experts can have multiple phase transitions which are related to symmetry breaking at different levels. Note that symmetry breaking comes first from the higher-level branch, which is a desirable property of the model.
We thank M. I. Jordan, L. K. Saul, H. Sompolinsky, H. S. Seung, H. Yoon and
C. Kwon for useful discussions and comments. This work was partially supported
by the Basic Science Special Program of the POSTECH Basic Science Research
Institute.
References
[1] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton, Neural Computation 3, 79 (1991).
[2] M. I. Jordan and R. A. Jacobs, Neural Computation 6, 181 (1994).
[3] M. P. Perrone and L. N. Cooper, in Neural Networks for Speech and Image Processing, R. J. Mammone, Ed., Chapman & Hall, London, 1993.
[4] D. Wolpert, Neural Networks 5, 241 (1992).
[5] H. S. Seung, H. Sompolinsky, and N. Tishby, Phys. Rev. A 45, 6056 (1992).
[6] K. Kang, J.-H. Oh, C. Kwon and Y. Park, Phys. Rev. E 48, 4805 (1993); K. Kang, J.-H. Oh, C. Kwon and Y. Park, Phys. Rev. E 54, 1816 (1996).
[7] E. Baum and D. Haussler, Neural Computation 1, 151 (1989).
Microscopic Equations in Rough Energy
Landscape for Neural Networks
K. Y. Michael Wong
Department of Physics,
The Hong Kong University of Science and Technology,
Clear Water Bay, Kowloon, Hong Kong.
E-mail: phkywong@usthk.ust.hk
Abstract
We consider the microscopic equations for learning problems in
neural networks. The aligning fields of an example are obtained
from the cavity fields, which are the fields if that example were
absent in the learning process. In a rough energy landscape, we
assume that the density of the local minima obeys an exponential distribution, yielding macroscopic properties agreeing with the first-step replica symmetry breaking solution. Iterating the microscopic equations provides a learning algorithm, which results in a higher stability than conventional algorithms.
1 INTRODUCTION
Most neural networks learn iteratively by gradient descent. As a result, closed expressions for the final network state after learning are rarely known. This precludes
further analysis of their properties, and insights into the design of learning algorithms. To complicate the situation, metastable states (i.e. local minima) are often
present in the energy landscape of the learning space so that, depending on the
initial configuration, each one is likely to be the final state.
However, large neural networks are mean field systems since the examples and
weights strongly interact with each other during the learning process. This means
that when one example or weight is considered, the influence of the rest of the system
can be regarded as a background satisfying some averaged properties. The situation
is similar to a number of disordered systems such as spin glasses, in which mean
field theories are applicable (Mezard, Parisi & Virasoro, 1987). This explains the
success of statistical mechanical techniques such as the replica method in deriving
the macroscopic properties of neural networks, e.g. the storage capacity (Gardner
& Derrida 1988), generalization ability (Watkin, Rau & Biehl 1993). The replica
method, though, provides much less information on the microscopic conditions of
the individual dynamical variables.
An alternative mean field approach is the cavity method. It is a generalization of
the Thouless-Anderson-Palmer approach to spin glasses, which started from microscopic equations of the system elements (Thouless, Anderson & Palmer, 1977).
Mezard applied the method to neural network learning (Mezard, 1989) . Subsequent extensions were made to the teacher-student perceptron (Bouten, Schietse
& Van den Broeck 1995), the AND machine (Griniasty, 1993) and the multiclass
perceptron (Gerl & Krey, 1995) . They yielded macroscopic properties identical to
the replica approach, but the microscopic equations were not discussed, and the
existence of local minima was neglected.
Recently, the cavity method was applied to general classes of single and multilayer
networks with smooth energy landscapes, i.e. without the local minima (Wong,
1995a). The aligning fields of the examples satisfy a set of microscopic equations.
Solving these equations iteratively provides a learning algorithm, as confirmed by
simulations in the maximally stable perceptron and the committee tree. The method
is also useful in solving the dynamics of feedforward networks which were unsolvable
previously (Wong, 1995b) .
Despite its success, the theory is so far applicable only to the regime of smooth
energy landscapes. Beyond this regime, a stability condition is violated, and local
minima begin to appear (Wong, 1995a). In this paper I present a mean field theory
for the regime of rough energy landscapes. The complete analysis will be published
elsewhere and here I sketch the derivations, emphasizing the underlying physical
picture. As shown below, a similar set of microscopic equations hold in this case, as
confirmed by simulations in the committee tree . In fact, we find that the solutions to
these equations have a higher stability than other conventional learning algorithms.
2 MICROSCOPIC EQUATIONS FOR SMOOTH ENERGY LANDSCAPES
We proceed by reviewing the cavity method for the case of smooth energy landscapes. For illustration we consider the single layer neural network (for two layer
networks see Wong, 1995a). There are N ≫ 1 input nodes {S_j} connecting to a single output node by the synaptic weights {J_j}. The output state is determined by the sign of the local field at the output node, i.e. S^out = sgn(Σ_j J_j S_j). Learning a set of p examples means finding the weights {J_j} such that the network gives the correct input-to-output mapping for the examples. If example μ maps the inputs S_j^μ to the output O^μ, then a successful learning process should find a weight vector J_j such that sgn(Σ_j J_j ξ_j^μ) = 1, where ξ_j^μ = O^μ S_j^μ. Thus the usual approach to learning is to first define an energy function (or error function) E = Σ_μ g(Λ_μ), where Λ_μ ≡ Σ_j J_j ξ_j^μ / √N are the aligning fields, i.e. the local fields in the direction of the correct output, normalized by the factor √N. For example, the AdaTron algorithm uses the energy function g(Λ) = (κ − Λ) Θ(κ − Λ), where κ is the stability parameter and Θ is the step function (Anlauf & Biehl, 1989). Next, one should minimize E by gradient descent dynamics. To avoid ambiguity, the weights are normalized to Σ_j (S_j^μ)² = Σ_j J_j² = N.
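As a concrete numerical illustration of the energy function just defined, the sketch below builds random ±1 patterns ξ_j^μ (with the outputs already folded in), normalizes a random weight vector to Σ_j J_j² = N, and evaluates the AdaTron cost; the pattern statistics, sizes, and the value κ = 1 are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, kappa = 100, 50, 1.0

# Random +/-1 patterns xi[mu, j], with the correct outputs already folded in.
xi = rng.choice([-1.0, 1.0], size=(p, N))

# Random weight vector, normalized so that sum_j J_j^2 = N.
J = rng.normal(size=N)
J *= np.sqrt(N) / np.linalg.norm(J)

# Aligning fields Lambda_mu = sum_j J_j xi_j^mu / sqrt(N).
Lam = xi @ J / np.sqrt(N)

def adatron_energy(Lam, kappa):
    """AdaTron cost E = sum_mu (kappa - Lambda_mu) * Theta(kappa - Lambda_mu)."""
    return float(np.sum(np.where(Lam < kappa, kappa - Lam, 0.0)))

print(adatron_energy(Lam, kappa))
```

The energy is zero exactly when every aligning field clears the stability κ, which is the condition gradient descent on E tries to reach.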
The cavity method uses a self-consistency argument to consider what happens when
a set of p examples is expanded to p + 1 examples. The central quantity in this
method is the cavity field. For an added example labelled 0, the cavity field is
the aligning field when it is fed to a network which learns examples 1 to p (but never learns example 0), i.e. t_0 ≡ Σ_j J_j ξ_j^0 / √N. Since the original network has no information about example 0, J_j and ξ_j^0 are uncorrelated. Thus the cavity field
obeys a Gaussian distribution for random example inputs.
After the network has learned examples 0 to p, the weights adjust from {J_j} to {J̃_j}, and the cavity field t_0 adjusts to the generic aligning field Λ_0. As shown schematically in Fig. 1(a), we assume that the adjustments of the aligning fields of the original examples are small, typically of the order O(N^{-1/2}). Perturbative analysis concludes that the aligning field is a well-defined function of the cavity field, i.e. Λ_0 = A(t_0), where A(t) is the inverse function of

t = Λ + γ g'(Λ),   (1)
and γ is called the local susceptibility. The cavity fields satisfy a set of self-consistent equations

t_μ = Σ_{ν≠μ} [A(t_ν) − t_ν] Q_{νμ} + αχ A(t_μ),   (2)

where Q_{νμ} = Σ_j ξ_j^ν ξ_j^μ / N, χ is called the nonlocal susceptibility, and α ≡ p/N. The weights J_j are given by

J_j = (1 − αχ)^{-1} (1/√N) Σ_μ [A(t_μ) − t_μ] ξ_j^μ.   (3)
Noting the Gaussian distribution of the cavity fields, the macroscopic properties of
the neural network, such as the storage capacity, can be derived, and the results
are identical to those obtained by the replica method (Gardner & Derrida 1988).
However, the real advantage of the cavity method lies in the microscopic information
it provides. The above equations can be iterated sequentially, resulting in a general
learning algorithm. Simulations confirm that the equations are satisfied in the single
layer perceptron, and their generalized version holds in the committee tree at low
loading (Wong, 1995a).
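The iteration of these equations can be sketched for the AdaTron-type cost of Section 2. In this toy version the susceptibilities γ and χ are frozen at assumed values instead of being solved self-consistently, and a damped update is used — both are simplifications of this sketch, not part of the original algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
N, p, kappa = 200, 100, 0.5
gamma, chi = 1.0, 0.5            # susceptibilities: fixed assumed values here
alpha = p / N

xi = rng.choice([-1.0, 1.0], size=(p, N))
Q = xi @ xi.T / N                # pattern overlaps Q[nu, mu]

def A(t):
    """Aligning field A(t) for g(L) = (kappa - L)*Theta(kappa - L):
    t = L - gamma below kappa, t = L above, with fields pinned at kappa
    over the gap [kappa - gamma, kappa)."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= kappa, t,
                    np.where(t < kappa - gamma, t + gamma, kappa))

t = xi @ rng.normal(size=N) / np.sqrt(N)    # initial cavity fields
for _ in range(200):
    corr = A(t) - t
    # t_mu = sum_{nu != mu} [A(t_nu) - t_nu] Q[nu, mu] + alpha*chi*A(t_mu)
    t_new = Q.T @ corr - np.diagonal(Q) * corr + alpha * chi * A(t)
    t = 0.5 * t + 0.5 * t_new               # damping added for stability
J = (xi.T @ (A(t) - t)) / (np.sqrt(N) * (1 - alpha * chi))
print(float((xi @ J / np.sqrt(N)).min()))   # smallest aligning field
```

No convergence is guaranteed — as noted later, the iterations may fail to converge at high loading — so the final minimum aligning field is only indicative.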
E
E
J
a.
J
J
Figure 1: Schematic drawing of the change in the energy landscape in the weight
space when example 0 is added, for the regime of (a) smooth energy landscape, (b)
rough energy landscape.
3 MICROSCOPIC EQUATIONS FOR ROUGH ENERGY LANDSCAPES
However, the above argument holds under the assumption that the adjustment
due to the addition of a new example is controllable. We can derive a stability
condition for this assumption, and we find that it is equivalent to the Almeida-Thouless condition in the replica method (Mezard, Parisi & Virasoro, 1987).
An example for such instability occurs in the committee tree, which consists of
hidden nodes a = 1, ... , K with binary outputs, each fed by K nonoverlapping
groups of N / K input nodes. The output of the committee tree is the majority state
of the K hidden nodes. The solution in the cavity method minimizes the change
from the cavity fields {t_a} to the aligning fields {Λ_a}, as measured by Σ_a (Λ_a − t_a)² in the space of correct outputs. Thus for a stability parameter κ, Λ_a = κ when t_a < κ and the value of t_a is above the median among the K hidden nodes; otherwise Λ_a = t_a. Note that a discontinuity exists in the aligning field function. Now suppose t_a < κ is the median, but the next highest value t_b happens to be slightly less than t_a. Then the addition of example 0 may induce a change from t_b < t_a to t_b^0 > t_a^0. Hence Λ_b^0 changes from t_b to κ whereas Λ_a^0 changes from κ to t_a^0. The
adjustment of the system is no longer small, and the previous perturbative analysis
is not valid. In fact, it has been shown that all networks having a gap in the aligning
field function are not stable against the addition of examples (Wong, 1995a).
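The aligning-field rule for the committee tree can be written as a small helper (one possible reading of the rule, assuming K odd and treating the median unit itself as part of the upper half):

```python
import numpy as np

def committee_aligning_fields(t, kappa):
    """Cavity-method solution for the committee tree: hidden units whose
    cavity field is at or above the median and below kappa are pushed to
    kappa; all other units keep their cavity field unchanged."""
    t = np.asarray(t, dtype=float)
    med = np.median(t)
    A = t.copy()
    A[(t >= med) & (t < kappa)] = kappa
    return A

print(committee_aligning_fields([-0.8, 0.2, 1.5], kappa=1.0))
```

The discontinuity discussed above is visible here: an infinitesimal swap of which unit is the median moves one field all the way to κ and drops another back to its cavity value.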
To consider what happens beyond the stability regime, one has to take into account
the rough energy landscape of the learning space. Suppose that the original global
minimum for examples 1 to p is α. After adding example 0, a nonvanishing change to the system is induced, and the global minimum shifts to the neighborhood of the local minimum β, as schematically shown in Fig. 1(b). Hence the resultant aligning fields Λ_0^β are no longer well-defined functions of the cavity fields t_0^α. Instead they are well-defined functions of the cavity fields t_0^β. Nevertheless, one may expect that correlations exist between the states α and β.

Let q_0 be the correlation between the network states, i.e. ⟨J_j^α J_j^β⟩ = q_0. Since both states α and β are determined in the absence of the added example 0, the correlation ⟨t_0^α t_0^β⟩ = q_0 as well. Knowing that both t_0^α and t_0^β obey Gaussian distributions, the cavity field distribution can be determined if we know the prior distribution of the local minima.
At this point we introduce the central assumption in the cavity method for rough
energy landscapes: we assume that the number of local minima at energy E obeys an exponential distribution dn(E) = C exp(−wE) dE. Similar assumptions have been used in specifying the density of states in disordered systems (Mezard, Parisi & Virasoro 1987). Thus for single layer networks (and for two layer networks with appropriate generalizations), the cavity field distribution is given by
P(t_0^β | t_0^α) = G(t_0^β | t_0^α) exp[−w ΔE(λ(t_0^β))] / ∫ dt_0^β G(t_0^β | t_0^α) exp[−w ΔE(λ(t_0^β))],   (4)

where G(t_0^β | t_0^α) is a Gaussian distribution, w is a parameter describing the distribution, and λ(t_0^β) is the aligning field function. The weights J_j^β are given by

J_j^β = (1 − αχ)^{-1} (1/√N) Σ_μ [λ(t_μ^β) − t_μ^β] ξ_j^μ.   (5)
Noting the Gaussian distribution of the cavity fields, self-consistent equations for
both q_0 and the local susceptibility γ can be derived.
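Equation (4) is straightforward to evaluate numerically on a grid: a conditional Gaussian reweighted by the Boltzmann-like factor. In the sketch below the cost g, the aligning field function λ, and the values of q0 and w are all illustrative assumptions, and the energy change ΔE is approximated by g(λ(t)):

```python
import numpy as np

kappa, gamma = 0.5, 1.0
w = 2.0       # width parameter of the minima distribution (assumed value)
q0 = 0.7      # state-state correlation (assumed value)

def g(L):
    """AdaTron-type cost, used as an illustrative energy per example."""
    return np.where(L < kappa, kappa - L, 0.0)

def lam(t):
    """Aligning field function lambda(t), with the gap pinned at kappa."""
    return np.where(t >= kappa, t,
                    np.where(t < kappa - gamma, t + gamma, kappa))

def cavity_density(t_alpha, grid):
    """Discretized eq. (4): conditional Gaussian G(t^beta | t^alpha) with
    mean q0*t_alpha and variance 1 - q0^2, reweighted by exp(-w*DeltaE)."""
    G = np.exp(-(grid - q0 * t_alpha) ** 2 / (2.0 * (1.0 - q0 ** 2)))
    p = G * np.exp(-w * g(lam(grid)))
    dx = grid[1] - grid[0]
    return p / (p.sum() * dx)      # normalize so the density integrates to 1

grid = np.linspace(-5.0, 5.0, 2001)
p = cavity_density(-1.0, grid)
print(float((p * (grid[1] - grid[0])).sum()))
```

The reweighting suppresses cavity fields whose aligning field would cost energy, which is how the rough-landscape theory biases the state β toward low-lying minima.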
To determine the distribution of local minima, namely the parameters C and w,
we introduce a "free energy" F(p, N) for p examples and N input nodes, given
by dn(E) = exp[w(F(p, N) − E)] dE. This "free energy" determines the averaged energy of the local minima and should be an extensive quantity, i.e. it should scale as the system size. Cavity arguments enable us to find an expression for F(p + 1, N) − F(p, N). Similarly, we may consider a cavity argument for the addition of one input node, expanding the network size from N to N + 1. This yields an expression for F(p, N + 1) − F(p, N). Since F is an extensive quantity, F(p, N) should scale as N for a given ratio α = p/N. This implies

F/N = α [F(p + 1, N) − F(p, N)] + [F(p, N + 1) − F(p, N)].   (6)
We have thus obtained an expression for the averaged energy of the local minima.
Minimizing the free energy with respect to the parameter w gives a self-consistent
equation.
The three equations for q_0, γ and w completely determine the model. The macroscopic properties of the neural network, such as the storage capacity, can be derived,
and the results are identical to the first step replica symmetry breaking solution in
the replica method.
It remains to check whether the microscopic equations have been modified due to
the roughening of the energy landscape. It turns out that while the cavity fields in
the initial state α do not satisfy the microscopic equations (2), those at the final metastable state β do, except that the nonlocal susceptibility χ has to be replaced
by its average over the distribution of the local minima. In fact, the nonlocal
susceptibility describes the reactive effects due to the background examples, which
adjust on the addition of the new example. (Technically, this is called the Onsager
reaction.) The adjustments due to hopping between valleys in a rough energy
landscape have thus been taken into account.
4 SIMULATION RESULTS
To verify the theory, I simulate a committee tree learning random examples. Learning can be done by the more conventional Least Action algorithm (Nilsson 1965),
or by iterating the microscopic equations.
We verify that the Least Action algorithm yields an aligning field function λ(t) consistent with the cavity theory. Suppose the weight from input j to hidden node a is given by J_{aj} = Σ_μ x_{aμ} ξ_j^μ / √N. Comparing with J_{aj} = (1 − αχ)^{-1} Σ_μ (Λ_{aμ} − t_{aμ}) ξ_j^μ / √N, we estimate the nonlocal susceptibility χ by requiring the distribution of t̃_{aμ} ≡ Λ_{aμ} − (1 − αχ) x_{aμ} to have a zero first moment. t̃_{aμ} is then an estimate of t_{aμ}. Fig. 2 shows the resultant relation between Λ_{aμ} and t̃_{aμ}. It agrees with the predictions of the cavity theory. Fig. 3 shows the values of the stability parameter κ measured from the Least Action algorithm and the microscopic equations. They have better agreement with the predictions of the rough energy landscape (first-step replica symmetry breaking solution) than with those of the smooth energy landscape (replica symmetric solution). Note that the microscopic equations yield a higher stability than the Least Action algorithm.
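The χ-estimation step can be sketched as follows; the closed-form estimate and the synthetic data with a known χ are constructions of this sketch, used only to check that the zero-first-moment condition recovers χ:

```python
import numpy as np

def estimate_chi(Lam, x, alpha):
    """Estimate the nonlocal susceptibility chi by requiring that
    t_hat = Lam - (1 - alpha*chi) * x has zero first moment."""
    c = np.mean(Lam) / np.mean(x)      # solves for (1 - alpha*chi)
    return (1.0 - c) / alpha

# Synthetic check: build data with a known chi and zero-mean cavity fields.
rng = np.random.default_rng(3)
alpha, chi_true = 0.8, 0.4
x = rng.normal(1.0, 0.3, size=1000)
t = rng.normal(0.0, 1.0, size=1000)
Lam = t + (1 - alpha * chi_true) * x
print(estimate_chi(Lam, x, alpha))
```

With the cavity fields zero-mean by construction, the estimate approaches the true χ as the sample grows.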
Figure 2: The aligning fields versus the cavity fields for a branch of the committee tree with K = 3, α = 0.8 and N = 600. The dashed line is the prediction of the cavity theory for the regime of rough energy landscape.
Figure 3: The stability parameter κ versus the storage level α in the committee
tree with K = 3 for the cavity theory of: (a) smooth energy landscape (dashed
line), (b) rough energy landscape (solid line), and the simulation of: (c) iterating
the microscopic equations (circles), (d) the Least Action algorithm (squares). Error
bars are smaller than the size of the symbols.
5 CONCLUSION
In summary, we have derived the microscopic equations for neural network learning
in the regime of rough energy landscapes. They turn out to have the same form as
in the case of smooth energy landscape, except that the parameters are averaged
over the distribution of local minima. Iterating the equations results in a learning algorithm, which yields a higher stability than more conventional algorithms in the committee tree. However, for high loading, the iterations may not converge.
The success of the present scheme lies in its ability to take into account the underlying
physical picture of many local minima of comparable energy. It correctly describes
the experience that slightly different training sets may lead to vastly different neural
networks. The stability parameter predicted by the rough landscape ansatz has a
better agreement with simulations than the smooth one. It provides a physical
interpretation of the replica symmetry breaking solution in the replica method. It
is possible to generalize the theory to the physical picture with hierarchies of clusters
of local minima, which corresponds to the infinite step replica symmetry breaking
solution, though the mathematics is much more involved.
Acknowledgements
This work is supported by the Hong Kong Telecom Institute of Information Technology, HKUST.
References
Anlauf, J.K. & Biehl, M. (1989) The AdaTron: an adaptive perceptron algorithm. Europhysics Letters 10(7):687-692.
Bouten, M., Schietse, J. & Van den Broeck, C. (1995) Gradient descent learning in perceptrons: A review of its possibilities. Physical Review E 52(2):1958-1967.
Gardner, E. & Derrida, B. (1988) Optimal storage properties of neural network models. Journal of Physics A: Mathematical and General 21(1):271-284.
Gerl, F. & Krey, U. (1995) A Kuhn-Tucker cavity method for generalization with applications to perceptrons with Ising and Potts neurons. Journal of Physics A: Mathematical and General 28(23):6501-6516.
Griniasty, M. (1993) "Cavity-approach" analysis of the neural-network learning problem. Physical Review E 47(6):4496-4513.
Mezard, M. (1989) The space of interactions in neural networks: Gardner's computation with the cavity method. Journal of Physics A: Mathematical and General 22(12):2181-2190.
Mezard, M., Parisi, G. & Virasoro, M. (1987) Spin Glass Theory and Beyond. Singapore: World Scientific.
Nilsson, N.J. (1965) Learning Machines. New York: McGraw-Hill.
Thouless, D.J., Anderson, P.W. & Palmer, R.G. (1977) Solution of 'solvable model of a spin glass'. Philosophical Magazine 35(3):593-601.
Watkin, T.L.H., Rau, A. & Biehl, M. (1993) The statistical mechanics of learning a rule. Review of Modern Physics 65(2):499-556.
Wong, K.Y.M. (1995a) Microscopic equations and stability conditions in optimal neural networks. Europhysics Letters 30(4):245-250.
Wong, K.Y.M. (1995b) The cavity method: Applications to learning and retrieval in neural networks. In J.-H. Oh, C. Kwon and S. Cho (eds.), Neural Networks: The Statistical Mechanics Perspective, pp. 175-190. Singapore: World Scientific.
An Architectural Mechanism for
Direction-tuned Cortical Simple Cells:
The Role of Mutual Inhibition
Silvio P. Sabatini
silvio@dibe.unige.it
Fabio Solari
fabio@dibe .unige.it
Giacomo M. Bisio
bisio@dibe.unige.it
Department of Biophysical and Electronic Engineering
PSPC Research Group
Genova, I-16145, Italy
Abstract
A linear architectural model of cortical simple cells is presented.
The model shows how mutual inhibition, occurring through
synaptic coupling functions asymmetrically distributed in space,
can be a possible basis for a wide variety of spatio-temporal simple
cell response properties, including direction selectivity and velocity
tuning. While spatial asymmetries are included explicitly in the
structure of the inhibitory interconnections, temporal asymmetries
originate from the specific mutual inhibition scheme considered.
Extensive simulations supporting the model are reported.
1 INTRODUCTION
One of the most distinctive features of striate cortex neurons is their combined
selectivity for stimulus orientation and the direction of motion. The majority of
simple cells, indeed, responds better to sinusoidal gratings that are moving in one
direction than to the opposite one, exhibiting also a narrower velocity tuning with
respect to that of geniculate cells. Recent theoretical and neurophysiological studies [1] [2] pointed out that the initial stage of direction selectivity can be related
to the linear space-time receptive field structure of simple cells. A large class of
simple cells has a very specific space-time behavior in which the spatial phase of
the receptive field changes gradually as a function of time. This results in receptive
field profiles that are tilted in the space-time domain. To account for the origin
of this particular spatio-temporal inseparability, numerous models have been proposed postulating the existence of structural asymmetries of the geniculo-cortical
projections both in the temporal and in the spatial domains (for a review, see [3]
Figure 1: (a) A schematic neural circuitry for the mutual inhibition; (b) equivalent
block diagram representation.
[4]). Among them, feed-forward inhibition along the non-preferred direction, and
the combination of lagged and non-lagged geniculate inputs to the cortex have been
commonly suggested as the major mechanisms.
In this paper, within a linear field theory framework, we propose and analyse an
architectural model for dynamic receptive field formation, based on intracortical
interactions occurring through asymmetric mutual inhibition schemes.
2 MODELING INTRACORTICAL PROCESSING
The computational characteristics of each neuron are not independent of the ones
of other neurons laying in the same layer, rather, they are often the consequence of
a collective behavior of neighboring cells. To understand how intracortical circuits
may affect the response properties of simple cells one can study their structure
and function at many levels of organization, from subcellular, driven primarily by
biophysical data, to systemic, driven by functional considerations. In this study, we
present a model at the intermediate abstraction level to combine both functional
and neurophysiological descriptions into an analytic model of cortical simple cells.
2.1 STRUCTURE OF THE MODEL
Following a linear neural field approach [5] [6], we regard visual cortex as a continuous distribution of neurons and synapses. Accordingly, the geniculo-cortical
pathway is modeled by a multi-layer network interconnected through feed-forward
and feedback connections, both inter- and intra-layers. Each location on the cortical plane represents a homogeneous population of cells, and connections represent
average interactions among populations. Such connections can be modeled by spatial coupling functions which represent the spread of the synaptic influence of a
population on its neighbors, as mediated by local axonal and dendritic fields. From
an architectural point of view, we assume the superposition of feed-forward (i.e.,
geniculate) and intracortical contributions which arise from inhibitory pools whose
activity is also primed by a geniculate excitatory drive. A schematic diagram showing the "building blocks" of the model is depicted in Fig. 1. The dynamics of each
population is modeled as a first-order low-pass filter characterized by a time constant τ. For the sake of simplicity, we restrict our analysis to the 1-D case, assuming that such direction is orthogonal to the preferred direction of the receptive field [7]. This 1-D model would produce spatio-temporal results that are directly compared with the spatio-temporal plots usually obtained when an optimal stimulus is moved along the direction orthogonal to the preferred direction of the receptive field.
Geniculate contributions e0(x,t) are modeled directly by a spatiotemporal convolution of the visual input s(x,t) and a separable kernel h0(x,t), characterized in the spatial domain by a Gaussian shape with spatial extent σ0 and, in the temporal domain, by a first-order temporal low-pass filter with time constant τ0. The output e1(x,t) of the inhibitory neuron population results from the mutual inhibitory scheme through spatially organized pre- and post-synaptic sites, modeled by the kernels k1a(x − ξ) and k1d(x − ξ), respectively:
τ1 ∂e1(x,t)/∂t = −e1(x,t) + ∫ k1d(x − ξ) [e0(ξ,t) − b m(ξ,t)] dξ    (1)

m(x,t) = ∫ k1a(x − ξ) e1(ξ,t) dξ    (2)
where the function m(x,t) describes the spatio-temporal mutual inhibitory interactions, and b is the inhibition strength. The layer 2 cortical excitation e2(x,t) is the result of feed-forward contributions collected (k2d) from the inhibitory loop, at axonal synaptic sites, and of the geniculate input e0(x,t). To focus the attention on the inhibitory loop, in the following we assume a one-to-one mapping from layer 1 to layer 2, i.e., k2d(x − ξ) = δ(x − ξ); consequently:
τ2 ∂e2(x,t)/∂t = −e2(x,t) + e0(x,t) − b m(x,t)    (3)

where τ1 and τ2 are the time constants associated to layer 1 and layer 2, respectively.
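To make the dynamics of Eqs. (1)-(3) concrete, the sketch below integrates the two-layer field model with a simple Euler scheme on a discretized 1-D field. The Gaussian kernels, the circular convolutions, and all parameter values are illustrative assumptions, not the ones used in this study:

```python
import numpy as np

def gaussian_kernel(x, sigma):
    # Normalized Gaussian coupling function on a discrete grid.
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

def conv(kernel, field):
    # Circular spatial convolution via FFT (a convenient toy boundary condition).
    return np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(field)))

def simulate(e0, k1a, k1d, b, tau1, tau2, dt=1.0):
    """Euler integration of the mutual-inhibition model, Eqs. (1)-(3).

    e0 : (T, N) geniculate drive over time; k1a, k1d : (N,) coupling kernels.
    Returns the layer-2 excitation e2 at every time step."""
    T, N = e0.shape
    e1 = np.zeros(N)
    e2 = np.zeros(N)
    e2_hist = np.zeros((T, N))
    for t in range(T):
        m = conv(k1a, e1)                            # Eq. (2): mutual inhibitory term
        drive = conv(k1d, e0[t] - b * m)             # input collected through k1d
        e1 = e1 + dt / tau1 * (-e1 + drive)          # Eq. (1)
        e2 = e2 + dt / tau2 * (-e2 + e0[t] - b * m)  # Eq. (3)
        e2_hist[t] = e2
    return e2_hist
```

With b = 0 the model reduces to two uncoupled low-pass stages; increasing b lets the laterally collected term m(x,t) reshape the layer-2 response.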
2.2  AVERAGE INTRACORTICAL CONNECTIVITY
When assessing the role of intracortical circuits on the receptive field properties of cortical cells, one important issue concerns the spatial localization of inhibitory and excitatory influences. In a previous work [8] we evidenced how the steady-state solution of Eqs. (1)-(3) can give rise to highly structured Gabor-like receptive field profiles, when inhibition arises from laterally distributed clusters of cells. In this case, the effective intrinsic kernel k1(x), defined as k1(x − ξ) ≜ ∫∫ k1a(−x′, −ξ′) k1d(x − x′, ξ − ξ′) dx′ dξ′, can be modeled as the sum of two Gaussians symmetrically offset with respect to the target cell (see Fig. 2):
k1(x) = (1/√(2π)) [ (w1/σ1) exp(−(x − d1)²/(2σ1²)) + (w2/σ2) exp(−(x + d2)²/(2σ2²)) ]    (4)
This work is aimed at investigating how spatial asymmetries in the intracortical coupling function lead to non-separable space-time interactions within the resulting discharge field of the simple cells. To this end, we varied systematically the geometrical parameters (σ, w, d) of the inhibitory kernel to consider three different
An Architectural Mechanism/or Direction-tuned Cortical Simple Cells
107
Figure 2: The basic inhibitory kernel k1(x − ξ). The cell in the center receives inhibitory contributions from laterally distributed clusters of cells. The asymmetric kernels used in the model derive from this basic kernel by systematic variations of its geometrical parameters (see Table 1).
types of asymmetries: (1) different spatial spread of inhibition (i.e., σ1 ≠ σ2); (2) different amount of inhibition (w1 ≠ w2); (3) different spatial offset (d1 ≠ d2). A
more rigorous treatment should take care also of the continuous distortion of the
topographic map [9] . In our analysis this would result in a continuous deformation
of the inhibitory kernel, but for the small distances within which inhibition occurs,
the approximation of a uniform mapping produces only a negligible error.
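Eq. (4) translates directly into code; the snippet below builds the basic two-Gaussian inhibitory kernel and one variant of each of the three asymmetry types. The numerical parameter values are illustrative placeholders, not the ones actually used for Table 1:

```python
import numpy as np

def k1(x, w1, sigma1, d1, w2, sigma2, d2):
    """Effective intrinsic kernel of Eq. (4): two Gaussians offset at
    +d1 and -d2 with respect to the target cell at x = 0."""
    z = 1.0 / np.sqrt(2.0 * np.pi)
    return z * (w1 / sigma1 * np.exp(-(x - d1) ** 2 / (2 * sigma1 ** 2))
                + w2 / sigma2 * np.exp(-(x + d2) ** 2 / (2 * sigma2 ** 2)))

x = np.linspace(-2.0, 2.0, 401)
sym   = k1(x, 1.0, 0.3, 0.8, 1.0, 0.3, 0.8)  # symmetric reference kernel
asy_1 = k1(x, 1.0, 0.2, 0.8, 1.0, 0.4, 0.8)  # (1) sigma1 != sigma2
asy_2 = k1(x, 0.5, 0.3, 0.8, 1.0, 0.3, 0.8)  # (2) w1 != w2
asy_3 = k1(x, 1.0, 0.3, 0.5, 1.0, 0.3, 1.1)  # (3) d1 != d2
```

The symmetric setting reproduces the mirror-symmetric kernel of Fig. 2, while each variant breaks that symmetry in exactly one parameter.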
Architectural parameters were determined from reliable measured values of receptive fields of simple cells [10] [11]. Concerning the spatial domain, we fixed the size (σ0) of the initial receptive field (due to geniculate contributions) for an "average" cortical simple cell with a resultant discharge field of ~2°; and we adjusted, accordingly, the parameters of the inhibitory kernel in order to account for spatial interactions only within the receptive field.
Considering the temporal domain, one should distinguish the time constant τ1, caused by network interactions, from the time constants τ0 and τ2 caused by temporal integration at a single cell membrane. In any case, throughout all simulations, we fixed τ0 and τ2 to 20 ms, whereas we varied τ1 in the range 2-100 ms.
3  RESULTS
Since visual cortex is visuotopically organized, a direct correspondence exists between the spatial organization of intracortical connections and the resulting receptive field topography. Therefore, the dependence of cortical surface activity e2(x,t) on the visual input s(x,t) can be formulated as e2(x,t) = h(x,t) * s(x,t), where the symbol * indicates a spatio-temporal convolution, and h(x,t) is the equivalent receptive field interpreted as the spatio-temporal distribution of the signs of all the effects of cortical interactions. In this context, h(x,t) reflects the whole spatio-temporal couplings and not only the direct neuroanatomical connectivity.
To test the efficacy of the various inhibitory schemes, we used a drifting sine wave grating s(x,t) = C cos[2π(fx x ± ft t)], where C is the contrast, and fx and ft are the spatial and temporal frequency, respectively. The direction selectivity index (DSI) and the optimal velocity (v_opt) obtained from the various inhibitory kernels of Fig. 2 are summarized in Table 1, for different values of τ1 and b. The direction selectivity index is defined as DSI = (Rp − Rnp)/(Rp + Rnp), where Rp is the maximum response amplitude for the preferred direction, and Rnp is the maximum amplitude for the non-preferred direction. The optimal velocity is defined as ft_opt/fx, where fx is chosen to match the spatial structure of the receptive field, and ft_opt is the temporal frequency which elicits the maximum response of the cell.
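The two indices just defined are straightforward to compute; a short sketch with hypothetical response values (not entries of Table 1):

```python
def dsi(r_pref, r_nonpref):
    # Direction selectivity index: DSI = (Rp - Rnp) / (Rp + Rnp).
    return (r_pref - r_nonpref) / (r_pref + r_nonpref)

def optimal_velocity(ft_opt, fx):
    # Optimal velocity in deg/s: v_opt = ft_opt / fx.
    return ft_opt / fx

# A perfectly direction-selective cell has Rnp = 0 and DSI = 1;
# equal responses in both directions give DSI = 0.
print(dsi(1.0, 0.2))                # ~0.667
print(optimal_velocity(3.64, 2.0))  # 1.82 deg/s
```

Note that DSI depends only on the ratio of the two response amplitudes, so it is insensitive to overall contrast scaling in a linear model.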
As expected, increasing the parameter b enhances
the effects of inhibition, thus resulting in larger DSI and higher optimal velocities.
However, for stability reasons, b should remain below a threshold value strictly related to the inhibitory kernel considered.

Table 1: Direction selectivity index (DSI) and optimal velocity v_opt obtained for the inhibitory kernels ASY-1A, ASY-1B, ASY-2A, ASY-2B, ASY-3A, and ASY-3B, as a function of the inhibition strength b and of the intracortical time constant τ1 (2, 10, 20, and 100 ms).

Moreover, we observe that, except for ASY-2A, the strongest direction selectivity is obtained when the intracortical time constant τ1 has values in the range of 10-20 ms, i.e., comparable to τ0 and τ2. Larger values of τ1 would result, indeed, in a recrudescence of the velocity low-pass behavior. For each asymmetry, Fig. 3 shows the direction tuning curves and the x-t plots, respectively, for the best cases considered (cf. bold-faced values in Table 1).
We have evidenced that appreciable DSI can be obtained when inhibition arises from cortical sites at different distances from the target cell (i.e., ASY-2B, d1 ≠ d2). In such conditions we obtained a DSI as high as 0.66 and an optimal velocity up to ~6°/s, as could be inferred also from the spatio-temporal plot, which presents a marked motion-type (i.e., oriented) non-separability (see Fig. 3, ASY-2B).
4  DISCUSSION AND CONCLUSIONS
As anticipated in the Introduction, direction selectivity mechanisms usually rely upon asymmetric alterations of the spatial and temporal response characteristics of the geniculate input, which are presumably mediated by intracortical circuits. In the architectural model presented in this study, spatial asymmetries were included explicitly in the extension of the inhibitory interconnections, but no explicit asymmetric temporal mechanisms were introduced. It is worth evidencing how temporal asymmetries originate from the specific mutual inhibition scheme considered, which operates, regarding the temporal domain, like a quadrature model [12] [13]. This can
Figure 3: Results from the model related to the bold-typed values indicated in Table 1, for each asymmetry considered. (Top) direction tuning curves; (Bottom) spatio-temporal plots. A marked direction tuning is evident for ASY-2B, i.e., when inhibition arises from two differentially offset Gaussians.
be inferred by the examination of the equivalent transfer function H(ωx, ωt) in the Fourier domain:

H(ωx, ωt) = [H0(ωx, ωt) / (1 + j ωt τ2)] · ( 1 / (1 + j ωt τ1 + b K1(ωx)) + j ωt τ1 / (1 + j ωt τ1 + b K1(ωx)) )    (5)
where upper case letters indicate Fourier transforms, and j is the imaginary unit. The terms in parentheses in Eq. (5) can be interpreted as the sum of temporal components that are approximately arranged in temporal quadrature. Furthermore, one can observe that a direct monosynaptic influence (e1) from the inhibitory neurons of layer 1 to the excitatory cells of layer 2 would result in the cancellation of the quadrature component in Eq. (5).
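The quadrature argument can be checked numerically: for the two temporal components in parentheses in Eq. (5), the ratio of the second to the first is j ωt τ1, a purely imaginary factor, so their phases differ by 90°. A sketch with illustrative values assumed for τ1, b, and K1(ωx):

```python
import numpy as np

tau1 = 15e-3                 # intracortical time constant (s), assumed value
b, K1 = 0.8, 1.0             # inhibition strength and kernel transform, assumed
wt = 2 * np.pi * 4.0         # temporal frequency of 4 Hz

den = 1 + 1j * wt * tau1 + b * K1
term_a = 1.0 / den                  # first temporal component in Eq. (5)
term_b = (1j * wt * tau1) / den     # second temporal component in Eq. (5)

phase_diff = np.angle(term_b) - np.angle(term_a)
print(np.degrees(phase_diff))       # ~90 degrees: temporal quadrature
```

Since both components share the same denominator, the 90° phase relation holds at every temporal frequency; only their relative magnitudes change with ωt.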
Further improvement of the model should take into account also transmission delays between spatially separated interacting cells, threshold non-linearities, and ON-OFF interactions.
Acknowledgements
This research was partially supported by CEC-Esprit CORMORANT 8503, and by
MURST 40%-60%.
References
[1] R.C. Reid, R.E. Soodak, and R.M. Shapley. Directional selectivity and spatiotemporal structure of receptive fields of simple cells in cat striate cortex. J. Neurophysiol., 66:505-529, 1991.
[2] D.B. Hamilton, D.G. Albrecht, and W.S. Geisler. Visual cortical receptive
fields in monkey and cat: spatial and temporal phase transfer function. Vision
Res., 29(10):1285-1308, 1989.
[3] K. Nakayama. Biological image motion processing: a review. Vision Res.,
25:625-660, 1985.
[4] E.C. Hildreth and C. Koch. The analysis of visual motion: From computational
theory to neuronal mechanisms. Ann. Rev. Neurosci., 10:477-533, 1987.
[5] G. Krone, H. Mallot, G. Palm, and A. Schüz. Spatiotemporal receptive fields: A dynamical model derived from cortical architectonics. Proc. R. Soc. London B, 226:421-444, 1986.
[6] H.R. Wilson and J.D. Cowan. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13:55-80, 1973.
[7] G.C. De Angelis, I. Ohzawa, and R.D. Freeman. Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex. I. General characteristics and postnatal development. J. Neurophysiol., 69:1091-1117, 1993.
[8] S.P. Sabatini, L. Raffo, and G.M. Bisio. Functional periodic intracortical couplings induced by structured lateral inhibition in a linear cortical network. Neural Computation, 9(3):525-531, 1997.
[9] H.A. Mallot, W. von Seelen, and F. Giannakopoulos. Neural mapping and
space variant image processing. Neural Networks, 3:245-263, 1990.
[10] K. Albus. A quantitative study of the projection area of the central and the
paracentral visual field in area 17 of the cat. Exp. Brain Res., 24:159-202,
1975.
[11] J. Jones and L. Palmer. The two-dimensional spatial structure of simple receptive fields in cat striate cortex. J. Neurophysiol., 58:1187-1211, 1987.
[12] A.B. Watson and A.J. Ahumada. Model of human visual-motion sensing. J.
Opt. Soc. Amer., 2:322-341, 1985.
[13] E.H. Adelson and J.R. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Amer., 2:284-321, 1985.
Separation: A Context-Sensitive
Generalization of ICA
Barak A. Pearlmutter
Computer Science Dept, FEC 313
University of New Mexico
Albuquerque, NM 87131
bap@cs.unm.edu
Lucas C. Parra
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540-6632
lucas@scr.siemens.com
Abstract
In the square linear blind source separation problem, one must find
a linear unmixing operator which can detangle the result Xi(t) of
mixing n unknown independent sources 8i(t) through an unknown
n x n mixing matrix A( t) of causal linear filters: Xi = E j aij * 8 j .
We cast the problem as one of maximum likelihood density estimation, and in that framework introduce an algorithm that searches
for independent components using both temporal and spatial cues.
We call the resulting algorithm "Contextual ICA," after the (Bell
and Sejnowski 1995) Infomax algorithm, which we show to be a
special case of cICA. Because cICA can make use of the temporal
structure of its input, it is able to separate in a number of situations
where standard methods cannot, including sources with low kurtosis, colored Gaussian sources, and sources which have Gaussian
histograms.
1  The Blind Source Separation Problem
Consider a set of n independent sources s_1(t), ..., s_n(t). We are given n linearly distorted sensor readings which combine these sources, x_i = Σ_j a_ij * s_j, where a_ij is a filter between source j and sensor i, as shown in figure 1a. This can be expressed as

x_i(t) = Σ_j Σ_{τ=0}^{∞} a_ij(τ) s_j(t − τ) = Σ_j a_ij * s_j
Figure 1: The left diagram shows a generative model of data production for the blind source separation problem. The cICA algorithm fits the reparametrized generative model on the right to the data. Since (unless the mixing process is singular) both diagrams give linear maps between the sources and the sensors, they are mathematically equivalent. However, (a) makes the transformation from s to x explicit, while (b) makes the transformation from x to y, the estimated sources, explicit.
or, in matrix notation, x(t) = Σ_{τ=0}^{∞} A(τ) s(t − τ) = A * s. The square linear blind source separation problem is to recover s from x. There is an inherent ambiguity in this, for if we define a new set of sources s′ by s′_i = b_i * s_i, where b_i(τ) is some invertible filter, then the various s′_i are independent, and constitute just as good a solution to the problem as the true s_i, since x_i = Σ_j (a_ij * b_j^{-1}) * s′_j. Similarly the sources could be arbitrarily permuted.

Surprisingly, up to permutation of the sources and linear filtering of the individual sources, the problem is well posed, assuming that the sources s_j are not Gaussian. The reason for this is that only with a correct separation are the recovered sources truly statistically independent, and this fact serves as a sufficient constraint. Under the assumptions we have made,¹ and further assuming that the linear transformation A is invertible, we will speak of recovering y_i(t) = Σ_j w_ji * x_j, where these y_i are a filtered and permuted version of the original unknown s_i. For clarity of exposition, we will often refer to "the" solution and to the y_i as "the" recovered sources, rather than referring to a point in the manifold of solutions and a set of consistent recovered sources.
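The convolutive mixing model x_i = Σ_j a_ij * s_j can be sketched in a few lines. Here two synthetic non-Gaussian sources are passed through a 2 × 2 matrix of short causal FIR filters; the filter taps are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
s = np.vstack([rng.laplace(size=T),            # source 1: heavy-tailed
               rng.uniform(-1, 1, size=T)])    # source 2: low-kurtosis

# a[i][j]: causal FIR filter from source j to sensor i (hypothetical taps)
a = [[np.array([1.0, 0.5]),  np.array([0.3, -0.2])],
     [np.array([-0.4, 0.1]), np.array([1.0, 0.25])]]

x = np.zeros_like(s)
for i in range(2):
    for j in range(2):
        x[i] += np.convolve(s[j], a[i][j])[:T]  # x_i = sum_j a_ij * s_j
```

Recovering s from x then amounts to finding unmixing filters w_ji such that the outputs y_i = Σ_j w_ji * x_j are statistically independent.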
2  Maximum likelihood density estimation
Following Pham, Garrat, and Jutten (1992) and Belouchrani and Cardoso (1995),
we cast the BSS problem as one of maximum likelihood density estimation. In the
MLE framework, one begins with a probabilistic model of the data production process. This probabilistic model is parametrized by a vector of modifiable parameters
w, and it therefore assigns a w-dependent probability density p( Xo, Xl, ... ; w) to a
each possible dataset xo, Xl, .... The task is then to find a w which maximizes this
probability.
There are a number of approaches to performing this maximization. Here we apply
¹Without these assumptions, for instance in the presence of noise, even a linear mixing process leads to an optimal unmixing process that is highly nonlinear.
the stochastic gradient method, in which a single stochastic sample x is chosen from the dataset and −d log p(x; w)/dw is used as a stochastic estimate of the gradient of the negative likelihood Σ_t −d log p(x(t); w)/dw.
2.1  The likelihood of the data
The model of data production we consider is shown in figure 1a. In that model, the
sensor readings x are an explicit linear function of the underlying sources s.
In this model of the data production, there are two stages. In the first stage, the
sources independently produce signals. These signals are time-dependent, and the
probability density of source j producing value s_j(t) at time t is f_j(s_j(t) | s_j(t−1), s_j(t−2), ...). Although this source model could be of almost any differentiable form, we used a generalized autoregressive model described in appendix A. For expository purposes, we can consider using a simple AR model, so we model s_j(t) = b_j(1) s_j(t−1) + b_j(2) s_j(t−2) + ... + b_j(T) s_j(t−T) + r_j(t), where r_j is an iid random variable, perhaps with a complicated density.
It is important to distinguish two different, although related, linear filters. When the source models are simple AR models, there are two types of linear convolutions being performed. The first is in the way each source produces its signal: as a linear function of its recent history plus a white driving term, which could be expressed as a moving average model, a convolution with a white driving term, s_j = b_j * r_j. The second is in the way the sources are mixed: linear functions of the output of each source are added, x_i = Σ_j a_ij * s_j = Σ_j (a_ij * b_j) * r_j. Thus, with AR sources, the source convolution could be folded into the convolutions of the linear mixing process.
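A source of this kind is just white noise passed through a recursive filter. The sketch below generates s_j = b_j * r_j for an AR(2) example; the coefficients and the Laplacian driving density are placeholders chosen for illustration:

```python
import numpy as np

def ar_source(b, r):
    """Generate s(t) = b[0] s(t-1) + ... + b[T-1] s(t-T) + r(t)
    from an iid driving sequence r."""
    order = len(b)
    s = np.zeros(len(r))
    for t in range(len(r)):
        # Sum over however much history is available at time t.
        history = sum(b[k] * s[t - 1 - k] for k in range(min(order, t)))
        s[t] = history + r[t]
    return s

rng = np.random.default_rng(1)
r = rng.laplace(size=500)        # non-Gaussian iid driving term
s = ar_source([0.6, 0.2], r)     # a colored, non-Gaussian AR(2) source
```

Because the AR recursion only colors the driving noise, the non-Gaussianity of r survives in s, which is exactly what makes such sources separable.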
If we were to estimate values for the free parameters of this model, i.e. to estimate
the filters, then the task of recovering the estimated sources from the sensor output
would require inverting the linear map A = (a_ij), as well as some technique to guarantee its non-singularity. Such a model is shown in figure 1a. Instead, we parameterize the model by W = A^{-1}, an estimated unmixing matrix, as shown in figure 1b.
In this indirect representation, s is an explicit linear function of x, and therefore
x is only an implicit linear function of s. This parameterization of the model is
equally convenient for assigning probabilities to samples x, and is therefore suitable
for MLE. Its advantage is that because the transformation from sensors to sources
is estimated explicitly, the sources can be recovered directly from the data and the
estimated model, without inversion. Note that in this inverse parameterization, the estimated mixture process is stored in inverse form. The source-specific models f_i are kept in forward form. Each source-specific model i has a vector of parameters,
which we denote w(i).
We are now in a position to calculate the likelihood of the data. For simplicity we use
a matrix W of real numbers rather than FIR filters. Generalizing this derivation to
a matrix of filters is straightforward, following the same techniques used by Lambert
(1996), Torkkola (1996), A. Bell (1997), but space precludes a derivation here.
The individual generative source models give
p(y(t) | y(t−1), y(t−2), ...) = Π_i f_i(y_i(t) | y_i(t−1), y_i(t−2), ...)    (1)
where the probability densities f_i are each parameterized by vectors w(i). Using these equations, we would like to express the likelihood of x(t) in closed form, given the history x(t−1), x(t−2), .... Since the history is known, we therefore also know the history of the recovered sources, y(t−1), y(t−2), .... This means that we can calculate the density p(y(t) | y(t−1), ...). Using this, we can express the density of x(t) and expand G = log p(x; w) = log |W| + Σ_j log f_j(y_j(t) | y_j(t−1), y_j(t−2), ...; w(j)). There are two sorts of parameters with respect to which we must take the derivative: the matrix W and the source parameters w(j). The source parameters do not influence our recovered sources, and therefore have a simple form

dG/dw_j = (df_j(y_j; w_j)/dw_j) / f_j(y_j; w_j)
However, a change to the matrix W changes y, which introduces a few extra terms. Note that d log |W|/dW = W^{-T}, the transpose inverse. Next, since y = Wx, we see that dy_j/dW = (0 | x | 0)^T, a matrix of zeros except for the vector x in row j. Now we note that the df_j(·)/dW term has two logical components: the first from the effect of changing W upon y_j(t), and the second from the effect of changing W upon y_j(t−1), y_j(t−2), .... (This second is called the "recurrent term", and such terms are frequently dropped for convenience. As shown in figure 3, dropping this term here is not a reasonable approximation.)

df_j(y_j(t) | y_j(t−1), ...; w_j)/dW = (∂f_j/∂y_j(t)) dy_j(t)/dW + Σ_τ (∂f_j/∂y_j(t−τ)) dy_j(t−τ)/dW

Note that the expression dy_j(t−τ)/dW is the only matrix, and it is zero except for the jth row, which is x(t−τ). The expression ∂f_j/∂y_j(t) we shall denote f′_j(·), and the expression ∂f_j/∂y_j(t−τ) we shall denote f_j^{(τ)}(·). We then have
−dG/dW = −W^{-T} − ( f′_j(·)/f_j(·) )_j x(t)^T − Σ_{τ=1}^{∞} ( f_j^{(τ)}(·)/f_j(·) )_j x(t−τ)^T    (2)

where (expr(j))_j denotes the column vector whose elements are expr(1), ..., expr(n).
2.2  The natural gradient
Following Amari, Cichocki, and Yang (1996), we follow a pseudogradient. Instead of using equation 2 directly, we post-multiply this quantity by W^T W. Since this is a positive-definite matrix, it does not affect the stochastic gradient convergence criteria, and the resulting quantity simplifies in a fashion that neatly eliminates the costly matrix inversion otherwise required. Convergence is also accelerated.
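For the instantaneous, history-free special case with logistic source densities (the infomax limit of cICA), post-multiplying the likelihood gradient by W^T W removes the matrix inverse, since (W^{-T} + (1 − 2g(y)) x^T) W^T W = (I + (1 − 2g(y)) y^T) W. A sketch of one stochastic update step under these assumptions:

```python
import numpy as np

def g(u):
    # Logistic sigmoid; its derivative is the assumed source density.
    return 1.0 / (1.0 + np.exp(-u))

def natural_gradient_step(W, x, lr=0.01):
    """One stochastic natural-gradient ascent step on the log likelihood
    for the instantaneous case with logistic sources."""
    y = W @ x
    phi = 1.0 - 2.0 * g(y)       # score d/dy log f(y) for the logistic density
    dW = (np.eye(len(y)) + np.outer(phi, y)) @ W   # no matrix inversion needed
    return W + lr * dW
```

The update is algebraically identical to the plain gradient postmultiplied by W^T W, but it never computes W^{-T}, which is what makes the pseudogradient attractive in practice.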
3  Experiments
We conducted a number of experiments to test the efficacy of the cICA algorithm. The first, shown in figure 2, was a toy problem involving a set of processes deliberately constructed to be difficult for conventional source separation algorithms. In the second experiment, shown in figure 3, ten real sources were digitally mixed with an instantaneous matrix and separation performance was measured as a function of varying model complexity parameters. These sources are available for benchmarking purposes at http://www.cs.unm.edu/~bap/demos.html.
Figure 2: cICA using a history of one time step and a mixture of five logistic densities
for each source was applied to 5,000 samples of a mixture of two one-dimensional
uniform distributions each filtered by convolution with a decaying exponential of
time constant of 99.5. Shown is a scatterplot of the data input to the algorithm,
along with the true source axes (left), the estimated residual probability density
(center), and a scatterplot of the residuals of the data transformed into the estimated
source space coordinates (right). The product of the true mixing matrix and the
estimated unmixing matrix deviates from a scaling and permutation matrix by
about 3%.

[Figure 3 panels: Truncated Gradient, Full Gradient, Noise Model; vertical axes show signal-to-noise ratio, horizontal axes the number of AR filter taps (left, center) and the number of logistics (right).]
Figure 3: The performance of cICA as a function of model complexity and gradient accuracy. In all simulations, ten five-second clips taken digitally from ten audio CDs were mixed through a random ten-by-ten instantaneous mixing matrix. The signal to noise ratio of each original source as expressed in the recovered sources is plotted. In (a) and (b), AR source models with a logistic noise term were used, and the number of taps of the AR model was varied. (This reduces to Bell-Sejnowski infomax when the number of taps is zero.) In (a), the recurrent term of the gradient was left out, while in (b) the recurrent term was included. Clearly the recurrent term is important. In (c), a degenerate AR model with zero taps was used, but the noise term was a mixture of logistics, and the number of logistics was varied.
4  Discussion
The Infomax algorithm (Baram and Roth 1994) used for source separation (Bell and Sejnowski 1995) is a special case of the above algorithm in which (a) the mixing is not convolutional, so W(1) = W(2) = ... = 0, and (b) the sources are assumed to be iid, and therefore the distributions f_i(y(t)) are not history sensitive. Further, the form of the f_i is restricted to a very special distribution: the logistic density,
the derivative of the sigmoid function 1/(1 + e^{-ξ}). Although ICA has enjoyed a variety of applications (Makeig et al. 1996; Bell and Sejnowski 1996b; Baram and Roth 1995; Bell and Sejnowski 1996a), there are a number of sources which it cannot separate. These include all sources with Gaussian histograms (e.g., colored Gaussian sources, or even speech run through the right sort of slight nonlinearity), and sources with low kurtosis. As shown in the experiments above, these are of more than theoretical interest.
If we simplify our model to use ordinary AR models for the sources, with gaussian
noise terms of fixed variance, it is possible to derive a closed-form expression for
W (Hagai Attias, personal communication). It may be that for many sources of
practical interest, trading away this model accuracy for speed will be fruitful.
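As a concrete sketch of this simplification (our own illustration, not the closed-form expression referred to above): for an ordinary AR(p) source model with fixed-variance Gaussian noise, maximum likelihood estimation of the AR coefficients is ordinary linear least squares, so no iterative gradient search is needed. The function name and test signal below are assumptions.

```python
import numpy as np

def fit_ar_least_squares(u, p):
    """Maximum-likelihood fit of an AR(p) model with fixed-variance
    Gaussian noise: u[t] = sum_r a[r] * u[t-1-r] + b + noise.
    With Gaussian noise this reduces to linear least squares."""
    T = len(u)
    # One column per lag, plus a constant column for the bias b.
    X = np.column_stack(
        [u[p - 1 - r : T - 1 - r] for r in range(p)] + [np.ones(T - p)]
    )
    y = u[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[:p], coef[p]

# Recover the coefficient of a synthetic AR(1) source u[t] = 0.9 u[t-1] + e[t].
rng = np.random.default_rng(0)
u = np.zeros(2000)
for t in range(1, 2000):
    u[t] = 0.9 * u[t - 1] + rng.standard_normal()
a, b = fit_ar_least_squares(u, p=1)
```

Because the Gaussian likelihood is quadratic in the coefficients, the fit is a single linear solve; this is the speed being traded against model accuracy.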
4.1 Weakened assumptions
It seems clear that, in general, separating when there are fewer microphones than
sources requires a strong bayesian prior, and even given perfect knowledge of the
mixture process and perfect source models, inverting the mixing process will be
computationally burdensome. However, when there are more microphones than
sources, there is an opportunity to improve the performance of the system in the
presence of noise. This seems straightforward to integrate into our framework.
Similarly, fast-timescale microphone nonlinearities are easily incorporated into this
maximum likelihood approach.
The structure of this problem would seem to lend itself to EM. Certainly the individual source models can be easily optimized using EM, assuming that they themselves
are of suitable form.
References
Bell, A. J. and Lee, T.-W. (1997). Blind separation of delayed and convolved sources. In
Advances in Neural Information Processing Systems 9. MIT Press. In this
volume.
Amari, S., Cichocki, A., and Yang, H. H. (1996). A new learning algorithm for blind
signal separation. In Advances in Neural Information Processing Systems 8.
MIT Press.
Baram, Y. and Roth, Z. (1994). Density Shaping by Neural Networks with Application to Classification, Estimation and Forecasting. Tech. rep. CIS-9420, Center for Intelligent Systems, Technion, Israel Institute of Technology,
Haifa.
Baram, Y. and Roth, Z. (1995). Forecasting by Density Shaping Using Neural
Networks. In Computational Intelligence for Financial Engineering New York
City. IEEE Press.
Bell, A. J. and Sejnowski, T. J. (1995). An Information-Maximization Approach
to Blind Separation and Blind Deconvolution. Neural Computation, 7(6),
1129-1159.
Bell, A. J. and Sejnowski, T. J. (1996a). The Independent Components of Natural
Scenes. Vision Research. Submitted.
Maximum Likelihood Blind Source Separation: Contextual ICA
619
Bell, A. J. and Sejnowski, T. J. (1996b). Learning the higher-order structure of a
natural sound. Network: Computation in Neural Systems. In press.
Belouchrani, A. and Cardoso, J.-F. (1995). Maximum likelihood source separation
by the expectation-maximization technique: Deterministic and stochastic implementation. In Proceedings of 1995 International Symposium on Non-Linear
Theory and Applications, pp. 49-53, Las Vegas, NV. In press.
Lambert, R. H. (1996). Multichannel Blind Deconvolution: FIR Matrix Algebra and
Separation of Multipath Mixtures. Ph.D. thesis, USC.
Makeig, S., Anllo-Vento, L., Jung, T.-P., Bell, A. J., Sejnowski, T. J., and Hillyard,
S. A. (1996). Independent component analysis of event-related potentials
during selective attention. Society for Neuroscience Abstracts, 22.
Pearlmutter, B. A. and Parra, L. C. (1996). A Context-Sensitive Generalization of ICA. In International Conference on Neural Information Processing
Hong Kong. Springer-Verlag. URL ftp://ftp.cnl.salk.edu/pub/bap/iconip-96cica.ps.gz.
Pham, D., Garrat, P., and Jutten, C. (1992). Separation of a mixture of independent sources through a maximum likelihood approach. In European Signal
Processing Conference, pp. 771-774.
Torkkola, K. (1996). Blind separation of convolved sources based on information
maximization. In Neural Networks for Signal Processing VI Kyoto, Japan.
IEEE Press. In press.
A Fixed mixture AR models
The f_j(u_j; w_j) we used were mixture AR processes driven by logistic noise terms,
as in Pearlmutter and Parra (1996). Each source model was
f_j(u_j(t) \mid u_j(t-1), u_j(t-2), \ldots; w_j) = \sum_k m_{jk} \, h\big((u_j(t) - \bar{u}_{jk})/\sigma_{jk}\big)/\sigma_{jk}    (3)
where σ_jk is a scale parameter for logistic density k of source j and is an element
of w_j, and the mixing coefficients m_jk are elements of w_j and are constrained by
Σ_k m_jk = 1. The component means ū_jk are taken to be linear functions of the
recent values of that source,
\bar{u}_{jk} = \sum_{\tau=1} a_{jk}(\tau) \, u_j(t - \tau) + b_{jk}    (4)
where the linear prediction coefficients a_jk(τ) and bias b_jk are elements of w_j.
The derivatives of these are straightforward; see Pearlmutter and Parra (1996) for
details. One complication is to note that, after each weight update, the mixing
coefficients must be normalized: m_jk ← m_jk / Σ_{k'} m_jk'.
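To make Eqs. (3)-(4) concrete, the conditional source density can be evaluated numerically as below (our own sketch; variable names are assumptions, and h is taken to be the standard logistic density, i.e. the derivative of the sigmoid):

```python
import numpy as np

def logistic_pdf(x):
    # Standard logistic density: derivative of the sigmoid 1/(1 + exp(-x)).
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def mixture_ar_density(u_t, u_past, m, a, b, sigma):
    """Conditional density f(u(t) | past) of Eq. (3).

    m[k]     -- mixing coefficients, constrained to sum to 1
    a[k, r]  -- linear prediction coefficients of Eq. (4)
    b[k]     -- biases of Eq. (4)
    sigma[k] -- scale of logistic component k
    u_past   -- [u(t-1), u(t-2), ...]
    """
    means = a @ u_past + b                  # Eq. (4): component means
    z = (u_t - means) / sigma
    return float(np.sum(m * logistic_pdf(z) / sigma))

# After a weight update the mixing coefficients must be renormalized.
m = np.array([0.5, 0.7])
m = m / m.sum()

density = mixture_ar_density(
    u_t=0.3,
    u_past=np.array([0.2, -0.1]),
    m=m,
    a=np.array([[0.5, 0.1], [0.2, 0.0]]),
    b=np.array([0.0, 0.1]),
    sigma=np.array([1.0, 0.5]),
)
```

The renormalization step at the end mirrors the constraint Σ_k m_jk = 1 noted above.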
LEARNING BY CHOICE
OF INTERNAL REPRESENTATIONS
Tal Grossman, Ronny Meir and Eytan Domany
Department of Electronics, Weizmann Institute of Science
Rehovot 76100 Israel
ABSTRACT
We introduce a learning algorithm for multilayer neural networks composed of binary linear threshold elements. Whereas existing algorithms reduce the learning process to minimizing a cost
function over the weights, our method treats the internal representations as the fundamental entities to be determined. Once a
correct set of internal representations is arrived at, the weights are
found by the local and biologically plausible Perceptron Learning
Rule (PLR). We tested our learning algorithm on four problems:
adjacency, symmetry, parity and combined symmetry-parity.
I. INTRODUCTION
Consider a network of binary linear threshold elements i, whose state Si
is determined according to the rule
S_i = \mathrm{sign}\Big( \sum_j W_{ij} S_j + \theta_i \Big) = \pm 1    (1)
Here Wij is the (unidirectional) weight assigned to the connection from unit j to
i; θ_i is a local bias. We focus our attention on feed-forward networks in which N
units of the input layer determine the states of H units of a hidden layer; these, in
turn, feed one or more output elements.
For a typical A vs B classification task such a network needs a single output,
with sout = +1 (or -1) when the input layer is set in a state that belongs to category
A (or B) of input space. The basic problem of learning is to find an algorithm that
produces weights which enable the network to perform this task. In the absence
of hidden units learning can be accomplished by the PLR [Rosenblatt 1962], which
we now briefly describe. Consider j = 1, ..., N source units and a single target unit
i. When the source units are set in any one of μ = 1, ..., M patterns, i.e. S_j = ξ_j^μ,
we require that the target unit (determined using (1)) takes preassigned values ξ_i^μ.
Learning takes place in the course of a training session. Starting from any arbitrary
initial guess for the weights, an input ν is presented, resulting in the output taking
some value S_i^ν. Now modify every weight according to the rule

W_{ij} \rightarrow W_{ij} + \eta \, (1 - S_i^{\nu} \xi_i^{\nu}) \, \xi_i^{\nu} \xi_j^{\nu}    (2)
74
Grossman, Meir and Domany
where η > 0 is a parameter (ξ_0^ν = 1 is used to modify the bias θ). Another
input pattern is presented, and so on, until all inputs draw the correct output.
The Perceptron convergence theorem states [Rosenblatt 1962, Minsky and Papert
1969] that the PLR will find a solution (if one exists), in a finite number of steps.
However, of the 2^{2^N} possible partitions of input space only a small subset (less than
2^{N^2}/N!) is linearly separable [Lewis and Coates 1967], and hence soluble by single-layer perceptrons. To get around this, hidden units are added. Once a single hidden
layer (with a large enough number of units) is inserted between input and output,
every classification problem has a solution. But for such architectures the PLR
cannot be implemented; when the network errs, it is not clear which connection is
to blame for the error, and what corrective action is to be taken.
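The PLR of Eqs. (1)-(2) can be sketched as follows (a minimal illustration of our own, not the authors' code; with η = 0.5 and ±1 units the update (2) reduces to the classical perceptron step):

```python
import numpy as np

def sign(x):
    # Binary threshold unit of Eq. (1); treat 0 as +1.
    return np.where(x >= 0, 1, -1)

def perceptron_learning_rule(patterns, targets, eta=0.5, max_sweeps=100):
    """Train W, theta so that sign(W @ xi + theta) equals each target."""
    W = np.zeros(patterns.shape[1])
    theta = 0.0
    for _ in range(max_sweeps):
        errors = 0
        for xi, t in zip(patterns, targets):
            s = sign(W @ xi + theta)
            if s != t:
                # Eq. (2): W <- W + eta * (1 - S*xi_target) * xi_target * xi
                W = W + eta * (1 - s * t) * t * xi
                theta = theta + eta * (1 - s * t) * t  # xi_0 = 1 handles the bias
                errors += 1
        if errors == 0:
            return W, theta
    raise RuntimeError("no solution found within the sweep limit")

# Logical OR on +/-1 inputs is linearly separable, so the PLR converges.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y = np.array([-1, 1, 1, 1])
W, theta = perceptron_learning_rule(X, y)
```

When the target is linearly separable from the source units, the convergence theorem guarantees this loop exits; otherwise the sweep limit plays the role of the "impatience" parameters discussed later.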
Back-propagation [Rumelhart et al 1986] circumvents this "credit-assignment"
problem by dealing only with networks of continuous valued units, whose response
function is also continuous (sigmoid). "Learning" consists of a gradient-descent
type minimization of a cost function that measure the deviation of actual outputs
from the required ones, in the space of weights Wij, 0i. A new version of BP, "back
propagation of desired states", which bears some similarity to our algorithm, has
recently been introduced [Plaut 1987]. See also Ie Cun [1985] and Widrow and
Winter [1988] for related methods.
Our algorithm views the internal representations associated with various inputs
as the basic independent variables of the learning process. This is a conceptually
plausible assumption; in the course of learning a biological or artificial system should
form maps and representations of the external world. Once such representations
are formed, the weights can be found by simple and local Hebbian learning rules
such as the PLR. Hence the problem of learning becomes one of searching for proper
internal representations, rather than one of minimization. Failure of the PLR to
converge to a solution is used as an indication that the current guess of internal
representations needs to be modified.
II. THE ALGORITHM
If we know the internal representations (e.g. the states taken by the hidden
layer when patterns from the training set are presented), the weights can be found
by the PLR. This way the problem of learning becomes one of choosing proper
internal representations, rather than of minimizing a cost function by varying the
values of weights. To demonstrate our approach, consider the classification problem
with output values S^{out,μ} = ξ^{out,μ}, required in response to μ = 1, ..., M input
patterns. If a solution is found, it first maps each input onto an internal representation generated on the hidden layer, which, in turn, produces the correct output.
Now imagine that we are not supplied with the weights that solve the problem;
however the correct internal representations are revealed. That is, we are given a
table with M rows, one for each input. Every row has H bits ξ_i^{h,μ}, for i = 1, ..., H,
specifying the state of the hidden layer obtained in response to input pattern μ.
One can now view each hidden-layer cell i as the target cell of the PLR, with the
N inputs viewed as source. Given sufficient time, the PLR will converge to a set
of weights Wi,j, connecting input unit j to hidden unit i, so that indeed the input-output association that appears in column i of our table will be realized. In a
similar fashion, the PLR will yield a set of weights Wi, in a learning process that
uses the hidden layer as source and the output unit as target. Thus, in order to
solve the problem of learning, all one needs is a search procedure in the space of
possible internal representations, for a table that can be used to generate a solution.
Updating of weights can be done in parallel for the two layers, using the current
table of internal representations. In the present algorithm, however, the process is
broken up into four distinct stages:
1. SETINREP: Generate a table of internal representations {ξ_i^{h,μ}} by presenting
each input pattern from the training set and calculating the state on the hidden
layer, using Eq. (1a), with the existing couplings Wij and θ_i.
2. LEARN23: The hidden layer cells are used as source, and the output as the
target unit of the PLR. The current table of internal representations is used as
the training set; the PLR tries to find appropriate weights Wi and e to obtain the
desired outputs. If a solution is found, the problem has been solved. Otherwise
stop after I23 learning sweeps, and keep the current weights, to use in INREP.
3. INREP: Generate a new table of internal representations, which, when used in
(1b), yields the correct outputs. This is done by presenting the table sequentially,
row by row, to the hidden layer. If for row ν the wrong output is obtained, the
internal representation ξ^{h,ν} is changed. Having the wrong output means that the
"field" produced by the hidden layer on the output unit, h^{out,ν} = Σ_j W_j ξ_j^{h,ν}, is
either too large or too small. We then randomly pick a site j of the hidden layer,
and try to flip the sign of ξ_j^{h,ν}; if h^{out,ν} changes in the right direction, we replace
the entry of our table, i.e.

ξ_j^{h,ν} → -ξ_j^{h,ν} .
We keep picking sites and changing the internal representation of pattern v until
the correct output is generated. We always generate the correct output this way,
provided Σ_j |W_j| > |ξ^{out}| (as is the case for our learning process in LEARN23).
This procedure ends with a modified table which is our next guess of internal
representations.
4. LEARN12: Apply the PLR with the first layer serving as source, treating
every hidden layer site separately as target. Actually, when an input from the
training set is presented to the first layer, we first check whether the correct result
is produced on the output unit of the network. If we get wrong overall output, we
use the PLR for every hidden unit i, modifying weights incident on i according
to (2), using column i of the table as the desired states of this unit. If input v
does yield the correct output, we insert the current state of the hidden layer as the
internal representation associated with pattern v, and no learning steps are taken.
We sweep in this manner the training set, modifying weights Wij, (between input
and hidden layer), hidden-layer thresholds θ_i, and, as explained above, internal
representations. If the network has achieved error-free performance for the entire
training set, learning is completed. If no solution has been found after 112 sweeps
of the training set, we abort the PLR stage, keep the present values of Wij, OJ, and
start SETINREP again.
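The INREP flip search of stage 3 can be sketched for the single-output case as follows (our own hedged reconstruction; function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def sign(x):
    # Threshold unit of rule (1); treat 0 as +1.
    return np.where(x >= 0, 1, -1)

def inrep(table, W, theta, targets, rng):
    """INREP stage, single-output case: flip hidden bits of each row
    until the (fixed) hidden-to-output weights produce the target.

    table    -- M x H matrix of +/-1 internal representations
    W, theta -- hidden-to-output weights and bias (held fixed here)
    Termination needs sum(|W|) > |target|, as noted in the text.
    """
    table = table.copy()
    for nu, target in enumerate(targets):
        while sign(W @ table[nu] + theta) != target:
            j = rng.integers(len(W))          # pick a random hidden site
            delta = -2 * table[nu, j] * W[j]  # field change if bit j flips
            if delta * target > 0:            # moves the field the right way
                table[nu, j] = -table[nu, j]  # accept the flip
    return table

rng = np.random.default_rng(0)
W, theta = np.array([1.0, 1.0, 1.0]), 0.0
targets = np.array([1, -1])
table = np.array([[-1, -1, -1], [1, 1, 1]])  # both rows initially wrong
table = inrep(table, W, theta, targets, rng)
```

Rejected flips simply loop again, so only moves that push the output field toward the target are kept, exactly as in the description above.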
This is a fairly complete account of our procedure (see also Grossman et al
[1988]). There are a few details that need to be added.
a) The "impatience" parameters: I12 and I23, which are rather arbitrary, are
introduced to guarantee that the PLR stage is aborted if no solution is found. This
is necessary since it is not clear that a solution exists for the weights, given the
current table of internal representations. Thus, if the PLR stage does not converge
within the time limit specified, a new table of internal representations is formed.
The parameters have to be large enough to allow the PLR to find a solution (if
one exists) with sufficiently high probability. On the other hand, too large values
are wasteful, since they force the algorithm to execute a long search even when
no solution exists. Therefore the best values of the impatience parameters can be
determined by optimizing the performance of the network; our experience indicates,
however, that once a "reasonable" range of values is found, performance is fairly
insensitive to the precise choice.
b) Integer weights: In the PLR correction step, as given by Eq.2, the size of
ΔW is constant. Therefore, when using binary units, it can be scaled to unity (by
setting η = 0.5) and one can use integer Wi,j's without any loss of generality.
c) Optimization: The algorithm described uses several parameters, which should
be optimized to get the best performance. These parameters are: I12 and I23 - see
section (a) above; Imax - time limit, i.e. an upper bound to the total number of
training sweeps; and the PLR training parameters - i.e. the increment of the weights
and thresholds during the PLR stage. In the PLR we used values of η ≈ 0.1 [see
Eq. (2)] for the weights, and η ≈ 0.05 for thresholds, whereas the initial (random)
values of all weights were taken from the interval (-0.5,0.5), and thresholds from
(-0.05,0.05). In the integer weights program, described above, these parameters are
not used.
d) Treating Multiple Outputs: In the version of INREP described above, we
keep flipping the internal representations until we find one that yields the correct
output, i.e. zero error for the given pattern. This is not always possible when using
more than one output unit. Instead, we can allow only for a pre-specified number
of attempted flips, I_in, and go on to the next pattern even if vanishing error was
not achieved. In this modified version we also introduce a slightly different, and less
"restrictive" criterion for accepting or rejecting a flip. Having chosen (at random)
a hidden unit i, we check the effect of flipping the sign of ξ_i^{h,ν} on the total output
error, i.e. the number of wrong bits (and not on the output field, as described
above). If the output error is not increased, the flip is accepted and the table of
internal representations is changed accordingly.
This modified algorithm is applicable for multiple-output networks. Results of
preliminary experiments with this version are presented in the next section.
III. PERFORMANCE OF THE ALGORITHM
The "time" parameter that we use for measuring performance is the number of
sweeps through the training set of M patterns needed in order to find the solution.
Namely, how many times each pattern was presented to the network. In each cycle
of the algorithm there are I12 + I23 such sweeps. For each problem, and each
parameter choice, an ensemble of many independent runs, each starting with a
different random choice of initial weights, is created. In general, when applying a
learning algorithm to a given problem, there are cases in which the algorithm fails
to find a solution within the specified time limit (e.g. when BP get stuck in a local
minimum), and it is impossible to calculate the ensemble average of learning times.
Therefore we calculate, as a performance measure, either the median number of
sweeps, t_m, or the "inverse average rate", τ, as defined in Tesauro and Janssen
[1988].
The first problem we studied is contiguity: the system has to determine whether
the number of clumps (i.e. contiguous blocks) of +1 's in the input is, say, equal to
2 or 3. This is called [Denker et al 1987] the "2 versus 3" clumps predicate. We
used, as our training set, all inputs that have 2 or 3 clumps, with learning cycles
parametrized by I12 = 20 and I23 = 5. Keeping N = 6 fixed, we varied H; 500
cases were used for each data point of Fig. 1.
[Figure 1 plot omitted: t_m vs. H, with data points for BP (x) and CHIR (<>).]
Figure 1. Median number of sweeps t m , needed to train a network of N = 6
input units, over an exhaustive training set, to solve the "2 vs 3" clumps predicate,
plotted against the number of hidden units H. Results for back-propagation [Denker
et al 1987] (x) and this work (<>) are shown.
In the next problem, symmetry, one requires sout = 1 for reflection-symmetric
inputs and -1 otherwise. This can be solved with H ≥ 2 hidden units. Fig. 2
presents, for H = 2, the median number of exhaustive training sweeps needed to
solve the problem, vs input size N. At each point 500 cases were run, with I12 = 10
and I23 = 5. We always found a solution in less than 200 cycles.
[Figure 2 plot omitted: t_m vs. N.]

Figure 2. Median number of sweeps t_m needed to train networks on symmetry (with H = 2).
In the Parity problem one requires S^{out} = 1 for an even number of +1 bits in
the input, and -1 otherwise. In order to compare performance of our algorithm to
that of BP, we studied the Parity problem, using networks with an architecture of
N : 2N : 1, as chosen by Tesauro and Janssen [1988].
We used the integer version of our algorithm, briefly described above. In this
version of the algorithm the weights and thresholds are integers, and the increment
size, for both thresholds and weights, is unity. As an initial condition, we chose
them to be +1 or -1 randomly. In the simulation of this version, all possible input
patterns were presented sequentially in a fixed order (within the perceptron learning
sweeps). The results are presented in Table 1. For all choices of the parameters
(I12, I23) that are mentioned in the table, our success rate was 100%. Namely, the
algorithm didn't fail even once to find a solution in less than the maximal number
of training cycles Imax specified in the table. Results for BP, τ(BP) (from Tesauro
and Janssen 1988) are also given in the table. Note that BP does get caught in
local minima, but the percentage of such occurrences was not reported.
Learning by Choice of Internal Representations
For testing the multiple output version of the algorithm we use8 the combined
parity and symmetry problem; the network has two output units, both connected to
all hidden units. The first output unit performs the parity predicate on the input,
and the second performs the symmetry predicate. The network architecture was
N:2N:2 and the results for N=4 .. 7 are given in Table 2. The choice of parameters
is also given in that table.
N | (I12, I23)   | Imax | t_m  | τ(CHIR) | τ(BP)
3 | (8,4)        | 10   | 3    | 3       | 39
4 | (9,3)(6,6)   | 20   | 4    | 4       | 75
5 | (12,4)(9,6)  | 40   | 8    | 6       | 130
6 | (12,4)(10,5) | 120  | 19   | 9       | 310
7 | (12,4)(15,5) | 240  | 290  | 30      | 800
8 | (20,10)      | 900  | 2900 | 150     | 2000
9 | (20,10)      | 900  | 2400 | 1300    | -
Table 1. Parity with N:2N:1 architecture.
N | I12 | I23 | I_in | Imax | t_m  | τ
4 | 12  | 8   | 7    | 40   | 50   | 33
5 | 14  | 7   | 7    | 400  | 900  | 350
6 | 18  | 9   | 7    | 900  | 5250 | 925
7 | 40  | 20  | 7    | 900  | 6000 | 2640
Table 2. Parity and Symmetry with N:2N:2 architecture.
IV. DISCUSSION
We have presented a learning algorithm for two-layer perceptrons, that searches
for internal representations of the training set, and determines the weights by the
local, Hebbian perceptron learning rule. Learning by choice of internal representation may turn out to be most useful in situations where the "teacher" has some
information about the desired internal representations.
We demonstrated that our algorithm works well on four typical problems, and
studied the manner in which training time varies with network size. Comparisons
with back-propagation were also made. it should be noted that a training sweep
involves much less computations than that of back-propagation. We also presented
a generalization of the algorithm to networks with multiple outputs, and found
that it functions well on various problems of the same kind as discussed above. It
appears that the modification needed to deal with multiple outputs also enables us
to solve the learning problem for network architectures with more than one hidden
layer.
At this point we can offer only very limited discussion of the interesting question - why does our algorithm work at all? That is, how come it finds correct
internal representations (e.g. "tables") while these constitute only a small fraction
of the total possible number (2^{H·2^N})? The main reason is that our procedure actually does not search this entire space of tables. This large space contains a small
subspace, T, of "target tables", i.e. those that can be obtained, for all possible
choices of Wij and θ_i, by rule (1), in response to presentation of the input patterns.
Another small subspace S, is that of the tables that can potentially produce the
required output. Solutions of the learning problem constitute the space T n S.
Our algorithm iterates between T and S, executing also a "walk" (induced by the
modification of the weights due to the PLR) within each.
An appealing feature of our algorithm is that it can be implemented in a
manner that uses only integer-valued weights and thresholds. This discreteness
makes the analysis of the behavior of the network much easier, since we know
the exact number of bits used by the system in constructing its solution, and do
not have to worry about round-off errors. From a technological point of view, for
hardware implementation it may also be more feasible to work with integer weights.
We are extending this work in various directions. The present method needs, in
the learning stage, M H bits of memory: internal representations of all M training
patterns are stored. This feature is biologically implausible and may be technologically limiting; we are developing a method that does not require such memory.
Other directions of current study include extensions to networks with continuous
variables, and to networks with feed-back.
References
Denker J., Schwartz D., Wittner B., Solla S., Hopfield J.J., Howard R. and Jackel
L. 1987, Complex Systems 1, 877-922
Grossman T., Meir R. and Domany E . 1988, Complex Systems in press.
Hebb D.O. 1949, The Organization of Behavior, J. Wiley, N.Y.
Le Cun Y. 1985, Proc. Cognitiva 85, 593
Lewis P.M. and Coates C.L. 1967, Threshold Logic. (Wiley, New York)
Minsky M. and Papert S. 1988, Perceptrons. (MIT, Cambridge).
Plaut D.C., Nowlan S.J. and Hinton G.E. 1987, Tech. Report CMU-CS-86-126
Rosenblatt F. Principles of neurodynamics. (Spartan, New York, 1962)
Rumelhart D.E., Hinton G.E. and Williams R.J. 1986, Nature 323,533-536
Tesauro G. and Janssen H. 1988, Complex Systems 2, 39
Widrow B. and Winter R. 1988, Computer 21, No.3, 25
Representing Face Images for Emotion
Classification
Curtis Padgett
Department of Computer Science
University of California, San Diego
La Jolla, CA 92034
Garrison Cottrell
Department of Computer Science
University of California, San Diego
La Jolla, CA 92034
Abstract
We compare the generalization performance of three distinct representation schemes for facial emotions using a single classification
strategy (neural network). The face images presented to the classifiers are represented as: full face projections of the dataset onto
their eigenvectors (eigenfaces); a similar projection constrained to
eye and mouth areas (eigenfeatures); and finally a projection of
the eye and mouth areas onto the eigenvectors obtained from 32x32
random image patches from the dataset. The latter system achieves
86% generalization on novel face images (individuals the networks
were not trained on) drawn from a database in which human subjects consistently identify a single emotion for the face.
1 Introduction
Some of the most successful research in machine perception of complex natural
image objects (like faces) has relied heavily on reduction strategies that encode
an object as a set of values that span the principal component sub-space of the
object 's images [Cottrell and Metcalfe, 1991, Pentland et al., 1994]. This approach
has gained wide acceptance for its success in classification, for the efficiency in which
the eigenvectors can be calculated, and because the technique permits an implementation that is biologically plausible. The procedure followed in generating these
face representations requires normalizing a large set of face views ("mug-shots") and
from these, identifying a statistically relevant sub-space. Typically the sub-space is
located by finding either the eigenvectors of the faces [Pentland et al., 1994] or the
weights of the connections in a neural network [Cottrell and Metcalfe, 1991].
In this work, we classify face images based on their emotional content and examine
how various representational strategies impact the generalization results of a classifier. Previous work using whole face representations for emotion classification by
Cottrell and Metcalfe [Cottrell and Metcalfe, 1991] was less encouraging than results obtained for face recognition. We seek to determine if the problem in Cottrell
and Metcalfe's work stems from bad data (i.e., the inability of the undergraduates
to demonstrate emotion), or an inadequate representation (i.e. eigenfaces).
Three distinct representations of faces are considered in this work- a whole face
representation similar to that used in previous work on recognition, sex, and emotion [Cottrell and Metcalfe, 1991]; a more localized representation based on the eyes
(eigeneyes and eigenmouths) and mouth [Pentland et al., 1994]; and a representation of the eyes and mouth that makes use of basis vectors obtained by principal components of random image blocks. By examining the generalization rate of
the classifiers for these different face representations, we attempt to ascertain the
sensitivity of the representation and its potential for broader use in other vision
classification problems.
2 Face Data
The dataset used in Cottrell and Metcalfe's work on emotions consisted of the faces
of undergraduates who were asked to pose for particular expressions. However,
feigned emotions by untrained individuals exhibit significant differences from the
prototypical face expression [Ekman and Friesen, 1977]. These differences often result in disagreement between the observed emotion and the expression the actor
is attempting to feign. A feigned smile for instance, differs around the eyes when
compared with a " natural" smile. The quality of the displayed emotion is one of
the reasons cited by Cottrell and Metcalfe for the poor recognition rates achieved
by their classifier.
To reduce this possibility, we made use of a validated facial emotion database (Pictures of Facial Affect) assembled by Ekman and Friesen [Ekman and Friesen, 1976].
Each of the face images in this set exhibits a substantial agreement between the
labeled emotion and the observed response of human subjects. The actors used in
this database were trained to reliably produce emotions using Facial Action Coding
System [Ekman and Friesen, 1977] and their images were presented to undergraduates for testing. The agreement between the emotion the actor was required to
express and the students' observations was at least 70% on all the images incorporated in the database. We digitized a total of 97 images from 12 individuals (6
male, 6 female). Each portrays one of 7 emotions: happy, sad, fear, anger, surprise,
disgust or neutral. With the exception of the neutral faces, each image in the set is
labeled with a response vector of the remaining six emotions indicating the fraction
of total respondents classifying the image with a particular emotion.
Each of the images was linearly stretched over the 8 bit greyscale range to reduce lighting variations. Although care was taken in collecting the original images,
natural variations in head size and the mouth's expression resulted in significant
variation in the distance between the eyes (2.7 pixels) and in the vertical distance
from the eyes to the mouth (5.0 pixels). To achieve scale invariance, each image was
scaled so that prominent facial features were located in the same image region. Eye
and mouth templates were constructed from a number of images, and the most correlated template was used to localize the respective feature. Similar techniques have
been employed in previous work on faces [Brunelli and Poggio , 1993] . Examples of
the normalized images and typical facial expressions can be found in Figure 1.
Figure 1: The image regions from which the representations are derived. Image A
is a typical normalized and cropped image used to generate the full face eigenvectors. Image B depicts the feature regions from which the feature eigenvectors are
calculated. Image C indicates each of the block areas projected onto the random
block eigenvectors.
3 Representation
From the normalized database, we develop three distinct representations that form
independent pattern sets for a single classification scheme. The selected representations differ in their scope (features or whole face) and in the nature of the sub-space
(eigen- faces/features or eigenvectors of random image patches). The more familiar
representational schemes (eigenfaces, eigenfeatures) are based on PCA of aligned
features or faces. They have been shown to provide a reasonably compact representation for recognition purposes but little is known about their suitability for other
classification tasks.
Random image patches are used to identify an alternative sub-space from which a
set of localized face feature patterns are generated. This space is different in that
the sub-space is more general, the variance captured by the leading eigenvectors is
derived from patches drawn randomly over the set of face images. As we seek to
develop generalizations across the rather small portion of image space containing
faces or features , perturbations in this space will hopefully reflect more about class
characteristics than individual distinctions.
For each of the pattern sets, we normalized the resultant set of values obtained
from their projections on the eigenvectors by their standard deviation to produce Z
scores. The Z score obtained from each image constitutes a single input to the neural
network classifier. The highest valued eigenvectors typically contain more average
features so that presumably they would be more suitable for object classification.
All the representations will make use of the top k principal components.
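The projection-and-normalization step above can be sketched as follows. The random data stand in for the normalized face images, and the variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.normal(size=(97, 64 * 64))   # stand-in for 97 normalized face images

# Principal components of the mean-centered image set.
mean = faces.mean(axis=0)
centered = faces - mean
# SVD gives the eigenvectors of the covariance matrix as rows of vt.
u, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 40                                   # keep the top k principal components
proj = centered @ vt[:k].T               # "eigenface" coordinates of each image

# Normalize each projection by its standard deviation to get Z scores,
# as done for the classifier inputs.
z = proj / proj.std(axis=0)
print(z.shape)                           # (97, 40)
```

Each row of `z` would form one input pattern to the classifier.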
The full-faced pattern has proved to be quite useful in identification and the
same techniques using face features have also been valuable [Pentland et al., 1994,
Cottrell and Metcalfe, 1991]. However representations useful for identification of
individuals may not be suitable for emotion recognition. In determining the appropriate emotion, structural differences in faces need to be suppressed. One way
to accomplish this is to eliminate portions of the face image where variation provides little information with respect to emotion. Local changes in facial muscles
around the eyes and mouth are generally associated with our perception of emotions [Ekman and Friesen, 1977]. The full face images presumably contain much
information that is simply irrelevant to the task at hand which could impact the
ability of the classifier to uncover the signal.
Representing Face Images for Emotion Classification
897
The feature based representations are derived from local windows around the eyes
and mouth of the normalized whole face images (see Fig. IB). The eigenvectors of
the feature sub-space are determined independently for each feature (left/right eye
and mouth). A face pattern is generated by projecting the particular facial features
on their respective eigenvectors.
The random block pattern set is formed from image blocks extracted around the
feature locations (see Fig. lC). The areas around each eye are divided into two vertically overlapping blocks of size 32x32 and the mouth is sectioned into three. However, instead of performing PCA on each individual block or all of them together,
a more general PCA of random 32x32 blocks taken over the entire image was used
to generate the eigenvectors. We used random blocks to reduce the uniqueness of a
projection for a single individual and provide a more reasonable model of the early
visual system. The final input pattern consists of the normalized projection of the
seven extracted blocks for the image on the top n principal components.
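The random-block basis construction might be sketched as follows. The images, patch count, and block positions are invented stand-ins, but the shapes mirror the paper's 32x32 blocks and 15 projections per block (7 x 15 = 105 inputs).

```python
import numpy as np

rng = np.random.default_rng(2)
images = rng.normal(size=(97, 128, 128))   # stand-in normalized face images

# Sample random 32x32 blocks across the whole image set.
patches = np.array([
    images[rng.integers(97)][r:r + 32, c:c + 32].ravel()
    for r, c in rng.integers(0, 128 - 32, size=(500, 2))
])

# Shared eigenvector basis of the random patches.
patches = patches - patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches, full_matrices=False)
basis = vt[:15]                             # top 15 eigenvectors, shared by all blocks

# Project seven fixed (here invented) feature blocks of one image onto the basis.
block_corners = [(20, 20), (20, 70), (40, 20), (40, 70), (80, 30), (80, 55), (80, 80)]
img = images[0]
pattern = np.concatenate(
    [basis @ img[r:r + 32, c:c + 32].ravel() for r, c in block_corners])
print(pattern.shape)                        # 7 blocks x 15 projections = (105,)
```

Using one basis for all blocks is what makes the representation "more general" than per-feature eigenvectors.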
4 Classifier design and training
The principal goal of classification for this study is to examine how the different representational spaces facilitate a classifiers ability to generalize to novel individuals.
Comparing expected recognition rate error using the same classification technique
with different representations should provide an indication of how well the signal
of interest is preserved by the respective representation. A neural network with
a hidden layer employing a non-linear activation function (sigmoid) is trained to
learn the input-output mapping between the representation of the face image and
the associated response vector given by human subjects.
A simple, fully connected, feed-forward neural network containing a single hidden
layer with 10 nodes, when trained using back propagation, is capable of correctly
classifying the input of training sets from each of the three representations (tested
for pattern sizes up to 140 dimensions). The architecture of the network is fixed
for a particular input size (based on the number of projections on the respective
sub-space) and the generalization of the network is found on a set of images from a
novel individual. An overview of the network design is shown in Fig. 2.
To minimize the impact of choosing a poor hold out set from the training set, each
of the 11 individuals in the training set was in turn used as a hold out. The results of
the 11 networks were then combined to evaluate the classification error on the test
set. A number of different techniques are possible: winner take all, weighted average
output, voting, etc. The method that we found to consistently give the highest
generalization rate involved combining Z scores from the 11 networks. The average
output for each possible emotion across all the networks was calculated along with
its deviation over the entire training set. These values were used to normalize each
output of the 11 networks and the highest weighted sum for a particular input was
the associated emotion.
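The Z-score combination of the 11 hold-out networks can be sketched as below; the network outputs and training statistics here are random stand-ins (in the paper the means and deviations come from the training set).

```python
import numpy as np

rng = np.random.default_rng(3)
n_nets, n_emotions = 11, 6

# Stand-ins for each network's per-output mean and standard deviation,
# measured over the training set.
train_mean = rng.uniform(0.2, 0.5, size=(n_nets, n_emotions))
train_std = rng.uniform(0.05, 0.2, size=(n_nets, n_emotions))

def classify(outputs):
    """Normalize each network's outputs to Z scores, sum across the 11
    networks, and return the index of the highest-scoring emotion."""
    z = (outputs - train_mean) / train_std
    return int(np.argmax(z.sum(axis=0)))

outputs = rng.uniform(size=(n_nets, n_emotions))   # raw outputs for one test pattern
print(classify(outputs))
```

The normalization keeps one over-confident network from dominating the ensemble's vote.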
Due to the limited amount of data available for testing and training, a cross-validation technique using each set of an individual's images for testing was employed to increase the confidence of the generalization measurement. Thus, for each
individual, 11 networks were combined to evaluate the generalization on the single
test individual, and this procedure is repeated for all 12 individuals to give an average generalization error. This results in a total of 132 networks to evaluate the entire
database. A single trial consisted of the generalization rate obtained over the whole
database for a particular size of input pattern . By varying the initial weights of
the network, we can determine the expected generalization performance of this type
[Figure 2 diagram: ensemble of networks trained against human responses.]
Figure 2: The processing path used to evaluate the pattern set of each representation
scheme. The original image data (after normalization) is used to generate the
eigenvectors and construct the pattern sets. Human responses are used in training
the classifiers and determining generalization percentages on the test data.
classifier on each representation. The number of projections on the relevant space is
also varied to determine a generalization curve for each representation . Constructing, training, and evaluating the 132 networks takes approximately 2 minutes on a
SparcStation 10 for input pattern size of 15 and 4 minutes for an input pattern size
of 80.
5 Results
Fig. 3 provides the expected generalization achieved by the neural network architecture initially seeded with small random weights for an increasing number of
projections in the respective representational spaces. Each data point represents
the average of 20 trials; 1σ error bars show the amount of error with respect to
the mean. The curve (generalization rate vs. input pattern size) was evaluated at
6 points for the whole face and at 8 points for each feature based approach. The
eigenfeature representation made use of up to 40 eigenvectors for the three regions
while the random block representation made use of up to 17 eigenvectors for each
of its seven regions .
For the most part, all the representations show improvement as the number of
projections increase. Variations as input size increases are most likely due to a
combination of two factors: lower signal to noise ratios (SNR) for higher order projections; and the increasing number of parameters with a fixed number of patterns,
making generalization difficult. The highest average recognition rate achieved by
the neural network ensembles is 86%, found using the random block representation with 15 projections per block. The results indicate that the generalization
rate for emotion classification varies significantly depending on the representational
strategy. Both local feature-based approaches (eigenfeatures and random block)
did significantly better over their shared range than the eigenface representation.
Over most of the range, the random block representation is clearly superior to the
eigenfeature representation even though both are derived from the same image area.
[Figure 3 plot: generalization rate (roughly 0.7 to 0.9) versus number of inputs (projections on eigenvectors, 0 to 140), with curves labeled "random block" and "eigenface".]
Figure 3: Generalization curves for feature-based representation and full-face representation.
6 Discussion
Fig. 3 clearly demonstrates that reasonable recognition rates can be obtained for
novel individuals using representational techniques that were found useful for identity. The 86% generalization rate achieved by the neural network ensemble using random block patterns with 105 projections compares favorably with the results obtained from techniques that use an expression sequence (neutral to expression) [Mase, 1991, Yacoob and Davis, 1996, Bartlett et al., 1996]. Such schemes
make use of a neutral mask which enhances the sequence's expression by simple
subtraction, a technique that is not possible on novel, static face images. That our
technique works as well or better indicates the possibility that the human visual
system need not rely on difference image strategies over sequences of images in classifying emotions. As many psychological studies are performed on static images of
individuals, models that can accommodate this aspect of emotion recognition can
make predictions that directly guide research [Padgett et al., 1996] .
As for the suitability of the various representations for fine grained discrimination
over different individual objects (as required by emotion classification), Fig. 3 clearly
demonstrates the benefits accrued by concentrating on facial features important to
emotion . The generalization of the trained networks making use of the two local
feature-based representations averages 6-15% higher than do the networks trained
using projections on the eigenfaces. The increased performance can be attributed
to a better signal to noise ratio for the feature regions. As much of the face is rigid
(e.g. the chin and forehead), these regions provide little in the way of information
useful in classifying emotions. However, there are substantial differences in these
areas between individuals, which will be expressed by the principal component
analysis of the images and thus reflected in the projected values. These variations
are essentially noise with respect to emotion recognition making it more difficult
for the classifier to extract useful generalizations during learning.
The final point is the superiority of the random block representation over the range
examined. One possible explanation for its significant performance edge is that
major feature variations (e.g. open mouth, open eyes, etc.) are more effectively
preserved by this representation than the eigenfeature approach, which covers the
same image area. Due to individual differences in mouth/eye structure, one would
expect that many of the eigenvectors of the feature space would be devoted to this
variance. Facial expressions could be substantially orthogonal to this variance, so
that information pertinent to emotion discrimination is effectively hidden. This of
course would imply that the eigenfeature representation should be better than the
random block representation for face recognition purposes. However, this is not
the case. Nearest neighbor classification of individuals using the same pattern sets
shows that the random block representation does better for this task as well (results
not shown). We are currently developing a noise model that looks promising as an
explanation for this phenomenon.
7 Conclusion
We have demonstrated that average generalization rates of 86% can be obtained
for emotion recognition on novel individuals using techniques similar to work done
in face recognition. Previous work on emotion recognition has relied on image
sequences and obtained recognition rates of nearly the same generalization. The
model we developed here is potentially of more interest to researchers in emotion
that make use of static images of novel individuals in conducting their tests. Future
work will compare aspects of the network model with human performance.
References
[Bartlett et al., 1996] Bartlett, M., Viola, P., Sejnowski, T ., Larsen, J., Hager, J.,
and Ekman, P. (1996). Classifying facial action. In Touretzky, D., Mozer, M. ,
and Hasselmo, M., editors, Advances in Neural Information Processing Systems
8, Cambridge, MA . MIT Press.
[Brunelli and Poggio, 1993] Brunelli, R. and Poggio, T. (1993). Face recognition:
Feature versus templates. IEEE Trans. Patt. Anal. Machine Intell., 15(10).
[Cottrell and Metcalfe, 1991] Cottrell, G . W. and Metcalfe, J. (1991). Empath:
Face, gender and emotion recognition using holons. In Lippman, R., Moody, J .,
and Touretzky, D., editors, Advances in Neural Information Processing Systems
3, pages 564-571, San Mateo. Morgan Kaufmann.
[Ekman and Friesen, 1976] Ekman , P. and Friesen, W. (1976). Pictures of facial
affect.
[Ekman and Friesen, 1977] Ekman, P. and Friesen, W. (1977). Facial Action Coding
System. Consulting Psychologists, Palo Alto, CA.
[Mase, 1991] Mase, K. (1991). Recognition of facial expression from optical flow.
IEICE Transactions, 74(10):3474-3483 .
[Padgett et al., 1996] Padgett, C., Cottrell, G., and Adolphs, R. (1996). Categorical
perception in facial emotion classification. In Cottrell, G., editor, Proceedings of
the 18th Annual Cognitive Science Conference, San Diego CA .
[Pentland et al., 1994] Pentland, A. P., Moghaddam, B., and Starner, T. (1994).
View-based and modular eigenspaces for face recognition. In IEEE Conference
on Computer Vision and Pattern Recognition.
[Yacoob and Davis, 1996] Yacoob, Y. and Davis, L. (1996). Recognizing human facial expressions from long image sequences using optical flow. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 18:636-642.
Early Brain Damage
Volker Tresp, Ralph Neuneier and Hans Georg Zimmermann*
Siemens AG, Corporate Technologies
Otto-Hahn-Ring 6
81730 Miinchen, Germany
Abstract
Optimal Brain Damage (OBD) is a method for reducing the number of weights in a neural network. OBD estimates the increase in
cost function if weights are pruned and is a valid approximation
if the learning algorithm has converged into a local minimum. On
the other hand it is often desirable to terminate the learning process before a local minimum is reached (early stopping). In this
paper we show that OBD estimates the increase in cost function
incorrectly if the network is not in a local minimum. We also show
how OBD can be extended such that it can be used in connection with early stopping. We call this new approach Early Brain
Damage, EBD. EBD also makes it possible to revive already pruned weights.
We demonstrate the improvements achieved by EBD using three
publicly available data sets.
1 Introduction
Optimal Brain Damage (OBD) was introduced by Le Cun et al. (1990) as a method
to significantly reduce the number of weights in a neural network. By reducing the
number of free parameters, the variance in the prediction of the network is often
reduced considerably which -in some cases- leads to an improvement in generalization performance of the neural network. OBD might be considered a realization
of the principle of Occam's razor which states that the simplest explanation (of the
training data) should be preferred to more complex explanations (requiring more
weights).
If E is the cost function which is minimized during training, OBD calculates the
{Volker. Tresp, Ralph. Neuneier, Georg.Zimmermann}@mchp.siemens.de.
saliency of each parameter w_i, defined as

OBD(w_i) = A(w_i) = (1/2) (∂²E/∂w_i²) w_i².
Weights with a small OBD(w_i) are candidates for removal. OBD(w_i) has the intuitive meaning of being the increase in cost function if weight w_i is set to zero, under the assumptions
- that the cost function is quadratic,
- that the cost function is "diagonal", which means it can be written as E = Bias + (1/2) Σ_i h_i (w_i − w_i*)², where {w_i*}_{i=1}^m are the weights at a (local) optimum of the cost function (Figure 1) and the h_i and Bias are parameters which depend on the training data set,
- and that w_i ≈ w_i*.
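Under these assumptions the saliency reduces to the curvature times the squared weight. A toy sketch with invented curvatures and weights:

```python
import numpy as np

# Invented diagonal quadratic cost: E = Bias + 1/2 * sum_i h_i * (w_i - w_star_i)^2
h = np.array([4.0, 1.0, 0.25])        # curvatures h_i at the optimum
w_star = np.array([0.5, 2.0, 0.1])    # weights at the local optimum

# OBD saliency at the optimum: A(w_i) = 1/2 * (d2E/dw_i2) * w_i^2
saliency = 0.5 * h * w_star ** 2      # here [0.5, 2.0, 0.00125]
prune = int(np.argmin(saliency))      # remove the weight that raises E least
print(prune)                          # prints 2
```

The third weight combines a small magnitude with a flat cost surface, so removing it barely changes E.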
In practice, all three assumptions are often violated but experiments have demonstrated that OBD is a useful method for weight removal.
In this paper we want to take a closer look at the third assumption, i. e. the assumption that weights are close to optimum. The motivation is that theory and practice
have shown that it is often advantageous to perform early stopping which means
that training is terminated before convergence. Early stopping can be thought of
as a form of regularization: since training typically starts with small weights, with
early stopping weights are biased towards small weights analogously to other regularization methods such as ridge regression and weight decay. According to the
assumptions in OBD we might be able to apply OBD only in heavily overtrained
networks where we lose the benefits of early stopping. In this paper we show that
OBD can be extended such that it can work together with early stopping. We call
the new criterion Early Brain Damage (EBD). As in OBD, EBD contains a number of simplifying assumptions which are typically invalid in practice. Therefore,
experimental results have to demonstrate that EBD has benefits. We validate EBD
using three publicly available data sets.
2 Theory
As in OBD we approximate the cost function locally by a quadratic function and
assume a "diagonal" form. Figure 1 illustrates that OBD(w_i) for w_i = w_i* calculates the increase in cost function if w_i is set to zero. In early stopping, where w_i ≠ w_i*, OBD(w_i) calculates the quantity denoted as A_i in Figure 1. Consider

B_i = −(∂E/∂w_i) w_i.
The saliency of weight w_i in Early Stopping Pruning is

ESP(w_i) = A_i + B_i,

an estimate of how much the cost function increases if the current w_i (i.e. w_i at early stopping) is set to zero. Finally, consider

C_i = (1/2) (∂E/∂w_i)² / (∂²E/∂w_i²).
[Figure 1 sketch: the quadratic cost E plotted against a single weight w_i, marking the optimum w_i*, the early-stopping weight, and the quantities A_i, B_i, and C_i.]
Figure 1: The figure shows the cost function E as a function of one weight w_i in the network. w_i* is the optimal weight; w_i is the weight at an early stopping point. If OBD is applied at w_i, it estimates the quantity A_i. ESP(w_i) = A_i + B_i = E(w_i) − E(w_i = 0) estimates the increase in cost function if w_i is pruned. EBD(w_i) = A_i + B_i + C_i = E(w_i*) − E(w_i = 0) is the difference in cost function if we would train to convergence and then set w_i = 0. In other words, EBD(w_i) = OBD(w_i*).
The saliency of weight w_i in EBD is

    EBD(w_i) = OBD(w_i^*) = A_i + B_i + C_i,

which estimates the increase in cost function if w_i is pruned after convergence (i.e.
EBD(w_i) = OBD(w_i^*)), but based on local information around the current value
of w_i. In this sense EBD evaluates the "potential" of w_i. Weights with a small
EBD(w_i) are candidates for pruning.
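Under the diagonal quadratic model, the three saliencies can be written in terms of the current weight w_i, the gradient g_i = ∂E/∂w_i and the curvature h_i = ∂²E/∂w_i²: A_i = h_i w_i²/2, B_i = -g_i w_i, and C_i = g_i²/(2h_i). The following NumPy sketch is our own illustration (the function and variable names are not from the paper); it computes the saliencies and checks them on an exactly quadratic one-dimensional cost, where the relations hold without approximation:

```python
import numpy as np

def saliencies(w, g, h):
    """OBD, ESP and EBD saliencies under the diagonal quadratic model.

    w : current weights (e.g. at the early stopping point)
    g : per-weight gradients dE/dw_i at w
    h : per-weight curvatures d^2E/dw_i^2 (assumed positive)
    """
    A = 0.5 * h * w**2         # OBD term: increase if w_i -> 0, ignoring the gradient
    B = -g * w                 # first-order correction away from the optimum
    C = g**2 / (2.0 * h)       # cost gap between w_i and the 1-d optimum w_i*
    return A, A + B, A + B + C  # OBD, ESP, EBD

# Check on an exactly quadratic 1-d cost E(w) = h/2 (w - w_opt)^2:
h, w_opt, w = 4.0, 1.5, 1.0
g = h * (w - w_opt)                        # dE/dw at the current weight
obd, esp, ebd = saliencies(np.array([w]), np.array([g]), np.array([h]))
E = lambda x: 0.5 * h * (x - w_opt)**2
print(esp[0], E(0.0) - E(w))       # 4.0 4.0: ESP is the increase if pruned now
print(ebd[0], E(0.0) - E(w_opt))   # 4.5 4.5: EBD is the increase if pruned at w*
```

Weights with a small EBD value are pruning candidates; conversely, a large C term marks a good revival candidate, matching the discussion in Section 3.1.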
Note that all terms required for EBD are easily calculated. With a quadratic cost
function E = Σ_{k=1}^{K} (y^k - NN(x^k))², OBD approximates (OBD-approximation)

    ∂²E/∂w_i² ≈ 2 Σ_{k=1}^{K} (∂NN(x^k)/∂w_i)²,    (1)

where (x^k, y^k)_{k=1}^{K} are the training data and NN(x^k) is the network response.
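Approximation (1) keeps only the outer-product (Gauss-Newton) part of the Hessian. For a model that is linear in its weights this part is the whole Hessian, which gives a quick sanity check; the NumPy snippet below is a sketch of ours, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # K = 50 inputs, 3 weights
y = rng.normal(size=50)
w = rng.normal(size=3)

# OBD approximation (1): d^2E/dw_i^2 ~= 2 * sum_k (dNN(x^k)/dw_i)^2.
# For the linear model NN(x) = w.x, the per-weight derivative is just x_i^k.
h_approx = 2.0 * np.sum(X**2, axis=0)

# Exact Hessian diagonal of E = sum_k (y^k - w.x^k)^2, i.e. the diagonal of
# 2 * X^T X. It coincides with (1) here because the model is linear, so the
# neglected second-derivative term of the network output vanishes.
h_exact = np.diag(2.0 * X.T @ X)
print(np.allclose(h_approx, h_exact))  # True
```

For a genuinely nonlinear network the two differ, and (1) is only an approximation.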
3 Extensions
3.1 Revival of Weights
In some cases it is beneficial to revive weights which have already been pruned.
Note that C_i exactly estimates the decrease in cost function if weight w_i is
"revived". Weights with a large C_i(w_i = 0) are candidates for revival.
3.2 Early Brain Surgeon (EBS)
After OBD or EBD is performed, the network needs to be retrained, since the
"diagonal" approximation is typically violated and there are dependencies between
weights. Optimal Brain Surgeon (OBS; Hassibi and Stork, 1993) does not use
the "diagonal" approximation and recalculates the new weights without explicit
retraining. OBS still assumes a quadratic approximation of the cost function. The
saliency in OBS is

    L_i = w_i² / (2 [H⁻¹]_{ii}),

where [H⁻¹]_{ii} is the i-th diagonal element of the inverse of the Hessian. L_i estimates the
increase in cost if the i-th weight is set to zero and all other weights are retrained.
To recalculate all weights after weight w_i is removed, apply

    w_new = w_old - (w_i / [H⁻¹]_{ii}) H⁻¹ e_i,

where e_i is the unit vector in the i-th direction.
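On an exactly quadratic cost the OBS formulas can be verified directly: the update zeroes the pruned weight, retunes the remaining ones, and the resulting cost increase equals the saliency L_i. A small NumPy sketch of ours, using an arbitrary 2x2 Hessian:

```python
import numpy as np

def obs_prune(w, H, i):
    """Prune weight i with Optimal Brain Surgeon and adjust the rest."""
    H_inv = np.linalg.inv(H)
    L = w[i]**2 / (2.0 * H_inv[i, i])      # OBS saliency L_i
    e = np.zeros_like(w)
    e[i] = 1.0                              # unit vector in the i-th direction
    w_new = w - (w[i] / H_inv[i, i]) * H_inv @ e
    return w_new, L

# Quadratic cost E(w) = 0.5 (w - w_star)^T H (w - w_star), minimized at w_star.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
w_star = np.array([1.0, -2.0])
E = lambda w: 0.5 * (w - w_star) @ H @ (w - w_star)

w_new, L = obs_prune(w_star.copy(), H, 0)   # prune w_0 starting from the optimum
print(np.isclose(w_new[0], 0.0))            # True: the pruned weight is zeroed
print(np.isclose(E(w_new) - E(w_star), L))  # True: cost increase equals L_0
```

Note how the second weight moves away from its unconstrained optimum to compensate for the pruned one, which is exactly the coupling the "diagonal" approximation ignores.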
Analogously to OBS, Early Brain Surgeon (EBS) would first calculate the optimal
weight vector using a second-order approximation of the cost function,

    w^* = w - H⁻¹ ∂E/∂w,

and then apply OBS using w^*. We did not pursue this idea any further, since our
initial experiments indicated that w^* was not estimated very accurately in practice.
Hassibi et al. (1994) achieved good performance with OBS even when weights were
far from optimal.
3.3 Approximations to the Hessian and the Gradient
Finnoff et al. (1993) have introduced the interesting idea that the relevant quantities
for OBD can be estimated from the statistics of the weight changes.
Consider the update in pattern-by-pattern gradient descent learning with a quadratic
cost function:

    Δw_i = -η ∂E^k/∂w_i = 2η (y^k - NN(x^k)) ∂NN(x^k)/∂w_i,

with E^k = (y^k - NN(x^k))², where η is the learning rate.
We assume that x^k and y^k are drawn online from a fixed distribution (which is
strictly not true, since in pattern-by-pattern learning we draw samples from a fixed
training data set). Then, using the quadratic and "diagonal" approximation of the
cost function and assuming that the noise ε in the model

    y^k = NN^*(x^k) + ε

is additive and uncorrelated with variance σ², we obtain

    E(Δw_i) = 2η (w_i^* - w_i) E((∂NN(x^k)/∂w_i)²)¹    (2)

and

    VAR(Δw_i) = VAR(2η (y^k - NN(x^k)) ∂NN(x^k)/∂w_i)
              = 4η² VAR((y^k - NN^*(x^k)) ∂NN(x^k)/∂w_i) + 4η² VAR((w_i^* - w_i) (∂NN(x^k)/∂w_i)²)
              = 4η² σ² E((∂NN(x^k)/∂w_i)²) + 4η² (w_i^* - w_i)² VAR((∂NN(x^k)/∂w_i)²),

where NN^*(x^k) is the network output with optimal weights {w_i^*}. Note that in
the OBD approximation (Equation 1)

    ∂²E/∂w_i² ≈ 2K E((∂NN(x^k)/∂w_i)²)

and

    ∂E/∂w_i ≈ 2K (w_i - w_i^*) E((∂NN(x^k)/∂w_i)²).
If we make the further assumption that ∂NN(x^k)/∂w_i is Gaussian distributed with
zero mean²,

¹ E stands for the expected value, with w_i kept at a fixed value.
² The zero-mean assumption is typically violated but might be enforced by
renormalization.
we obtain

    VAR(Δw_i) = 4η² σ² E((∂NN(x^k)/∂w_i)²) + 8η² (w_i^* - w_i)² (E((∂NN(x^k)/∂w_i)²))².    (3)

The first term in Equation 3 is a result of the residual error, which is translated
into weight fluctuations; note that weights with a large ∂²E/∂w_i² fluctuate the most.
The first term is only active when there is a residual error, i.e. σ² > 0. The second
term is non-zero independently of σ² and is due to the fact that in sample-by-sample
learning the weight updates have a random component. From Equation 2 and
Equation 3, all terms needed in EBD (i.e. ∂E/∂w_i and ∂²E/∂w_i²) are easily estimated.
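These relations are easy to exercise numerically. In the sketch below (our own construction, following the paper's assumptions) the "network" is linear, NN(x) = w x with scalar weight w, so ∂NN/∂w = x is Gaussian with zero mean; the simulated pattern-by-pattern updates Δw, with w held fixed, then reproduce Equations 2 and 3:

```python
import numpy as np

rng = np.random.default_rng(1)
eta, sigma = 0.1, 0.5          # learning rate and noise level (arbitrary choices)
w_star, w = 1.0, 0.3           # optimal weight and fixed current weight
K = 200_000

x = rng.normal(size=K)                    # dNN/dw = x for NN(x) = w*x, E[x^2] = 1
eps = rng.normal(scale=sigma, size=K)     # additive noise with variance sigma^2
y = w_star * x + eps

dw = 2.0 * eta * (y - w * x) * x          # pattern-by-pattern updates, w kept fixed

# Equation 2: E(dw) = 2 eta (w* - w) E[(dNN/dw)^2]
mean_pred = 2.0 * eta * (w_star - w)
# Equation 3 (zero-mean Gaussian derivative):
# VAR(dw) = 4 eta^2 sigma^2 E[x^2] + 8 eta^2 (w* - w)^2 (E[x^2])^2
var_pred = 4.0 * eta**2 * sigma**2 + 8.0 * eta**2 * (w_star - w)**2

print(np.isclose(dw.mean(), mean_pred, rtol=0.05))  # True
print(np.isclose(dw.var(), var_pred, rtol=0.05))    # True
```

The mean of the updates thus estimates -η ∂E^k/∂w_i in expectation, and the variance delivers the curvature information needed for the saliencies without computing second derivatives explicitly.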
4 Experimental Results
In our experiments we studied the performance of OBD, ESP and EBD in connection
with early stopping. Although theory tells us that EBD should provide the best
estimate of the increase in cost function caused by the removal of weight w_i, it is not
obvious how reliable that estimate is when the assumptions ("diagonal" quadratic
cost function) are violated. Also, we are not really interested in the correct estimate
of the increase in cost function but in a ranking of the weights. Since the assumptions
which go into OBD, EBD, and ESP (and also OBS and EBS) are questionable, the
usefulness of the new methods has to be demonstrated in practical experiments.
We used three different data sets: Breast Cancer Data, Diabetes Data, and Boston
Housing Data. All three data sets can be obtained from the UCI repository
(ftp://ics.uci.edu/pub/machine-learning-databases). The Breast Cancer Data contains
699 samples with 9 input variables consisting of cellular characteristics and
one binary output, with 458 benign and 241 malignant cases. The Diabetes Data
contains 768 samples with 8 input variables and one binary output. The Boston
Housing Data consists of 506 samples with 13 input variables which potentially
influence the housing price (output variable) in a Boston neighborhood (Harrison &
Rubinfeld, 1978).
Our procedure is as follows. The data set is divided into training data, validation
data and test data. A neural network (MLP) is trained until the error on the
validation data set starts to increase. At this point OBD, ESP and EBD are employed
and 50% of all weights are removed. After pruning, the networks are retrained until
the error on the validation set again starts to increase. At this point the results
are compared. Each experiment was repeated five times with different divisions of the
data into training data, validation data and test data, and we report averages over
those five experiments.
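The following sketch runs this procedure end to end on synthetic data. To keep it short it uses a linear model instead of an MLP and full-batch gradient descent; the data set, split sizes, learning rate, and patience are stand-ins of our own, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression task: only the first 5 of 20 inputs matter.
n, d = 600, 20
X = rng.normal(size=(n, d))
w_true = np.concatenate([rng.normal(size=5), np.zeros(d - 5)])
y = X @ w_true + 0.1 * rng.normal(size=n)
Xtr, ytr = X[:200], y[:200]      # training / validation / test split
Xva, yva = X[200:400], y[200:400]
Xte, yte = X[400:], y[400:]

mse = lambda w, A, b: np.mean((A @ w - b) ** 2)

def train_early_stopping(w, mask, eta=0.01, patience=20):
    """Full-batch gradient descent on the training MSE; stop when the
    validation MSE has not improved for `patience` steps. Entries with
    mask == 0 are kept pruned at zero."""
    best_w, best_val, bad = w.copy(), np.inf, 0
    while bad < patience:
        grad = 2.0 * Xtr.T @ (Xtr @ w - ytr) / len(ytr)
        w = (w - eta * grad) * mask
        val = mse(w, Xva, yva)
        if val < best_val - 1e-9:
            best_w, best_val, bad = w.copy(), val, 0
        else:
            bad += 1
    return best_w

mask = np.ones(d)
w = train_early_stopping(np.zeros(d), mask)
err_stop = mse(w, Xte, yte)                    # test error at early stopping

# EBD saliencies at the early stopping point (diagonal quadratic model):
g = 2.0 * Xtr.T @ (Xtr @ w - ytr) / len(ytr)   # dE/dw_i
h = 2.0 * np.mean(Xtr**2, axis=0)              # OBD approximation (1), per-sample scale
ebd = 0.5 * h * w**2 - g * w + g**2 / (2.0 * h)  # A_i + B_i + C_i

# Remove the 50% of weights with the smallest EBD saliency, then retrain.
mask[np.argsort(ebd)[: d // 2]] = 0.0
w = train_early_stopping(w * mask, mask)
err_pruned = mse(w, Xte, yte)
print(err_stop, err_pruned)                    # compare test error before/after pruning
```

With the irrelevant inputs carrying only noise-fitted weights, the smallest saliencies concentrate on them, so pruning half the weights leaves the test error essentially intact.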
Table 1 sums up the results. The first row shows the number of data points in the
training set, validation set and test set. The second row displays the test set error
at the (first) early stopping point. Rows 3 to 5 show the test set performance of OBD,
ESP and EBD at the stopping point after pruning and retraining (absolute / relative
to early stopping). In all three experiments EBD performed best, and OBD was
second best in two experiments (Breast Cancer Data and Diabetes Data). In two
experiments (Breast Cancer Data and Boston Housing Data) the performance after
pruning improved.
Table 1: Comparing OBD, ESP, and EBD.

              Boston Housing    Breast Cancer    Diabetes
Train/V/Test  168/169/169       233/233/233      256/256/256
Hidden units  10                5                3
MSE (Stopp)   0.2283            0.0340           0.1625
OBD           0.2275 / 0.997    0.0328 / 0.965   0.1652 / 1.017
ESP           0.2178 / 0.954    0.0331 / 0.973   0.1657 / 1.020
EBD           0.2160 / 0.946    0.0326 / 0.959   0.1647 / 1.014
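The relative columns of Table 1 are simply each method's post-pruning test error divided by the early-stopping error; a quick check for the EBD row:

```python
# Recompute the "relative to early stopping" columns of Table 1 for EBD.
stop = {"boston": 0.2283, "breast": 0.0340, "diabetes": 0.1625}
ebd  = {"boston": 0.2160, "breast": 0.0326, "diabetes": 0.1647}
rel = {k: round(ebd[k] / stop[k], 3) for k in stop}
print(rel)  # {'boston': 0.946, 'breast': 0.959, 'diabetes': 1.014}
```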
5 Conclusions
In our experiments, EBD showed better performance than OBD when used in conjunction
with early stopping. The improvement in performance is not dramatic, which
indicates that the rankings of the weights produced by OBD are reasonable as well.
References

Finnoff, W., Hergert, F., and Zimmermann, H. (1993). Improving model selection
by nonconvergent methods. Neural Networks, Vol. 6, No. 6.

Hassibi, B. and Stork, D. G. (1993). Second order derivatives for network pruning:
Optimal Brain Surgeon. In: Hanson, S. J., Cowan, J. D., and Giles, C. L. (Eds.),
Advances in Neural Information Processing Systems 5, San Mateo, CA: Morgan
Kaufmann.

Hassibi, B., Stork, D. G., and Wolff, G. (1994). Optimal Brain Surgeon: Extensions
and performance comparisons. In: Cowan, J. D., Tesauro, G., and Alspector, J.
(Eds.), Advances in Neural Information Processing Systems 6, San Mateo, CA:
Morgan Kaufmann.

Le Cun, Y., Denker, J. S., and Solla, S. A. (1990). Optimal brain damage. In: D.
S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2, San
Mateo, CA: Morgan Kaufmann.