Third, adaptation to ORDs, contrary to the theory, is not restricted to situations in which subjects receive reafferent visual feedback, but may also take place when subjects receive feedback generated by passive effector or whole-body movement (Singer & Day 1966; Templeton et al. 1966; Fishkin 1969). Adaptation is even possible in the complete absence of motor action (Howard et al. 1965; Kravitz & Wallach 1966).
In general, the extent to which adaptation occurs depends not on the availability of reafferent stimulation, but rather on the presence of either of two related kinds of information concerning “the presence and nature of the optical rearrangement” (Welch 1978: 24). Following Welch, we shall refer to this view as the “information hypothesis”.
One source of information present in a displaced visual array concerns the veridical directions of objects from the observer (Rock 1966: chaps. 2–4). Normally, when engaging in forward locomotion, the perceived radial direction of an object straight ahead of the observer’s body remains constant while the perceived radial directions of objects to either side undergo constant change. This pattern also obtains when the observer wears prisms that displace the retinal image to one side. Hence, “an object seen through prisms which retains the same radial direction as we approach must be seen to be moving in toward the sagittal plane” (Rock 1966: 105). On Rock’s view, at least some forms of adaptation to ORDs can be explained by our ability to detect and exploit such invariant sources of spatial information in locomotion-generated patterns of optic flow.
Another related source of information for adaptation derives from the conflict between seen and proprioceptively experienced limb position (Wallach 1968; Uhlarik & Canon 1971). When this discrepancy is made conspicuous, proponents of the information hypothesis have found that passively moved (Melamed et al. 1973), involuntarily moved (Mather & Lackner 1975), and even immobile subjects (Kravitz & Wallach 1966) exhibit significant adaptation. Although self-produced bodily movement is not necessary for adaptation to occur, it provides subjects with especially salient information about the discrepancy between sight and touch (Moulden 1971): subjects are able proprioceptively to determine the location of a moving limb much more accurately than a stationary or passively moved limb. It is the enhancement of the visual-proprioceptive conflict rather than reafferent visual stimulation, on this interpretation, that explains why active movement yields more adaptation than passive movement in Held’s experiments.
A final objection to the reafference theory concerns the end product of adaptation to ORDs. According to the theory, adaptation occurs when subjects learn new rules of sensorimotor dependence that govern how actions affect sensory inputs. There is a significant body of evidence, however, that much, if not all, adaptation rather occurs at the proprioceptive level. Stratton, summarizing the results of his experiment on mirror-based optical rearrangement, wrote:
…the principle stated in an earlier paper—that in the end we would feel a thing to be wherever we constantly saw it—can be justified in a wider sense than I then intended it to be taken…. We may now, I think, safely include differences of distance as well, and assert that the spatial coincidence of touch and sight does not require that an object in a given tactual position should appear visually in any particular direction or at any particular distance. In whatever place the tactual impression’s visual counterpart regularly appeared, this would eventually seem the only appropriate place for it to appear in. If we were always to see our bodies a hundred yards away, we would probably also feel them there. (1899: 498, our emphasis)
On this interpretation, the plasticity revealed by ORDs is primarily proprioceptive and kinaesthetic, rather than visual. Stratton’s world came to look “right side up” (1897b: 469) after adaptation to the inverted retinal image because things were felt where they were visually perceived to be—not because his “entire visual field flipped over” (Kuhn 2012 [1962]: 112). This is clear from the absence of a visual negative aftereffect when Stratton finally removed his inverting lenses at the end of his eight-day experiment:
The visual arrangement was immediately recognized as the old one of pre-experimental days; yet the reversal of everything from the order to which I had grown accustomed during the past week, gave the scene a surprising, bewildering air which lasted for several hours. It was hardly the feeling, though, that things were upside down. (1897b: 470)
Moreover, Stratton reported changes in kinaesthesis during the course of the experiment consistent with the alleged proprioceptive shift:
when one was most at home in the unusual experience the head seemed to be moving in the very opposite direction from that which the motor sensations themselves would suggest. (1907: 156)
On this view, the end product of adaptation to an ORD is a recalibration of proprioceptive position sense at one or more points of articulation in the body (see the entry on bodily awareness). As you practice reaching for a target while wearing laterally displacing prisms, for example, the muscle spindles, joint receptors, and Golgi tendon organs in your shoulder and arm continue to generate the same patterns of action potentials as before, but the proprioceptive and kinaesthetic meaning assigned to them by their “consumers” in the brain undergoes change: whereas before they signified that your arm was moving along one path through the seven-dimensional space of possible arm configurations (the human arm has seven degrees of freedom: three at the wrist, one at the elbow, and three at the shoulder), they gradually come to signify that it is moving along a different path in that kinematic space, namely, the one consistent with the prismatically distorted visual feedback you are receiving. Similar recalibrations are possible with respect to sources of information for head and eye position. After adapting to laterally displacing prisms, signals from receptors in your neck that previously signified the alignment of your head and torso, for example, may come to signify that your head is turned slightly to the side. For discussion, see Harris 1965, 1980 and Welch 1978: chap. 3.
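To make the recalibration idea concrete, here is a minimal toy model (ours, not Harris’s or Welch’s; the 11° displacement, learning rate, and number of exposure trials are arbitrary illustrative assumptions). The afferent signal never changes; only the offset with which it is read out drifts toward the visually signalled position during prism exposure, and that offset persists when the prisms come off.

```python
# Toy model of proprioceptive recalibration during prism adaptation.
# Illustrative assumptions only: an 11-degree lateral displacement, a fixed
# learning rate, and one recalibration update per reaching trial.

PRISM_SHIFT_DEG = 11.0   # hypothetical lateral displacement of the visual field
LEARNING_RATE = 0.15     # fraction of the registered discrepancy corrected per trial

def felt_direction(afferent_deg, offset_deg):
    """The 'meaning' assigned to an unchanged afferent signal: the same
    spindle/joint-receptor input is read out as a direction shifted by the
    current recalibration offset."""
    return afferent_deg + offset_deg

offset = 0.0  # current proprioceptive recalibration, in degrees

# Exposure phase: repeatedly reaching for a target while the prisms displace
# the seen position of the hand relative to where it is felt to be.
for trial in range(40):
    afferent = 0.0                               # receptors fire exactly as before
    seen_hand = afferent + PRISM_SHIFT_DEG       # visual feedback is displaced
    discrepancy = seen_hand - felt_direction(afferent, offset)
    offset += LEARNING_RATE * discrepancy        # felt position drifts toward vision

print(f"Felt direction of the unmoved arm after exposure: {offset:+.1f} deg")
# The recalibrated offset persists once the prisms are removed; its persistence
# is what shows up as an aftereffect in open-loop pointing.
```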
The enactive approach defended by J. Kevin O’Regan and Alva Noë (O’Regan & Noë 2001; Noë 2004, 2005, 2010; O’Regan 2011) is best viewed as an extension of the reafference theory. According to the enactive approach, spatially contentful, world-presenting perceptual experience depends on implicit knowledge of the way sensory stimulations vary as a function of bodily movement. “Over the course of life”, O’Regan and Noë write,
a person will have encountered myriad visual attributes and visual stimuli, and each of these will have particular sets of sensorimotor contingencies associated with it. Each such set will have been recorded and will be latent, potentially available for recall: the brain thus has mastery of all these sensorimotor sets. (2001: 945)
To see an object o as having the location and shape properties it has, it is necessary (1) to receive sensory stimulations from o and (2) to use those stimulations in order to retrieve the set of sensorimotor contingencies associated with o on the basis of past encounters. In this sense, seeing is a “two-step” process (Noë 2004: 164). It is important to emphasize, however, that the enactive approach distances itself from the idea that vision is functionally dedicated, in whole or in part, to the guidance of spatially directed actions: “Our claim”, Noë writes,
is that seeing depends on an appreciation of the sensory effects of movement (not, as it were, on the practical significance of sensation)…. Actionism is not committed to the general claim that seeing is a matter of knowing how to act in respect of or in relation to the things we see. (Noë 2010: 249)
The enactive approach also has strong affinities with the sense-data tradition. According to Noë, an object’s visually apparent shape is the shape of the 2D patch that would occlude the object on a plane perpendicular to the line of sight, i.e., the shape of the patch projected by the object on the frontal plane in accordance with the laws of linear perspective. Noë calls this the object’s “perspectival shape” (P-shape). An object’s visually apparent size, in turn, is the size of the patch projected by the object on the frontal plane. Noë calls this the object’s “perspectival size” (P-size). Appearances are “perceptually basic” (Noë 2004: 81) because in order to see an object’s actual spatial properties it is necessary both to see its 2D P-properties and to understand how they would vary (undergo transformation) with changes in one’s point of view. This conception of what it is to perceive objects as voluminous space-occupiers is closely akin to views defended by Russell (1918), Broad (1923), and Price (1950). It is also worth mentioning that the enactive approach has strong affinities to views in the phenomenological tradition that are beyond the scope of this entry (but for discussion, see Thompson 2005; Hickerson 2007; and the entry on phenomenology).
Assessment of the enactive approach is complicated by questions concerning the nature of P-properties. First, there is a tendency on the part of its main proponents to speak interchangeably of consciously perceived P-properties (or ‘looks’), on the one hand, and proximal sensory stimulations, on the other. Noë, e.g., writes:
The sensorimotor profile of an object is the way its appearance changes as you move with respect to it (strictly speaking, it is the way sensory stimulation varies as you move). (2004: 78, our emphasis)
It is far from clear how these different characterizations are to be related, however (Briscoe 2008; Kiverstein 2010). P-properties, according to the enactive approach, are distal, relational properties of the objects we see: “If there is a mind/world divide… then P-properties are on the world side of the divide” (2004: 83). Moreover, Noë clearly assumes that they are visible: “P-properties are themselves objects of sight, that is, things that we see” (2004: 83). Sensory stimulations, by contrast, are proximal, subpersonal vehicles of visual perception. They are not objects of sight. Quite different, if not incommensurable, notions of sensorimotor profile and, so, of sensorimotor knowledge would thus seem to be implied by the two characterizations.
There is also an ambiguity with the “-motor” in “sensorimotor knowledge”. On the one hand, Noë argues that perception is active in the sense that perceivers require knowledge of the proximal, sensory effects of movement. E.g., in order to see an object’s shape and size it is necessary to have certain anticipations concerning the way in which retinal stimulations caused by the object would vary as a function of one’s point of view. “This perspectival aspect”, Noë writes, “marks the place of action in perception” (Noë 2004: 34). On this conception there is no commitment to the view that vision is for the guidance of action, that vision constitutively has something to do with adapting animal behavior to the spatial layout of the distal environment (Noë 2004: 18–19). Rather, vision is active in the sense that it involves learned expectations concerning the ways in which sensory stimulations would be “perturbed” by possible bodily movements (Noë 2010: 247–248).
On the other hand, Noë adverts to a more world-engaging conception of sensorimotor knowledge in order to explain our visual experience of P-properties:
variation in looks reveals how things are. But what of the looks themselves, what of P-properties? Do we see them by seeing how they look? This would threaten to lead to infinite regress…. (2004: 87)
The solution to the regress problem is that seeing an object’s P-properties involves a kind of practical know-how. A tilted plate, e.g., looks elliptical and small from here because one has to move one’s hand in a certain way in order to indicate its shape and size in the visual field (2004: 89). Whereas seeing an object’s intrinsic properties, according to the enactive approach, requires knowledge of the way P-properties would vary as a function of movement, seeing P-properties involves knowing how one would need to move one’s body in relation to what one sees in order to achieve a certain goal.
While this seems to suggest that the first kind of sensorimotor knowledge is asymmetrically dependent on the second, Noë maintains that just the opposite is the case. “I do not wish to argue”, he writes,
that to experience something as having a certain [P-shape] is to experience it as affording a range of possible movements; rather I want to suggest that one experiences it as having a certain P-shape, and so as affording possible movements, only insofar as, in encountering it, one is able to draw on one’s appreciation of the sensorimotor patterns mediating (or that might be mediating) your relation to it. (2004: 90)
The problem with this suggestion, however, is that it leads the enactive approach directly back to the explanatory regress that the second, affordance-detecting kind of sensorimotor knowledge was introduced to avoid.
The enactive approach rests its case on three main sources of empirical support. The first derives from experiments with optical rearrangement devices (ORDs), discussed in Section 2.2 above. Hurley and Noë (2003) maintain that adaptation to ORDs only occurs when subjects relearn the systematic patterns of interdependence between active movement and reafferent visual stimulation. Moreover, contrary to the proprioceptive change theory of Stratton, Harris, and Rock, Hurley and Noë argue that the end product of adaptation to inversion and reversal of the retinal image is genuinely visual in nature: during the final stage of adaptation, visual experience “rights itself”.
In Section 2.2 above, we reviewed empirical evidence against the view that active movement and corresponding reafferent stimulation are necessary for adaptation to ORDs. Accordingly, we will focus here on Hurley and Noë’s objections to the proprioceptive-change theory. According to the latter, “what is actually modified [by the adaptation process] is the interpretation of nonvisual information about positions of body parts” (Harris 1980: 113). Once intermodal harmony is restored, the subject will again be able to perform visuomotor actions without error or difficulty, and she will again feel at home in the visually perceived world.
Hurley and Noë do not contest the numerous sources of empirical and introspective evidence that Stratton, Harris, and Rock adduce for the proprioceptive-change theory. Rather they reject the theory on the basis of what they take to be an untoward epistemic implication concerning adaptation to left-right reversal:
while rightward things really look and feel leftward to you, they come to seem to look and feel rightward. So the true qualities of your experience are no longer self-evident to you. (2003: 155)
The proprioceptive-change theory, however, does not imply such radical introspective error. According to proponents of the theory, experience normalizes after adaptation to reversal not because things that really look leftward “seem to look rightward” (what this might mean is enigmatic at best), but rather because the subjects eventually become familiar with the way things look when reversed—much as ordinary subjects can learn to read mirror-reversed writing fluently (Harris 1965: 435–36). Things seem “normal” after adaptation, in other words, because subjects are again able to cope with the visually perceived world in a fluent and unreflective manner.
A second line of evidence for the enactive approach comes from well-known experiments on tactile-visual sensory substitution (TVSS) devices that transform outputs from a low-resolution video camera into a matrix of vibrotactile stimulation on the skin of one’s back (Bach-y-Rita 1972, 2004) or electrotactile stimulation on the surface of one’s tongue (Sampaio et al. 2001).
At first, blind subjects equipped with a TVSS device experience its outputs as purely tactile. After a short time, however, many subjects cease to notice the tactile stimulations themselves and instead report having quasi-visual experiences of the objects arrayed in space in front of them. Indeed, with a significant amount of supervised training, blind subjects can learn to discriminate spatial properties such as shape, size, and location and even to perform simple “eye”-hand coordination tasks such as catching or batting a ball. A main finding of relevance in early experiments was that subjects learn to “see” by means of TVSS only when they have active control over movement of the video camera. Subjects who receive visual input passively—and therefore lack any knowledge of how (or whether) the camera is moving—experience only meaningless, tactile stimulation.
Hurley and Noë argue that passively stimulated subjects do not learn to “see” by means of sensory substitution because they are unable to learn the laws of sensorimotor contingency that govern the prosthetic modality:
active movement is required in order for the subject to acquire practical knowledge of the change from sensorimotor contingencies characteristic of touch to those characteristic of vision and the ability to exploit this change skillfully. (Hurley & Noë 2003: 145)
An alternative explanation, however, is that subjects who do not control camera movement—and who are not otherwise attuned to how the camera is moving—are simply unable to extract any information about the structure of the distal scene from the incoming pattern of sensory stimulations. In consequence they do not engage in “distal attribution” (Epstein et al. 1986; Loomis 1992; Siegel & Warren 2010): they do not perceive through the changing pattern of proximal stimulation to a spatially external scene in the environment. For development of this alternative explanation in the context of Bayesian perceptual psychology, see Briscoe forthcoming.
A final source of evidence for the enactive approach comes from studies of visuomotor development in the absence of normal, reafferent visual stimulation. Held & Hein 1963 performed an experiment in which pairs of kittens were harnessed to a carousel in a small, cylindrical chamber. One of the kittens was able to engage in free circumambulation while wearing a harness. The other kitten was suspended in the air in a metal gondola whose motions were driven by the first harnessed kitten. When the first kitten walked, both kittens moved and received identical visual stimulation. However, only the first kitten received reafferent visual feedback as the result of self-movement. Held and Hein reported that only mobile kittens developed normal depth perception—as evidenced by their unwillingness to step over the edge of a visual cliff, blinking reactions to looming objects, and visually guided paw placing responses. Noë (2004) argues that this experiment supports the enactive approach: in order to develop normal visual depth perception it is necessary to learn how motor outputs lead to changes to visual inputs.
There are two main reasons to be skeptical of this assessment. First, there is evidence that passive transport in the gondola may have disrupted the development of the kittens’ innate visual paw placing responses (Ganz 1975: 206). Second, the fact that passive kittens were prepared to walk over the edge of a visual cliff does not show that their visual experience of depth was abnormal. Rather, as Jesse Prinz (2006) argues, it may only indicate that they “did not have enough experience walking on edges to anticipate the bodily affordances of the visual world”.
The enactive approach confronts objections on multiple fronts. We focus on just three of them here (but see Block 2005; Prinz 2006; Briscoe 2008; Clark 2009: chap. 8; and Block 2012). First, the approach is essentially an elaboration of Held’s reafference theory and, as such, faces many of the same empirical obstacles. Evidence, for example, that active movement per se is not necessary for perceptual adaptation to optical rearrangement (Section 2.2.1) is at variance with predictions made by the reafference theory and the enactive approach alike.
A second line of criticism targets the alleged perceptual priority of P-properties. According to the enactive approach, P-properties are “perceptually basic” (Noë 2004: 81) because in order to see an object’s intrinsic, 3D spatial properties it is necessary to see its 2D P-properties and to understand how they would undergo transformation with variation in one’s point of view. When we view a tilted coin, critics argue, however, we do not see something that looks—in either an epistemic or non-epistemic sense of “looks”—like an upright ellipse. Rather, we see what looks like a disk that is partly nearer and partly farther away from us. In general, the apparent shapes of the objects we perceive are not 2D but have extension in depth (Austin 1962; Gibson 1979; Smith 2000; Schwitzgebel 2006; Briscoe 2008; Hopp 2013).
Support for this objection comes from work in mainstream vision science. In particular, there is abundant empirical evidence that an object’s 3D shape is specified by sources of spatial information in the light reflected or emitted from the object’s surfaces to the perceiver’s eyes as well as by oculomotor factors (for reviews, see Cutting & Vishton 1995; Palmer 1999; and Bruce et al. 2003). Examples include binocular disparity, vergence, accommodation, motion parallax, texture gradients, occlusion, height in the visual field, relative angular size, reflections, and shading. That such shape-diagnostic information, once processed by the visual system, is not lost in conscious visual experience of the object is shown by standard psychophysical methods in which experimenters manipulate the availability of different spatial depth cues and gauge the perceptual effects. Objects, for example, look somewhat flattened under uniform illumination conditions that eliminate shadows and highlights, and egocentric distances are underestimated for objects positioned beyond the operative range of binocular disparity, accommodation, and vergence. Results of such experimentation show that observers can literally see the difference made by the presence or absence of a certain cue in the light available to the eyes (Smith 2000; Briscoe 2008).
According to the influential dual systems model (DSM) of visual processing (Milner & Goodale 1995/2006; Goodale & Milner 2004), visual consciousness and visuomotor control are supported by functionally and anatomically distinct visual subsystems (these are the ventral and dorsal information processing streams, respectively). In particular, proponents of the DSM maintain that the contents of visual experience are not used by motor programming areas in the primate brain:
The visual information used by the dorsal stream for programming and on-line control, according to the model, is not perceptual in nature …[I]t cannot be accessed consciously, even in principle. In other words, although we may be conscious of the actions we perform, the visual information used to program and control those actions can never be experienced. (Milner & Goodale 2008: 775–776)
A final criticism of the enactive approach is that it is empirically falsified by evidence for the DSM (see the commentaries on O’Regan & Noë 2001; Clark 2009: chap. 8; and the essays collected in Gangopadhyay et al. 2010): the bond it posits between what we see and what we do is much too tight to comport with what neuroscience has to tell us about their functional relations.
The enactivist can make two points in reply to this objection. First, experimental findings indicate that there are a number of contexts in which information present in conscious vision is utilized for purposes of motor programming (see Briscoe 2009 and Briscoe & Schwenkler forthcoming). Action and perception are not as sharply dissociated as proponents of DSM sometimes claim.
Second, the enactive approach, as emphasized above, rejects the idea that the function of vision is to guide actions. It
does not claim that visual awareness depends on visuomotor skill, if by “visuomotor skill” one means the ability to make use of vision to reach out and manipulate or grasp. Our claim is that seeing depends on an appreciation of the sensory effects of movement (not, as it were, on the practical significance of sensation). (Noë 2010: 249)
Since the enactive approach is not committed to the idea that seeing depends on knowing how to act in relation to what we see, it is not threatened by empirical evidence for a functional dissociation between visual awareness and visually guided action.
At this point, it should be clear that the claim that perception is active or action-based is far from unambiguous. Perceiving may implicate action in the sense that it is taken constitutively to involve associations with touch (Berkeley 1709), kinaesthetic feedback from changes in eye position (Lotze 1887 [1879]), consciously experienced “effort of the will” (Helmholtz 2005 [1924]), or knowledge of the way reafferent sensory stimulation varies as a function of movement (Held 1961; O’Regan & Noë 2001; Hurley & Noë 2003).
In this section, we shall examine two additional conceptions of the role of action in perception. According to the motor component theory, as we shall call it, efference copies generated in the oculomotor system and/or proprioceptive feedback from eye-movements are used in tandem with incoming sensory inputs to determine the spatial attributes of perceived objects (Helmholtz 2005 [1924]; Mack 1979; Shebilske 1984, 1987; Ebenholtz 2002). Efferent readiness theories, by contrast, appeal to the particular ways in which perceptual states prepare the observer to move and act in relation to the environment. The modest readiness theory, as we shall call it, claims that the way an object’s spatial attributes are represented in visual experience is sometimes modulated by one or another form of covert action planning (Festinger et al. 1967; Coren 1986; Vishton et al. 2007). The bold readiness theory argues for the stronger, constitutive claim that, as J.G. Taylor puts it, “perception and multiple simultaneous readiness for action are one and the same thing” (1968: 432).
As pointed out in Section 2.3.2, there are numerous, independently variable sources of information about the spatial layout of the environment in the light sampled by the eye. In many cases, however, processing of stimulus information requires or is optimized by recruiting sources of auxiliary information from outside the visual system. These may be directly integrated with incoming visual information or used to change the weighting assigned to one or another source of optical stimulus information (Shams & Kim 2010; Ernst 2012).
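The reweighting idea can be made concrete with the standard reliability-weighted (inverse-variance) cue-combination scheme found in the literature the entry cites (e.g., Ernst 2012). The sketch below is ours; the cue values and variances are made-up numbers chosen only to illustrate the computation.

```python
# Minimal sketch of reliability-weighted (inverse-variance) cue combination.
# Cue estimates and variances below are made-up numbers for illustration.

def combine_cues(estimates, variances):
    """Combine independent, Gaussian-noise cue estimates of the same quantity.
    Each cue is weighted by its reliability (1 / variance)."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * x for w, x in zip(weights, estimates))
    combined_variance = 1.0 / total
    return combined, combined_variance

# Hypothetical surface-slant estimates (degrees) from two cues:
# binocular disparity (sharp) and texture gradient (noisier).
slant, var = combine_cues(estimates=[32.0, 40.0], variances=[4.0, 16.0])
print(f"combined slant ~ {slant:.1f} deg, variance ~ {var:.1f}")

# Auxiliary information from outside the visual system (e.g., an efference
# copy signalling an eye movement) can be modelled as changing a cue's
# variance, and hence its weight, rather than entering the estimate directly.
slant2, _ = combine_cues(estimates=[32.0, 40.0], variances=[4.0, 64.0])
print(f"after down-weighting the texture cue: {slant2:.1f} deg")
```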
An importantly different recruitment strategy involves combining visual input with non-perceptual information originating in the body’s motor control systems, in particular, efference copy, and/or proprioceptive feedback from active movement (kinaesthesis). The motor component theory, as we shall call it, is premised on evidence for such motor-modal processing.
The motor component theory can be made more concrete by examining three situations in which the spatial contents of visual experience are modulated by information concerning recently initiated or impending bodily movements:
The motor component theory is a version of the view that perception is embodied in the sense of Prinz 2009 (see the entry on embodied cognition). Prinz explains that
embodied mental capacities, are ones that depend on mental representations or processes that relate to the body…. Such representations and processes come in two forms: there are representations and processes that represent or respond to body, such as a perception of bodily movement, and there are representations and processes that affect the body, such as motor commands. (2009: 420; for relevant discussion of various senses of embodiment, see Alsmith and de Vignemont 2012)
The three examples presented above provide empirical support for the thesis that visual perception is embodied in this sense. For additional examples, see Ebenholtz 2002: chap. 4.
Patients with frontal lobe damage sometimes exhibit pathological “utilization behaviour” (Lhermitte 1983) in which the sight of an object automatically elicits behaviors typically associated with it, such as automatically pouring water into a glass and drinking it whenever a bottle of water and a glass are present (Frith et al. 2000: 1782). That normal subjects often do not automatically perform actions afforded by a perceived object, however, does not mean that they do not plan, or imaginatively rehearse, or otherwise represent them. (On the contrary, recent neuroscientific findings suggest that merely perceiving an object often covertly prepares the motor system to engage with it in a certain manner. For overviews, see Jeannerod 2006 and Rizzolatti 2008.)
Efferent readiness theories are based on the idea that covert preparation for action is “an integral part of the perceptual process” and not “merely a consequence of the perceptual process that has preceded it” (Coren 1986: 394). According to the modest readiness theory, as we shall call it, covert motor preparation can sometimes influence the way an object’s spatial attributes are represented in perceptual experience. The bold readiness theory, by contrast, argues for the stronger, constitutive claim that to perceive an object’s spatial properties just is to be prepared or ready to act in relation to the object in certain ways (Sperry 1952; Taylor 1962, 1965, 1968).
A number of empirical findings motivate the modest readiness theory. Festinger et al. 1967 tested the view that visual contour perception is
determined by the particular sets of preprogrammed efferent instructions that are activated by the visual input into a state of readiness for immediate use. (p. 34)
Contact lenses that produce curved retinal input were placed on the right eye of three observers, who were instructed to scan a horizontally oriented line with their left eye covered for 40 minutes. The experimenters reported that there was an average of 44% adaptation when the line was physically straight but retinally curved, and an average of 18% adaptation when the line was physically curved but retinally straight (see Miller & Festinger 1977, however, for conflicting results).
An elegantly designed set of experiments by Coren 1986 examined the role of efferent readiness in the visual perception of direction and extent. Coren’s experiments support the hypothesis that the spatial parameter controlling the length of a saccade is not the angular direction of the target relative to the line of sight, but rather the direction of the center of gravity (COG) of all the stimuli in its vicinity (Coren & Hoenig 1972; Findlay 1982). Importantly,
the bias arises from the computation of the saccade that would be made and, hence, is held in readiness, rather than the saccade actually emitted. (Coren 1986: 399)
The COG bias is illustrated in Figure 3. In the first row (top), there are no extraneous stimuli near the saccade target. Hence, the saccade from the point of fixation to the target is unbiased. In the second row, by contrast, the location of an extraneous stimulus (×) results in a saccade from the point of fixation that undershoots its target, while in the third row the saccade overshoots its target. In the fourth row, changing the location of the extraneous stimulus eliminates the COG bias: because the extraneous stimulus is near the point of fixation rather than the saccade target, the saccade is accurate.
Figure 3: The effect of starting eye position on saccade programming (after Coren 1986: 405)
The COG bias is evolutionarily adaptive: eye movements will bring both the saccade target and nearby objects into high acuity vision, thereby maximizing the amount of information obtained with each saccade. However, Coren found that motor preparation or “efferent readiness” to execute an undershooting or overshooting saccade can also give rise to a corresponding illusion of extent (1986: 404–406). Observers, e.g., will perceptually underestimate the length of the distance between the point of fixation and the saccade target when there is an extraneous stimulus on the near side of the target (as in the second row of Figure 3) and will perceptually overestimate the length of the distance when there is an extraneous stimulus on the far side of the target (as in the third row of Figure 3).
According to Coren, the well-known Müller-Lyer illusion can be explained within this framework. The outwardly turned wings in the Müller-Lyer display shift the COG outward from each vertex, while the inwardly turned wings in this figure shift the COG inward. This influences both saccade length from vertex to vertex and the apparent length of the central line segments. The influence of COG on efferent readiness to execute eye movements, Coren argues (1986: 400–403), also explains why the line segments in the Müller-Lyer display can be replaced with small dots while leaving the illusion intact, as well as the effects of varying wing length and wing angle on the magnitude of the illusion.
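Coren’s center-of-gravity account lends itself to a rough computational gloss. The sketch below is our illustration, not Coren’s own model: coordinates are arbitrary, and it makes the simplifying assumption that the saccade held in readiness lands on the COG of the stimuli clustered around each vertex, so that outward wings push the two programmed landing points apart and inward wings pull them together.

```python
# Rough illustration of the center-of-gravity (COG) account of the
# Mueller-Lyer illusion.  Simplifying assumption: the saccade held in
# readiness is programmed toward the COG of the stimuli near each vertex.

def cog(xs):
    """Center of gravity (mean x-coordinate) of a cluster of stimuli."""
    return sum(xs) / len(xs)

def apparent_length(left_vertex, right_vertex, wing_dx):
    """Model the perceived vertex-to-vertex extent as the distance between
    the COG-biased saccade targets at the two ends.  wing_dx > 0 puts the
    wing tips outside the vertices (outward wings); wing_dx < 0 puts them
    inside (inward wings)."""
    left_cluster = [left_vertex, left_vertex - wing_dx, left_vertex - wing_dx]
    right_cluster = [right_vertex, right_vertex + wing_dx, right_vertex + wing_dx]
    return cog(right_cluster) - cog(left_cluster)

LINE = 100.0  # true vertex-to-vertex distance (arbitrary units)

print("outward wings (looks longer): ", apparent_length(0.0, LINE, wing_dx=+15.0))
print("inward wings (looks shorter): ", apparent_length(0.0, LINE, wing_dx=-15.0))
print("true length:                  ", LINE)
```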
The modest readiness theory holds that the way an object’s spatial attributes are represented in visual experience is sometimes modulated by one or another form of covert action planning. The bold readiness theory argues for a stronger, constitutive claim: to perceive an object’s spatial properties just is to be prepared or ready to act in relation to the object in certain ways. We begin by examining J.G. Taylor’s “behavioral theory” of perception (Taylor 1962, 1965, 1968).
Taylor’s behavioral theory of perception identifies the conscious experience of seeing an object’s spatial properties with the passive activation of a specific set of learned or “preprogrammed” motor routines:
[P]erception is a state of multiple simultaneous readiness for actions directed to the objects in the environment that are acting on the receptor organs at any one moment. The actions in question have been acquired by the individual in the course of his life and have been determined by the reinforcing contingencies in the environment in which he grew up. What determines the content of perception is not the properties of the sensory transducers that are operated on by stimulus energies from the environment, but the properties of the behaviour conditioned to those stimulus energies…. (1965: 1, our emphasis)
According to Taylor’s theory, sensory stimulation gives rise to spatially contentful visual experience as a consequence of associative reinforcement learning: we perceive an object as having the spatial attribute G when the types of proximal sensory stimulation caused by the object have been conditioned to the performance of actions sensitive to G (1962: 42). The conscious experience of seeing an object’s distance, e.g., is constituted by the subject’s learned readiness to perform specific whole body and limb movements that were reinforced when the subject previously received stimulation from objects at the same remove. In general, differences in the spatial content of a visual experience are identified with differences in the subject’s state of “multiple simultaneous readiness” to interact with the objects represented in the experience.
The main problem with Taylor’s theory is one that besets behaviorist theories of perception in general: it assumes that for any visible spatial property G, there will be some distinctive set of behavioral responses that are constitutive of perceiving the object as having G. The problem with this assumption, as Mohan Matthen (1988) puts it, is that
there is no such thing as the proper response, or even a range of functionally appropriate responses, to what perception tells us. (p. 20, see also Hurley 2001: 17)
The last approach we shall discuss has roots in, and similarities to, many of the proposals covered above, but is most closely aligned with the bold readiness theory. We will follow Grush (2007) in calling this approach the disposition theory (see Grush 2007: 394, for discussion of the name). The primary proponent of this position is Gareth Evans, whose work on spatial representation focused on understanding how we manage to perceive objects as occupying locations in egocentric space.
The starting point of Evans’ theory is that the subject’s perceptual systems have isolated a channel of sensory input, an “information link”, through which she receives information about the object. The information link by itself does not allow the subject to know the location of this object. Rather, it is when the information link is able to induce in the subject appropriate kinds of behavioral dispositions that it becomes imbued with spatial import:
The subject hears the sound as coming from such-and-such a position, but how is the position to be specified? Presumably in egocentric terms (he hears the sound as up, or down, to the right or to the left, in front or behind). These terms specify the position of the sound in relation to the observer’s own body; and they derive their meaning in part from their complicated connections with the subject’s actions. (Evans 1982: 155)
This is not a version of a motor theory (e.g., Poincaré 1907: 71). The behavioral responses in question are not to be understood as raw patterns of motor activations, or even muscular sensations. Such a reduction would face challenges anyway, since for any location in egocentric space, there are an infinite number of kinematic configurations (movements) that would, for example, effect a grasp to that location; and for any kinematic configuration, there are an infinite number of dynamic profiles (temporal patterns of muscular force) that would yield that configuration. The behavioral responses in question are overt environmental behavior:
It may well be that the input-output connections can be finitely stated only if the output is described in explicitly spatial terms (e.g., ‘extending the arm’, ‘walking forward two feet’, etc.). If this is so, it would rule out the reduction of the egocentric spatial vocabulary to a muscular vocabulary. But such a reduction is certainly not needed for the point being urged here, which is that the spatial information embodied in auditory perception is specifiable only in a vocabulary whose terms derive their meaning partly from being linked with bodily actions. Even given an irreducibility, it would remain the case that possession of such information is directly manifestable in behaviour issuing from no calculation; it is just that there would be indefinitely many ways in which the manifestation can occur. (Evans 1982: 156)
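The point made just above—that indefinitely many kinematic configurations realize a reach to one and the same egocentric location—is easy to exhibit with a toy redundant arm. The sketch below is ours, not Evans’s; the link lengths and target are arbitrary. Sweeping the first joint angle yields a whole family of joint configurations that all place the fingertip at the same spot.

```python
import math

# Toy illustration (ours, not Evans's) of kinematic redundancy: a three-joint
# planar arm has one more degree of freedom than a 2D hand position requires,
# so a one-parameter family of joint configurations reaches the same spot.

L1, L2, L3 = 0.30, 0.25, 0.20   # link lengths in metres (arbitrary)
TARGET = (0.45, 0.20)           # hand position to be reached (arbitrary)

def configurations(target, samples=180):
    """Yield (theta1, theta2, theta3) joint angles (radians) placing the
    fingertip at `target`, sweeping the free first joint angle."""
    tx, ty = target
    for i in range(samples):
        theta1 = -math.pi + 2 * math.pi * i / samples
        # Second joint position after rotating the first link.
        jx, jy = L1 * math.cos(theta1), L1 * math.sin(theta1)
        dx, dy = tx - jx, ty - jy
        d = math.hypot(dx, dy)
        # The remaining two links must span distance d (standard 2-link IK).
        if not (abs(L2 - L3) <= d <= L2 + L3):
            continue
        cos_elbow = (d * d - L2 * L2 - L3 * L3) / (2 * L2 * L3)
        elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
        reach_dir = math.atan2(dy, dx)
        inner = math.atan2(L3 * math.sin(elbow), L2 + L3 * math.cos(elbow))
        theta2 = (reach_dir - inner) - theta1   # link 2 angle relative to link 1
        theta3 = elbow                          # link 3 angle relative to link 2
        yield theta1, theta2, theta3

solutions = list(configurations(TARGET))
print(f"{len(solutions)} sampled joint configurations all reach {TARGET}")
```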
Also, on this proposal, all modalities are in the same boat. As such the disposition theory is more ambitious than most of the theories already discussed, which are limited to vision. Not only is there no reduction of perceptual spatial content to a “muscular vocabulary”, there is also no reduction of the spatial content of some perceptual modalities to that of one or more others—as there was for Berkeley, who sought to reduce the spatial content of vision to that of touch, and whose program forced a distinction between two spaces, visual space and tangible space:
The spatial content of auditory and tactual-kinaesthetic perceptions must be specified in the same terms—egocentric terms. … It is a consequence of this that perceptions from both systems will be used to build up a unitary picture of the world. There is only one egocentric space, because there is only one behavioural space. (Evans 1982: 160)
Relatedly, for Evans it is not even the case that spatial perceptual content, for all modalities, is being reduced to behavioral dispositions. Rather, perceptual inputs and behavioral outputs jointly and holistically yield a single behavioral space:
Egocentric spatial terms are the terms in which the content of our spatial experiences would be formulated, and those in which our immediate behavioural plans would be expressed. This duality is no coincidence: an egocentric space can exist only for an animal in which a complex network of connections exists between perceptual input and behavioural output. A perceptual input—even if, in some loose sense, it encapsulates spatial information (because it belongs to a range of inputs which vary systematically with some spatial facts)—cannot have a spatial significance for an organism except in so far as it has a place in such a complex network of input-output connections. (Evans 1982: 154)
Egocentric spatial terms and spatial descriptions of bodily movement would, on this view, form a structure familiar to philosophers under the title “holistic”. (Evans 1982: 156, fn. 26)
This last point and the associated quotes address a common misconception of the disposition theory. It would be easy to read the theory as providing a proposal of the following sort: A creature gets sensory information from a stimulus, and the problem is to determine where that stimulus is located in egocentric space; the solution is that features of that sensory episode induce dispositions to behavior targeting some egocentric location. While this sort of thing is indeed a problem, it is relatively superficial. Any creature facing this problem must already have the capacity to grasp egocentric spatial location contents, and the problem is which of these ready-at-hand contents it should assign to the stimulus. But the disposition theory is addressing a deeper question: in virtue of what does this creature have a capacity to grasp egocentric spatial contents to begin with? The answer is that the creature must have a rich set of interconnections between sensory inputs (and their attendant information links) and dispositions for behavioral outputs.
Rick Grush (2000, 2007) has adopted Evans’ theory, and attempted to clarify and expand upon it, particularly in three areas: first, the distinction between the disposition theory and other approaches; second, the neural implementation of the disposition theory; and finally the specific kinds of dispositions that are relevant for the issue of spatial experience.
The theory depends on behavioral dispositions. Grush (2007) argues that there are two distinctions that need to be made: first, the organism might possess i) knowledge of what the consequences (bodily, environmental, or sensory) of a given action will be; or ii) knowledge of which motor commands will bring about a given desired end state (of the body, environment, or sensory channels) (Grush 2007: 408). I might be able to recognize that a series of moves someone shows me will force my grandmaster opponent into checkmate (knowledge of the first sort, the consequences of a given set of actions), and yet not have been anywhere near the skill level to have come up with that series of moves on my own (knowledge of the second sort, what actions will achieve a desired effect). Sensorimotor contingency theorists appeal to knowledge of the first sort—though as was discussed in Section 2.3.1, Noë flirts with appealing to knowledge of the second sort to explain the perceptual grasp of P-shapes; to the extent he does, he is embracing a disposition theoretic account of P-shapes. Disposition theorists and bold readiness theorists (Section 3.2.2) appeal to knowledge of the second sort. These are the dispositions of the disposition theory: given some goal, the organism is disposed to execute certain actions.
This leads to the second distinction, between type-specifying and detail-specifying dispositions. Grush (2007: 393) maintains that only the latter are directly relevant for spatial perception. A type-specifying disposition is a disposition to execute some type of behavior with respect to an object or place. For example, an organism might be disposed to grasp, bite, flee, or foveate some object. This sort of disposition is not relevant to the spatial content of the experience on the disposition theory. Rather, what are relevant are detail-specifying dispositions: the specifics of how I am disposed to act to execute any of these behavior types. When reaching to grab the cup to take a drink (type), do I move my hand like so (straight ahead, say), or like such (off to the right)? When I want to foveate or orient towards (behavior type) the ant crawling up the wall, do I move my head and eyes like this, or like that?
This latter distinction allows the disposition theory to answer one of the main objections to the bold readiness theory (described at the end of Section 3.2.2), namely, that there is no single special disposition connected to perceiving any given object. That is true of type-specifying dispositions, but not of detail-specifying dispositions. Given the ant’s location there is indeed a very limited range of detail-specifying dispositions that will allow me to foveate it (though this might require constraints on possible actions, such as minimum jerk or other such constraints).
Grush (2007; 2009) has proposed a detailed implementation of the disposition theory in terms of neural information processing. The proposal involves more mathematics than is appropriate here, and so a quick qualitative description will have to suffice (for more detail, see Grush 2007; 2009). The basic idea is that relevant cortical areas learn sets of basis functions which, to put it very roughly, encode equivalence classes of combinations of sensory and postural signals (for discussion, see Pouget et al. 2002). For example, many combinations of eye orientation and location of stimulation on the retina correspond to a visual stimulus that is directly in front of the head. Sorting such bodily postural information (not just eye orientation, but any postural information that affects sensation, which is most) and sensory condition pairs into useful equivalence classes is the first half of the job.
What this does is encode incoming information in a way that renders it ready to be of use in guiding behavior, since the equivalence classes are precisely those for which a given kind of motor program is appropriate. The next part corresponds to how this information, so represented, can be used to produce the details of such a motor program. For every type of action in a creature’s behavioral repertoire (grasp, approach, avoid, foveate, bite, etc.) its motor areas have a set of linear coefficients, easily implemented as a set of neural connection strengths, and when these are applied to a set of basis function values, a detailed behavior is specified. For example, when a creature senses an object O1, a set of basis function values B1 for that stimulus is produced. If the creature decides to execute overt action A1, then the B1 basis function values are multiplied by the coefficient corresponding to A1. The result is an instance of behavior type A1 executed with respect to object O1. If the creature had decided instead to execute action A2, with respect to O1, the B1 basis function values would have been multiplied by the A2 set of coefficients, and the result would be a motor behavior executing A2 on object O1.
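The mechanism just described can be sketched in a few lines. The toy model below is our illustration, not Grush’s own implementation: Gaussian basis functions over a one-dimensional retinal location paired with eye orientation stand in for the equivalence classes, and two coefficient sets (fit here by least squares) stand in for two action types, so that the same basis-function values B1 yield different detailed commands depending on which coefficients they are multiplied by.

```python
import numpy as np

# Toy sketch (ours, not Grush's own model): the same basis-function values,
# multiplied by different action-type coefficient sets, specify the details
# of different overt behaviors directed at the same stimulus.

rng = np.random.default_rng(0)

def basis(retinal_deg, eye_deg, centers, width=15.0):
    """Gaussian basis functions over (retinal location, eye orientation)
    pairs -- a crude stand-in for equivalence classes of combined sensory
    and postural signals."""
    r = np.atleast_1d(retinal_deg)[:, None]
    e = np.atleast_1d(eye_deg)[:, None]
    return np.exp(-((r - centers[:, 0]) ** 2 + (e - centers[:, 1]) ** 2)
                  / (2 * width ** 2))

grid = np.linspace(-40, 40, 9)
centers = np.array([(cr, ce) for cr in grid for ce in grid])

# Training data: random sensory-plus-postural states ...
retinal = rng.uniform(-40, 40, 500)   # stimulus location on the retina (deg)
eye = rng.uniform(-40, 40, 500)       # eye-in-head orientation (deg)
B = basis(retinal, eye, centers)

# ... and the detailed movement each action type would require.
targets = {
    "foveate (rotate eye by)": retinal,       # gaze shift = retinal eccentricity
    "orient head (rotate by)": retinal + eye  # head-centred direction of stimulus
}

# One set of linear coefficients per action type, fit by least squares
# (standing in for learned connection strengths).
coeffs = {name: np.linalg.lstsq(B, t, rcond=None)[0] for name, t in targets.items()}

# A novel stimulus O1 produces one set of basis values B1; either action can
# be read out from it by applying the corresponding coefficients.
B1 = basis(12.0, -20.0, centers)
for name, c in coeffs.items():
    print(f"{name}: {(B1 @ c).item():+6.1f} deg")
print("(exact values would be +12.0 and -8.0)")
```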
Accordingly, the disposition theory has a very different account of what is happening with sensory substitution devices from the one Susan Hurley and Alva Noë offer (see Section 2.3.2 above). On the disposition theory, what allows the user of such a device to have spatial experience is not the ability to anticipate how the sensory input will change upon execution of movement, as the sensorimotor contingency theory would have it. Rather, it is that the subject’s brain has learned to take these sensory inputs together with postural signals to produce sets of basis functions that poise the subject to act with respect to the object that is causing the sensory signals (see Grush 2007: 406).
One objection to disposition theories is what Hurley has called The Myth of the Giving:
To suppose that … the content of intentions can be taken as unproblematically primitive in explaining how the content of experience is possible, is to succumb to the myth of the giving. (Hurley 1998: 241)
The idea behind this objection is that one is simply shifting the debt from one credit card to another when one takes as problematic the spatial content of perception, and then appeals to motor behavior as the supplier of this content. For then, of course, the question will be: Whence the spatial content of motor behavior?
The disposition theory, however, does not posit any such unilateral reduction (though Taylor’s bold readiness theory arguably does, see Section 3.2.2 above). As discussed above, Evans explicitly claims that the behavioral space is holistically determined by both behavior and perception. And on Grush’s account spatial content is implemented in the construction of basis function values, and these values coordinate transitions from perceptual input to behavioral output. As such, they are highly analogous to inferences whose conditions of application are given in sensory-plus-postural terms and whose consequences of application manifest in behavioral terms. The import of the states that represent these basis function values is no more narrowly motor than the meaning of a conditional can be identified with its consequent (or its antecedent, for that matter) in isolation.
Another very common objection, one that is often leveled at many forms of motor theory, has to do with the fact that even paralyzed people, with very few possibilities for action, seem capable in many cases of normal spatial perception. Such objections would, at a minimum, place significant pressure on any views that explain perceptual content by appeal to actual behavior. It is also easy to see how even hypothetical behavior would be called into question in such cases, since in many such cases behavior is not physically possible. Grush’s theory (2007), right or wrong, has something specific to say about this objection. Since spatial content is taken to be manifested in the production of basis function values in the cortex, the prediction is that any impairments manifesting farther down the chain, in the brain stem or spinal cord, for example, need have no direct effect on spatial content. So long as the relevant brain areas have the wherewithal to produce sets of basis function values suitable for constructing a motor sequence (if multiplied by the action-type-specific coefficients), the occasioning perceptual episode will have spatial content.
Related entries: bodily awareness | cognition: embodied | phenomenology
We would like to thank Jason Winning for helping to compose the bibliography and proofreading. We are also grateful to Adrian Alsmith, John Schwenkler, and an anonymous referee for comments that resulted in many improvements.
Copyright © 2015 by Robert Briscoe <rbriscoe@gmail.com> Rick Grush <rgrush@ucsd.edu>