Click on a title to download a pdf.
Cross-modal and action-specific: neuroimaging and the human
mirror neuron system.
Oosterhof N, Tipper S, Downing P.
Trends in Cognitive Sciences (in press).
A fronto-parietal human mirror neuron system (HMNS) has
been invoked to explain a range of social phenomena.
However, most human neuroimaging studies of this system do
not address critical “mirror” properties:
neural representations should be action specific, and
should generalize across visual and motor modalities.
Studies using repetition suppression and particularly
multivariate pattern analysis (MVPA) highlight the
contribution of anterior parietal regions to action
perception. Further, they add to mounting evidence that
lateral occipitotemporal cortex plays a role in the HMNS,
yet provide less support for involvement of the premotor
cortex. Neuroimaging, particularly through application of
MVPA, has the potential to reveal in further detail the
properties of the HMNS, which could challenge prevailing
views about its neuroanatomical organization.
A critical role for the hippocampus and perirhinal cortex
in perceptual learning of scenes and faces: complementary
findings from amnesia and fMRI
Mundy M, Downing P, Dwyer D, Honey R, and Graham K.
Journal of Neuroscience (in press).
It is debated whether sub-regions within the medial
temporal lobe (MTL), in particular the hippocampus (HC) and
perirhinal cortex (PrC), play domain-sensitive roles in
learning. Two patients, with differing degrees of MTL
damage, were first exposed to pairs of highly similar
scenes, faces and dot patterns, and then asked to make
repeated same/different decisions to pre-exposed, and also
non-exposed (novel), pairs from the three categories
(Experiment 1). We measured (a) whether patients would show
a benefit of prior exposure (pre-exposed > non-exposed)
and (b) whether repetition of pre-exposed and non-exposed
pairs at test would benefit discrimination accuracy. While
selective HC damage impaired learning of scenes, but not
faces and dot patterns, broader MTL damage, involving the
HC and PrC, compromised discrimination learning of scenes
and faces, but left dot pattern learning unaffected. In
Experiment 2, a similar task was run in healthy young
participants in the MRI scanner. Functional
region-of-interest analyses revealed that posterior HC and posterior
parahippocampal gyrus showed greater activity during scene,
but not face and dot pattern, learning, while PrC, anterior
HC and posterior fusiform gyrus were recruited during
discrimination learning for faces, but not scenes and dot
patterns. Critically, activity in posterior HC and PrC, but
not the other fROIs, was modulated by accuracy (correct
> incorrect within preferred category). Both approaches,
therefore, revealed a key role for the HC and PrC in
discrimination learning, consistent with representational
accounts in which sub-regions in these MTL structures store
complex spatial and object representations.
A Causal Role for the Extrastriate Body
Area in Detecting People in Real-World Scenes
van Koningsbruggen M, Peelen M, Downing P.
Journal of Neuroscience (2013).
People are extremely efficient at detecting relevant
objects in complex natural scenes. In 3 experiments, we
used fMRI-guided transcranial magnetic stimulation (TMS) to
investigate the role of the extrastriate body area (EBA) in
the detection of people in scenes. In Experiment 1,
participants reported - in different blocks - whether
people or cars were present in a briefly-presented scene.
Detection (d-prime) of people, but not of cars, was
impaired after TMS over right EBA (5 pulses at -200, -100,
0, 100, 200 ms) compared to sham stimulation. In Experiment
2, we applied TMS either before (-200, -100 ms) or after
(+100, +200 ms) the scene onset. Post-stimulus EBA stimulation
impaired people detection relative to pre-stimulus EBA
stimulation, while timing had no effect during sham
stimulation. In Experiment 3, we examined anatomical
specificity by comparing TMS over EBA with TMS over
scene-selective transverse occipital sulcus (TOS). Two
scenes were presented side by side, and response times to
detect which contained people (or cars) were measured. For
people detection, but not for car detection, response times
during EBA stimulation were significantly slower than
during TOS stimulation. Furthermore, right EBA stimulation
led to an equivalent slowing of response times to left and
right lateralized targets. These findings are the first to
demonstrate the causal involvement of a category-selective
human brain region in detecting its preferred stimulus
category in natural scenes. They shed light on the nature
of such regions, and help us understand how we efficiently
extract socially-relevant information from a complex input.
Visuo-motor imagery of specific manual
actions: a multi-variate pattern analysis fMRI study
Oosterhof N, Tipper S, Downing P.
Neuroimage (in press).
An important human capacity is the ability to imagine
performing an action, and its consequences, without
actually executing it. Here we seek neural representations
of specific manual actions that are common across
visuo-motor performance and imagery. Participants were
scanned with fMRI while they performed and observed
themselves performing two different manual actions during
some trials, and imagined performing and observing
themselves performing the same actions during other trials.
We used multi-variate pattern analysis to identify areas
where representations of specific actions generalize across
imagined and performed actions. The left anterior parietal
cortex showed this property. In this region, we also found
that activity patterns for imagined actions generalize
better to performed actions than vice versa, and we provide
simulation results that can explain this asymmetry. The
present results are the first demonstration of
action-specific representations that are similar irrespective of
whether actions are actively performed or covertly
imagined. Further, they demonstrate concretely how the
apparent cross-modal visuo-motor coding of actions
identified in studies of a human "mirror neuron system"
could, at least partially, reflect imagery.
Doing, seeing, or both: effects of learning
condition on subsequent action
Hudson M, Clifford A, Tipper S, Downing P.
Social Neuroscience (in press).
It has been proposed that common codes for vision and
action emerge from associations between an
individual’s production and simultaneous observation
of actions. This typically first-person view of one’s
own action subsequently transfers to the third-person view
when observing another individual. We tested vision-action
associations and the transfer from first-person to
third-person perspective by comparing novel hand-action
sequences that were learned under three conditions: first,
by being performed and simultaneously viewed from a
first-person perspective; second, by being performed but
not seen; and third, by being seen from a first-person view
without being executed. We then used fMRI to compare the
response to these three types of learned action sequences
when they were presented from a third-person perspective.
Visuomotor areas responded most strongly to sequences that
were learned via simultaneously producing and observing the
action sequences. We also note an important asymmetry
between vision and action: action sequences learned by
performance alone, in the absence of vision, facilitated
the emergence of visuomotor responses, whereas action
sequences learned by viewing alone had comparatively little
effect. This dominance of action over vision supports the
notion of forward/predictive models of visuomotor systems.
Viewpoint (in)dependence of action
representations: an MVPA study.
Oosterhof N, Tipper S, Downing P.
Journal of Cognitive Neuroscience (in press).
The discovery of mirror neurons – neurons that code
specific actions both when executed and observed – in
area F5 of the macaque provides a potential neural
mechanism underlying action understanding. To date,
neuroimaging evidence for similar coding of specific
actions across the visual and motor modalities in human
ventral premotor cortex (PMv) – the putative
homologue of macaque F5 – is limited to the case of
actions observed from a first-person perspective. However,
it is the third-person perspective that figures centrally
in our understanding of the actions and intentions of
others. To address this gap in the literature, we scanned
participants with functional magnetic resonance imaging
while they viewed two actions from either a first or third
person perspective during some trials, and executed the
same actions during other trials. Using multi-voxel pattern
analysis, we found action-specific cross-modal visual-motor
representations in PMv for the first-person but not for the
third-person perspective. Additional analyses showed no
evidence for spatial or attentional differences across the
two perspective conditions. In contrast, more posterior
areas in the parietal and occipitotemporal cortex did show
cross-modal coding regardless of perspective. These
findings point to a stronger role for these latter regions,
relative to PMv, in supporting the understanding of
others’ actions with reference to one’s own.
Division of labor between lateral and
ventral extrastriate representations of faces, bodies, and objects
Taylor J, Downing P.
Journal of Cognitive Neuroscience (2011) 23.
The occipitotemporal cortex is strongly implicated in
carrying out the high-level computations associated with
vision. In human neuroimaging studies, focal regions are
consistently found within this broad region that respond
strongly and selectively to faces, bodies, or objects. A
notable feature of these selective regions is that they are
found in pairs. In the posterior-lateral occipitotemporal
cortex, focal selectivity is found for faces (occipital
face area; OFA), bodies (extrastriate body area; EBA); and
objects (lateral occipital; LO). These three areas are
found bilaterally and at close quarters to each other.
Likewise, in the ventromedial occipitotemporal cortex,
three similar category-selective regions are found, also in
close proximity to each other: for faces (fusiform face
area; FFA), bodies (fusiform body area; FBA), and objects
(posterior fusiform; pFs). Here we review some of the
extensive evidence on the functional properties of these
areas, with two aims. First, we seek to identify principles
that distinguish the posterior-lateral and ventromedial
clusters of selective regions, but that apply generally
within each cluster across the three stimulus kinds. Our
review identifies and elaborates several principles by
which these relationships hold. In brief, the
posterior-lateral representations are more primitive,
local, and stimulus-driven relative to the ventromedial
representations, which in contrast are more invariant to
visual features, global, and linked to the subjective
percept. Second, because the evidence base of studies that
compare both posterior-lateral and ventromedial
representations of faces, bodies, and objects is still
relatively small, we seek to provoke more cross-talk among
the research strands that are traditionally separate. We
identify several promising approaches for such future work.
The role of occipitotemporal body-selective
regions in person perception
Cognitive Neuroscience (2011) 2(3-4).
The visual appearance of others’ bodies
is a powerful source of information about the people around
us. This information is implicit in the stimulus and must
be extracted and made explicit by the coordination of
activity in multiple cortical areas. Here we consider the
contribution to this process of two strongly body-selective
occipitotemporal regions identified in human neuroimaging
experiments: the extrastriate body area (EBA) and the
fusiform body area (FBA). We address the evidence and
arguments behind numerous recent proposals that EBA and FBA
build explicit representations of identity, emotion, body
movements, or goal-directed actions from the visual
appearance of bodies, and also explore the contribution of
these regions to motor control. We argue that the current
evidence does not support a model in which EBA and FBA
directly perform any of these higher-level functions.
Instead, we argue that these regions comprise populations
of neurons that encode fine details of the shape and
posture of the bodies of people in the current percept. In
doing so, they provide a powerful but cognitively
unelaborated perceptual framework that allows other
cortical systems to exploit the rich, socially relevant
information that is conveyed by the body.
Reflections on the hand: the use of a
mirror highlights the contributions of perceived and
interpreted representations in the rubber hand illusion
Kontaris I, Downing P.
Perception (in press).
In the rubber hand illusion, observing a
rubber hand stroked in synchrony with one’s own hand
results in mislocalization of one’s own hand, which is
perceived as being located closer to the rubber hand. This
illusion depends on having the rubber hand placed at a
plausible egocentric orientation with respect to the
observer. In the present study, we took advantage of this
finding in order to compare the relative influence on the
illusion of the rubber hand’s perceived retinotopic
image against its real-world position. The rubber hand was
positioned egocentrically (fingers away from the
participant) or allocentrically (fingers towards the
participant) while participants viewed it either directly
or via a mirror that was placed facing the participant. In
the mirror conditions, the orientation of the retinotopic
image of the hand (either ego- or allocentric) was opposed
to its real-world orientation. We found that the illusion
was elicited in both mirror conditions, to roughly the same
extent. Thus either of two representations can elicit the
rubber hand illusion: a world-centered understanding of the
scene, resulting from the inferred position of the hand
based on its mirror reflection, or a purely visual
retinotopic representation of the viewed hand. In the
mirror conditions, the illusion was somewhat weaker than in
the typical directly-viewed, egocentric condition. We
attribute this to competition between two incompatible
representations introduced by the presence of the mirror.
Finally, in two control experiments we ruled out that this
reduction was due to two properties of mirror reflections:
the increased perceived distance of items and the reversal
of the apparent handedness of the rubber hand.
Learning associations between action and
perception: effects of incompatible training on body
part and spatial priming.
Wiggett A, Hudson M, Tipper S, Downing P.
Brain and Cognition (2011) 76(1), 87-96.
Viewing another person executing an action primes the same action
in the observer's motor system. Recent evidence has shown
that these priming effects are flexible, where training of
new associations, such as making a foot response when
viewing a moving hand, can reduce standard action priming
effects (Gillmeister et al. 2008). Previously, these
effects were obtained after explicit learning tasks in
which the trained action was cued by the content of a
visual stimulus. Here we report similar learning processes
in an implicit task in which the participant's action is
self-selected, and subsequent visual effects are determined
by the nature of that action. Importantly, we show that
these learning processes are specific to associations
between actions and viewed body-parts, in that incompatible
spatial training did not influence body part or spatial
priming effects. Our results are consistent with models of
visuomotor learning that place particular emphasis on the
repeated experience of watching oneself perform an action.
Representation of action in occipitotemporal cortex
Journal of Cognitive Neuroscience (2011) 23(7), 1765-80.
A fundamental question for social cognitive neuroscience is
how and where in the brain the identities and actions of
others are represented. Here we present a replication and
extension of a study by Kable and Chatterjee (2006)
examining the role of the occipitotemporal cortex in these
processes. We presented full-cue movies of actors
performing whole-body actions and used fMRI to test for
action- and identity-specific adaptation effects. We
examined a series of functionally defined regions,
including: the extrastriate and fusiform body areas; the
fusiform face area; the parahippocampal place area; the
lateral occipital complex; the right posterior superior
temporal sulcus; and motion-selective area hMT+. These
regions were analyzed with both standard univariate
measures as well as multi-voxel pattern analyses.
Additionally we performed whole-brain tests for significant
adaptation effects. We found significant action-specific
adaptation in many areas, but no evidence for
identity-specific adaptation. We argue that this finding
could be explained by differences in the familiarity of the
stimuli presented: the actions shown were familiar while
the actors performing the actions were unfamiliar. However,
in contrast to previous findings, we found that the action
adaptation effect could not be conclusively tied to
specific functionally defined regions. Instead our results
suggest that the adaptation to previously seen actions
across identities is a widespread effect, evident across
the lateral and ventral occipitotemporal cortex.
A comparison of volume-based and
surface-based multi-voxel pattern analysis
Oosterhof N, Wiestler T, Downing P, Diedrichsen J.
Neuroimage, (2011) 56(2), 593-600.
For functional magnetic resonance imaging (fMRI),
Multi-Voxel Pattern Analysis (MVPA) has been shown to be a
sensitive method to detect areas that encode certain
stimulus dimensions. By moving a searchlight through the
volume of the brain, one can map how local patterns of
activity carry information content about the experimental
conditions of interest. Traditionally, the searchlight is
defined as a volume sphere that does not take into account the
anatomy of the cortical surface. Here we present a method
that uses a cortical surface reconstruction to guide voxel
selection for information mapping. This approach differs in
two important aspects from a volume-based searchlight
definition. First, it uses only voxels that are classified
as grey matter based on an anatomical scan. Second, it uses
a geodesic distance metric to define neighbourhoods of
voxels on the cortical surface thus preventing selection of
voxels across sulci. We study here the influence of these
two factors on classification accuracy and on the spatial
specificity of the resulting information map. In our
example data set, participants pressed one of four fingers
while undergoing fMRI. We used MVPA to identify regions
in which local fMRI patterns can successfully discriminate
which finger was moved. We show that surface-based
information mapping is a more sensitive measure of local
information content, and provides better spatial
specificity. This makes surface-based information mapping a
useful technique for a data-driven analysis of information
representation in the cerebral cortex.
Surface-based information mapping reveals
crossmodal vision-action representations in human
parietal and occipitotemporal cortex
Oosterhof N, Wiggett A, Diedrichsen J, Tipper S, Downing P.
Journal of Neurophysiology (2010) 104(2), 1077-89.
Many lines of evidence point
to a tight linkage between the perceptual and motoric
representations of actions. Numerous demonstrations show
how the visual perception of an action engages compatible
activity in the observer’s motor system. This is seen
for both intransitive actions (e.g. in the case of
unconscious postural imitation) and for transitive actions
(e.g. grasping an object). While the discovery of
“mirror neurons” in macaques has inspired
explanations of these processes in human action behaviours,
the evidence for areas in the human brain that similarly
form a crossmodal visual/motor representation of actions
remains incomplete. To address this, in the present study,
participants performed and observed hand actions while
being scanned with fMRI. We took a data-driven approach by
applying whole-brain information mapping using a
multi-voxel pattern analysis (MVPA) classifier, performed
on reconstructed representations of the cortical surface.
The aim was to identify regions in which local voxel-wise
patterns of activity can distinguish among different
actions, across the visual and motor domains. Experiment 1
tested intransitive, meaningless hand movements, while
Experiment 2 tested object-directed actions (all
right-handed). Our analyses of both experiments revealed
crossmodal action regions in the lateral occipitotemporal
cortex (bilaterally) and in the left postcentral
gyrus/anterior parietal cortex. Furthermore, in Experiment
2 we identified a gradient of bias in the patterns of
information in the left hemisphere postcentral / parietal
region. The postcentral gyrus carried more information
about the effectors used to carry out the action (fingers
vs whole hand), while anterior parietal regions carried
more information about the goal of the action (lift vs
punch). Taken together, these results provide evidence for
common neural coding in these areas of the visual and motor
aspects of actions, and demonstrate further how MVPA can
contribute to our understanding of the nature of
distributed neural representations.
Functional characterization of the
extrastriate body area based on the N1 ERP
Taylor J, Roberts M, Downing P, Thierry G.
Brain and Cognition (2010) 73(3).
Electrophysiological and functional
neuroimaging evidence points to the existence of neural
populations that respond strongly and selectively to the
appearance of the human body and its parts. However, the
relationship between ERP and fMRI markers of these
populations remains unclear. Here we used a previously
identified functional dissociation between two
body-selective regions identified with fMRI (extrastriate
body area or EBA; fusiform body area or FBA) in order to
better understand the source of a body-selective N1 ERP
component. Specifically, we compared the magnitude of the
N1 elicited by images of fingers, hands, arms and bodies to
that obtained for hierarchically-matched control stimuli.
We found close agreement between the pattern of body-part
selectivity exhibited by N1, and the pattern of BOLD
selectivity elicited in a previous study by the same type
of stimuli in EBA (in contrast to FBA). We interpret these
findings as evidence for EBA as the primary generator of
the body selective N1 component. Our results are an example
of the use of functional criteria to distinguish among the
possible neural sources of ERP markers.
fMRI-adaptation studies of viewpoint
tuning in the extrastriate and fusiform body areas
Taylor J, Wiggett A, Downing P
Journal of Neurophysiology (2010).
People are easily able to perceive the human body
across different viewpoints, but the neural mechanisms
underpinning this ability are currently unclear. In three
experiments, we used functional MRI (fMRI) adaptation to
study the view-invariance of representations in two
cortical regions that have previously been shown to be
sensitive to visual depictions of the human body--the
extrastriate and fusiform body areas (EBA and FBA). The
BOLD response to sequentially presented pairs of bodies was
treated as an index of view invariance. Specifically, we
compared trials in which the bodies in each image held
identical poses (seen from different views) to trials
containing different poses. EBA and FBA adapted to
identical views of the same pose, and both showed a
progressive rebound from adaptation as a function of the
angular difference between views, up to approximately 30
degrees. However, these adaptation effects were eliminated
when the body stimuli were followed by a pattern mask.
Delaying the mask onset increased the response (but not the
adaptation effect) in EBA, leaving FBA unaffected. We
interpret these masking effects as evidence that
view-dependent fMRI adaptation is driven by later waves of
neuronal responses in the regions of interest. Finally, in
a whole brain analysis, we identified an anterior region of
the left inferior temporal sulcus (l-aITS) that responded
linearly to stimulus rotation, but showed no selectivity
for bodies. Our results show that body-selective cortical
areas exhibit a similar degree of view-invariance as other
object-selective areas--such as the lateral
occipitotemporal area (LO) and posterior fusiform gyrus.
Dissociation of extrastriate body- and
biological-motion selective areas by manipulation of
visual-motor congruency
Kontaris I, Wiggett A, Downing P.
Neuropsychologia (2009) 47(14):3118-24.
To date, several posterior brain regions have been
identified that play a role in the visual perception of
other people and their movements. The aim of the present
study is to understand how these areas may be involved in
relating body movements to their visual consequences. We
used fMRI to examine the extrastriate body area (EBA), the
fusiform body area (FBA), and an area in the posterior
superior temporal sulcus (pSTS) that responds to patterns
of human biological motion. Each area was localized in
individual participants with independent scans. In the main
experiment, participants performed and/or viewed simple,
intransitive hand actions while in the scanner. An
MR-compatible camera with a near-egocentric view of the
participant's hand was used to manipulate the relationship
between motor output and the visual stimulus. Participants'
only view of their hands was via this camera. In the
Compatible condition, participants viewed their own live
hand movements projected onto the screen. In the
Incompatible condition, participants viewed actions that
were different from the actions they were executing. In
pSTS, the BOLD response in the Incompatible condition was
significantly higher than in the Compatible condition.
Further, the response in the Compatible condition was below
baseline, and no greater than that found in a control
condition in which hand actions were performed without any
visual input. This indicates a strong suppression in pSTS
of the response to the visual stimulus that arises from
one's own actions. In contrast, in EBA and FBA, we found a
large but equivalent response to the Compatible and
Incompatible conditions, and this response was the same as
that elicited in a control condition in which hand actions
were viewed passively, with no concurrent motor task. These
findings indicate that, in contrast to pSTS, EBA and FBA
are decoupled from motor systems. Instead we propose that
their role is limited to perceptual analysis of
body-related visual input.
Animate and inanimate objects in human
visual cortex: evidence for task-independent category effects
Wiggett A, Pritchard I, Downing P.
Neuropsychologia (2009) 47(14):3111-7.
Evidence from neuropsychology suggests that the distinction
between animate and inanimate kinds is fundamental to human
cognition. Previous neuroimaging studies have reported that
viewing animate objects activates ventrolateral visual
brain regions, whereas inanimate objects activate
ventromedial regions. However, these studies have typically
compared only a small number of animate and inanimate kinds
(e.g. animals and tools) and some evidence indicates that
task demands determine whether these effects occur at all.
In the current study we test whether a lateral-medial
animacy bias is evident across a variety of stimuli, and
across different tasks (matching two stimuli at a general,
intermediate and exemplar level). Images of objects were
presented sequentially in pairs, and match/mismatch
judgements were made at different levels in different
scans. The fMRI data showed ventrolateral activation for
animate objects and ventromedial activation for inanimate
objects. Additional analyses within these regions revealed
no main effect of task, and no interactions between task
and animacy. Furthermore, there were no subpopulations of
voxels in any of the regions of interest that showed a
significant task by animacy interaction. We conclude that
ventral animate/inanimate category biases do not always
depend on top-down task orientation. Furthermore, we
consider whether the animate and inanimate activations
reflect biases in the non-preferred responses of strongly
category-selective regions such as the fusiform face area
or the parahippocampal place area.
Material-independent and material-specific activation in
fMRI after perceptual learning.
Mundy M, Honey R, Downing P, Wise R, Graham K.
Neuroreport (2009) 20(16):1397-401.
The schedule of exposure to similar stimuli contributes to
the degree of perceptual learning, over and above the amount
of exposure, across a variety of species and stimulus types. In an
event-related functional MRI study, investigating schedule
and stimulus effects in perceptual learning, we found that
intermixed presentation (A, B, A, B, ...) resulted in better
subsequent discrimination than blocked presentation (C, C, ...,
D, D, ...) for face and checkerboard stimuli, despite being
matched for the number of exposures. Exposure schedule
resulted in differential activation in the same early
visual regions for both types of stimuli. There was evidence
of material-specific activation in the fusiform face area
for faces but not for checkerboards, suggesting that
material-specific mechanisms are recruited alongside more
material-independent mechanisms in perceptual learning.
Three recent comment pieces:
Visual Neuroscience: A hat-trick for
Current Biology. 2009; 19(4): R160-2.
(Comment on: "Triple dissociation of faces, bodies, and
objects in extrastriate cortex" by David Pitcher et al.)
Face Perception: Broken into parts
Current Biology. 2007; 17(20): R888-9.
(Comment on: "TMS evidence for the involvement of the
right occipital face area in early face
processing." by David Pitcher et al.)
The face network: overextended?
Wiggett AJ, Downing P
NeuroImage. 2008; 40(2): 420-2.
(Comment on: "Let's face it: It's a cortical network"
by Alumit Ishai)
The neural basis of visual body perception
Peelen M, Downing P
Nature Reviews Neuroscience. 2007; 8(8) 636-48.
The human body, like the human face, is a rich source of
socially relevant information about other individuals.
Evidence from studies of both humans and non-human primates
points to focal regions of the higher-level visual cortex
that are specialized for the visual perception of the body.
These body-selective regions, which can be dissociated from
regions involved in face perception, have been implicated
in the perception of the self and the 'body schema', the
perception of others' emotions and the understanding of
others' actions.
fMRI analysis of body and body part
representations in the extrastriate and fusiform body areas
Taylor J, Wiggett A, Downing P.
Journal of Neurophysiology. 2007; 98:1626-33.
This study examined the contributions of two
previously-identified brain regions -- the extrastriate and
fusiform body areas (EBA and FBA) -- to the visual
representation of the human form. Specifically we measured
in these two areas the magnitude of fMRI response as a
function of the amount of the human figure that is visible
in the image, in the range from a single finger to the
entire body. A second experiment determined the selectivity
of these regions for body and body part stimuli relative to
closely-matched control images. We found a gradual increase
in the selectivity of the EBA as a function of the amount
of body shown. In contrast, the FBA showed a step-like
function, with no significant selectivity for individual
fingers or hands. In a third experiment we demonstrate that
the response pattern seen in EBA does not extend to
adjacent motion-selective area hMT. We propose an
interpretation of these results by analogy to nearby
face-selective regions OFA (occipital face area) and FFA
(fusiform face area). Specifically, we hypothesize that the
EBA analyzes bodies at the level of parts (as has been
proposed for faces in the OFA), whereas FBA (by analogy to
FFA) may have a role in processing the configuration of
body parts into wholes.
Controlling for inter-stimulus perceptual
variance abolishes N170 face selectivity.
Thierry G, Martin C, Downing P, Pegna A
Nature Neuroscience. 2007; 10(4): 505-11
Establishing when and how the human brain differentiates
between object categories is key to understanding visual
cognition. Event-related potential (ERP) investigations
have led to the consensus that faces selectively elicit a
negative wave peaking 170 ms after presentation, the
'N170'. In such experiments, however, faces are nearly
always presented from a full front view, whereas other
stimuli are more perceptually variable, leading to
uncontrolled interstimulus perceptual variance (ISPV).
Here, we compared ERPs elicited by faces, cars and
butterflies while, for the first time, controlling ISPV (low
or high). Surprisingly, the N170 was sensitive, not to
object category, but to ISPV. In addition, we found
category effects independent of ISPV 70 ms earlier than has
been generally reported. These results demonstrate early
ERP category effects in the visual domain, call into
question the face selectivity of the N170 and establish
ISPV as a critical factor to control in experiments relying
on multitrial averaging.
Organization of felt and seen pain
responses in anterior cingulate cortex.
Morrison I, Downing P
Neuroimage. 2007; 37(2): 642-51.
Previous neuroimaging studies comparing pain observation
with directly-experienced pain have shown conjoint
activations in the cingulate cortex between felt and seen
pain. However, whereas this phenomenon may be due to the
functional-anatomical overlap of a shared neural substrate,
it may also reflect neighboring but distinct activations
for felt and seen pain respectively, the co-localization of
which is made more likely in group-averaged,
spatially-smoothed data. This study explores responses to
felt and seen pain, and their spatial overlap, on
unsmoothed data from single subjects. Significant
activation for the statistical conjunction of felt and seen
pain effects was present both at the group level and in six
of the eleven individual subjects. However, although each
subject showed distinct felt and seen pain areas in the
cingulate, a conjunction between these activations was not
found in every individual. Among those that showed a
felt-seen pain conjunction, its location along the gyrus
was variable and the conjunction always fell in a spatially
intermediate location between the felt and seen pain
activations. These results suggest that the BOLD signal
conjunction originates from the intersection of adjacent
and partially distinct activations—which do not
necessarily always overlap—rather than from a single
neural population coding equally for felt and seen pain.
This has implications for the interpretation of BOLD data
in addressing "mirrorlike" activations in general, whether
in action-related or pain-related areas.
fMRI investigation of overlapping lateral
occipitotemporal activations using multi-voxel pattern
analysis
Downing P, Wiggett A, Peelen M
Journal of Neuroscience. 2007; 27:226-233.
Several functional areas are proposed to reside in human
lateral occipitotemporal cortex, including motion-selective
hMT, object-form selective LO, and body-selective EBA.
Indeed several fMRI studies have reported significant
activation overlap among these regions. The standard
interpretation of this overlap would be that the common
areas of activation reflect engagement of common neural
systems. Alternatively, motion, object form, and body form
may be processed independently within this general region.
To distinguish these possibilities, we first analysed the
lateral occipitotemporal responses to motion, objects,
bodies, and body parts with whole-brain group-average
analyses and within-subjects functional region of interest
(ROI) analyses. The activations elicited by these stimuli,
each relative to a matched control, overlapped
substantially in the group analysis. When hMT, LO, and EBA
were defined functionally within subjects, each ROI in each
hemisphere (except right hemisphere hMT) showed significant
selectivity for motion, intact objects, bodies, and body
parts, even though only the peak voxel of each region was
tested. In contrast, multi-voxel analyses of variations in
selectivity patterns revealed that visual motion, object
form, and the form of the human body elicited three
relatively independent patterns of fMRI activity in lateral
occipitotemporal cortex. Multi-voxel approaches, in
contrast to other methods, can reveal the functional
significance of overlapping fMRI activity in extrastriate
cortex and, by extension, elsewhere in the brain.
The sight of others' pain modulates motor
processing in human cingulate cortex
Peelen M, Downing P
Cerebral Cortex. 2007; 17(9):2214-22.
Neuroimaging evidence has shown that a network
including cingulate cortex and bilateral insula responds to
both felt and seen pain. Of these, dorsal anterior
cingulate and midcingulate areas are involved in preparing
context-appropriate motor responses to painful situations,
but it is unclear whether the same holds for observed pain.
Participants in this fMRI study viewed short animations
depicting a noxious implement (e.g., sharp knives) or an
innocuous implement (e.g., butter knives) striking a person's
hand. Participants were required to execute or suppress
button-press responses depending on whether the implements
hit or missed the hand. The combination of the implement's
noxiousness and whether it contacted the hand strongly
affected reaction times, with the fastest responses to
noxious-hit trials. BOLD signal changes mirrored this
behavioral interaction with increased activation during
noxious-hit trials only in midcingulate, dorsal anterior,
and dorsal posterior cingulate regions. Crucially, the
activation in these cingulate regions also depended on
whether the subject made an overt motor response to the
event, linking their role in pain observation to their role
in motor processing. This study also suggests a functional
topography in medial premotor regions implicated in "pain
empathy", with adjacent activations relating to
pain-selective and motor-selective components.
Using multi-voxel pattern analysis of fMRI
data to interpret overlapping functional activations
Peelen M, Downing P
comment in Trends in Cognitive Sciences. 2007; 11:4-5.
Norman et al. [TICS, 2006] recently
summarized the use of multi-voxel pattern analysis (MVPA)
of fMRI data. They provide examples showing that patterns
of activation across a set of voxels can contain far more
information about mental states than the more
typically-used univariate approach. Patterns of fMRI
activation can be used to discriminate cognitive states
(sometimes called ‘mind reading’), to relate
brain activity to behavior, and to clarify the structure of
neural representations. Here, we point out an additional
use of MVPA: its ability to separate overlapping functional
activations.
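The separation logic referred to here can be illustrated with a minimal simulation (a hypothetical sketch, not the authors' analysis code; all voxel data below are random numbers): two conditions that activate the same region on average may have voxel-wise selectivity patterns that are strongly correlated, suggesting a shared substrate, or uncorrelated, suggesting distinct but spatially overlapping populations.

```python
import numpy as np

# Simulated per-voxel selectivity maps (condition minus control) for a
# single region of interest. All values are synthetic.
rng = np.random.default_rng(0)
n_vox = 500

shared = rng.normal(size=n_vox)                        # a common neural source
sel_a = shared + 0.1 * rng.normal(size=n_vox)          # e.g. body selectivity
sel_b_overlap = shared + 0.1 * rng.normal(size=n_vox)  # driven by the same voxels
sel_b_indep = rng.normal(size=n_vox)                   # driven by different voxels

def pattern_corr(x, y):
    """Pearson correlation between two voxel-wise selectivity maps."""
    return float(np.corrcoef(x, y)[0, 1])

print(pattern_corr(sel_a, sel_b_overlap))  # high: consistent with a shared substrate
print(pattern_corr(sel_a, sel_b_indep))    # near zero: overlapping but distinct
```

Both pairs of conditions would show "overlap" in a univariate group map, yet only the correlation across voxels distinguishes a common representation from adjacent, independent ones.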
Response-specific effects of pain
observation on motor behavior
Morrison I, Poliakoff E, Gordon L, Downing P
Cognition. 2007; 104(2): 407-16.
How does seeing a painful event happening to someone else
influence the observer's own motor system? To address this
question, we measured simple reaction times following
videos showing noxious or innocuous implements contacting
corporeal or noncorporeal objects. Key releases in a
go/nogo task were speeded, and key presses slowed, after
subjects saw a video of a needle pricking a fingertip. No
such effect was seen when the observed hand was replaced by
a sponge, nor when the needle was replaced by a cotton bud.
These findings demonstrate that pain observation modulates
the motor system by speeding withdrawal movements and
slowing approach movements of the finger. This illustrates
a basic mechanism by which visual information about pain is
used to facilitate appropriate behavioral responses.
An event-related potential component
sensitive to images of the human body
Thierry G, Pegna A, Dodds C, Roberts M, Basan S, Downing P
Neuroimage. 2006; 32:871-9.
One of the critical functions of vision is
to provide information about other individuals.
Neuroimaging experiments examining the cortical regions
that analyze the appearance of other people have found
partially overlapping networks that respond selectively to
human faces and bodies. In event-related potential (ERP)
studies, faces systematically elicit a negative component
peaking 170 ms after presentation – the N170. To
characterize the electrophysiological response to human
bodies, we compared the ERPs elicited by faces, bodies, and
various control stimuli. In Experiment 1, a comparison of
ERPs elicited by faces, bodies, objects and places showed
that pictures of the human body (without the head) elicit a
negative component peaking at 190 ms (an N190). While
broadly similar to the N170, the N190 differs in both
spatial distribution and amplitude from the N1 components
elicited by faces, objects and scenes, and peaks
significantly later than the N170. The difference between
N190 and N170 was further supported using topographic
analyses of ERPs and source localization techniques. A
unique, stable map topography was found to characterize
human bodies between 130 and 230 ms. In Experiment 2, we
tested the four conditions from Experiment 1, as well as
intact and scrambled silhouettes and stick figures of the
human body. We found that intact silhouettes and stick
figures elicited significantly greater N190 amplitudes than
their scrambled counterparts. Thus the N190 generalizes to
some degree to schematic depictions of the human form.
Overall, our findings are consistent with intertwined, but
functionally distinct, neural representations of the human
face and body.
Patterns of fMRI Activity Dissociate
Overlapping Functional Brain Areas that Respond to
Biological Motion
Peelen M, Wiggett A, Downing P
Neuron. 2006; 49.
Accurate perception of the actions and
intentions of other people is essential for successful
interactions in a social environment. Several cortical
areas that support this process respond selectively in fMRI
to static and dynamic displays of human bodies and faces.
Here we apply pattern-analysis techniques to arrive at a
new understanding of the neural response to biological
motion. Functionally defined body-, face-, and
motion-selective visual areas all responded significantly
to “point-light” human motion. Strikingly,
however, only body selectivity was correlated, on a
voxel-by-voxel basis, with biological motion selectivity.
We conclude that (1) biological motion, through the process
of structure-from-motion, engages areas involved in the
analysis of the static human form; (2) body-selective
regions in posterior fusiform gyrus and posterior inferior
temporal sulcus overlap with, but are distinct from, face-
and motion-selective regions; (3) the interpretation of
region-of-interest findings may be substantially altered
when multiple patterns of selectivity are considered.
The Role of the Extrastriate Body Area in Action Perception
Downing P, Peelen M, Wiggett A, Tew B
Social Neuroscience. 2006; 1(1), 52-62.
Numerous cortical regions respond to aspects of the human
form and its actions. What is the contribution of the
extrastriate body area (EBA) to this network? In
particular, is the EBA involved in constructing a dynamic
representation of observed actions? We scanned 16
participants with fMRI while they viewed two kinds of
stimulus sequences. In the coherent condition, static
frames from a movie of a single, intransitive whole-body
action were presented in the correct order. In the
incoherent condition, a series of frames from multiple
actions (involving one actor) were presented. ROI analyses
showed that the EBA, unlike area MT+ and the posterior
superior temporal sulcus, responded more to the incoherent
than to the coherent conditions. Whole brain analyses
revealed increased activation to the coherent sequences in
parietal and frontal regions that have been implicated in
the observation and control of movement. We suggest that
the EBA response adapts when succeeding images depict
relatively similar postures (coherent condition) compared
to relatively different postures (incoherent condition). We
propose that the EBA plays a unique role in the perception
of action, by representing the static structure, rather
than dynamic aspects, of the human form.
Domain specificity in visual cortex
Downing P, Chan A, Peelen M, Dodds C, Kanwisher N
Cerebral Cortex. 2006; 16 (10), 1453-61.
We investigated the prevalence and specificity of
category-selective regions in human visual cortex. In the
broadest survey to date of category selectivity in visual
cortex, twelve participants were scanned with fMRI while
viewing scenes and 19 different object categories in a
blocked design experiment. As expected, we found
selectivity for faces in the fusiform face area (FFA), for
scenes in the parahippocampal place area (PPA), and for
bodies in the extrastriate body area (EBA). In addition, we
describe three main new findings. First, evidence for the
selectivity of the FFA, PPA, and EBA was strengthened by
the finding that each area responded significantly more
strongly to its preferred category than to the next
most-effective of the remaining 19 stimulus categories
tested. Second, a region in the middle temporal gyrus that
has been reported to respond significantly more strongly to
tools than to animals, did not respond significantly more
strongly to tools than to other non-tool categories (such
as fruits and vegetables), casting doubt on the
characterization of this region as tool-selective. Finally,
we did not find any new regions in the occipitotemporal
pathway that were strongly selective for other categories.
Taken together, these results demonstrate both the strong
selectivity of a small number of regions, and the scarcity
of such regions in visual cortex.
Is the extrastriate body area involved in motor actions?
Peelen, M., & Downing, P.
Nature Neuroscience. 2005, Feb; 8(2): 125.
Astafiev et al. report that unseen, visually-guided motor
acts activate the extrastriate body area (EBA). This
finding has potential implications for understanding the
interactions between motor and perceptual systems, and
suggests a mechanism by which the visual stimulation
resulting from one’s own motor acts is distinguished
from that produced by others. We replicated Astafiev et
al.’s experiment and found, in line with their
findings, action-related modulation in EBA. However, a
closer look showed that the region involved in visually
guided motor acts is distinct from EBA, and that
action-related modulation and body-selectivity are
dissociable.
Within-Subject Reproducibility of
Category-Specific Visual Activation with Functional MRI
Peelen MV, Downing PE.
Hum Brain Mapp. 2005, 25:402-8
The present study used fMRI to investigate the
within-subject reproducibility of activation in higher
level, category-specific visual areas in order to validate
the functional localization approach widely used for these
areas. The brain areas we investigated included the
extrastriate body area (EBA), which responds selectively to
human bodies, the fusiform face area (FFA) and the
occipital face area (OFA), which respond selectively to
faces, and the parahippocampal place area (PPA), which
responds selectively to places and scenes. All 6 subjects
showed significant bilateral activation in the four areas.
Reproducibility was very high for all areas both within a
scanning session and between scanning sessions separated by
3 weeks. Within sessions, the mean distance between peak
voxels of the same area localized by using different
functional runs was 1.5 mm. The mean distance between peak
voxels of areas localized in different sessions was 2.9 mm.
Functional reproducibility, as expressed by the stability
of T-values across sessions, was high for both
within-session and between-session comparisons. We conclude
that, within subjects, high-level category-specific visual
areas can be localized robustly across scanning sessions.
The effect of viewpoint on body
representation in the extrastriate body area.
Chan, A W-Y., Peelen, M., & Downing, P.
Neuroreport. 2004 Oct 25;15(15):2407-10.
Functional neuroimaging has revealed several brain regions
that are selective for the visual appearance of others, in
particular the face. More recent evidence points to a
lateral temporal region that responds to the visual
appearance of the human body (extrastriate body area or
EBA). We tested whether this region distinguishes between
egocentric and allocentric views of the self and other
people. EBA activity increased significantly for
allocentric relative to egocentric views in the right
hemisphere, but was not influenced by identity. Whole-brain
analyses revealed several regions that were influenced by
viewpoint or identity. Modulation of EBA activity by
viewpoint was modest relative to modulation by stimulus
class. We propose that the EBA plays a relatively early role
in social vision.
Selectivity for the human body in the fusiform gyrus
Peelen, M., and Downing, P.
Journal of Neurophysiology. 2005 Jan;93(1):603-8.
Functional neuroimaging studies have revealed human brain
regions, notably in the fusiform gyrus, that respond
selectively to images of faces as opposed to other kinds of
objects. Here we use fMRI to show that the mid-fusiform
gyrus responds with nearly the same level of selectivity to
images of human bodies without faces, relative to tools and
scenes. In a group-average analysis (N=22), the fusiform
activations identified by contrasting faces vs. tools and
bodies vs. tools are very similar. Analyses of
within-subjects regions of interest, however, show that the
peaks of the two activations occupy close but distinct
locations. In a second experiment, we find that the
body-selective fusiform region, but not the face-selective
region, responds more to stick figure depictions of bodies
than to scrambled controls. This result further
distinguishes the two foci, and confirms that the
body-selective response generalises to abstract image
formats. These results challenge accounts of the
mid-fusiform gyrus that focus solely on faces, and suggest
that this region contains multiple distinct
category-selective neural representations.
Competition in visual working memory for
control of search.
Downing, PE, and Dodds, CM
Visual Cognition. 2004 Jun; 11(6): 689-703.
Recent perspectives on selective attention posit a central
role for visual working memory (VWM) in the top-down
control of attention. According to the biased-competition
model (Desimone & Duncan, 1995), active maintenance of
an object in VWM gives matching (Downing, 2000) or related
(Moores, Laiti, & Chelazzi, 2003) objects in the
environment a competitive advantage over other objects in
gaining access to limited processing resources.
Participants in this study performed a visual search task
while simultaneously maintaining a second item in VWM. On
half of the trials, this item appeared as a distractor item
in the search array. We found no evidence that this item
interferes with successful selection of the search target,
as measured with response time in a target detection task
and accuracy in a target discrimination task. These results
are consistent with two general models: One in which a
representation of the current task biases the competition
between items in a unitary VWM, or one in which VWM is
fractionated to allow for maintenance of critical items
that are not immediately relevant to the task.
Bodies capture attention when nothing is expected
Downing PE, Bray D, Rogers J, Childs C.
Cognition. 2004 Aug;93(1):B27-38.
Functional neuroimaging research has shown that certain
classes of visual stimulus selectively activate focal
regions of visual cortex. Specifically, cortical areas that
generally and selectively respond to faces (Kanwisher, N.,
McDermott, J., & Chun, M. M. (1997). The fusiform face
area: a module in human extrastriate cortex specialized for
face perception. Journal of Neuroscience, 17(11),
4302-4311; Puce, A., Allison, T., Asgari, M., Gore, J. C.,
& McCarthy, G. (1996). Differential sensitivity of
human visual cortex to faces, letterstrings, and textures:
a functional magnetic resonance imaging study. Journal of
Neuroscience, 16(16), 5205-5215.) and to the human body
(Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N.
(2001). A cortical area selective for visual processing of
the human body. Science, 293(5539), 2470-2473.) have
recently been described using fMRI. A parallel body of
research has focused on the ability of faces to "capture"
the focus of attention, compared to other kinds of objects
(Lavie, N., Ro, T., & Russell, C. (2003). The role of
perceptual load in processing distractor faces.
Psychological Science, 14(5), 510-515; Ro, T., Russell, C.,
& Lavie, N. (2001). Changing faces: a detection
advantage in the flicker paradigm. Psychological Science,
12(1), 94-99; Vuilleumier, P. (2000). Faces call for
attention: evidence from patients with visual extinction.
Neuropsychologia, 38(5), 693-700.). The present study uses
Mack and Rock's "inattentional blindness" paradigm to
investigate whether unexpected, task-irrelevant human body
stimuli capture awareness when attention is occupied by a
primary task (Mack, A., & Rock, I. (1998).
Inattentional blindness. London: MIT Press). Silhouettes
and stick figures of human bodies, and silhouettes of
hands, were compared to control stimuli including object
silhouettes, object stick figures, and scrambled
silhouettes of bodies, body parts, and objects.
Participants were significantly better able to detect a
human figure relative to the control stimuli. These results
suggest that the human body, like the face, may be
prioritized for attentional selection. More generally, they
are consistent with the proposal that the visual system
assigns attentional priority to types of stimuli that are
also represented in strongly selective cortical regions.
Why does the gaze of others direct visual attention?
Downing PE, Dodds CM, Bray D.
Visual Cognition. 2004 11(1):71-79.
Viewing another person directing his or her gaze can
produce automatic shifts of covert visual attention in the
same direction. This holds true even when the task-relevant
target is much more likely to occur at the uncued location.
These findings, along with other evidence, have been taken
to suggest that gaze represents a “special”
stimulus – the foundation of a social cognition
system that can make inferences about the mental states of
other people. However, gaze-driven cueing effects could
simply be due to spatial compatibility between cue and
target. We compared the attentional effects of gaze shifts
to a face with the tongue extended laterally to the left or
right. When tongue direction was a non-predictive cue, we
found cueing effects from tongues that were
indistinguishable from those produced by gaze. However, in
contrast to previous findings with gaze, tongue cues did
not overcome a validity manipulation in which targets were
4 times more likely to appear at the uncued location. We
conclude that simple attentional cueing effects from gaze
may be better explained by spatial compatibility, and that
more complex, unique features of cueing from gaze may be
better indices into perceptual systems specialised for
gaze.
Viewpoint-specific scene representations in
human parahippocampal cortex.
Epstein R, Graham KS, Downing PE.
Neuron. 2003 Mar 6;37(5):865-76.
The "parahippocampal place area" (PPA) responds more
strongly in functional magnetic resonance imaging (fMRI) to
scenes than to faces, objects, or other visual stimuli. We
used an event-related fMRI adaptation paradigm to test
whether the PPA represents scenes in a viewpoint-specific
or viewpoint-invariant manner. The PPA responded just as
strongly to viewpoint changes that preserved intrinsic
scene geometry as it did to complete scene changes, but
less strongly to object changes within the scene. In
contrast, lateral occipital cortex responded more strongly
to object changes than to spatial changes. These results
demonstrate that scene processing in the PPA is viewpoint
specific and suggest that the PPA represents the
relationship between the observer and the surfaces that
define local space.
A cortical area selective for visual
processing of the human body.
Downing PE, Jiang Y, Shuman M, Kanwisher N.
Science. 2001 Sep 28;293(5539):2470-3.
Despite extensive evidence for regions of human visual
cortex that respond selectively to faces, few studies have
considered the cortical representation of the appearance of
the rest of the human body. We present a series of
functional magnetic resonance imaging (fMRI) studies
revealing substantial evidence for a distinct cortical
region in humans that responds selectively to images of the
human body, as compared with a wide range of control
stimuli. This region was found in the lateral
occipitotemporal cortex in all subjects tested and
apparently reflects a specialized neural system for the
visual perception of the human body.
Testing cognitive models of visual
attention with fMRI and MEG.
Downing P, Liu J, Kanwisher N.
Neuroimaging techniques can be used not only to identify
the neural substrates of attention, but also to test
cognitive theories of attention. Here we consider four
classic questions in the psychology of visual attention:
(i) Are some 'special' classes of stimuli (e.g. faces)
immune to attentional modulation?; (ii) What are the
information units on which attention operates?; (iii) How
early in stimulus processing are attentional effects
observed?; and (iv) Are common mechanisms involved in
different modes of attentional selection (e.g. spatial and
non-spatial selection)? We describe studies from our
laboratory that illustrate the ways in which fMRI and MEG
can provide key evidence in answering these questions. A
central methodological theme in many of our fMRI studies is
the use of analyses in which the activity in certain
functionally-defined regions of interest (ROIs) is used to
test specific cognitive hypotheses. An analogous
sensor-of-interest (SOI) approach is applied to MEG. Our
results include: evidence for the modulation of face
representations by attention; confirmation of the
independent contributions of object-based and
location-based selection; evidence for modulation of face
representations by non-spatial selection within the first
170 ms of processing; and implication of the intraparietal
sulcus in functions general to spatial and non-spatial
selection.
Interactions between visual working memory
and selective attention.
Downing PE.
Psychol Sci. 2000 Nov;11(6):467-73.
The relationship between working memory and selective
attention has traditionally been discussed as operating in
one direction: Attention filters incoming information,
allowing only relevant information into short-term
processing stores. This study tested the prediction that
the contents of visual working memory also influence the
guidance of selective attention. Participants held a sample
object in working memory on each trial. Two objects, one
matching the sample and the other novel, were then
presented simultaneously. As measured by a probe task,
attention shifted to the object matching the sample. This
effect generalized across object type, attentional-probe
task, and working memory task. In contrast, a matched task
with no memory requirement showed the opposite pattern,
demonstrating that this effect is not simply due to
exposure to the sample. These results confirm a specific
prediction about the influence of working memory contents
on the guidance of attention.
fMRI evidence for objects as the units of attentional
selection
O'Craven KM, Downing PE, Kanwisher N.
Nature. 1999 Oct 7;401(6753):584-7.
Contrasting theories of visual attention emphasize
selection by spatial location, visual features (such as
motion or colour) or whole objects. Here we used functional
magnetic resonance imaging (fMRI) to test key predictions
of the object-based theory, which proposes that
pre-attentive mechanisms segment the visual array into
discrete objects, groups, or surfaces, which serve as
targets for visual attention. Subjects viewed stimuli
consisting of a face transparently superimposed on a house,
with one moving and the other stationary. In different
conditions, subjects attended to the face, the house or the
motion. The magnetic resonance signal from each subject's
fusiform face area, parahippocampal place area and area
MT/MST provided a measure of the processing of faces,
houses and visual motion, respectively. Although all three
attributes occupied the same location, attending to one
attribute of an object (such as the motion of a moving
face) enhanced the neural representation not only of that
attribute but also of the other attribute of the same
object (for example, the face), compared with attributes of
the other object (for example, the house). These results
cannot be explained by models in which attention selects
locations or features, and provide physiological evidence
that whole objects are selected even when only one visual
attribute is relevant.
The line-motion illusion: attention or impletion?
Downing PE, Treisman AM.
J Exp Psychol Hum Percept Perform. 1997
When a brief lateral cue precedes an instantaneously
presented horizontal line, observers report a sensation of
motion in the line propagating from the cued end toward the
uncued end. This illusion has been described as a measure
of the facilitatory effects of a visual attention gradient
(O. Hikosaka, S. Miyauchi, & S. Shimojo, 1993a).
Evidence in the present study favors, instead, an account
in which the illusion is the result of an impletion process
that fills in interpolated events after the cue and the
line are linked as successive states of a single object in