Computational Neurobiology of Sensory Representations 
through a layer of hidden units to an output layer 
that represented the perceived egocentric depth 
of the object, as determined by psychophysical 
measurements. The disparity tuning curves of the 
hidden units were similar to those of the input 
units, and varying the vergence did not change 
the shape of the tuning curves; however, the ver-
gence did modulate their amplitude.
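This multiplicative interaction can be illustrated with a minimal numerical sketch. The tuning-curve shape, gain function, and all parameter values below are our own illustrative assumptions, not those of the trained network; the sketch only shows the qualitative property that vergence scales the amplitude of a disparity tuning curve without changing its shape.

```python
import numpy as np

# Illustrative gain-field sketch (all functional forms and parameters are
# assumptions): a hidden unit's disparity tuning keeps its shape while
# vergence multiplicatively scales its amplitude.

def disparity_tuning(disparity, preferred=0.0, width=0.5):
    """Gaussian disparity tuning curve of a hidden unit."""
    return np.exp(-((disparity - preferred) ** 2) / (2 * width ** 2))

def vergence_gain(vergence, slope=0.8, offset=0.5):
    """Monotonic gain that depends on vergence angle (illustrative form)."""
    return offset + slope * vergence

def hidden_unit_response(disparity, vergence):
    # Multiplicative modulation: tuning shape fixed, amplitude scaled.
    return vergence_gain(vergence) * disparity_tuning(disparity)

disparities = np.linspace(-2.0, 2.0, 101)
near = hidden_unit_response(disparities, vergence=0.2)
far = hidden_unit_response(disparities, vergence=1.0)

# Same preferred disparity (same shape) at both vergence angles ...
assert np.argmax(near) == np.argmax(far)
# ... but a different response amplitude.
assert far.max() > near.max()
```

A downstream unit that compares responses across vergence angles can then read out absolute distance, even though each tuning curve alone only signals relative disparity.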
Similar "gain fields" for conjugate eye move- 
ments have been observed in the posterior pari- 
etal cortex, a region of the brain that is essential 
for our internal representation of external space. 
The predictions of this model can be tested by 
recording single-unit activity in the cerebral cor- 
tex of awake and behaving monkeys, and several 
laboratories are pursuing these experiments. Pre- 
liminary results support the model. 
Neurons in the early stages of visual processing 
in the cerebral cortex are organized in retinoto- 
pic maps. Thus visual features are arranged in a 
system of coordinates that is based on the posi- 
tion of features in the visual field of the retina 
rather than on the absolute position of features in 
space. Psychological experiments provide fur- 
ther evidence that simple visual features such as 
orientation and direction of motion are organized 
according to retinal coordinates. At later stages of 
visual processing, the receptive fields of neurons 
become very large, and in the posterior parietal
cortex, containing areas important for sensory- 
motor coordination, the visual responses of neu- 
rons are modulated by both eye and head posi- 
tion. A previous model of the parietal cortex 
showed that the modulation of the neurons ob- 
served there is consistent with a distributed spa- 
tial transformation from retinal to spatial coordi- 
nates. Our model of the transformation from 
disparity to distance by vergence modulation can 
be considered a generalization of this model to 
include the third dimension of space. 
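The core computational claim — that eye-position gain modulation of retinotopic units suffices for a distributed coordinate transformation — can be sketched numerically. The construction below is ours, not the published parietal model: unit shapes, the two opposed gain pools, and the linear readout are all assumptions chosen to make the point self-contained.

```python
import numpy as np

# Sketch (our construction, not the published model): retinotopic units
# multiplicatively modulated by eye position carry enough information for
# a simple linear readout of head-centered position, since for one spatial
# dimension spatial = retinal + eye.

rng = np.random.default_rng(0)

retinal = rng.uniform(-1, 1, size=500)   # retinal positions (arbitrary units)
eye = rng.uniform(-1, 1, size=500)       # eye positions
spatial = retinal + eye                  # head-centered target

centers = np.linspace(-1.2, 1.2, 25)     # Gaussian retinotopic tuning centers

def population_response(ret, eye_pos):
    """Two pools of gain-field units: same retinal tuning, opposed eye gains."""
    tuning = np.exp(-((ret[:, None] - centers[None, :]) ** 2) / (2 * 0.15**2))
    gain_up = (1 + eye_pos[:, None]) / 2    # gain grows with eye position
    gain_down = (1 - eye_pos[:, None]) / 2  # gain shrinks with eye position
    return np.hstack([tuning * gain_up, tuning * gain_down])

R = population_response(retinal, eye)

# A linear readout fitted by least squares recovers spatial position well,
# even though no single unit encodes it explicitly.
w, *_ = np.linalg.lstsq(R, spatial, rcond=None)
mean_err = np.abs(R @ w - spatial).mean()
assert mean_err < 0.1
```

The representation is distributed in the sense the text describes: spatial position is recoverable only from the population, and the same gain-field scheme extends to the vergence/depth case by adding the third dimension.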
All these models assume that the responses of 
neurons in the early stages of visual processing in 
cerebral cortex depend only on retinal informa- 
tion and not on the direction of gaze. Several labo- 
ratories have now reported that eye position does 
in fact modulate the visual response of many neu- 
rons in early stages of visual processing. Further- 
more, this modulation appears to be qualitatively 
similar to that previously reported for neurons in 
the parietal cortex. These new findings suggest 
that transformations from retinal to spatial repre- 
sentations could be initiated much earlier than 
previously thought. 
We have used network models to study the 
consequences of incremental spatial transforma- 
tions in a feedforward hierarchy of cortical maps. 
Our model shows that it is possible for visual fea- 
tures to be encoded in spatial coordinates already 
at very early stages of visual processing. We call 
this new type of spatial map a retinospatiotopic 
representation and are exploring its counterin- 
tuitive properties. The model makes several sur- 
prising predictions that we are testing with per- 
ceptual experiments on human observers. 
The primate visual system is very good at com- 
plex motion-processing tasks such as tracking a 
moving object against a textured background 
under a variety of luminance conditions. In order 
to track a moving object, the visual system must 
integrate many local motion estimates from many 
neurons, each with limited spatial receptive 
fields. No single neuron has the information 
needed to estimate the velocity of the object. 
We have developed a simple model for motion 
processing in the visual areas of cortex that spe- 
cialize in representing motion. The model as- 
sumes two pools of filters at each location on the 
visual field: one pool computes estimates of mo- 
tion in a local region of the visual field, while the 
other estimates the relevance or reliability of 
each local motion estimate, based on the estimate 
itself and on additional information from the 
visual scene. Outputs from the second pool can 
"gate" the outputs from the first pool through a 
gain-control mechanism, before the local motion 
estimates are integrated to form more-global esti- 
mates. The proposed mechanism of gain control 
is consistent with measured responses of cortical 
cells under conditions of interfering motion of 
transparent stimuli. 
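The two-pool gating scheme can be made concrete with a small sketch. The variable names, the fraction of corrupted locations, and the reliability values below are our assumptions; in the model the reliability pool computes its signal from the local estimate and the scene, whereas here it is simply given.

```python
import numpy as np

# Schematic sketch of two-pool gain control (values are assumptions): one
# pool supplies local motion estimates, a second pool supplies a
# reliability signal that gates each estimate before global integration.

rng = np.random.default_rng(1)
true_velocity = 2.0
n_locations = 100

# Pool 1: local velocity estimates. Some locations are corrupted, e.g. by
# an interfering transparent stimulus, and carry no signal about the object.
reliable = rng.random(n_locations) < 0.7
local = np.where(reliable,
                 true_velocity + 0.1 * rng.standard_normal(n_locations),
                 rng.uniform(-5.0, 5.0, n_locations))

# Pool 2: one reliability estimate per location (here given, not computed).
gain = np.where(reliable, 1.0, 0.05)

# Gain control: gate each local estimate, then pool into a global estimate.
gated_estimate = np.sum(gain * local) / np.sum(gain)
naive_estimate = local.mean()  # ungated pooling, for comparison

# Gating keeps the global estimate close to the object's true velocity.
assert abs(gated_estimate - true_velocity) < abs(naive_estimate - true_velocity)
```

Ungated averaging lets the interfering motion signals pull the global estimate away from the object's velocity; down-weighting unreliable locations before integration is what the gain-control mechanism contributes.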
These models provide representations of ob- 
jects in space that are highly distributed. We also 
want to understand how these distributed repre- 
sentations can be used to direct the motor system 
to orient toward these objects. For example, mo- 
tion estimates can be used to direct the eyes to 
track moving objects, and distance estimates can 
be used to guide hand movements to reach out for 
objects. We are developing models of motor sys- 
tems in the brain that will complement these 
models of sensory processing. The models of mo- 
tor control are based on networks of neurons that 
include feedback connections, which makes 
them highly dynamic. New principles of neural 
processing may emerge as more-detailed dynami- 
cal properties of neurons are incorporated into 
these models. 
