Computational Neurobiology of Sensory Representations
Terrence J. Sejnowski, Ph.D. — Investigator
Dr. Sejnowski is also Professor at the Salk Institute for Biological Studies and Professor of Biology and Neuroscience at the University of California, San Diego. He received his B.S. degree in physics from Case Western Reserve University and his M.A. and Ph.D. degrees in physics from Princeton University. He was a postdoctoral fellow with Alan Gelperin in the Biology Department at Princeton and with Stephen Kuffler at Harvard Medical School, where he studied mechanisms of synaptic transmission. Dr. Sejnowski was a member of the faculty of the Biophysics Department at the Johns Hopkins University before moving to San Diego. He and Patricia Churchland have recently written The Computational Brain, a book on computational neuroscience.
We do not yet understand how the nervous system enables us to recognize objects, to learn new skills, and to plan actions. The discovery that single neurons in the visual system can be highly selective in responding to visual stimuli led to the view that the perception of complex objects could be directly linked to the activity of individual neurons. This possibility raises a number of questions, such as what degree of influence a single neuron can have on behavior and whether there are enough neurons in the brain to account for the large number of objects that can be perceived.
An alternative possibility relies on populations of neurons to represent perceptual states. On this account, the information essential to the representation of an object is distributed over a large population of neurons. It is difficult to imagine how a pattern of activity in a large number of neurons distributed widely throughout the brain could be used to recognize an object and serve as the input for motor actions. Computer models incorporating cellular information from single-cell recordings and constrained by psychological measurements of performance can help to organize these data and provide a conceptual framework for understanding distributed representations. Such models are being used to explore how the visual cortex represents the three-dimensional world, how this representation may arise during development, and how the information coded by these neurons might be used to coordinate actions such as eye movements.
The perception of depth depends upon a number of visual cues, but only one of them relies on the slight positional shift that occurs between the different viewpoints of the two eyes, called the image disparity. Neurons in the first cortical stage of vision are sensitive to disparity, and the development of this sensitivity is dependent on binocular vision during a critical period.
In the adult visual cortex, neurons are observed to be either dominated by input from one eye (monocular cells) or relatively balanced with input from both eyes (binocular cells). Furthermore, binocular cells tend to be stimulated maximally when images are in exact correspondence in both eyes, thus preferring zero disparity, and relatively monocular cells tend to prefer nonzero disparities. We have simulated the development of a layer of cortical cells receiving inputs from both eyes and show how such a relationship between ocularity and disparity might arise.
The key feature of our model is the use of correlations of activity both within each eye and between the eyes. We assume that two retinal cells close to each other will have more correlated activity than two cells far apart. Corresponding points in the two eyes will also tend to be correlated, since they will look, on average, at the same point in space. However, the correlation between the eyes will also be spread out by convergent and divergent eye movements. We can simulate visual development in two stages: prenatal, when the two retinae have essentially independent activities, and postnatal, when the eyes are open and have correlated activities. By varying the amount of development that occurs in the model before eye opening, we can show that a mixture of monocular and binocular cells arises with the observed relationship to disparity.
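In the spirit of such correlation-based development, a minimal sketch of the two-stage simulation might look as follows. The correlation widths, the learning rule (a linear Hebbian update with subtractive normalization to make the eyes compete), and all parameter values are illustrative assumptions, not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 40                                   # retinal positions per eye (assumed size)
x = np.arange(N)
d = np.abs(x[:, None] - x[None, :])      # pairwise retinal distances

def gauss(dist, sigma):
    return np.exp(-dist**2 / (2 * sigma**2))

# Within-eye correlations fall off with retinal distance.
Q_within = gauss(d, sigma=3.0)

# Between-eye correlations: absent before eye opening; after eye opening,
# corresponding points are correlated, but the correlation is spread out
# by convergent and divergent eye movements (hence the wider sigma).
Q_between_pre = np.zeros((N, N))
Q_between_post = 0.8 * gauss(d, sigma=6.0)

def grow(w, Q_between, steps, eta=0.01):
    """Hebbian growth dw ~ Q w with subtractive normalization,
    which makes the two eyes compete for synaptic strength."""
    Q = np.block([[Q_within, Q_between], [Q_between, Q_within]])
    for _ in range(steps):
        dw = Q @ w
        dw -= dw.mean()                  # conserve total synaptic strength
        w = np.clip(w + eta * dw, 0.0, 1.0)
    return w

w = 0.5 + rng.normal(0, 0.01, 2 * N)     # small random initial weights
w = grow(w, Q_between_pre, steps=300)    # prenatal: independent eyes
w = grow(w, Q_between_post, steps=300)   # postnatal: correlated eyes

left, right = w[:N].sum(), w[N:].sum()
print(f"ocular dominance index: {(left - right) / (left + right):+.2f}")
```

Varying the number of prenatal steps relative to the postnatal stage shifts the outcome between strongly monocular cells (index near +1 or -1) and binocular cells (index near 0), which is the manipulation described above.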
Disparity provides information about the relative positions of objects in space, but this cue is insufficient to recover the absolute distance of objects from the viewer. However, the distance of an object from the viewer can be computed by combining relative depth cues with other information, such as eye position. We have developed a network model to explore how the vergence of the two eyes (the angle between the two lines of sight) and the binocular disparity could be combined to represent the distance to an object.
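The geometry behind this combination can be made concrete. Assuming a simple symmetric viewing geometry, the two eyes and the fixation point form an isosceles triangle, and an object's horizontal disparity is the difference between the vergence angle and the angle the object subtends at the two eyes. The interocular separation, sign convention, and numbers below are illustrative:

```python
import numpy as np

I = 0.065  # interocular separation in meters (typical human value)

def fixation_distance(vergence):
    # Symmetric fixation: d = (I / 2) / tan(vergence / 2)
    return (I / 2) / np.tan(vergence / 2)

def absolute_distance(vergence, disparity):
    # The object subtends (vergence - disparity) at the two eyes
    # (disparity > 0 meaning farther than fixation, under this
    # sign convention), so the same triangle gives its distance.
    return (I / 2) / np.tan((vergence - disparity) / 2)

vergence = 2 * np.arctan((I / 2) / 0.5)          # eyes fixating at 0.5 m
print(f"fixation at {fixation_distance(vergence):.3f} m")
for arcmin in (-20, 0, 20):
    disparity = np.deg2rad(arcmin / 60)
    print(f"disparity {arcmin:+3d} arcmin -> "
          f"{absolute_distance(vergence, disparity):.3f} m")
```

Note that the same disparity corresponds to different absolute distances at different vergence angles, which is why the network must combine the two signals rather than read distance from disparity alone.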
Single neurons have a wide range of disparity tuning curves that are broad and overlapping. Such a distributed representation of disparity was used in a network model to encode the inputs. The network was trained to transform disparity and vergence input information by projections
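A minimal illustration of such a distributed code, assuming Gaussian tuning curves and a simple response-weighted readout (neither of which is claimed to be the trained network's actual form), is sketched below:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: broad, overlapping tuning curves whose
# preferred disparities tile a range wider than the stimuli tested.
preferred = np.linspace(-2.0, 2.0, 17)   # preferred disparities (deg)
width = 0.5                              # broad tuning width (assumed)

def population_response(disparity):
    r = np.exp(-(disparity - preferred)**2 / (2 * width**2))
    return r + rng.normal(0, 0.02, r.shape)    # small response noise

def decode(r):
    # Population-vector readout: response-weighted average of the
    # preferred disparities, one simple way to use a distributed code.
    return (r @ preferred) / r.sum()

for d in (-0.6, 0.0, 0.4):
    r = population_response(d)
    print(f"true disparity {d:+.2f} deg -> decoded {decode(r):+.2f} deg")
```

Because each neuron responds over a broad range of disparities, no single unit specifies the stimulus; the pattern of activity across the population does, and the loss of any one unit degrades the estimate only slightly.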
