Attractor neural networks are a popular scenario for memory storage in association cortex.
However, there is still a large gap between models based on this scenario and experimental data. In
this talk, I will present a study of a recurrent network model in which both the learning rules and
the distribution of stored patterns are inferred from the distributions of visual responses to novel and
familiar images in the inferior temporal cortex (ITC).
Unlike classical attractor neural network models, our model exhibits graded activity in retrieval states, with distributions of firing rates that are close to lognormal. The inferred learning rules are close to the rules that maximize the number of stored patterns within a family of unsupervised Hebbian learning rules, suggesting that learning in ITC is optimized to store a large number of attractor states. We show that two types of retrieval states exist: one in which firing rates are constant in time, and another in which firing rates fluctuate chaotically. Chaotic retrieval states exhibit irregular temporal dynamics that strongly resemble the temporal variability observed during delay periods.
Finally, I will briefly describe a theory that allows the storage capacity of the network to be computed across parameter space.