Recurrent circuit based neural population codes for stimulus representation and inference
A large part of the synaptic input received by cortical neurons comes from local cortico-cortical connectivity. Despite the abundance of these connections, the role of local recurrence in cortical function remains unclear, and in simple coding schemes a circuit with no recurrent connections often performs optimally. We consider a recurrent excitatory-inhibitory circuit model of a cortical hypercolumn which performs sampling-based Bayesian inference to infer latent hierarchical stimulus features. We show that local recurrent connections can store an internal model of the correlations between stimulus features that are present in the external world. When the resulting recurrent input is combined with feedforward input, it produces a population code from which the posterior over the stimulus features can be linearly read out. Internal Poisson spiking variability provides the proper fluctuations for the population to sample stimulus features, yet the resultant population variability is aligned along the stimulus feature direction, producing what are termed differential correlations.
Importantly, the amplitude of these internally generated differential correlations is determined by the model's associative prior stored in the recurrent connections, thus providing experimentally testable predictions for how population connectivity and response variability are linked to the structure of latent external stimuli.
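As a minimal illustration of the differential-correlations idea in the abstract, the sketch below (all parameters hypothetical, not from the talk) samples a latent stimulus feature over time, drives a population of Gaussian-bump-tuned neurons with Poisson spiking, and checks that the dominant mode of the trial-to-trial covariance aligns with the tuning-curve derivative f'(s), the signature direction of differential correlations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N neurons with bell-shaped tuning over a circular
# feature; on each "trial" the circuit samples a feature value s_t.
N, T = 50, 20000
prefs = np.linspace(-np.pi, np.pi, N)  # preferred features


def f(s):
    """Tuning curves: mean firing rates for feature value s."""
    return 20.0 * np.exp(np.cos(prefs - s) - 1.0)


s0, sample_sd = 0.0, 0.2                       # true feature, sampling width
samples = s0 + sample_sd * rng.normal(size=T)  # sampled feature trajectory
rates = np.array([f(s) for s in samples])
counts = rng.poisson(rates)                    # Poisson spiking variability

# Trial-to-trial covariance of the spike counts
C = np.cov(counts.T)

# Differential-correlation direction: f'(s0), estimated numerically
eps = 1e-4
fp = (f(s0 + eps) - f(s0 - eps)) / (2 * eps)
fp /= np.linalg.norm(fp)

# Alignment of the leading covariance eigenvector with f'(s0)
w, V = np.linalg.eigh(C)
alignment = abs(V[:, -1] @ fp)
print(f"alignment of top covariance eigenvector with f'(s0): {alignment:.2f}")
```

Because the covariance is approximately diag(f(s0)) from Poisson noise plus a rank-one term proportional to f'(s0)f'(s0)^T from the sampled feature fluctuations, the alignment is close to 1, and its magnitude grows with the sampling width, mirroring the abstract's claim that the prior sets the amplitude of the differential correlations.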
Beijing Time [UTC+8]: Jan 31 (Sunday) 21:00
Central European Time [CET]: Jan 31 (Sunday) 14:00
US Eastern Time [EST]: Jan 31 (Sunday) 08:00
Meeting ID: 913 9401 0836
Wenhao Zhang is a postdoc studying theoretical neuroscience at the University of Chicago. Before that, he did postdoctoral research at the University of Pittsburgh and Carnegie Mellon University. His research mainly focuses on developing normative theories and biologically plausible models that address fundamental questions of neural information processing. A distinguishing feature of his research is that it tightly combines abstract computational theories with concrete neural circuit models that are amenable to experimental testing. To achieve this broad research goal, he combines techniques from nonlinear dynamics, Bayesian inference, neural coding, information theory, and Lie group theory. To ground his theoretical framework, most of his completed studies use correlated response variability and multisensory integration as examples to provide concrete experimental predictions and postdictions. He is one of the few researchers in the field whose interdisciplinary work has been published in both top neuroscience journals and top machine learning conferences.