OpenTalks #16



Computational neuroscience seeks general principles of information processing in the brain by building theoretical and mathematical models. These models not only provide a theoretical foundation and testable predictions for experimental observations in neuroscience and cognitive science, but also draw inspiration from the brain to develop new brain-inspired artificial-intelligence algorithms. A fundamental question in neuroscience is how neural activity represents, infers, and extracts information about the external world, i.e., neural coding, which underlies (multimodal) sensory information processing, perceptual decision-making, sensorimotor transformation, and learning and memory. How do neurons in the brain interact to generate neural activity, and in what form does that activity encode external information? Moreover, neural activity appears disordered and noisy; how can it nonetheless encode external information stably and give rise to stable percepts? For this OpenTalk we are delighted to host Dr. Wenhao Zhang of the University of Chicago, who will share his research on these questions. See the abstract below for details.


Recurrent circuit based neural population codes for stimulus representation and inference


A large part of the synaptic input received by cortical neurons comes from local cortico-cortical connectivity. Despite their abundance, the role of local recurrence in cortical function is unclear, and in simple coding schemes it is often the case that a circuit with no recurrent connections performs optimally. We consider a recurrent excitatory-inhibitory circuit model of a cortical hypercolumn which performs sampling-based Bayesian inference to infer latent hierarchical stimulus features. We show that local recurrent connections can store an internal model of the correlations between stimulus features that are present in the external world. When the resulting recurrent input is combined with feedforward input it produces a population code from which the posterior over the stimulus features can be linearly read out. Internal Poisson spiking variability provides the proper fluctuations for the population to sample stimulus features, yet the resultant population variability is aligned along the stimulus feature direction, producing what are termed differential correlations.
Importantly, the amplitude of these internally generated differential correlations is determined by the associative prior in the model stored in the recurrent connections, thus providing experimentally testable predictions for how population connectivity and response variability are connected to the structure of latent external stimuli.
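The link the abstract draws between sampling-based inference and differential correlations can be illustrated with a toy simulation. The sketch below is not the speaker's circuit model: it assumes a hypothetical population with Gaussian tuning curves and replaces the recurrent sampling dynamics with direct draws from a Gaussian posterior over the stimulus. It only demonstrates the geometric point that variability generated by sampling a latent stimulus feature concentrates along f'(s), the derivative of the population tuning curve, which is the defining signature of differential correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy population: N neurons with Gaussian tuning curves
# over a one-dimensional stimulus feature s. All parameters are arbitrary.
N = 50
centers = np.linspace(-5.0, 5.0, N)
width = 1.0

def tuning(s):
    """Population response f(s); broadcasts over an array of stimuli."""
    return np.exp(-(s - centers) ** 2 / (2 * width ** 2))

# Sampling-based inference, caricatured: on each "time step" the population
# represents one sample s_t drawn from the posterior p(s | input),
# here simply a Gaussian N(mu, sigma^2).
mu, sigma, T = 0.5, 0.3, 20000
samples = rng.normal(mu, sigma, T)
rates = tuning(samples[:, None])          # (T, N) population responses

# Trial-to-trial covariance of the population response across samples.
cov = np.cov(rates.T)                     # (N, N)

# Differential correlations are variability aligned with f'(s), the
# derivative of the tuning curves at the mean stimulus (finite difference).
eps = 1e-4
fprime = (tuning(mu + eps) - tuning(mu - eps)) / (2 * eps)

# The leading eigenvector of the covariance should align with f'(mu).
eigvals, eigvecs = np.linalg.eigh(cov)
leading = eigvecs[:, -1]                  # unit-norm top variability mode
alignment = abs(leading @ fprime) / np.linalg.norm(fprime)
print(f"alignment of top variability mode with f'(s): {alignment:.3f}")
```

Because the posterior width (0.3) is small relative to the tuning width (1.0), the response fluctuations are nearly linear in s, so almost all population variability lies along the f'(s) direction, mirroring the abstract's claim that sampling fluctuations produce differential correlations whose amplitude tracks the posterior stored in the circuit.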


Beijing Time [UTC+8]: Sunday, Jan 31, 21:00
Central European Time [CET]: Sunday, Jan 31, 14:00
US Eastern Time [EST]: Sunday, Jan 31, 08:00


Meeting ID: 913 9401 0836






Wenhao Zhang is a postdoc studying theoretical neuroscience at the University of Chicago. Before that, he carried out postdoctoral research at the University of Pittsburgh and Carnegie Mellon University. His research focuses on developing normative theories and biologically plausible models that address fundamental questions of neural information processing. A distinguishing feature of his research is that it tightly combines abstract computational theories with concrete neural circuit models that are amenable to experimental testing. To pursue this broad research goal he draws on techniques from nonlinear dynamics, Bayesian inference, neural coding, information theory, and Lie group theory. To ground his theoretical framework, most of his completed studies use correlated response variability and multisensory integration as examples to provide concrete experimental predictions and postdictions. He is one of the few researchers in the field whose interdisciplinary work has been published in both top neuroscience journals and top machine learning conferences.