Time-varying gain modulation on neural circuit dynamics and performance in perceptual decisions

Ritwik Niyogi, KongFatt Wong-Lin

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


    Recent studies have shown that urgency signals may play an important role during the temporal integration of sensory evidence in perceptual decision-making. Furthermore, neurons in the lateral intraparietal area (LIP) have been shown to exhibit both temporal integration and multiplicative gain modulation properties in perceptual decision-making tasks. Our theoretical work connects these growing areas of research. We investigate how time-varying single-cell gain modulation affects the network dynamics of a biological neural circuit model (Wong, Huk, Shadlen and Wang, 2007) for motion-discrimination tasks. Since we are interested in two-alternative forced-choice tasks, we implement a three-population network model: two populations of excitatory pyramidal cells are selective to the coherent motion directions in a random-dot motion-discrimination task, while a third population of interneurons mediates pooled inhibition. We use dynamical systems analyses to study how the network dynamics evolve over different epochs of a trial. To enable a categorical choice, the two selective populations need to remain at a common stable high firing-rate during the target period, prior to motion stimulus onset. In reaction time (RT) tasks, the firing-rates typically diverge from a level higher than the target-period firing-rate shortly after motion stimulus onset (e.g. Roitman and Shadlen, 2002; Huk and Shadlen, 2005). Our analyses reveal that increasing the gains of both excitatory and inhibitory neurons is necessary to permit such neuronal dynamics: the unstable saddle steady-state (which causes the divergence) has to lie at a higher firing-rate than the stable firing-rate during the target period. In addition, the network is required to operate near a critical bifurcation point.
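The circuit structure described above — two selective excitatory populations competing through pooled inhibition, with a multiplicative gain on each population's input — can be illustrated with a minimal rate-model sketch. This is not the published Wong et al. (2007) model: the transfer function, coupling weights, and time constant below are illustrative placeholders chosen only to show how a shared gain parameter enters the dynamics.

```python
import numpy as np

def f(x):
    """Sigmoidal firing-rate transfer function (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(gain=2.0, coherence=0.1, T=2.0, dt=1e-3):
    """Euler-integrate a toy three-population rate model: two selective
    excitatory populations (r1, r2) and one pooled inhibitory population
    (ri). All parameters are placeholders, not the published values."""
    tau = 0.02                 # population time constant (s)
    w_exc, w_inh = 1.6, 1.0    # recurrent excitation, pooled inhibition
    w_ei = 0.8                 # excitatory drive onto interneurons
    r1 = r2 = ri = 0.1
    for _ in range(int(T / dt)):
        # motion coherence biases the input to the two selective pools
        i1 = w_exc * r1 - w_inh * ri + 0.5 * (1.0 + coherence)
        i2 = w_exc * r2 - w_inh * ri + 0.5 * (1.0 - coherence)
        ii = w_ei * (r1 + r2)
        # the gain multiplies each population's input, steepening the
        # effective transfer function for excitatory and inhibitory cells
        r1, r2, ri = (r1 + dt / tau * (-r1 + f(gain * i1)),
                      r2 + dt / tau * (-r2 + f(gain * i2)),
                      ri + dt / tau * (-ri + f(gain * ii)))
    return r1, r2
```

With nonzero coherence the favored population settles at a higher rate than its competitor; with zero coherence the two remain identical by symmetry. The location and stability of the fixed points in such a model depend jointly on the gain and the recurrent weights, which is the kind of dependence the bifurcation analysis in the abstract examines.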
Simulating our model, we capture the full characteristics of LIP neuronal dynamics in motion-discrimination tasks more closely than previous models, while accurately reproducing the behavioral data. Our model also reproduces the time course of neuronal firing-rates in the fixed-viewing-duration (FD) version of the task (e.g. Shadlen and Newsome, 2001; Roitman and Shadlen, 2002). It has been observed that in the FD task the firing-rates of the selective populations are lower, and diverge from a level lower than during the target period, in contrast to the RT version. Firing-rates during the delay period of the FD task are also typically lower than the decision threshold. These phenomena can be accommodated in the same model by using lower gains than in the RT task. This is consistent with the intuition that decision-making aimed at maximizing reward-rate (the total number of correct responses over the total time taken), by optimizing a speed-accuracy tradeoff in an RT task, is more demanding and therefore engages more attentional resources than decision-making in an FD task, where the reward depends only on accuracy. In fact, our simulations predict that a short time-constant of gain modulation, indicative of fast attentional modulation, enables the maximum reward-rate to be attained. Our model thus provides an integrative and coherent understanding of the interplay among separate neuronal processes that enables flexible and optimal decision performance.
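The reward-rate objective defined above (correct responses per unit time) makes the speed-accuracy tradeoff concrete: spending longer per trial raises accuracy but divides the same reward over more time. A minimal calculation illustrates this; splitting the denominator into a mean RT plus a lumped per-trial overhead term is our own illustrative decomposition, not taken from the study.

```python
def reward_rate(p_correct, mean_rt, t_overhead):
    """Correct responses per unit time. t_overhead lumps non-decision
    time and the inter-trial interval (illustrative decomposition)."""
    return p_correct / (mean_rt + t_overhead)

# Speed-accuracy tradeoff: a faster, slightly less accurate policy can
# yield a higher reward-rate when the per-trial overhead dominates.
fast = reward_rate(p_correct=0.85, mean_rt=0.5, t_overhead=2.0)  # 0.34
slow = reward_rate(p_correct=0.95, mean_rt=1.2, t_overhead=2.0)  # ~0.297
```

Here the faster policy wins despite its lower accuracy, matching the abstract's point that maximizing reward-rate in an RT task demands active control of response time, unlike the FD task where reward depends only on accuracy.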
    Original language: English
    Title of host publication: Unknown Host Publication
    Number of pages: 1
    Publication status: Published (in print/issue) - 1 Mar 2010
    Event: Computational and Systems Neuroscience 2010, 25 Feb - 2 Mar 2010 - Salt Lake City, UT, USA
    Duration: 1 Mar 2010 → …


    Conference: Computational and Systems Neuroscience 2010, 25 Feb - 2 Mar 2010
    Period: 1/03/10 → …


