The CCN Group combines biophysical modeling, goal-driven deep learning, computational neuroimaging, data-driven science and neurorobotics to study how the brain acquires and performs perceptual, cognitive and motor skills. Our research focuses on visual perception and imagery, the coordination of complex hand movements as well as visuomotor integration subserved by the frontoparietal network.
Biophysical modeling
A biophysical model is a simulation of a biological system based on mathematical formalizations of the physical properties of that system. It focuses on mechanisms that may explain known phenomena of the system. For instance, a biophysical model of whole-brain dynamics may explain typical co-activation patterns of cortical regions mechanistically in terms of the anatomical connections between these regions and the local dynamics produced by interactions between excitatory and inhibitory neurons within these regions. This approach can be used to formalize more abstract theories of neural mechanisms and phenomena in a quantifiable, testable model. It can also be used to generate hypotheses and theories of neural mechanisms by fitting a model to data and interpreting the resulting model parameters.
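As a minimal sketch of the kind of local excitatory-inhibitory dynamics such models formalize, the following simulates a single Wilson-Cowan-style rate node. The coupling weights, time constants and external drive are illustrative values, not parameters of any model actually used by the group.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan(n_steps=500, dt=0.1, w_ee=12.0, w_ei=10.0,
                 w_ie=10.0, w_ii=2.0, tau_e=1.0, tau_i=2.0, drive=2.5):
    """Euler integration of a single excitatory-inhibitory (E-I) rate node."""
    E = np.zeros(n_steps)  # excitatory population rate
    I = np.zeros(n_steps)  # inhibitory population rate
    E[0], I[0] = 0.1, 0.05
    for k in range(n_steps - 1):
        dE = (-E[k] + sigmoid(w_ee * E[k] - w_ei * I[k] + drive)) / tau_e
        dI = (-I[k] + sigmoid(w_ie * E[k] - w_ii * I[k])) / tau_i
        E[k + 1] = E[k] + dt * dE
        I[k + 1] = I[k] + dt * dI
    return E, I

E, I = wilson_cowan()
```

Depending on coupling and drive, such a node settles into a fixed point or a limit cycle; coupling many such nodes through an anatomical connectivity matrix yields whole-brain models of the kind described above.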
The CCN group utilizes biophysical modeling to study whole-brain dynamics during resting and task performance conditions, decision making in visual and visuomotor contexts, visual perceptual learning and the functional role of cortical oscillations.
Goal-driven deep learning
The basic idea of this approach is to use deep learning to model biological brain regions and functions. Specifically, one designs a biologically constrained neural network architecture as well as an ecologically valid task and employs deep (reinforcement) learning to tune the model such that it becomes able to solve this task. In this way, deep learning can be utilized to uncover the neurocomputational principles (in terms of representations and computations) that may underlie complex, high-level functions of the brain. As such, biologically plausible deep neural networks can be used as generative models to formulate new hypotheses about brain function. Furthermore, deep networks can be used to test hypotheses in silico by training them on ecologically relevant tasks and subsequently exposing them to stimuli used in neuroscientific experimentation.
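A toy illustration of this recipe, assuming nothing about the group's actual models: a small feedforward network (numpy only) is tuned by gradient descent to solve a stand-in task (XOR), after which its hidden layer can be probed like a recorded neural population. The architecture, task and hyperparameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "task": XOR (a real study would use an ecologically valid task).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Small fully connected "architecture" (a real study would constrain it biologically).
W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)           # hidden-layer "population" responses
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
loss_before = ((out - y) ** 2).mean()

lr = 0.5
for _ in range(5000):                  # "learning": tune the model to solve the task
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

# After training, the hidden layer can be exposed to stimuli and "recorded".
hidden, out = forward(X)
loss_after = ((out - y) ** 2).mean()
```

The trained hidden representations play the role of model neural responses that can then be compared against neuroscientific measurements.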
The CCN group utilizes goal-driven deep learning to study visually-guided in-hand object manipulation, attention and scene representation as well as the organizational principles of the visual system.
Computational neuroimaging
Computational neuroimaging refers to the development of quantitative models that are specified in the reference frame of the input. The employed models are input-referred and allow for mapping the response profiles of cortical regions in terms of interpretable parameters characterizing this input. For example, the parameters of population receptive field (pRF) models, one of the most prominent tools within computational neuroimaging, characterize each unit (e.g. a voxel in fMRI) of a cortical region in terms of the location and size of the region of visual space it responds to. This approach makes it possible to identify the spatial profiles of model parameters, including whether they exhibit topographic organization. Computational neuroimaging thus characterizes not only what a brain region encodes but also the corresponding population code.
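The pRF idea can be sketched as follows, assuming an isotropic 2D Gaussian model fit by grid search; the bar stimuli, search grid and noise-free "measurement" are all illustrative simplifications.

```python
import numpy as np

# Visual-field grid in degrees of visual angle (resolution is illustrative).
xs = np.linspace(-8, 8, 33)
X, Y = np.meshgrid(xs, xs)

def gaussian_prf(x0, y0, sigma):
    """Isotropic 2D Gaussian pRF centred at (x0, y0) with size sigma."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def predict(stimuli, x0, y0, sigma):
    """Predicted response per frame = overlap of stimulus aperture and pRF."""
    prf = gaussian_prf(x0, y0, sigma)
    return np.array([(s * prf).sum() for s in stimuli])

# Vertical bar apertures sweeping across the visual field.
stimuli = [np.abs(X - c) < 1.0 for c in np.linspace(-7, 7, 15)]

# Simulated "measured" response of a voxel whose true pRF is at (2, 0), size 1.5.
measured = predict(stimuli, 2.0, 0.0, 1.5)

# Grid search over candidate centres and sizes; keep the best-correlating model.
best, best_r = None, -np.inf
for x0 in np.linspace(-6, 6, 13):
    for sigma in (1.0, 1.5, 2.0):
        r = np.corrcoef(predict(stimuli, x0, 0.0, sigma), measured)[0, 1]
        if r > best_r:
            best, best_r = (x0, sigma), r
```

An actual analysis would additionally convolve predictions with a hemodynamic response function, use much finer search grids and refine the fit with nonlinear optimization; the estimated centres and sizes are then mapped across cortex to reveal topographic organization.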
The CCN group utilizes computational neuroimaging to investigate the topographic organization of visual perception, visual mental imagery and visual attention. The group is also actively involved in developing novel computational neuroimaging tools and in translating insights towards novel brain-computer interface (BCI) applications.
Data-driven science
Data-driven science is predominantly concerned with discovering dynamical systems from data. Two prominent data-driven science tools employed in the CCN group are dynamic mode decomposition (DMD) and sparse identification of nonlinear dynamics (SINDy).

Dynamic mode decomposition was developed by the fluid dynamics community to identify spatio-temporal coherent structures in high-dimensional data. It is based on the singular value decomposition, which allows it to provide effective dimensionality reduction. In contrast to the standard singular value decomposition, which produces a hierarchy of modes based entirely on spatial correlation and largely ignores temporal information, DMD also provides a model for how these modes evolve in time. Specifically, given a time series of data, DMD identifies modes as spatially correlated structures that exhibit the same linear behavior in time (oscillations and exponential growth or decay).

Sparse identification of nonlinear dynamics is a framework that utilizes sparsity-promoting techniques and machine learning to discover the governing equations of a dynamical system directly from noisy measurement data. Specifically, it uses sparse regression to determine the fewest terms in the governing equations required to accurately represent the data. This results in models that are sparse in the space of possible functions and hence balance accuracy with model complexity. In constructing these models, SINDy thus assumes that only a few important terms govern the dynamics, an assumption that holds for many physical systems, even those capable of exhibiting highly complex behavior.
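A compact sketch of exact DMD on synthetic data, assuming nothing about the group's actual pipelines: two oscillatory spatio-temporal modes are generated, and DMD recovers their frequencies from the eigenvalues of a low-rank linear operator. The signals and the truncation rank are illustrative.

```python
import numpy as np

# Toy data: two oscillations with distinct spatial profiles, uniformly sampled.
t = np.linspace(0, 4 * np.pi, 100)
x = np.linspace(-5, 5, 50)
w1, w2 = 1.1, 2.3
F = np.stack([np.exp(-x**2), np.exp(-(x - 2)**2),
              np.tanh(x), x * np.exp(-x**2)], axis=1)     # 50 x 4 spatial profiles
A_t = np.stack([np.cos(w1 * t), np.sin(w1 * t),
                np.cos(w2 * t), np.sin(w2 * t)], axis=0)  # 4 x 100 time courses
D = F @ A_t                                               # 50 locations x 100 snapshots

# Exact DMD: fit a linear map x_{k+1} ~ A x_k via a rank-truncated SVD.
X1, X2 = D[:, :-1], D[:, 1:]
U, S, Vh = np.linalg.svd(X1, full_matrices=False)
r = 4                                                     # truncation rank (known here)
Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].T
Atilde = Ur.T @ X2 @ Vr / Sr                              # reduced linear operator
eigvals, W = np.linalg.eig(Atilde)
Phi = (X2 @ Vr / Sr) @ W                                  # DMD modes (spatial structures)
dt = t[1] - t[0]
freqs = np.abs(np.angle(eigvals)) / dt                    # recovered frequencies
```

Because the toy dynamics are pure oscillations, the eigenvalues lie on the unit circle (no growth or decay), and their phase angles divided by the sampling interval recover the two underlying frequencies.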
The CCN group utilizes the tools from data-driven science to investigate whole-brain dynamics and decision-making as well as to relate the dynamics exhibited by the brain and deep recurrent neural networks as they coordinate complex hand movements.
Neurorobotics
Neurorobotics allows for the embodiment of brain models in anthropomorphic robots, which is required for achieving high ecological validity through functional realism. The CCN group mainly uses the Neurorobotics Platform, a simulation platform developed by the Human Brain Project that embodies brain models in robots. Neurorobotics is mainly utilized to study visually-guided in-hand object manipulation and saccadic eye movements for visual scene understanding.