To learn new things, the brain has to change the internal neuronal activity patterns that encode information across large neuronal ensembles and multiple brain areas. At the same time, neuronal activity patterns in the brain are often unreliable, high-dimensional, and highly complex, which makes it challenging to investigate learning-related changes in network information processing. To gain a mechanistic and analytical understanding of neuronal network learning principles, we use a two-fold research strategy. As a starting point, we characterize learning-induced changes in neuronal ensemble activity in mice using in vivo population Ca2+ imaging, high-throughput image processing, and data analysis methods commonly used in machine learning. In parallel, we aim to develop biologically inspired multi-layer artificial neural network models (ANNs) that exhibit information processing and storage capabilities similar to those observed in real biological networks. This process of reverse-engineering neuronal network function at an abstract level is crucial for understanding the fundamental principles that determine learning-induced changes in neuronal activity patterns. Moreover, the ANN reverse-engineering and development process helps us derive hypotheses about how individual network components (e.g., lateral inhibition, recurrent connection rate) shape learning-induced changes in neuronal population activity and internal information coding. We can then go back to the experiment and test whether these hypotheses hold in vivo for real biological neuronal networks.
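As an illustration of the kind of network component mentioned above, the following is a minimal, hypothetical sketch (not our actual model) of a rate-based recurrent layer in NumPy, where the recurrent connection rate and the strength of lateral inhibition are exposed as tunable parameters; all names and values here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 50
n_inputs = 20
recurrent_rate = 0.1  # hypothetical: fraction of nonzero recurrent connections

# Random feed-forward weights and a sparse recurrent weight matrix
W_in = rng.normal(scale=0.5, size=(n_neurons, n_inputs))
W_rec = rng.normal(scale=0.2, size=(n_neurons, n_neurons))
W_rec *= rng.random((n_neurons, n_neurons)) < recurrent_rate  # sparsify
np.fill_diagonal(W_rec, 0.0)  # no self-connections

def step(x, r_prev, inhibition=0.5):
    """One update of the population rate vector with divisive lateral inhibition."""
    drive = W_in @ x + W_rec @ r_prev
    r = np.maximum(drive, 0.0)  # rectified (ReLU-like) firing rates
    # Divisive normalization: each neuron is suppressed by the mean population rate
    r = r / (1.0 + inhibition * r.mean())
    return r

# Run the dynamics to a (approximate) steady state for one input pattern
x = rng.normal(size=n_inputs)
r = np.zeros(n_neurons)
for _ in range(10):
    r = step(x, r)
```

Varying `recurrent_rate` or `inhibition` in such a toy model changes how population activity patterns separate or merge, which is the style of in-silico hypothesis generation described above.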
Through research in experimental neuroscience and related neuro-theory, we aim to generate novel insights into the nature of mammalian intelligence. Because the neocortex is considered the ‘center’ of mammalian intelligence, we use a combination of neuroimaging, electrophysiology, and data analysis techniques (e.g., miniaturized in vivo calcium imaging) to investigate learning in the neocortex of mice. We train mice on a variety of tasks and then record their neuronal activity in selected cortical brain areas. To understand learning at the single-cell level, we record plasticity in individual cortical neurons using patch clamp, calcium, and voltage imaging.
We aim to develop novel bio-inspired network learning algorithms that function similarly to the mammalian brain while addressing the shortcomings of current deep learning systems. We therefore take ideas and insights from our neuroscience research and translate them into machine learning, helping AI researchers build the next generation of brain-like AI systems. For example, the ability to quickly adapt to new situations and tasks is a hallmark of biological intelligence. Studying the mechanisms of continual learning and adaptation in brains has allowed us to design a new AI system that learns and adapts to new tasks in real time while progressively improving its performance.
To decipher neural learning in the rodent brain, we also develop new tools and methods that allow us to record, track, and manipulate neuronal activity in the brains of awake, behaving rodents. In addition, we develop new data analysis approaches for analyzing and interpreting large sets of high-dimensional population data recorded during learning.
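A common first step in analyzing such high-dimensional population recordings is dimensionality reduction. The sketch below, a generic illustration rather than our specific pipeline, applies PCA (via SVD) to a simulated neurons-by-time activity matrix generated from a few shared latent factors; the simulated data and all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated population recording: 200 time points x 100 neurons,
# driven by 3 shared latent factors plus independent noise
latents = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 100))
activity = latents @ loadings + 0.1 * rng.normal(size=(200, 100))

# PCA via SVD of the mean-centered data matrix
centered = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)  # fraction of variance per component

# Project each time point onto the top 3 principal components,
# giving a low-dimensional trajectory of population activity over time
trajectory = centered @ Vt[:3].T
```

Because the simulated activity is driven by three latent factors, the top three components capture nearly all of the variance; in real recordings, tracking how such low-dimensional trajectories change across training sessions is one way to quantify learning-induced changes in population activity.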