Be Part of the Institute for Neuroinformatics

We're hiring! Join the Institute for Neuroinformatics at ETH Zurich and the University of Zurich, Switzerland.

The Grewe Lab is a collaborative and inclusive group that aims to deepen our understanding of how the brain learns and to use insights from neuroscience to improve machine-learning algorithms.


Open positions

Postdoctoral Fellows and PhD Students in Neuroscience

We are looking for motivated neuroscience Ph.D. students and postdocs with a strong interest in understanding neuronal network information processing, as well as learning-induced changes at the level of single cells and large neuronal ensembles.

We are hiring: Postdoc Position in Systems Neuroscience

World models are fundamental to how organisms interpret raw sensory inputs, enabling structured and purposeful behaviors such as navigation, decision-making, and object manipulation. Despite their critical role, the processes through which these models are learned, represented, and maintained across distributed neural networks remain largely elusive. This research seeks to address these questions by investigating how active engagement with the environment dynamically shapes neural circuits to support goal-directed behavior, offering profound insights into the interplay between sensory and motor systems.

Master Thesis and Semester Projects

We are looking for motivated Master's students with a strong interest in understanding neuronal network information processing, as well as learning-induced changes at the level of single cells and large neuronal ensembles.

Synthetic Data Generation for Automated Speech Recognition in Impaired Speech

Automatic Speech Recognition (ASR) for individuals with impaired speech is severely hampered by data scarcity. This project addresses the problem by developing a personalized Text-to-Dysarthric-Speech (TTDS) model to serve as an advanced data augmentation method. Unlike assistive technologies that aim to correct speech impairments, the primary goal here is to faithfully clone a speaker’s unique impaired speech patterns. Using state-of-the-art generative audio models (e.g., VITS), the system will learn to generate synthetic yet realistically impaired speech data from very few recordings. A key innovation will be to leverage phoneme uncertainty analyses from prior work [5] to guide the synthesis process, enabling the targeted generation of more realistic phonetic deviations. This project is designed for a highly motivated, independent individual ready to take ownership of a challenging research topic.

Developing an XR Platform for Demonstrating Human Perceptual Adaptation and Brain Plasticity

Human perception actively adapts to continuous sensory input, allowing the brain to recalibrate and maintain accurate representations of the world. Optical illusions, phantom sensations, and visuomotor distortions provide unique insights into this adaptability. At the Life Science Learning Centre (UZH and ETH), such effects are currently demonstrated using physical setups. The goal of this project is to develop and implement these demonstrations in virtual reality.

Personalizing Automatic Speech Recognition Models for Non-normative Speech using MoE

Automatic Speech Recognition (ASR) for individuals with impaired speech remains a significant challenge due to extreme data scarcity and high acoustic variability. This project builds on a successful, data-efficient Bayesian personalization framework developed in our team (Variational Inference Low-Rank Adaptation, VI-LoRA). The project aims to further improve personalized ASR performance by implementing Hydra VI-LoRA, a novel Mixture-of-Experts (MoE) architecture capable of learning specialized adapters for different speakers or phonetic challenges within a single model. This project is designed for a highly motivated and independent student eager to take ownership of a cutting-edge research topic.
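To illustrate the general idea of a Mixture-of-Experts over low-rank adapters, here is a minimal NumPy sketch: a frozen base weight matrix is combined with several low-rank LoRA updates, mixed by a softmax gate. All names, shapes, and the gating scheme are illustrative assumptions for exposition; this is not the lab's actual VI-LoRA or Hydra VI-LoRA implementation.

```python
import numpy as np

# Illustrative sketch of a gated mixture of LoRA adapters (assumed design,
# not the lab's implementation): output = W x + sum_i g_i * B_i A_i x,
# where W is frozen and only the adapters A_i, B_i and the gate are trained.

rng = np.random.default_rng(0)
d_in, d_out, rank, n_experts = 16, 8, 2, 3

W = rng.normal(size=(d_out, d_in))            # frozen base weight
A = rng.normal(size=(n_experts, rank, d_in))  # per-expert down-projections
B = np.zeros((n_experts, d_out, rank))        # per-expert up-projections (zero init)
gate_w = rng.normal(size=(n_experts, d_in))   # toy gating-network weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def moe_lora_forward(x):
    """Route input x through the frozen base layer plus gated LoRA experts."""
    g = softmax(gate_w @ x)                   # expert mixing weights, sum to 1
    delta = sum(g[i] * (B[i] @ (A[i] @ x)) for i in range(n_experts))
    return W @ x + delta

x = rng.normal(size=d_in)
y = moe_lora_forward(x)
# With B initialized to zero, the adapters contribute nothing yet,
# so the output equals the frozen base layer's output.
assert np.allclose(y, W @ x)
```

The zero initialization of the up-projections is the standard LoRA trick: the adapted model starts out exactly equal to the base model, and each expert learns only a small low-rank correction.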