2020-12-16 at 15:00
All eyes on cognitive mapping: How vision and memory combine in our representation of space
The ability to integrate visual information into memories of the world is a fundamental basis of human cognition. We acquire information with our eyes and use it to construct an internal model of the environment that guides the flexible expression of behavior. In this talk, I will discuss how the visual cortices and the medial temporal lobe may together give rise to this process, enabling the stable perception and memorization of space. I will present a short series of neuroimaging and eye-tracking experiments that focus on different stages along the cortical hierarchy, and highlight how the interplay between these regions may solve central computational challenges associated with self-motion. In this context, I will also present two recently developed machine-learning approaches: one that characterizes human fMRI activity during naturalistic (virtual-navigation) behavior in unprecedented detail, and one that performs MR-based, camera-less eye tracking in future and existing fMRI datasets (DeepMReye).
I am a postdoctoral fellow in the lab of Chris Baker at the National Institute of Mental Health in Bethesda (USA) and a guest researcher at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig (Germany). I study the links between human vision, viewing behavior, and memory. I am especially interested in how we perceive and memorize space, and in how our knowledge about the world shapes the way we see and interact with it. I address these questions using neuroimaging and eye tracking combined with psychophysical and virtual-reality experiments. I complement this work with machine learning to characterize brain activity in sensory and memory regions and how these regions interact. I also enjoy methods development that pushes the boundaries of how we study these processes.