Giulia D’Angelo is an Assistant Professor at the Czech Technical University in Prague, where she develops neuromorphic algorithms for active vision. She earned a BSc in Biomedical Engineering from the University of Genoa and an MSc (with honours) in Neuroengineering. During her Master’s at King’s College London, she developed a neuromorphic system for the egocentric representation of peripersonal visual space. She completed her PhD in neuromorphic algorithms at the University of Manchester, in collaboration with the Event-Driven Perception for Robotics Laboratory at the Italian Institute of Technology, proposing a biologically plausible model of event-driven, saliency-based visual attention; during her doctorate she received the President’s Doctoral Scholar Award. Following her PhD, she was awarded a Marie Skłodowska-Curie Postdoctoral Fellowship at the Czech Technical University in Prague, during which she explored sensorimotor contingency theories in neuromorphic active vision, before joining the university as an Assistant Professor. Her current research bridges bio-inspired software and hardware to enable robust, efficient perception and control for low-power, low-latency autonomous systems.
What’s catching your eye? – Event-driven sensing and neuromorphic computing for active vision
Vision is an exploratory behaviour that emerges from the dynamic relationship between action and sensory feedback. For any agent, biological or robotic, processing visual input efficiently is fundamental to understanding and interacting with the environment. The central challenge lies in continuously recalibrating perception through sensorimotor contingencies: what an agent sees is shaped by how it moves. Embodiment is central to this view: perception and action are inseparable, and intelligence emerges from their continuous interaction with the physical world, shaped by the very structure of our sensors. Selective visual attention is one of the mechanisms by which the visual cortex meets this challenge, organizing and interpreting complex visual scenes in real time.
To address these challenges, I develop brain-inspired algorithms that harness the computational principles of biological neurons and spiking neural networks, optimized for neuromorphic hardware. These algorithms enable real-time robotic perception with microsecond latency and milliwatt power consumption, bringing the efficiency of biological vision systems within reach of autonomous systems at the edge.
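As a concrete, if simplified, illustration of the event-driven computing principle described above, the short Python sketch below shows a leaky integrate-and-fire (LIF) neuron that performs computation only when an input event arrives, such as the sparse pixel events produced by an event camera. This is a minimal sketch for intuition, not code from the speaker's work; all names and parameter values are hypothetical.

# Minimal sketch of event-driven processing with a leaky integrate-and-fire
# (LIF) neuron. Illustrates the general principle only; all names and
# parameters are hypothetical, not taken from the speaker's models.

import math

class LIFNeuron:
    """A single LIF neuron updated only when an input event arrives."""

    def __init__(self, tau=0.02, threshold=1.0, weight=0.3):
        self.tau = tau              # membrane time constant (s)
        self.threshold = threshold  # firing threshold
        self.weight = weight        # synaptic weight per input event
        self.v = 0.0                # membrane potential
        self.last_t = 0.0           # timestamp of the last update (s)

    def on_event(self, t):
        """Process one input event at time t; return True if the neuron spikes."""
        # Decay the membrane potential for the elapsed time. Event-driven:
        # no computation happens between events, unlike frame-based vision.
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += self.weight       # integrate the incoming event
        if self.v >= self.threshold:
            self.v = 0.0            # reset after emitting a spike
            return True
        return False

# Toy event stream: (timestamp_s, x, y, polarity), as an event camera would
# produce. A dense burst drives the neuron above threshold; an isolated late
# event decays away without triggering a spike.
events = [(0.000, 5, 7, 1), (0.002, 5, 7, 1), (0.004, 5, 7, 1),
          (0.006, 5, 7, 1), (0.200, 5, 7, 1)]
neuron = LIFNeuron()
for t, x, y, p in events:
    if neuron.on_event(t):
        print(f"spike at t={t:.3f}s")

Because the neuron's state is updated lazily at event timestamps, compute scales with scene activity rather than frame rate, which is what makes microsecond-latency, milliwatt-scale perception plausible on neuromorphic hardware.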
Her talk takes place on Thursday, March 26, 2026, at 14:00 in room A113. The talk will be streamed live; the streaming link is TBA.