Themos Stafylakis is a Marie Curie Research Fellow in audiovisual automatic speech recognition at the Computer Vision Laboratory of the University of Nottingham (UK). He holds a PhD from the Technical University of Athens (Greece) on speaker diarization for broadcast news. He has a strong publication record in speaker recognition and diarization, built during his five-year post-doc at CRIM (Montreal, Canada) under the supervision of Patrick Kenny. He is currently working on lip-reading and audiovisual speech recognition using deep learning methods. His talk takes place on November 22, 2017 at 13:00 in room A112.
Deep Word Embeddings for Audiovisual Speech Recognition
Over the last few years, visual and audiovisual automatic speech recognition (ASR) have been experiencing a renaissance, which can largely be attributed to the advent of deep learning methods. Deep architectures and learning algorithms initially proposed for audio-based ASR are being combined with powerful computer vision models and are finding their way into lipreading and audiovisual ASR. In my talk, I will go through some of the most recent advances in audiovisual ASR, with emphasis on those based on deep learning. I will then present a deep architecture for visual and audiovisual ASR which attains state-of-the-art results on the challenging Lip Reading in the Wild (LRW) database. Finally, I will focus on how this architecture can generalize to words unseen during training and discuss its applicability to continuous-speech audiovisual ASR.
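To give a rough idea of the kind of pipeline the abstract refers to, the sketch below shows a minimal lipreading network in the same spirit: a spatiotemporal convolutional front-end over mouth-region video, a per-frame 2D ResNet trunk, and a recurrent back-end that summarizes the clip into a fixed-length word embedding used for classification. This is an illustrative assumption, not the speaker's actual model; the layer choices, dimensions, and the class name LipreadingEmbedder are hypothetical.

```python
# Minimal sketch (assumed architecture, not the presented model) of a
# lipreading word-embedding network: 3D-conv front-end + per-frame ResNet
# trunk + bidirectional GRU back-end producing a fixed-length embedding.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class LipreadingEmbedder(nn.Module):
    def __init__(self, embed_dim=256, num_words=500):
        super().__init__()
        # Spatiotemporal front-end over grayscale mouth crops (B, 1, T, H, W).
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3),
                         stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        # Per-frame ResNet-18 trunk; its stem is skipped because the 3D
        # front-end already produces 64-channel feature maps.
        trunk = resnet18(weights=None)
        self.trunk = nn.Sequential(
            trunk.layer1, trunk.layer2, trunk.layer3, trunk.layer4,
            nn.AdaptiveAvgPool2d(1),
        )
        # Recurrent back-end pools the frame features into one vector.
        self.gru = nn.GRU(512, embed_dim, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * embed_dim, num_words)

    def forward(self, clips):
        # clips: (B, 1, T, H, W) grayscale mouth-region video.
        x = self.frontend(clips)                       # (B, 64, T, H', W')
        b, c, t, h, w = x.shape
        x = x.transpose(1, 2).reshape(b * t, c, h, w)  # fold time into batch
        x = self.trunk(x).flatten(1)                   # (B*T, 512)
        x = x.view(b, t, -1)                           # (B, T, 512)
        _, h_n = self.gru(x)                           # final hidden states
        # Concatenate forward/backward states of the last layer: the
        # fixed-length "word embedding" for the whole clip.
        embedding = torch.cat([h_n[-2], h_n[-1]], dim=1)
        return embedding, self.classifier(embedding)


if __name__ == "__main__":
    model = LipreadingEmbedder()
    dummy = torch.randn(2, 1, 29, 112, 112)   # two 29-frame mouth clips
    emb, logits = model(dummy)
    print(emb.shape, logits.shape)             # (2, 512) and (2, 500)
```

Because the clip is collapsed into a single embedding rather than a per-word softmax score, the same encoder can in principle be compared against embeddings of words never seen during training, which is the generalization question the talk takes up.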