Piotr Didyk is an Independent Research Group Leader at the Cluster of Excellence on "Multimodal Computing and Interaction" at Saarland University (Germany), where he heads a group on Perception, Display, and Fabrication. He is also appointed as a Senior Researcher at the Max Planck Institute for Informatics. Prior to this, he spent two years as a postdoctoral associate at the Massachusetts Institute of Technology. In 2012, he obtained his PhD from the Max Planck Institute for Informatics and Saarland University for his work on perceptual displays. During his studies, he was also a visiting student at MIT. In 2008, he received his M.Sc. degree in Computer Science from the University of Wrocław (Poland). His research interests include human perception, new display technologies, image/video processing, and computational fabrication. His main focus is on techniques that account for the properties of the human sensory system and human interaction in order to improve the perceived quality of final images, videos, and 3D prints. His talk takes place on Wednesday, February 15th, at 1pm in room A113.
Perception and Personalization in Digital Content Reproduction
There has been a tremendous increase in the quality and number of new output devices, such as stereo and automultiscopic screens, portable and wearable displays, and 3D printers. Unfortunately, the capabilities of these emerging technologies outpace the methods and tools available for creating content. Moreover, our current understanding of how these new technologies influence user experience is insufficient to fully exploit their advantages. In this talk, I will present our recent efforts in the context of perception-driven techniques for digital content reproduction. I will demonstrate that careful combinations of new hardware, computation, and models of human perception can lead to solutions that provide a significant increase in perceived quality. More precisely, I will discuss two techniques for overcoming the limitations of 3D displays; they exploit information about gaze direction as well as the motion-parallax cue. I will also demonstrate a new automultiscopic screen design for cinema and a prototype of a near-eye augmented-reality display that supports focus cues. Next, I will show how careful rendering of frames enables continuous framerate manipulation, giving artists a new tool for video editing. The technique can, for example, reduce temporal artifacts without sacrificing the cinematic look of movie content. In the context of digital fabrication, I will present a perceptual model of compliance and its applications to 3D printing.