
Tomáš Werner: Linear Programming Relaxation Approach to Discrete Energy Minimization

Tomáš Werner works as a researcher at the Center for Machine Perception, Faculty of Electrical Engineering, Czech Technical University, where he also obtained his PhD degree. In 2001-2002 he worked as a post-doc at the Visual Geometry Group, Oxford University, U.K. In the past, his main interest was multiple view geometry and three-dimensional reconstruction in computer vision. Today, his interest is in machine learning and optimization, in particular graphical models. He is a (co-)author of more than 70 publications, with 350 citations in WoS. His talk was originally scheduled for Wednesday, February 24, 2016, 1pm in room G202. THE TALK IS POSTPONED: it will take place on Tuesday, April 12, 2016, 2pm in room A113.

Linear Programming Relaxation Approach to Discrete Energy Minimization

Abstract: Discrete energy minimization consists in minimizing a function of many discrete variables that is a sum of functions, each depending on a small subset of the variables. This is also known as MAP inference in graphical models (Markov random fields) or weighted constraint satisfaction. Many successful approaches to this useful but NP-complete problem are based on its natural LP relaxation. I will discuss this LP relaxation in detail, along with algorithms able to solve it for very large instances, which appear e.g. in computer vision. In particular, I will discuss in detail a convex message passing algorithm, generalized min-sum diffusion.

Christian Theobalt: Reconstructing the Real World in Motion

Christian Theobalt is a Professor of Computer Science and the head of the research group “Graphics, Vision, & Video” at the Max-Planck-Institute for Informatics, Saarbruecken, Germany. He is also an adjunct faculty member at Saarland University. From 2007 until 2009 he was a Visiting Assistant Professor in the Department of Computer Science at Stanford University. Most of his research deals with algorithmic problems that lie on the boundary between the fields of Computer Vision and Computer Graphics, such as dynamic 3D scene reconstruction and marker-less motion capture, computer animation, appearance and reflectance modelling, machine learning for graphics and vision, new sensors for 3D acquisition, advanced video processing, as well as image- and physically-based rendering.

For his work, he received several awards, including the Otto Hahn Medal of the Max-Planck Society in 2007, the EUROGRAPHICS Young Researcher Award in 2009, and the German Pattern Recognition Award 2012. Further, in 2013 he was awarded an ERC Starting Grant by the European Union. In 2015, the German business magazine Capital named him one of the top 40 innovation leaders under 40. Christian Theobalt is a Principal Investigator and a member of the Steering Committee of the Intel Visual Computing Institute in Saarbruecken. He is also a co-founder of a spin-off company from his group – www.thecaptury.com – that is commercializing a new generation of marker-less motion and performance capture solutions.

Reconstructing the Real World in Motion

Even though many challenges remain unsolved, in recent years computer graphics algorithms to render photo-realistic imagery have seen tremendous progress. An important prerequisite for high-quality renderings is the availability of good models of the scenes to be rendered, namely models of shape, motion and appearance. Unfortunately, the technology to create such models has not kept pace with the technology to render the imagery. In fact, we observe a content creation bottleneck, as it often takes many person-months of tedious manual work by animation artists to craft models of moving virtual scenes.

To overcome this limitation, the graphics and vision communities have been developing techniques to capture dense 4D (3D+time) models of dynamic scenes from real world examples, for instance from footage of real world scenes recorded with cameras or other sensors. One example is performance capture methods that measure detailed dynamic surface models, for example of actors or an actor’s face, from multi-view video and without markers in the scene. Even though such 4D capture methods have made big strides, they are still at an early stage. Their application is limited to scenes of moderate complexity in controlled environments, reconstructed detail is limited, and captured content cannot be easily modified, to name only a few restrictions. Recently, the need for efficient dynamic scene reconstruction methods has been further increased by developments in other thriving research domains, such as virtual and augmented reality, 3D video, or robotics.

In this talk, I will elaborate on some ideas on how to go beyond the current limits of 4D reconstruction, and show some results from our recent work. For instance, I will show how we can take steps to capture dynamic models of humans and general scenes in unconstrained environments with few sensors. I will also show how we can capture higher shape detail as well as material parameters of scenes outside of the lab. The talk will also show how one can effectively reconstruct very challenging scenes of a smaller scale, such as hand motion. Further, I will discuss how we can capitalize on more sophisticated light transport models to enable high-quality reconstruction in much more uncontrolled scenes, eventually also outdoors, with only a few cameras, or even just a single one. Ideas on how to perform deformable scene reconstruction in real-time will also be presented, if time allows.

His talk takes place on Wednesday, March 23, 2016, 1pm in room G202.

Video recording of the talk is publicly available.