[Seminar] Intrinsic image decomposition from multiple photographs
Editing materials and lighting is a common image manipulation task that requires significant expertise to achieve plausible results. Each pixel aggregates the effect of both material and lighting, so standard color manipulations are likely to affect both components. Intrinsic image decomposition separates a photograph into independent layers: reflectance, which represents the color of the materials, and illumination, which encodes the effect of lighting at each pixel. We tackle this ill-posed separation problem by leveraging additional information provided by multiple photographs of the scene. We combine image-guided algorithms with sparse 3D information reconstructed from multi-view stereo in order to constrain the decomposition. We first present an approach to decompose images of outdoor scenes, using photographs captured at a single time of day. This method not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers. In the second part, we focus on image collections gathered from photo-sharing websites or captured with a moving light source. We exploit the variations in lighting to process complex scenes without user assistance or precise and complete geometry. The methods described in this talk enable advanced image manipulations such as lighting-aware editing, insertion of virtual objects, and image-based illumination transfer between photographs of a collection.
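The separation described above rests on the standard multiplicative intrinsic image model: each pixel of an image I is the per-pixel product of a reflectance layer R and an illumination layer S. A minimal sketch of that model follows; the function names and layer values are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def compose(reflectance, illumination):
    """Form an image from its intrinsic layers: I = R * S, per pixel."""
    return reflectance * illumination

def recover_illumination(image, reflectance, eps=1e-8):
    """If reflectance were known, illumination follows by division.

    The talk's contribution is estimating these layers when neither is
    known, using multiple photographs and sparse 3D cues; this helper
    only illustrates why the problem is ill-posed from one image alone.
    """
    return image / (reflectance + eps)

# Hypothetical 2x2 grayscale layers for illustration.
R = np.array([[0.8, 0.2], [0.5, 0.9]])   # material color
S = np.array([[1.0, 0.3], [0.6, 0.6]])   # lighting effect
I = compose(R, S)
S_hat = recover_illumination(I, R)        # matches S up to eps
```

Because only the product I is observed, infinitely many (R, S) pairs explain a single photograph, which is why the abstract emphasizes multiple images and multi-view 3D cues as additional constraints.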
Pierre-Yves Laffont is a postdoctoral researcher at Brown University, working with James Hays. His research focuses on intrinsic images, image editing, image-based rendering, and relighting, using geometric cues from multi-view reconstruction. He did his PhD at INRIA Sophia-Antipolis under the supervision of George Drettakis and Adrien Bousseau, and spent a few months at UC Berkeley and MIT CSAIL. He also studied at INSA Lyon (France) and at KAIST (South Korea), and loves travelling in Asia.