[Seminar] On Image-to-Image Translation

Affiliation:
Berkeley AI Research (BAIR) Lab
Date & Time:
Wednesday, October 18, 2017, 11:00 AM – 12:00 PM
Location:
302-309

■ Host: Prof. Gunhee Kim (x7300, 880-7300) ■ Inquiries: Vision and Learning Lab (880-7289)

Abstract

Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image. In this talk, I will first investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. Using a training set of aligned image pairs, these networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Second, I will present an approach for learning to translate an image from a source domain to a target domain in the absence of paired examples. We exploit the property that translation should be "cycle consistent", in the sense that if we translate, e.g., a sentence from English to French, and then translate it back from French to English, we should arrive back at the original sentence. Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.
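For readers unfamiliar with the two objectives mentioned in the abstract, below is a minimal PyTorch-style sketch of the conditional adversarial loss (the discriminator scores input/output pairs, so it acts as a learned loss) and the cycle-consistency loss (translating forward and back should reconstruct the original image). The names G, F, D and the weight lam are illustrative assumptions, not code from the speaker's work; the published methods also include terms not shown here, such as an L1 reconstruction term in pix2pix and a least-squares GAN loss in CycleGAN.

import torch
import torch.nn as nn

# Hypothetical modules (illustrative names):
#   G: generator mapping domain X -> Y
#   F: generator mapping domain Y -> X
#   D: discriminator scoring (input, output) image pairs
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def cgan_discriminator_loss(D, real_x, real_y, fake_y):
    """Conditional GAN loss: D sees (input, output) pairs, so it
    effectively learns a loss for training the input->output mapping."""
    real_logits = D(real_x, real_y)
    fake_logits = D(real_x, fake_y.detach())
    return (bce(real_logits, torch.ones_like(real_logits))
            + bce(fake_logits, torch.zeros_like(fake_logits)))

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    """Cycle consistency: X -> Y -> X (and Y -> X -> Y) should
    reconstruct the original image, penalized with an L1 distance."""
    forward = l1(F(G(real_x)), real_x)   # x -> G(x) -> F(G(x)) ~ x
    backward = l1(G(F(real_y)), real_y)  # y -> F(y) -> G(F(y)) ~ y
    return lam * (forward + backward)

The weight lam corresponds to the lambda hyperparameter in the CycleGAN paper (which reports lambda = 10); its exact value here is only illustrative.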

About the Speaker

Jun-Yan Zhu is a Ph.D. student at the Berkeley AI Research (BAIR) Lab, working on computer vision, graphics, and machine learning with Professor Alexei A. Efros. He received his B.E. from Tsinghua University in 2012 and was a Ph.D. student at CMU from 2012 to 2013. His research goal is to build machines capable of recreating the visual world. Jun-Yan is currently supported by the Facebook Graduate Fellowship.