[Seminar] Video Highlight Detection at Yahoo!
■Host: Prof. Gunhee Kim (x7300, 880-7300)
The sheer amount of video produced each day makes it increasingly difficult to search, browse, and watch desired content efficiently. Video highlight detection has the potential to alleviate this issue by presenting users with the most interesting moments from a video. In this talk, I will give an overview of various video highlighting techniques we developed at Yahoo, and show how we use them to power innovative product features that serve millions of users each day. Specifically, I will show how we detect highlights from live broadcasts of eSports matches (pro gaming events), how we create animated GIFs automatically from videos, how we leverage textual descriptions for video summarization, and how we exploit visual aesthetics to select the most beautiful thumbnails from videos.
Yale Song is a Senior Research Scientist at Yahoo Research in New York City. He graduated with a Ph.D. in Computer Science from MIT in 2014. He is interested in innovative techniques for video understanding using computer vision and deep learning. His current research projects include video highlighting and summarization, video captioning and visual question answering, and generative modeling for video prediction. At Yahoo Research, he works on various real-world problems involving Yahoo's web-scale image and video data. Some of his work has been deployed in various products at Yahoo, including Flickr, Tumblr, Video Guide, and Yahoo Esports, and has been featured in MIT News, The Economist, and Vice Motherboard, among others.