[Seminar] Deep Latent Variable Models of Natural Language

Yoon Kim
Ph.D. student
Harvard University
Friday, January 12th, 2018, 11:00am - 12:00pm

■ Host: Prof. Gunhee Kim (x7300, 880-7300) ■ Inquiries: Vision and Learning Lab. (02-880-7289)


Deep latent variable models assume a generative process whereby a simple random variable drawn from a latent space is transformed into the observed output space through a deep neural network. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) are two of the most popular variants of this approach. Both GANs and VAEs have been remarkably effective at modeling images, and the learned latent representations often correspond to interesting, semantically meaningful representations of the observed data. In contrast, GANs and VAEs have been less successful at modeling natural language, though for different reasons. GANs have difficulty dealing with discrete output spaces (such as natural language), as the resulting objective is no longer differentiable with respect to the generator. VAEs can deal with discrete output spaces, but when a powerful model (e.g. an LSTM) is used as the generator, the model learns to ignore the latent variable and simply becomes a language model. In this talk, I will discuss our ongoing work on applying GANs and VAEs to natural language and present some recent successes.
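The latent-variable-ignoring behavior mentioned above can be seen directly in the KL term of the VAE objective: when a powerful decoder can model the data on its own, the optimizer can push the approximate posterior onto the prior, which zeroes the KL penalty and leaves the latent code carrying no information. A minimal sketch (plain Python; the function name and formula are the standard closed-form KL between a diagonal Gaussian and a standard normal prior, not code from the talk):

```python
import math

def kl_diag_gaussian(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), the regularizer in the VAE objective."""
    return 0.5 * sum(m * m + s * s - 1.0 - math.log(s * s)
                     for m, s in zip(mu, sigma))

# A "collapsed" posterior that ignores the input and matches the prior exactly
# incurs zero KL cost -- the degenerate solution a strong decoder permits:
print(kl_diag_gaussian([0.0, 0.0], [1.0, 1.0]))  # → 0.0

# An informative posterior that deviates from the prior pays a positive KL cost:
print(kl_diag_gaussian([1.5, -0.3], [0.4, 0.9]))
```

When the decoder alone achieves high likelihood, nothing in the objective rewards paying that positive KL cost, so training settles on the collapsed solution.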

Speaker Bio

Yoon Kim is currently a Ph.D. student in Computer Science at Harvard University, working on machine learning and natural language processing. He is advised by Prof. Alexander Rush. He obtained a Bachelor's degree in Mathematics and Economics from Cornell University, a Master's degree in Statistics from Columbia University, and a Master's degree in Data Science from New York University.