Machine Learning with Nearest Neighbors

Yung-Kyun Noh
Friday, December 27th 2013, 2:00pm
Room 308 Building 302

Host: Prof. Byoung-Tak Zhang


In this talk, I will present contemporary machine learning methods that utilize theoretical properties of nearest neighbors. The theoretical study of nearest neighbor information goes back to T. Cover and P. Hart’s work in the 1960s, which analyzed the asymptotic behavior of nearest neighbor classification with large amounts of data. Their best-known contribution is the asymptotic upper bound on the classification error, which is twice the Bayes error, along with the idea of connecting nearest neighbor information to the underlying probability density functions. I will give several examples of how nearest neighbor information can be better utilized in this theoretical context, and discuss related topics in supervised machine learning theory. The presentation is mostly based on my recent work, but I will also introduce papers by leading researchers in this field that have appeared in recent NIPS and AISTATS conferences and in several statistics journals.
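As a concrete illustration of the classifier the abstract refers to, the following is a minimal sketch of 1-nearest-neighbor classification on toy data. The data generation and function names here are illustrative assumptions, not material from the talk; Cover and Hart's result says that, asymptotically, the error of this rule is at most twice the Bayes error.

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    """Label each test point with the label of its single nearest
    training point under Euclidean distance (the 1-NN rule)."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        preds.append(y_train[np.argmin(dists)])
    return np.array(preds)

# Illustrative toy data: two well-separated Gaussian classes.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))  # class 0
X1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))  # class 1
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 100 + [1] * 100)

# Query points near each class center.
X_test = np.array([[-2.0, -2.0], [2.0, 2.0]])
print(one_nn_predict(X_train, y_train, X_test))
```

With classes this well separated, the 1-NN rule assigns each query point to the class whose samples surround it.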

Speaker Bio

Dr. Yung-Kyun Noh is currently a research professor in the Department of Computer Science at KAIST. His research interests are metric learning and dimensionality reduction in machine learning, and he is especially interested in applying the statistical theory of nearest neighbors to real, large-scale datasets. He received his B.S. in Physics from POSTECH and his Ph.D. in Computer Science from the Interdisciplinary Program in Cognitive Science at Seoul National University (SNU). He was a postdoctoral fellow in the School of Mechanical and Aerospace Engineering at SNU, and has worked in the Sugiyama Lab at the Tokyo Institute of Technology with Prof. Masashi Sugiyama and in the GRASP Robotics Laboratory at the University of Pennsylvania with Prof. Daniel D. Lee.