[Seminar] Fast Proximal Point Optimization for Solving Penalized Regression Problems
TU Dortmund University
Host: Prof. Byoung-Tak Zhang (02-880-1833)
Recent proximal point optimization techniques have been very successful for solving penalized regression problems defined with nonsmooth functions. In this talk, we discuss two types of penalized regression in the context of convex regularization, inspired by Tikhonov and Morozov. We first briefly introduce recent developments for the former type. In the second part, we focus on a particular instance of the latter type, namely the generalized Dantzig selector (GDS), presenting our recent contribution of a fast proximal point algorithm based on a convex-concave saddle-point reformulation. Some experimental results will be shown for a particular instance of GDS, defined with the ordered $\ell_1$-norm regularizer, which has an attractive provable FDR control property in high-dimensional model selection, similar to that of the Benjamini-Hochberg procedure. The talk will begin with a brief introduction of the Collaborative Research Center SFB 876 at TU Dortmund University and some of Dr. Lee's recent work.
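As a rough illustration of the regularizer mentioned above, the following is a minimal sketch of evaluating the ordered $\ell_1$ norm, $\sum_i \lambda_i |x|_{(i)}$, where $|x|_{(1)} \ge |x|_{(2)} \ge \dots$ are the sorted absolute entries and the weights $\lambda_1 \ge \lambda_2 \ge \dots \ge 0$ are nonincreasing. The function name and the example weights are illustrative choices, not taken from the talk:

```python
import numpy as np

def ordered_l1_norm(x, lam):
    """Ordered l1 norm: sum_i lam[i] * |x|_(i), pairing the largest weight
    with the largest absolute entry. lam is assumed nonincreasing and
    nonnegative; with all weights equal it reduces to the plain l1 norm."""
    abs_sorted = np.sort(np.abs(x))[::-1]  # |x| in nonincreasing order
    return float(np.dot(lam, abs_sorted))

x = np.array([3.0, -1.0, 2.0])
print(ordered_l1_norm(x, np.array([1.0, 1.0, 1.0])))  # 6.0 (plain l1 norm)
print(ordered_l1_norm(x, np.array([3.0, 2.0, 1.0])))  # 3*3 + 2*2 + 1*1 = 14.0
```

The decaying weight sequence is what penalizes large coefficients more heavily than an ordinary $\ell_1$ penalty would, which underlies the FDR control property mentioned in the abstract.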
Dr. Lee received his bachelor's degree (2003) and his first master's degree (2005) from Seoul National University. During his master's study, he worked in the BI lab with Prof. Zhang. Afterwards he moved to the USA for graduate study and received his second master's degree (2008) and his Ph.D. (2011) in optimization from the University of Wisconsin-Madison. Since 2011, Dr. Lee has been working as a postdoctoral researcher in Germany at the Collaborative Research Center SFB 876 at TU Dortmund University. Since 2015, he has been serving as a project leader at the research center.