Kimin Lee (이기민)
Position: Postdoc
UC Berkeley
Deep reinforcement learning (RL) has been successful in a range of challenging domains, such as board games, video games, and robotic control tasks. Scaling RL to many applications, however, is still hindered by several challenges. One such challenge lies in improving the sample efficiency of RL algorithms, especially when learning from high-dimensional inputs (e.g., pixels). For example, on the standard DeepMind Control Suite benchmark, state-of-the-art methods with direct access to the underlying state are roughly two orders of magnitude more data-efficient than methods learning from pixels. In this presentation, I will first introduce recent work on sample-efficient deep RL, including RAD (data augmentation), ATC (representation learning), and RE3 (exploration). Another challenge in scaling RL is providing a reward function that is sufficiently informative yet easy to specify. For example, real-world problems may require extensive instrumentation, and it may be hard to encode social norms in a hand-engineered reward function. To address this issue, I will also introduce a new interactive framework that enables us to utilize RL without a well-designed reward function.
Online Zoom link: https://snu-ac-kr.zoom.us/j/83623870773
Kimin Lee is a postdoc at UC Berkeley working with Pieter Abbeel. He is interested in scaling deep reinforcement learning to diverse and challenging domains, including reinforcement learning from high-dimensional inputs, reward-free reinforcement learning, and unsupervised reinforcement learning. He received his Ph.D. from KAIST, where he worked on reliable and robust machine/deep learning with Jinwoo Shin. During his Ph.D., he also interned at the University of Michigan, collaborating closely with Honglak Lee. Several of his works have been presented as spotlight presentations at top-tier machine learning conferences.