[Seminar] Interpretable deep learning for healthcare
Georgia Institute of Technology
■Host: Prof. Byoung-Tak Zhang
Since 2012, deep learning, or representation learning, has shown impressive progress in computer vision, speech recognition, and natural language processing. The power of deep learning comes from combining expressive models with large labeled datasets. This has allowed machines to extract useful information from high-dimensional data, a task that was a human responsibility before the rise of deep learning. Massive data have been collected in healthcare since the introduction of electronic health records (EHR), and the amount of data exceeds what human medical experts can process. In this regard, deep learning is expected to play a significant role in healthcare, as it has in vision and language. However, computational healthcare requires predictive models to be both accurate and interpretable. My talk will introduce how to use recurrent neural networks (RNN), one of the building blocks of deep learning, to process longitudinal EHR data and predict a future event. Specifically, I will focus on predicting heart failure onset given a patient's 18-month record. Building on top of this, I will address the interpretability issue of deep learning models and propose a method that makes predictions that are both accurate and interpretable.
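To give a flavor of the approach described above, here is a minimal sketch of how an RNN can consume a patient's longitudinal record, treating each visit as a multi-hot vector of medical codes and emitting a heart-failure risk score. This is an illustrative toy, not the speaker's actual model: the dimensions, the vanilla-RNN cell, and the randomly initialized weights (standing in for trained parameters) are all assumptions.

```python
import math
import random

random.seed(0)

VISIT_DIM, HIDDEN_DIM = 4, 3  # toy sizes; real EHR code vocabularies are far larger

# Randomly initialized weights stand in for parameters learned from labeled EHR data.
W_xh = [[random.uniform(-0.1, 0.1) for _ in range(VISIT_DIM)] for _ in range(HIDDEN_DIM)]
W_hh = [[random.uniform(-0.1, 0.1) for _ in range(HIDDEN_DIM)] for _ in range(HIDDEN_DIM)]
w_out = [random.uniform(-0.1, 0.1) for _ in range(HIDDEN_DIM)]

def rnn_risk(visits):
    """Run a vanilla RNN over visits (oldest first); return a risk score in (0, 1)."""
    h = [0.0] * HIDDEN_DIM  # hidden state summarizes the visit history so far
    for x in visits:
        h = [math.tanh(sum(W_xh[i][j] * x[j] for j in range(VISIT_DIM)) +
                       sum(W_hh[i][j] * h[j] for j in range(HIDDEN_DIM)))
             for i in range(HIDDEN_DIM)]
    logit = sum(w_out[i] * h[i] for i in range(HIDDEN_DIM))
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid -> probability-like score

# Hypothetical 18-month record: three visits, each a multi-hot vector of codes.
patient = [[1, 0, 0, 1], [0, 1, 0, 1], [1, 1, 0, 0]]
risk = rnn_risk(patient)
print(round(risk, 3))
```

The key design point is that the hidden state `h` accumulates information across visits in temporal order, so the final prediction can depend on the whole 18-month history rather than any single visit; interpretability methods then ask which visits and codes drove that final score.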
Edward Choi is a PhD candidate at the Georgia Institute of Technology, working with Professor Jimeng Sun. He received his bachelor's degree from Seoul National University and his master's degree from KAIST. Before starting his PhD, he was a researcher on the Knowledge Mining Team at ETRI, working on natural language processing and big data analytics. His current research interest is applying deep learning approaches to longitudinal electronic health records to learn efficient patient representations and predict future medical events.