[Seminar] Accelerating Machine-Learning and Big Data Workload on Heterogeneous Computing Platform

Minsik Cho
IBM TJ Watson
Thursday, July 7th 2016, 2:00pm

■ Inquiries: Prof. Sungjoo Yoo (x9392, 880-9392)


Machine learning is to big data as human learning is to life experience. Humans interpolate and extrapolate from history and past knowledge to learn how to address unfamiliar problems and deal with future challenges. Machine learning applies the same principle to big data, but at a massively larger scale in order to close the accuracy gap with the efficient cognitive processes of the human brain; this calls for machine-learning and big data acceleration techniques on heterogeneous computing platforms. In this seminar, I will share software-hardware-system co-optimization efforts for machine-learning and big data acceleration at IBM Research, and discuss our findings and results. Specifically, I will first present our current research on Apache Spark on GPUs, where several key machine-learning algorithms are efficiently mapped to GPUs for acceleration [GTC2016]. Then I will turn to sorting, one of the critical kernels in big data, and discuss its acceleration on many-core platforms: I will briefly describe a new parallel radix sort algorithm, PARADIS [VLDB2015], and compare it with GPU-based sorting and a custom ASIC sorting accelerator [DATE2016].
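For readers unfamiliar with the kernel PARADIS builds on, the following is a minimal sketch of a classic LSD (least-significant-digit) radix sort. It is an illustration of the baseline algorithm only, not PARADIS itself; the abstract does not describe PARADIS's in-place parallel permutation, so that is omitted here. The function name and the `bits_per_pass` parameter are assumptions for this sketch.

```python
def radix_sort(keys, bits_per_pass=8):
    """LSD radix sort for non-negative integers, one digit per pass.

    Each pass has three phases -- histogram, prefix sum, scatter --
    which are the natural targets for parallelization in algorithms
    such as PARADIS (whose in-place scheme is not shown here).
    """
    if not keys:
        return keys
    mask = (1 << bits_per_pass) - 1
    shift = 0
    max_key = max(keys)
    while shift == 0 or (max_key >> shift) > 0:
        # Phase 1: count how many keys fall in each digit bucket
        # (parallelizable: each thread histograms its own chunk).
        counts = [0] * (mask + 1)
        for k in keys:
            counts[(k >> shift) & mask] += 1
        # Phase 2: exclusive prefix sum gives each bucket's start offset.
        offsets = [0] * (mask + 1)
        for d in range(1, mask + 1):
            offsets[d] = offsets[d - 1] + counts[d - 1]
        # Phase 3: scatter keys into their buckets, preserving order
        # within a bucket (stability is what makes LSD passes compose).
        out = [0] * len(keys)
        for k in keys:
            d = (k >> shift) & mask
            out[offsets[d]] = k
            offsets[d] += 1
        keys = out
        shift += bits_per_pass
    return keys
```

Because each pass touches every key, the scatter phase dominates on large inputs; doing it in place and in parallel without an auxiliary output array is the hard part that specialized algorithms and hardware accelerators address.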

Speaker Bio

Minsik Cho has been a research staff member at the IBM TJ Watson Research Center since 2008. He received his BS in EE from SNU in 1999 and his Ph.D. in ECE from UT Austin in 2008. He is a member of the Acceleration Group in the Cloud and Cognitive Computing Division. His research interests are big data and machine/deep-learning acceleration based on SW-HW-system co-optimization. He received best paper awards at ASP-DAC 2010 and ISPD 2013, and the IBM Pat Goldberg Memorial Best Paper Award in 2011.