[Seminar] Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications

Yong-Deok Kim
Ph.D.
Samsung Electronics Software
Date: Tuesday, December 13th 2016, 10:00am
Location: 302-308

■ Host: Prof. Sungjoo Yoo (x9392, 880-9392)

Summary

Compression is required for the deployment of deep convolutional neural networks (CNNs) on mobile devices. To deploy deep CNNs on mobile devices, we present a simple and effective scheme to compress the entire CNN, which we call one-shot whole network compression. The proposed scheme consists of three steps: (1) rank selection with variational Bayesian matrix factorization, (2) Tucker decomposition of the kernel tensors, and (3) fine-tuning to recover the accumulated accuracy loss. Each step can be easily implemented using publicly available tools. We demonstrate the effectiveness of the proposed scheme by testing the performance of various compressed CNNs on a smartphone. Significant reductions in model size, runtime, and energy consumption are obtained, at the cost of a small loss in accuracy.
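To give a flavor of step (2), the sketch below shows a Tucker-2 decomposition of a 4D convolution kernel along its output- and input-channel modes via a truncated higher-order SVD, using only NumPy. This is an illustrative reimplementation, not the speaker's code: the function names (`unfold`, `tucker2`) and the example ranks are assumptions, and in the actual scheme the ranks would come from step (1)'s variational Bayesian matrix factorization rather than being chosen by hand.

```python
import numpy as np

def unfold(tensor, mode):
    """Matricize a tensor along the given mode (mode-n unfolding)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker2(kernel, r_out, r_in):
    """Tucker-2 decomposition of a (T, S, d, d) conv kernel.

    Only the output-channel (T) and input-channel (S) modes are
    decomposed; the small d x d spatial modes are left intact, as is
    common when compressing conv layers. Ranks r_out, r_in are assumed
    to be given (e.g. by a separate rank-selection step).
    """
    # Leading left singular vectors of each mode unfolding (truncated HOSVD).
    U = np.linalg.svd(unfold(kernel, 0), full_matrices=False)[0][:, :r_out]  # (T, r_out)
    V = np.linalg.svd(unfold(kernel, 1), full_matrices=False)[0][:, :r_in]   # (S, r_in)
    # Core tensor: project the kernel onto the two factor subspaces.
    core = np.einsum('tskl,tr,sq->rqkl', kernel, U, V)
    return core, U, V

# Example: compress a hypothetical 64x32x3x3 kernel to channel ranks (16, 8).
K = np.random.randn(64, 32, 3, 3)
core, U, V = tucker2(K, 16, 8)
K_approx = np.einsum('rqkl,tr,sq->tskl', core, U, V)
print(core.shape)  # (16, 8, 3, 3): far fewer parameters than the original kernel
```

In a compressed network, this factorization replaces one d x d convolution with three cheaper layers: a 1x1 convolution (V), a d x d convolution with the small core, and another 1x1 convolution (U), which is where the runtime and energy savings come from.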