Deep Learning

Deep learning is a new area of machine learning research that has demonstrated state-of-the-art performance on many artificial intelligence tasks, e.g., computer vision, speech recognition, and natural language processing. We are primarily interested in developing new deep architectures and training algorithms.

Ensembles of Deep Models

Ensemble methods have played a critical role in the machine learning community, obtaining better predictive performance than any of the constituent models could achieve alone. Recently, they have been successfully applied to enhance the power of deep neural networks. For example, roughly 80% of the top-5 best-performing teams in the ILSVRC 2016 challenge employed ensemble methods; our team, KAISTNIA_ETRI (joint with ETRI), also used an ensemble of 4-5 models and ranked 5th in both the classification and localization tasks.
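A common way to ensemble classifiers, as in the challenge setting above, is to average the predictive (softmax) distributions of the individual models. The following is a minimal illustrative sketch; the function and variable names are our own, not from any released code.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the class-probability outputs of several models.

    prob_list: list of (n_samples, n_classes) arrays of softmax outputs.
    Returns the averaged probabilities and the predicted class labels.
    """
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg, avg.argmax(axis=1)

# Toy example: two "models" predicting over 3 classes for 2 samples.
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
p2 = np.array([[0.5, 0.4, 0.1], [0.2, 0.6, 0.2]])
avg, labels = ensemble_predict([p1, p2])
```

Averaging probabilities (rather than hard votes) lets confident models contribute more to the final decision.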

[Figure: localization and classification results of the ILSVRC 2016 challenge]


Ensemble methods are easy to apply and reliable in most scenarios. We developed a more advanced ensemble method for deep models, coined confident multiple choice learning (CMCL), which builds on the concept of multiple choice learning (MCL). Our main novelty lies in introducing a new loss, called the confident oracle loss, which resolves the overconfidence issue of the original MCL.
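The idea behind the confident oracle loss can be sketched as follows: the model that best predicts a given example is trained on it as usual (the "oracle" assignment of MCL), while the remaining models are regularized toward the uniform distribution so that they do not produce overconfident wrong predictions. This is an illustrative sketch, not the released implementation; the function name, the KL-to-uniform form of the penalty, and the weight `beta` are simplifying assumptions.

```python
import numpy as np

def confident_oracle_loss(probs, label, beta=0.75):
    """Sketch of a confident-oracle-style loss for one example.

    probs: (n_models, n_classes) predictive distributions of the ensemble.
    label: true class index.
    beta:  weight of the confidence penalty (assumed hyperparameter).
    """
    n_models, n_classes = probs.shape
    ce = -np.log(probs[:, label])      # per-model cross-entropy
    best = ce.argmin()                 # oracle assignment: most accurate model
    # Penalty pushing the non-assigned models toward the uniform distribution.
    uniform = np.full(n_classes, 1.0 / n_classes)
    kl = np.array([np.sum(uniform * np.log(uniform / probs[m]))
                   for m in range(n_models)])
    mask = np.arange(n_models) != best
    return ce[best] + beta * kl[mask].sum()

# Toy example: model 0 is correct and confident, model 1 is near-uniform,
# so the penalty on model 1 is small and the loss is dominated by model 0.
probs = np.array([[0.80, 0.10, 0.10],
                  [0.34, 0.33, 0.33]])
loss = confident_oracle_loss(probs, label=0)
```

The key contrast with plain MCL is the KL term: without it, the non-assigned models receive no signal on this example and can become arbitrarily overconfident on inputs outside their specialization.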

Confident Multiple Choice Learning (code)

Kimin Lee, Changho Hwang, Kyoungsoo Park and Jinwoo Shin
ICML 2017


(edited by Kimin Lee and Jinwoo Shin, Jun 2017)