Deep Learning

Deep learning is a new area of machine learning research that has demonstrated state-of-the-art performance on many artificial intelligence tasks, e.g., computer vision, speech recognition and natural language processing. We are primarily interested in developing new deep architectures and training algorithms.

Ensembles of Deep Models

Ensemble methods have long played a critical role in the machine learning community, as they obtain better predictive performance than any of the constituent learning models alone. Recently, they have been successfully applied to enhance the power of many deep neural networks. Many of the best-performing teams in machine learning competitions employ ensemble methods; e.g., our team, KAISTNIA_ETRI (joint with ETRI), used ensembles of 5~10 models and was ranked 5th for both the classification and localization tasks at the ILSVRC 2016 challenge, and 3rd for the detection task at the ILSVRC 2017 challenge.

Localization and classification results at the ILSVRC 2016 challenge:

[Figure: ILSVRC 2016 localization and classification results]

Detection results at the ILSVRC 2017 challenge:

[Figure: ILSVRC 2017 object detection results]
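
As background, the simplest form of test-time ensembling averages the predictive distributions of several independently trained models and predicts the class with the highest average probability. The sketch below, in PyTorch, is a generic illustration under assumed model and input shapes; it is not the actual competition pipeline.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_predict(models, x):
    # Average the class probabilities of the trained members and
    # return the ensemble's predicted label for each input in the batch.
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models])  # (num_models, batch, classes)
    avg_probs = probs.mean(dim=0)                                  # (batch, classes)
    return avg_probs.argmax(dim=1)                                 # (batch,)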

Ensemble methods are easy and reliable to apply in most scenarios. We developed a more advanced ensemble method for deep models, coined confident multiple choice learning (CMCL), which builds on the known concept of multiple choice learning (MCL). In MCL, each training example is assigned only to the ensemble member that currently predicts it best, so the remaining members can become severely overconfident on examples they never specialize in. Our main novelty lies in introducing a new loss, called the confident oracle loss, which resolves this overconfidence issue of the original MCL by additionally pushing the non-specialized members toward uncertain (near-uniform) predictions.
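
To make the idea concrete, the sketch below implements the confident oracle loss in PyTorch as we describe it above: each example is assigned to the member with the smallest loss, while the other members pay a KL-to-uniform penalty. Function and variable names, the beta value, and tensor shapes are illustrative assumptions; for the actual implementation, see the (code) link below.

import torch
import torch.nn.functional as F

def confident_oracle_loss(logits_list, targets, beta=0.75):
    # logits_list: one (batch, num_classes) logit tensor per ensemble member.
    # Per-member cross-entropy, stacked to shape (num_models, batch).
    ce = torch.stack([F.cross_entropy(l, targets, reduction='none') for l in logits_list])
    # KL(Uniform || P_m) up to an additive constant: the mean negative
    # log-probability over classes, shape (num_models, batch).
    kl_to_uniform = torch.stack([-F.log_softmax(l, dim=1).mean(dim=1) for l in logits_list])
    total_kl = kl_to_uniform.sum(dim=0)  # (batch,)
    # Candidate loss if member m is the "oracle" for an example:
    # its own cross-entropy plus the KL penalty of all *other* members.
    candidate = ce + beta * (total_kl.unsqueeze(0) - kl_to_uniform)
    # Each example is assigned to its best (minimum-loss) member.
    loss, _ = candidate.min(dim=0)
    return loss.mean()

Minimizing the KL term drives a non-specialized member's predictive distribution toward uniform, so it reports low confidence rather than a confident wrong answer.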

Confident Multiple Choice Learning (code)

Kimin Lee, Changho Hwang, Kyoungsoo Park and Jinwoo Shin
ICML 2017

[Figure: CMCL]

(edited by Kimin Lee and Jinwoo Shin, Jun 2017)