Inference in probabilistic graphical models

Computing the partition function or the MAP (maximum a posteriori) assignment is among the most important statistical inference tasks arising in applications of probabilistic graphical models. However, both are computationally intractable in general (#P-hard and NP-hard, respectively). We have worked on developing efficient, polynomial-time algorithms with strong theoretical guarantees.
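As a minimal illustration of why these tasks are hard, the sketch below computes the partition function of a small Ising model by brute-force enumeration over all 2^n spin configurations; the model, variable names, and parameters are illustrative assumptions, not the algorithms developed in our work.

```python
import itertools
import math

def partition_function(J, h):
    """Exact partition function of a small Ising model by brute-force
    enumeration over all 2^n spin configurations -- O(2^n) time,
    which is why efficient approximations are needed at scale."""
    n = len(h)
    Z = 0.0
    for spins in itertools.product([-1, 1], repeat=n):
        energy = sum(h[i] * spins[i] for i in range(n))
        energy += sum(J[i][j] * spins[i] * spins[j]
                      for i in range(n) for j in range(i + 1, n))
        Z += math.exp(energy)
    return Z

# Sanity check: with no couplings (J = 0), the model factorizes and
# Z = prod_i 2*cosh(h_i).
h = [0.5, -0.3, 1.0]
J = [[0.0] * 3 for _ in range(3)]
Z = partition_function(J, h)
expected = 1.0
for hi in h:
    expected *= 2 * math.cosh(hi)
```

The same enumeration with `max` in place of the sum would give the MAP assignment, which is why both tasks share the exponential bottleneck.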

Contributors in our lab: Sungsoo Ahn, Sejun Park, Jungseul Ok

Large-scale spectral computation

Computing spectral functions of matrices plays an important role in many scientific computing applications, including machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis, and computational biology (e.g., protein folding). We have worked on developing efficient algorithms for approximating and optimizing spectral functions of large-scale matrices with tens of millions of dimensions.

  • stochastic gradient descent for optimizing spectral-sums (arXiv 2018)

  • MAP inference in determinantal point processes (ICML 2017)

  • approximating large-scale spectral-sums (ICML 2015, SISC 2017)
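A standard building block for approximating spectral-sums tr(f(A)) = Σᵢ f(λᵢ) using only matrix-vector products is Hutchinson's stochastic trace estimator. The sketch below estimates tr(A³), i.e., the spectral-sum with f(x) = x³, for a matrix given only through matvecs; it is a generic textbook estimator under assumed names, not the specific algorithms of the papers above.

```python
import random

def matvec(A, v):
    """Dense matrix-vector product; in practice A would be accessed
    only through such products, never formed explicitly."""
    return [sum(A[i][j] * v[j] for j in range(len(v)))
            for i in range(len(A))]

def hutchinson_trace_cube(A, num_samples=100, seed=0):
    """Estimate tr(A^3) = sum_i lambda_i^3 via Hutchinson's estimator:
    for Rademacher vectors v (entries +/-1), E[v^T A^3 v] = tr(A^3).
    Each sample costs three matvecs, independent of the dimension's cube."""
    rng = random.Random(seed)
    n = len(A)
    total = 0.0
    for _ in range(num_samples):
        v = [rng.choice([-1.0, 1.0]) for _ in range(n)]
        w = matvec(A, matvec(A, matvec(A, v)))  # A^3 v via three matvecs
        total += sum(vi * wi for vi, wi in zip(v, w))
    return total / num_samples

# For a diagonal matrix the estimator is exact for every sample,
# since v^T A^3 v = sum_i A_ii^3 * v_i^2 and v_i^2 = 1.
A = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 3.0]]
est = hutchinson_trace_cube(A, num_samples=10)  # tr(A^3) = 1 + 8 + 27 = 36
```

For smoother f (e.g., log-determinant via f(x) = log x), the cube would be replaced by a polynomial approximation of f applied through matvecs, which is the regime the papers above address.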

Contributor in our lab: Insu Han

Uncertainty in deep neural classifiers

Predictive uncertainty (e.g., the entropy of a deep classifier's softmax distribution) is indispensable in many machine learning tasks (e.g., active learning, incremental learning, and ensemble learning), as well as when deploying trained models in real-world systems. To improve the quality of predictive uncertainty, we have proposed advanced training and inference schemes for deep models.
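The entropy measure mentioned above can be sketched as follows; the logit values are illustrative assumptions, and this is only the baseline uncertainty score, not our proposed training or inference schemes.

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max logit before
    exponentiating to avoid overflow."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predictive_entropy(logits):
    """Entropy of the softmax distribution, in nats.
    Higher entropy = more uncertain prediction."""
    p = softmax(logits)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# A peaked softmax (confident prediction) has entropy near 0;
# uniform logits give the maximum entropy log(num_classes).
confident = predictive_entropy([10.0, 0.0, 0.0])
uncertain = predictive_entropy([1.0, 1.0, 1.0])  # = log(3)
```

Thresholding such a score is a simple way to flag inputs (e.g., out-of-distribution samples) on which a deployed model should abstain or defer.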

Contributor in our lab: Kimin Lee

Deep neural architectures and training schemes

Maximizing social strategic diffusion

Combinatorial resource allocation