Projects

Inference in probabilistic graphical models

Computing the partition function and the MAP (maximum a posteriori) assignment are among the most important statistical inference tasks arising in applications of probabilistic graphical models. However, they are known to be computationally intractable in general (NP-hard for MAP and #P-hard for the partition function). We have worked on developing efficient, polynomial-time algorithms with strong theoretical guarantees.
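To make the hardness concrete: exact computation of the partition function requires summing over every joint configuration, which grows exponentially with the number of variables. The sketch below (an illustrative toy, not one of the lab's algorithms) brute-forces the partition function of a small Ising chain; the model parameters `h` and `J` are hypothetical.

```python
import itertools
import math

def partition_function(h, J):
    """Brute-force partition function Z of a small Ising chain.

    h: per-spin field strengths; J: coupling between adjacent spins.
    The loop visits all 2^n spin configurations, which is exactly why
    exact inference is intractable for large models.
    """
    n = len(h)
    Z = 0.0
    for spins in itertools.product([-1, 1], repeat=n):
        energy = sum(h[i] * spins[i] for i in range(n))
        energy += sum(J * spins[i] * spins[i + 1] for i in range(n - 1))
        Z += math.exp(energy)
    return Z

# Two non-interacting spins with zero field: every configuration has
# weight exp(0) = 1, so Z = 2^2 = 4.
print(partition_function([0.0, 0.0], 0.0))  # → 4.0
```

Polynomial-time approximation schemes aim to avoid this exponential enumeration while still bounding the error on Z.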
Contributors in our lab: Sungsoo Ahn, Sejun Park, Jungseul Ok

Large-scale spectral computation

Computing spectral functions of matrices plays an important role in many scientific computing applications, including machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis, and computational biology (e.g., protein folding), to name a few. We have worked on developing efficient algorithms for approximating and optimizing spectral functions of large-scale matrices with tens of millions of dimensions.
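A common building block in this line of work is stochastic trace estimation, which approximates the trace of a large (possibly implicit) matrix using only matrix-vector products. The following is a minimal sketch of the classic Hutchinson estimator, shown here only as an illustration of the idea; the specific function names and the toy 2x2 matrix are assumptions, not the lab's implementation.

```python
import random

def matvec(A, x):
    """Dense matrix-vector product; in practice A is accessed implicitly."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def hutchinson_trace(A, num_samples=2000, seed=0):
    """Hutchinson estimator: tr(A) ≈ mean of z^T A z for Rademacher z.

    Only matrix-vector products are required, which is what makes this
    family of estimators viable when A has tens of millions of rows.
    """
    rng = random.Random(seed)
    n = len(A)
    total = 0.0
    for _ in range(num_samples):
        z = [rng.choice([-1.0, 1.0]) for _ in range(n)]
        Az = matvec(A, z)
        total += sum(zi * yi for zi, yi in zip(z, Az))
    return total / num_samples

A = [[2.0, 1.0], [1.0, 3.0]]  # exact trace = 5
est = hutchinson_trace(A)     # concentrates around 5
```

Combining such randomized estimators with polynomial (e.g., Chebyshev) or Lanczos approximations of f(A) extends the idea from tr(A) to general spectral functions tr(f(A)), such as the log-determinant.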
Contributor in our lab: Insu Han

Uncertainty in deep neural classifiers

Predictive uncertainty (e.g., the entropy of a deep classifier's softmax distribution) is indispensable: it is useful in many machine learning tasks (e.g., active learning, incremental learning, and ensemble learning), as well as when deploying a trained model in real-world systems. To improve the quality of predictive uncertainty estimates, we have proposed advanced training and inference schemes for deep models.

Contributor in our lab: Kimin Lee

Deep neural architectures and training schemes

Maximizing social strategic diffusion

Combinatorial resource allocation
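The softmax-entropy notion of predictive uncertainty mentioned in the classifier-uncertainty project above can be sketched in a few lines; this is the standard textbook quantity, not the lab's proposed schemes, and the example logits are hypothetical.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predictive_entropy(logits):
    """Entropy (in nats) of the softmax distribution.

    Low entropy means a confident prediction; the maximum, log(K) for
    K classes, is reached when the classifier is maximally uncertain.
    """
    probs = softmax(logits)
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# A peaked prediction has lower entropy than a uniform one.
confident = predictive_entropy([10.0, 0.0, 0.0])
uniform = predictive_entropy([1.0, 1.0, 1.0])  # = log(3) ≈ 1.0986
```

Tasks such as active learning then act on this scalar, e.g., by querying labels for the inputs whose predictive entropy is highest.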