Research

Bayesian Deep Learning
Bayesian deep learning (BDL) combines Bayesian inference with deep learning models, offering principled uncertainty quantification and improved robustness over purely deterministic networks. However, BDL faces significant challenges: posterior inference is computationally intractable in the high-dimensional parameter spaces of neural networks, and common posterior approximations tend to be overconfident or miscalibrated. Addressing these computational and reliability issues is essential to make BDL practical and trustworthy in safety-critical applications.
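As a toy illustration of the posterior reasoning that BDL seeks to scale to deep networks, the sketch below performs exact conjugate Bayesian inference on a linear model with numpy. All sizes and hyperparameters (prior precision, noise level) are illustrative assumptions, not part of any specific method above; in a deep network this posterior has no closed form and must be approximated.

```python
import numpy as np

# Toy Bayesian linear regression: the posterior over weights is Gaussian in
# closed form, so predictive uncertainty can be computed exactly. BDL aims
# to carry this kind of uncertainty quantification over to deep networks,
# where the posterior is intractable. All values here are illustrative.

rng = np.random.default_rng(0)
n, d = 20, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
sigma = 0.1                                  # observation noise std (assumed known)
y = X @ w_true + sigma * rng.normal(size=n)

alpha = 1.0                                  # prior precision: w ~ N(0, I / alpha)
# Conjugate Gaussian posterior over weights: w | X, y ~ N(mu, S)
S_inv = alpha * np.eye(d) + X.T @ X / sigma**2
S = np.linalg.inv(S_inv)
mu = S @ X.T @ y / sigma**2

# Predictive distribution at a new input: variance decomposes into
# irreducible noise plus parameter (epistemic) uncertainty.
x_star = rng.normal(size=d)
pred_mean = x_star @ mu
pred_var = sigma**2 + x_star @ S @ x_star
```

The predictive variance term `x_star @ S @ x_star` is exactly the epistemic uncertainty that deterministic networks discard, and that miscalibrated posterior approximations misestimate.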
Data Assimilation
Data assimilation (DA) combines dynamical models with sparse, noisy observations to estimate latent system states and quantify uncertainty for applications such as climate forecasting and environmental monitoring. In practice, traditional DA methods face severe challenges due to high-dimensional state spaces, nonlinear and possibly chaotic dynamics, model error arising from imperfect physical representations, and non-Gaussian uncertainties. Therefore, it is crucial to develop efficient and robust DA algorithms that scale to high-dimensional, complex dynamical systems while properly accounting for model uncertainty.
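To make the model-plus-observations idea concrete, the following is a minimal sketch of a single ensemble Kalman filter (EnKF) analysis step, a standard DA workhorse, with a perturbed-observation update. The state dimension, observation operator, and ensemble size are illustrative assumptions, not tied to any particular system discussed above.

```python
import numpy as np

# Minimal EnKF analysis step: combine a forecast ensemble with a noisy
# partial observation of the state. All sizes/values are illustrative.

rng = np.random.default_rng(1)
d, m, N = 4, 2, 100                          # state dim, obs dim, ensemble size
H = np.zeros((m, d)); H[0, 0] = H[1, 2] = 1.0  # observe state components 0 and 2
R = 0.25 * np.eye(m)                         # observation-error covariance

x_true = np.array([1.0, 0.0, -1.0, 0.5])
y_obs = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

# Forecast ensemble scattered around a deliberately biased prior mean
ens = rng.normal(loc=x_true + 1.0, scale=1.0, size=(N, d))

# Sample forecast covariance and Kalman gain
A = ens - ens.mean(axis=0)
Pf = A.T @ A / (N - 1)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Perturbed-observation update of each ensemble member
y_pert = y_obs + rng.multivariate_normal(np.zeros(m), R, size=N)
analysis = ens + (y_pert - ens @ H.T) @ K.T
```

The analysis mean is pulled toward the observation in the observed components, while the ensemble spread carries the remaining uncertainty; the challenges in the paragraph above (high dimension, nonlinearity, model error) are precisely where this Gaussian, sample-covariance machinery starts to break down.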
Scalable Gaussian Processes
Gaussian processes (GPs) face two central computational bottlenecks: inverting the covariance matrix and computing its log-determinant. Both operations scale cubically with the number of data points, making standard GP inference prohibitive on large datasets. Therefore, developing scalable methods for GPs is crucial to unlock their full potential for large-scale applications while preserving their desirable uncertainty quantification and theoretical guarantees.
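Both bottleneck operations fall out of a single Cholesky factorization of the n-by-n kernel matrix, which is exactly the O(n^3) step that scalable GP methods approximate or avoid. The sketch below shows this for a small RBF-kernel regression problem; the kernel, lengthscale, and noise level are illustrative choices.

```python
import numpy as np

# The two bottlenecks of exact GP inference — solving K^{-1} y and computing
# log|K| — both follow from one O(n^3) Cholesky factorization of the n x n
# kernel matrix. Scalable GP methods aim to sidestep this cubic cost.

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel matrix for inputs X of shape (n, d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(2)
n = 200
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

noise = 0.01
K = rbf_kernel(X) + noise * np.eye(n)        # kernel matrix + noise jitter

L = np.linalg.cholesky(K)                    # O(n^3): the scalability bottleneck
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y via two O(n^2) triangular solves
logdet = 2.0 * np.log(np.diag(L)).sum()      # log|K| from the Cholesky diagonal

# Gaussian log marginal likelihood, the objective optimized in GP training
lml = -0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))
```

Inducing-point, iterative, and structured-kernel methods replace this exact factorization with cheaper approximations, trading a controlled amount of accuracy for near-linear cost while trying to keep calibrated predictive uncertainty.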