Scalable Algorithms for Data-driven Inverse and Learning Problems
Inverse problems and uncertainty quantification (UQ) are pervasive in scientific
discovery and decision-making for complex natural, engineered, and societal systems.
They are perhaps the most popular mathematical approaches for enabling predictive scientific simulations that integrate observational/experimental data with simulations and models.
models. Unfortunately, inverse/UQ problems for practical complex systems possess these the simultaneous challenges: the large-scale forward problem challenge, the high dimensional parameter space challenge, and the big data challenge.
To address the first challenge, we have developed parallel high-order (hybridized) discontinuous Galerkin methods to discretize complex forward PDEs. To address the second challenge, we have developed various approaches, from model reduction to advanced Markov chain Monte Carlo methods, to effectively explore high-dimensional parameter spaces and compute posterior statistics. To address the last challenge, we have developed a randomized misfit approach that uncovers the interplay between the Johnson-Lindenstrauss lemma and Morozov's discrepancy principle to significantly reduce the dimension of the data without compromising the quality of the inverse solutions.
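To give a flavor of the data-reduction idea, the following is a minimal sketch (not the authors' implementation) of a Johnson-Lindenstrauss-style random sketch applied to a least-squares misfit: a k x n Gaussian projection compresses an n-dimensional data residual while approximately preserving its norm, and hence the misfit value. The dimensions, the synthetic residual, and the scaling convention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000   # original data dimension (assumed large for illustration)
k = 200      # reduced dimension after random sketching (assumed)

# Placeholder data residual r = F(u) - d for some parameter u.
r = rng.standard_normal(n)

# Gaussian sketching matrix, scaled so E[||S r||^2] = ||r||^2
# (the Johnson-Lindenstrauss flavor of norm preservation).
S = rng.standard_normal((k, n)) / np.sqrt(k)

misfit_full = 0.5 * np.dot(r, r)          # full-data least-squares misfit
misfit_sketched = 0.5 * np.dot(S @ r, S @ r)  # reduced-data misfit

rel_err = abs(misfit_sketched - misfit_full) / misfit_full
print(f"full misfit     = {misfit_full:.2f}")
print(f"sketched misfit = {misfit_sketched:.2f}")
print(f"relative error  = {rel_err:.3f}")
```

In a Bayesian inverse problem, the sketched misfit replaces the full one inside the optimization or sampling loop, so each evaluation touches k rather than n data; the role of Morozov's discrepancy principle in the randomized misfit approach is to guide how small k can be without degrading the inverse solution.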
In this talk we selectively present scalable and rigorous approaches to tackle these challenges for PDE-governed Bayesian inverse problems. Various numerical results, for PDEs ranging from simple to complex, will be presented to verify our algorithms and theoretical findings. If time permits, we will present our recent work on scientific machine learning for inverse and learning problems.