Current Series

Mon Nov 04

Applied and Computational Mathematics Seminar

3:35pm - Vincent Hall 6
Applied differential geometry and harmonic analysis in deep learning regularization
Wei Zhu, Duke University

Deep neural networks (DNNs) have revolutionized machine learning by gradually replacing traditional model-based algorithms with data-driven methods. While DNNs have proved very successful when large training sets are available, they typically have two shortcomings: first, when training data are scarce, DNNs tend to suffer from overfitting; second, the generalization ability of overparameterized DNNs remains a mystery.

In this talk, I will discuss two recent works that "inject" the "modeling" flavor back into deep learning to improve the generalization performance and interpretability of DNN models. This is accomplished by regularizing DNNs through applied differential geometry and harmonic analysis. In the first part of the talk, I will explain how to improve the regularity of the DNN representation by enforcing a low-dimensionality constraint on the data-feature concatenation manifold. In the second part, I will discuss how to impose scale-equivariance on the network representation by conducting joint convolutions across space and the scaling group. The stability of the equivariant representation under nuisance input deformations is also proved under mild assumptions on the Fourier-Bessel norm of the filter expansion coefficients.
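To make the "joint convolutions across space and the scaling group" concrete, here is a minimal PyTorch sketch of one common approximation (not necessarily the speaker's exact construction, which uses Fourier-Bessel filter expansions): lift the input to a discrete set of scales with dilated copies of the same filter, then treat log-scale as an extra axis and convolve jointly over scale and space. All sizes and filter banks below are hypothetical.

```python
import torch
import torch.nn.functional as F

def lift_to_scales(x, weight, dilations=(1, 2, 4)):
    """Lift an image to the scaling group by applying the same spatial
    filter at several dilations (discrete scales).
    x: (batch, c_in, H, W) -> (batch, c_out, n_scales, H, W)."""
    outs = [F.conv2d(x, weight,
                     padding=d * (weight.shape[-1] // 2), dilation=d)
            for d in dilations]
    return torch.stack(outs, dim=2)

def scale_space_conv(u, weight_s):
    """Joint convolution across scale and space on the lifted signal,
    treating the scale index as an extra translation axis.
    u: (batch, c_in, n_scales, H, W); weight_s: (c_out, c_in, kS, kH, kW)."""
    kS, kH, kW = weight_s.shape[-3:]
    return F.conv3d(u, weight_s, padding=(kS // 2, kH // 2, kW // 2))

# Toy usage: rescaling the input by ~2x should (approximately) shift the
# lifted representation along the scale axis, which the joint convolution
# then treats equivariantly.
x = torch.randn(1, 3, 64, 64)
w = torch.randn(8, 3, 3, 3)           # spatial filter bank
u = lift_to_scales(x, w)              # (1, 8, 3, 64, 64)
w_s = torch.randn(16, 8, 3, 3, 3)     # joint scale-space filter
v = scale_space_conv(u, w_s)          # (1, 16, 3, 64, 64)
print(v.shape)
```

In this toy version, equivariance to dyadic rescalings holds only approximately at the boundary scales; the Fourier-Bessel parameterization mentioned in the abstract is one way to control that approximation error.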

Mon Nov 18

Applied and Computational Mathematics Seminar

3:35pm - Vincent Hall 6
Scalable Algorithms for Data-driven Inverse and Learning Problems
Tan Bui-Thanh, UT-Austin

Inverse problems and uncertainty quantification (UQ) are pervasive in scientific discovery and decision-making for complex natural, engineered, and societal systems. They are perhaps the most popular mathematical approaches for enabling predictive scientific simulations that integrate observational/experimental data with simulations and/or models. Unfortunately, inverse/UQ problems for practical complex systems pose three simultaneous challenges: the large-scale forward problem challenge, the high-dimensional parameter space challenge, and the big data challenge.

To address the first challenge, we have developed parallel high-order (hybridized) discontinuous Galerkin methods to discretize complex forward PDEs. To address the second challenge, we have developed various approaches, from model reduction to advanced Markov chain Monte Carlo methods, to effectively explore high-dimensional parameter spaces and compute posterior statistics. To address the last challenge, we have developed a randomized misfit approach that uncovers the interplay between the Johnson-Lindenstrauss lemma and Morozov's discrepancy principle to significantly reduce the dimension of the data without compromising the quality of the inverse solutions.
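The randomized misfit idea can be illustrated on a toy linear-Gaussian problem: replace the data misfit ||Gm - d||^2 with a sketched misfit ||S(Gm - d)||^2, where S is a k x n Johnson-Lindenstrauss matrix with k much smaller than the data dimension n. The NumPy sketch below uses a hypothetical dense forward map and Tikhonov regularization; the speaker's setting is PDE-governed and the precise estimator may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: d = G m_true + noise (hypothetical sizes).
n_data, n_param = 20000, 100           # many observations, modest parameter dim
G = rng.standard_normal((n_data, n_param)) / np.sqrt(n_data)
m_true = rng.standard_normal(n_param)
d = G @ m_true + 0.01 * rng.standard_normal(n_data)
alpha = 1e-2                            # Tikhonov regularization weight

def solve(A, b, alpha):
    """Regularized least squares: argmin_m ||A m - b||^2 + alpha ||m||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Solution with the full data misfit.
m_full = solve(G, d, alpha)

# Randomized misfit: sketch the residual with a k x n_data Gaussian
# JL matrix S, so ||S(G m - d)||^2 approximates ||G m - d||^2 with
# k << n_data and the regularized normal equations shrink accordingly.
k = 500
S = rng.standard_normal((k, n_data)) / np.sqrt(k)
m_sketch = solve(S @ G, S @ d, alpha)

# Relative discrepancy between the full and sketched solutions.
print(np.linalg.norm(m_full - m_sketch) / np.linalg.norm(m_full))
```

The choice of the sketch size k is where Morozov's discrepancy principle enters in the speaker's analysis: k must be large enough that the sketched misfit stays within the noise-level tolerance of the full one.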

In this talk, we selectively present scalable and rigorous approaches to tackle these challenges for PDE-governed Bayesian inverse problems. Numerical results for PDEs ranging from simple to complex will be presented to verify our algorithms and theoretical findings. If time permits, we will also present our recent work on scientific machine learning for inverse and learning problems.