## Seminar Categories

- Applied and Computational Math Colloquium (1)
- Applied and Computational Mathematics Seminar (2)
- Climate Seminar (30)
- Colloquium (5)
- Commutative Algebra Seminar (10)
- Differential Geometry and Symplectic Topology Seminar (19)
- IMA Data Science Lab Seminar (4)
- IMA/MCIM Industrial Problems Seminar (3)
- MCFAM Seminar (6)
- Probability Seminar (3)
- Special Events and Seminars (9)
- Student Number Theory Seminar (5)
- Topology Seminar (8)

## Current Series

Mon Nov 04

## Applied and Computational Mathematics Seminar

3:35pm - Vincent Hall 6

**Applied differential geometry and harmonic analysis in deep learning regularization**

Wei Zhu, Duke University

Deep neural networks (DNNs) have revolutionized machine learning by gradually replacing traditional model-based algorithms with data-driven methods. While DNNs have proved very successful when large training sets are available, they typically have two shortcomings: first, when training data are scarce, DNNs tend to suffer from overfitting; second, the generalization ability of overparameterized DNNs remains a mystery. In this talk, I will discuss two recent works that inject the modeling flavor back into deep learning to improve the generalization performance and interpretability of DNN models. This is accomplished by regularizing DNNs through applied differential geometry and harmonic analysis. In the first part of the talk, I will explain how to improve the regularity of the DNN representation by enforcing a low-dimensionality constraint on the data-feature concatenation manifold. In the second part, I will discuss how to impose scale-equivariance in the network representation by conducting joint convolutions across space and the scaling group. The stability of the equivariant representation to nuisance input deformations is also proved under mild assumptions on the Fourier-Bessel norm of the filter expansion coefficients.
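The joint convolution across space and the scaling group mentioned in the abstract can be sketched in a minimal, illustrative 1-D form: a single base filter is resampled to each element of a discrete set of scales and convolved with the input, producing one response channel per group element. The function names, the 1-D setting, and the linear-interpolation rescaling are simplifying assumptions for illustration, not the speaker's actual construction.

```python
import numpy as np

def rescale_filter(f, s):
    """Resample a 1-D filter to scale s by linear interpolation (illustrative),
    then renormalize so responses at different scales are comparable."""
    n = max(int(round(len(f) * s)), 1)
    old = np.linspace(0.0, 1.0, len(f))
    new = np.linspace(0.0, 1.0, n)
    g = np.interp(new, old, f)
    return g / (np.abs(g).sum() + 1e-12)

def scale_equivariant_conv(signal, base_filter, scales=(1.0, 1.5, 2.0)):
    """Joint convolution over space and a discrete scaling group:
    one response channel per scale, all derived from one base filter."""
    return np.stack([
        np.convolve(signal, rescale_filter(base_filter, s), mode="same")
        for s in scales
    ])  # shape: (len(scales), len(signal))
```

Because every channel is generated from the same base filter, rescaling the input permutes the channels rather than changing the learned weights, which is the sense in which such a representation is equivariant to the scaling group.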

Mon Nov 18

## Applied and Computational Mathematics Seminar

3:35pm - Vincent Hall 6

**Scalable Algorithms for Data-driven Inverse and Learning Problems**

Tan Bui-Thanh, UT-Austin

Inverse problems and uncertainty quantification (UQ) are pervasive in scientific […] To address the first challenge, we have developed parallel high-order (hybridized) discontinuous Galerkin methods to discretize complex forward PDEs. To address the second challenge, we have developed various approaches, from model reduction to advanced Markov chain Monte Carlo methods, to effectively explore high-dimensional parameter spaces and compute posterior statistics. To address the last challenge, we have developed a randomized misfit approach that uncovers the interplay between the Johnson-Lindenstrauss lemma and Morozov's discrepancy principle to significantly reduce the dimension of the data without compromising the quality of the inverse solutions. In this talk we selectively present scalable and rigorous approaches to tackle these challenges for PDE-governed Bayesian inverse problems. Numerical results for simple to complex PDEs will be presented to verify our algorithms and theoretical findings. If time permits, we will present our recent work on scientific machine learning for inverse and learning problems.
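The Johnson-Lindenstrauss side of the randomized misfit idea can be illustrated with a minimal sketch: project the data residual onto k random Gaussian directions, so that the reduced squared misfit matches the full one in expectation and concentrates around it as k grows. The function names and the use of a plain scaled-Gaussian sketch are illustrative assumptions, not the speaker's actual algorithm.

```python
import numpy as np

def full_misfit(residual):
    """Standard least-squares data misfit 0.5 * ||d - F(m)||^2."""
    return 0.5 * np.dot(residual, residual)

def randomized_misfit(residual, k, seed=0):
    """JL-style sketch: 0.5 * ||S r||^2 with S a k-by-n scaled Gaussian matrix.
    E[||S r||^2] = ||r||^2, so the reduced misfit is unbiased, and its
    relative error shrinks like 1/sqrt(k) as the sketch size k grows."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(k, residual.size)) / np.sqrt(k)
    return 0.5 * np.sum((S @ residual) ** 2)
```

The appeal in large-scale inversion is that the data dimension n (number of observations) can be huge while the sketch size k needed for a given accuracy is much smaller, so each misfit evaluation inside an optimization or sampling loop becomes far cheaper.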