Seminar Categories

This page lists seminar series that have events scheduled between two months ago and twelve months from now and have speaker information available.

Current Series

Mon Mar 29

Applied and Computational Mathematics Seminar

3:35pm - via Zoom
Theoretical guarantees of machine learning methods for statistical sampling and PDEs in high dimensions
Yulong Lu, University of Massachusetts, Amherst

Neural network-based machine learning methods, most notably deep
learning, have achieved extraordinary successes in numerous fields.
In spite of the rapid development of learning algorithms based on
neural networks, their mathematical analysis is far from complete.
In particular, it remains largely a mystery why neural network-based
machine learning methods work so well for solving high dimensional
problems.

In this talk, I will demonstrate the power of neural network methods
for solving two classes of high dimensional problems: statistical
sampling and PDEs. In the first part of the talk, I will present a
universal approximation theorem of deep neural networks for representing
high dimensional probability distributions. In the second part of the
talk, I will discuss the generalization error analysis of the Deep Ritz
Method for solving high dimensional elliptic PDEs. For both
problems, our theoretical results show that neural network-based
methods can overcome the curse of dimensionality.
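For orientation, the following is a minimal sketch of the Deep Ritz idea named in the abstract, not the speaker's implementation: the Poisson problem $-\Delta u = f$ on $[0,1]^d$ with zero boundary values is solved by minimizing a Monte Carlo estimate of the variational energy over a small network, with the boundary condition enforced by a penalty. The problem setup, network size, penalty weight, and all other hyperparameters are illustrative assumptions.

```python
import torch

d = 10                                            # spatial dimension
net = torch.nn.Sequential(                        # network ansatz u_theta(x)
    torch.nn.Linear(d, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x):                                         # assumed right-hand side f == 1
    return torch.ones(x.shape[0], 1)

for step in range(2000):
    # interior samples for the Monte Carlo estimate of E(u) = int 1/2|grad u|^2 - f u
    x = torch.rand(1024, d, requires_grad=True)
    u = net(x)
    (grad_u,) = torch.autograd.grad(u.sum(), x, create_graph=True)
    energy = (0.5 * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x) * u).mean()

    # boundary samples: pin one random coordinate to 0 or 1
    xb = torch.rand(1024, d)
    i = torch.randint(0, d, (1024,))
    xb[torch.arange(1024), i] = torch.randint(0, 2, (1024,)).float()
    penalty = net(xb).pow(2).mean()

    loss = energy + 500.0 * penalty               # penalty weight is a guess
    opt.zero_grad()
    loss.backward()
    opt.step()
```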

Mon Apr 12

Applied and Computational Mathematics Seminar

3:35pm - via Zoom
Graph-based Bayesian semi-supervised learning: prior design and posterior contraction
Daniel Sanz-Alonso, University of Chicago

In this talk I will introduce graphical representations of stochastic partial differential equations that allow one to approximate Matérn Gaussian fields on manifolds and to generalize the Matérn model to abstract point clouds. Under a manifold assumption, approximation error guarantees will be established, building on the theory of spectral convergence of graph Laplacians. Graph-based Matérn prior models facilitate computationally efficient inference and sampling by exploiting the sparsity of the precision matrix. Moreover, we will show that they are natural priors for Bayesian semi-supervised learning and can give optimal posterior contraction. This is joint work with Ruiyi Yang.
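As a rough illustration of the graph-based construction (my own sketch, not the speaker's code): build a k-nearest-neighbour graph on a point cloud, form its Laplacian L, and use the Gaussian prior with precision matrix $(\tau I + L)^s$, a discrete analogue of the SPDE characterization of Matérn fields. The point cloud, k, $\tau$, and s below are illustrative assumptions, and dense matrices are used for brevity where a real implementation would exploit the sparsity mentioned in the abstract.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))           # assumed point cloud in the plane
k, tau, s = 10, 1.0, 2                   # graph and prior hyperparameters (guesses)

# symmetric k-NN adjacency with unit weights
tree = cKDTree(X)
_, idx = tree.query(X, k=k + 1)          # first neighbour of each point is itself
W = np.zeros((len(X), len(X)))
for i, nbrs in enumerate(idx):
    W[i, nbrs[1:]] = 1.0
W = np.maximum(W, W.T)

L = np.diag(W.sum(axis=1)) - W           # unnormalised graph Laplacian
Q = np.linalg.matrix_power(tau * np.eye(len(X)) + L, s)   # precision matrix

# draw a prior sample u ~ N(0, Q^{-1}) via the Cholesky factor of Q = R R^T
R = np.linalg.cholesky(Q)
u = np.linalg.solve(R.T, rng.standard_normal(len(X)))
```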

Mon Apr 26

Applied and Computational Mathematics Seminar

3:35pm - via Zoom
Applied math colloquium
Prof. Anne Gelb, Dartmouth College

TBD

Mon May 03

Applied and Computational Mathematics Seminar

3:35pm - via Zoom
Generalization Bounds for Sparse Random Feature Expansions
Hayden Schaeffer, Carnegie Mellon University

"Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative approach to standard neural networks since they can represent similar function spaces without a costly training phase. However, for accuracy, random feature methods require more measurements than trainable parameters, limiting their use for data-scarce applications or problems in scientific machine learning. This paper introduces the sparse random feature expansion via sparse features which promotes parsimonious random feature expansions. The sparse random feature expansion uses random features with $\ell^1$ optimization to generate approximations with theoretical guarantees. In particular, we provide uniform bounds on the approximation error and generalization bounds for functions in a certain class depending on the number of samples and the distribution of features. The error bounds improve with additional structural conditions, such as coordinate sparsity, compact clusters of the spectrum, or rapid spectral decay. In particular, by introducing sparse features, i.e. features with random sparse weights, we provide improved bounds for low order functions. We show that the sparse random feature expansions outperforms shallow networks in several scientific machine learning tasks."