PDE Scaling Limits in Semi-Supervised Learning
Semi-supervised learning refers to machine learning algorithms that use both labeled and unlabeled data for learning tasks. Examples include large-scale nonparametric regression and classification problems, such as predicting the voting preferences of social media users or classifying medical images. In today's big data world there is an abundance of unlabeled data, while labeled data often requires expert annotation and is expensive to obtain. This has led to a resurgence of semi-supervised learning techniques, which use the topological or geometric properties of large amounts of unlabeled data to aid the learning task. In this talk, I will discuss some new rigorous PDE scaling limits for existing semi-supervised learning algorithms and their practical implications. I will also discuss how these scaling limits suggest new ideas for fast semi-supervised learning algorithms.
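To make the setting concrete, here is a minimal sketch of one widely used graph-based semi-supervised learning method (Laplace learning, also known as label propagation): labels on a few points are extended harmonically to all points via the graph Laplacian built from the unlabeled data. This is an illustration only, assuming a Gaussian similarity graph; the data, the `sigma` parameter, and the function name are hypothetical, and the talk's PDE scaling-limit results are not reproduced here.

```python
# Hypothetical sketch of Laplace learning (label propagation) on a
# similarity graph; illustration only, not the speaker's exact method.
import numpy as np

def laplace_learning(X, labeled_idx, labels, sigma=0.5):
    """Extend labels from a few labeled points to all points by solving
    the graph Laplace equation on the unlabeled nodes."""
    n = len(X)
    # Gaussian similarity weights between all pairs of points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W  # unnormalized graph Laplacian

    unlabeled = [i for i in range(n) if i not in labeled_idx]
    u = np.zeros(n)
    u[labeled_idx] = labels
    # Solve L_uu u_u = -L_ul u_l  (harmonic extension of the labels)
    Luu = L[np.ix_(unlabeled, unlabeled)]
    Lul = L[np.ix_(unlabeled, labeled_idx)]
    u[unlabeled] = np.linalg.solve(Luu, -Lul @ u[labeled_idx])
    return u

# Two clusters on a line; label one point in each cluster
X = np.array([[0.0], [0.1], [0.2], [2.0], [2.1], [2.2]])
u = laplace_learning(X, labeled_idx=[0, 5], labels=np.array([-1.0, 1.0]))
pred = np.sign(u)  # points near x=0 get -1, points near x=2 get +1
```

The PDE perspective in the talk studies what happens to solutions of such graph equations as the number of unlabeled points grows and the graph length scale shrinks.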
Calder received his Ph.D. in Applied and Interdisciplinary Mathematics from the University of Michigan in 2014, and was a Morrey Assistant Professor of Mathematics at the University of California, Berkeley from 2014 to 2016. He is currently an Assistant Professor of Mathematics at the University of Minnesota. Calder's research interests include partial differential equations and applied probability, with applications to machine learning. He is also interested in mathematical problems in computer vision and image processing.