When Seeing is not Believing: New Forensics Algorithms to Detect Image Manipulations
Modern image editing software has made it possible to alter images in ways that dramatically change their content, yet still appear authentic to humans. While there are countless beneficial applications of photo editing, image manipulations can also be used in harmful ways: altered images may be published to cause reputational harm, sway public opinion, influence elections, and more. Furthermore, with the growing popularity of social media and online sharing platforms, it is increasingly easy for altered media to go viral. What's more, recent breakthroughs in artificial intelligence (AI) are making it dramatically easier to produce altered images, and even altered videos, that appear photorealistic. Hence, there is growing interest in developing new forensic methods that can detect manipulations in images and video. In this talk, I will give an overview of my team's work on the DARPA-funded Media Forensics (MediFor) project to develop novel machine-learning algorithms for automated detection and localization of media manipulations.
Michael Albright is a senior data scientist in the Data Science and Video Analytics group at Honeywell Labs in Golden Valley, Minnesota. Since joining Honeywell in 2015, he has invented new technologies that solve challenging problems for internal Honeywell businesses and external customers. His work has involved applying machine learning, optimization, and other applied math and computer science techniques to a variety of problems, in domains ranging from Internet of Things (IoT) systems to computer vision applications. Michael earned a Ph.D. in theoretical physics from the University of Minnesota in 2015 and has prior industry experience at Cray, Inc.