Artificial intelligence is increasingly relied upon in safety-critical domains, yet the predictive models underlying these systems are notoriously brittle, and trustworthy deployment remains a significant challenge. In this talk, I give an overview of my work toward a rigorous foundation for robust machine learning (ML).
Using a case study of invariant prediction, we first highlight the importance of formally specifying the space of adverse events we would like to handle at deployment time. This provides a mathematical framework for analyzing, comparing, and improving the robustness of ML algorithms. Then, we explore how careful experimental probing of these methods' failures leads to a deeper understanding of their underlying causes, and how these insights can inform the design of new methods with more reliable real-world behavior. We conclude with a brief summary of other past and ongoing work toward provably secure ML, including a method for certifying robustness to adversarial attacks with a surprising connection to data privacy.
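The abstract does not specify which certification technique the talk covers. As a purely illustrative sketch, the snippet below shows one well-known approach to certified robustness, randomized smoothing in the style of Cohen et al. (2019), whose analysis is closely related to differential privacy; it is not presented as the speaker's method, and the function and parameter names are hypothetical.

```python
# Illustrative sketch of randomized-smoothing-style certification (not the speaker's method).
import numpy as np
from scipy.stats import norm, binomtest

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, rng=None):
    """Predict with a Gaussian-smoothed classifier and return a certified L2 radius.

    base_classifier: maps a 1-D input array to an integer class label.
    sigma: standard deviation of the Gaussian smoothing noise.
    n: number of Monte Carlo noise samples.
    alpha: allowed failure probability of the certificate.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = {}
    for _ in range(n):
        label = base_classifier(x + sigma * rng.standard_normal(x.shape))
        counts[label] = counts.get(label, 0) + 1
    top_label, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Lower confidence bound on the probability of the top class under noise.
    # (A fully rigorous procedure would use separate samples for selecting the
    # top class and for estimating this bound; this sketch reuses one set.)
    ci = binomtest(top_count, n).proportion_ci(confidence_level=1 - alpha, method="exact")
    if ci.low <= 0.5:
        return None, 0.0                   # abstain: no certificate
    radius = sigma * norm.ppf(ci.low)      # certified L2 radius
    return top_label, radius

if __name__ == "__main__":
    # Toy usage: a linear classifier on 2-D inputs.
    w = np.array([1.0, -1.0])
    toy_clf = lambda z: int(z @ w > 0)
    label, radius = smoothed_predict(toy_clf, np.array([0.8, -0.3]), sigma=0.25)
    print(label, radius)
```

The sketch abstains whenever the lower confidence bound on the top class's probability under noise does not exceed 1/2, which is what makes the returned radius a valid (if conservative) certificate in this style of analysis.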
Biography
Elan Rosenfeld is a PhD student in the Machine Learning Department at CMU, advised by Andrej Risteski and Pradeep Ravikumar. His research focuses on understanding, quantifying, and improving the robustness and trustworthiness of machine learning systems.