Robustness in Machine Learning and Optimization: A Minmax Approach

ECE Colloquia Seminar

Wednesday, November 18, 2020
Location: Current students will receive Zoom details via Canvas

Speaker:
Asu Ozdaglar
Massachusetts Institute of Technology

 

Minmax problems arise in many areas of optimization, including worst-case design, duality theory, and zero-sum games, and have also become popular in machine learning in the context of adversarial robustness and Generative Adversarial Networks (GANs). This talk will review our recent work on solving minmax problems with discrete-time gradient-based optimization algorithms. We focus on the Optimistic Gradient Descent Ascent (OGDA) and Extra-gradient (EG) methods, which have attracted much attention in the recent literature because of their superior empirical performance in GAN training. We show that OGDA and EG can be seen as approximations of the classical proximal point method and use this interpretation to establish convergence rate guarantees for these algorithms. These guarantees are provided for the ergodic (averaged) iterates of the algorithms. We also consider the last iterate of EG and present convergence rate guarantees for it on smooth convex-concave saddle point problems. We finally turn to the generalization properties of gradient-based minmax algorithms, using the algorithmic stability framework of Bousquet and Elisseeff. Our generalization analysis suggests the superiority of gradient descent ascent (GDA) over the GDmax algorithm (which solves the maximization problem exactly at each iteration) in the nonconvex-concave case, provided that similar learning rates are used in the descent and ascent steps.
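To illustrate the algorithms discussed in the abstract, the following is a minimal sketch (not the speaker's code) of plain GDA, OGDA, and EG applied to the bilinear saddle point problem f(x, y) = x·y, whose unique saddle point is (0, 0). The step size, iteration count, and function names are illustrative choices; on this problem, simultaneous GDA is known to spiral away from the saddle point, while OGDA and EG converge toward it.

```python
import numpy as np

# Bilinear objective f(x, y) = x * y, so grad_x f = y and grad_y f = x.
# x is the minimizing variable, y the maximizing variable.

def gda(x, y, eta=0.1, steps=200):
    # Plain simultaneous gradient descent on x, ascent on y.
    for _ in range(steps):
        gx, gy = y, x                        # gradients at (x, y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

def ogda(x, y, eta=0.1, steps=200):
    # Optimistic GDA: current gradient counted twice, previous
    # gradient subtracted once (a one-step "negative momentum").
    gx_prev, gy_prev = y, x                  # first step reduces to GDA
    for _ in range(steps):
        gx, gy = y, x
        x = x - 2 * eta * gx + eta * gx_prev
        y = y + 2 * eta * gy - eta * gy_prev
        gx_prev, gy_prev = gx, gy
    return x, y

def extra_gradient(x, y, eta=0.1, steps=200):
    # EG: take a look-ahead (midpoint) step, then update the iterate
    # using the gradients evaluated at the midpoint.
    for _ in range(steps):
        gx, gy = y, x                        # gradients at (x, y)
        xm, ym = x - eta * gx, y + eta * gy  # midpoint
        gxm, gym = ym, xm                    # gradients at the midpoint
        x, y = x - eta * gxm, y + eta * gym
    return x, y

x0, y0 = 1.0, 1.0
print("GDA :", np.hypot(*gda(x0, y0)))            # distance to saddle grows
print("OGDA:", np.hypot(*ogda(x0, y0)))           # distance shrinks
print("EG  :", np.hypot(*extra_gradient(x0, y0))) # distance shrinks
```

Both OGDA and EG can be viewed as cheap approximations of the (implicit) proximal point update, which is the interpretation the talk uses to derive convergence rates.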

Asu Ozdaglar received the B.S. degree from Middle East Technical University in 1996, and the S.M. and Ph.D. degrees from the Massachusetts Institute of Technology (MIT) in 1998 and 2003, respectively.

She is the MathWorks Professor of Electrical Engineering and Computer Science at MIT, where she serves as Head of the EECS Department and Deputy Dean of Academics in the Schwarzman College of Computing. Her research expertise includes optimization theory, distributed optimization and control, and network analysis.

Her awards include a Microsoft fellowship, an NSF CAREER award, the 2008 Donald P. Eckman Award of the American Automatic Control Council, the Class of 1943 Career Development Chair, the inaugural Steven and Renee Innovation Fellowship, and the 2014 Spira Teaching Award.

Seminar Series