Colloque des sciences mathématiques du Québec

April 23, 2021, from 3:00 p.m. to 4:00 p.m. (Montreal time / EST)

Generalized gradients, conservative fields, tame potentials, and deep learning

Colloquium by Adrian Lewis (Cornell University)

To the dismay and irritation of the variational analysis community, practitioners of deep learning often implement gradient-based optimization via automatic differentiation and blithely apply the result to nonsmooth objectives.  Worse, they then gleefully point out numerical convergence.  In fact, as elegantly remarked by Bolte and Pauwels, automatic differentiation produces a novel generalized gradient:  a conservative field with enough calculus to prove convergence of stochastic subgradient descent, as practiced in deep learning.  I will sketch this interplay of analytic and algorithmic ideas, and explain how, for concrete objectives (typically semi-algebraic), this novel generalized gradient just slightly modifies Clarke's original notion.

Joint work with Tonghua Tian.
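The subtlety the abstract alludes to can be seen in a few lines of code. The sketch below is not from the talk; it is a standard illustration in this literature, and it assumes JAX's convention that the programmed derivative of relu at 0 is 0. The formula relu(x) - relu(-x) equals x for every real x, so its Clarke generalized gradient at 0 is {1}; automatic differentiation nevertheless returns 0 there, a value drawn from a conservative field rather than from the Clarke subdifferential.

    # Minimal JAX sketch (illustrative, not the speaker's code): automatic
    # differentiation of a nonsmooth formula can disagree with the Clarke gradient.
    import jax
    import jax.numpy as jnp

    def f(x):
        # relu(x) - relu(-x) equals x for every real x,
        # so the true derivative (and Clarke gradient) at 0 is 1.
        return jax.nn.relu(x) - jax.nn.relu(-x)

    print(jax.grad(f)(0.0))        # 0.0 -- autodiff's output at the kink
    print(jax.grad(f)(1.0))        # 1.0 -- away from the kink the values agree

Away from the kink the two notions coincide, which reflects the abstract's point that, for concrete (typically semi-algebraic) objectives, the conservative-field gradient only slightly modifies Clarke's original notion.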