Safeguarded Learned Convex Optimization

Abstract

Many applications require repeatedly solving a certain type of optimization problem, each time with new (but similar) data. Data-driven algorithms can "learn to optimize" (L2O) with far fewer iterations and with similar cost per iteration as general-purpose optimization algorithms. L2O algorithms are often derived from general-purpose algorithms, but with the inclusion of (possibly many) tunable parameters. Exceptional performance has been demonstrated when they are optimized for a particular distribution of data. Unfortunately, it is impossible to ensure all L2O algorithms always converge to a solution. We present a framework that uses L2O updates together with a safeguard to guarantee convergence for convex problems with proximal and/or gradient oracles. The safeguard is simple and computationally cheap to implement, and it is activated only when the current L2O updates would perform poorly or appear to diverge. This approach yields the numerical benefits of employing machine learning methods to create rapid L2O algorithms while still guaranteeing convergence. Our numerical examples demonstrate the efficacy of this approach for existing and new L2O schemes.
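
The abstract does not specify the exact safeguard test, so the following is only a minimal illustrative sketch of the general idea: accept the learned (L2O) candidate step when it improves a cheap convergence measure (here, the gradient norm of a smooth convex objective), and otherwise fall back to a classical gradient step. The function names `safeguarded_l2o`, `l2o_update`, and the acceptance criterion are hypothetical choices for illustration, not the paper's actual method.

```python
import numpy as np

def safeguarded_l2o(x0, grad, l2o_update, step_size, max_iter=200, tol=1e-8):
    """Sketch of a safeguarded L2O loop for a smooth convex objective.

    grad       : gradient oracle of the convex objective
    l2o_update : learned update rule, x_new = l2o_update(x, grad(x))
    step_size  : step size for the fallback gradient-descent update

    The safeguard here accepts the learned candidate only if it does not
    increase the gradient norm; otherwise a plain gradient step is taken,
    which guarantees convergence regardless of the learned update.
    """
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        x_cand = l2o_update(x, g)                      # fast, data-driven candidate
        if np.linalg.norm(grad(x_cand)) <= np.linalg.norm(g):
            x = x_cand                                 # safeguard passes: keep learned step
        else:
            x = x - step_size * g                      # safeguard triggers: classical step
    return x

# Toy usage: least squares with a crude "learned" update that overshoots on purpose.
A = np.random.randn(20, 5)
b = np.random.randn(20)
grad = lambda x: A.T @ (A @ x - b)
l2o = lambda x, g: x - 0.5 * g
x_hat = safeguarded_l2o(np.zeros(5), grad, l2o,
                        step_size=1.0 / np.linalg.norm(A, 2) ** 2)
```

In this toy run the fallback step size is set to the inverse Lipschitz constant of the least-squares gradient, so even if the "learned" update were arbitrarily bad, the safeguarded iterates still converge.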

Publication
Under review at the Journal of Machine Learning Research