Stochastic intermediate gradient method for convex optimization problems
- Authors: Gasnikov A.V., Dvurechensky P.E.
- Affiliations: Institute for Information Transmission Problems
- Issue: Vol 93, No 2 (2016)
- Pages: 148-151
- Section: Mathematics
- URL: https://journals.rcsi.science/1064-5624/article/view/223461
- DOI: https://doi.org/10.1134/S1064562416020071
- ID: 223461
Abstract
New first-order methods are introduced for solving convex optimization problems from a fairly broad class. For composite optimization problems with an inexact stochastic oracle, a stochastic intermediate gradient method is proposed that allows the use of an arbitrary norm in the space of variables and an arbitrary prox-function. The mean rate of convergence of this method and the probability of large deviations from this rate are estimated. For problems with a strongly convex objective function, a modification of this method is proposed and its rate of convergence is estimated. The resulting estimates coincide, up to a multiplicative constant, with the lower complexity bounds for the class of composite optimization problems with an inexact stochastic oracle and for all commonly considered subclasses of this class.
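To illustrate the problem setting, the sketch below shows a plain stochastic proximal gradient loop for a composite problem (smooth least-squares part plus an l1 term), with mini-batch sampling playing the role of the inexact stochastic oracle and the squared Euclidean norm as the prox-function. This is an assumed, simplified stand-in, not the authors' intermediate gradient method, which additionally admits an arbitrary norm and prox-function; the function names and step-size rule here are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox operator of t * ||.||_1 (handles the non-smooth composite term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(A, b, lam, steps=2000, batch=8, seed=0):
    """Stochastic proximal gradient for min_x (1/2n)||Ax - b||^2 + lam*||x||_1.

    Mini-batch row sampling gives a stochastic (inexact) gradient oracle;
    the squared Euclidean norm serves as the prox-function. A hypothetical
    baseline, not the paper's stochastic intermediate gradient method.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    L = np.linalg.norm(A, 2) ** 2 / n          # Lipschitz constant of grad f
    x = np.zeros(d)
    avg = np.zeros(d)                          # averaged iterate: the point a
    for k in range(steps):                     # mean-rate guarantee refers to
        idx = rng.integers(0, n, size=batch)   # sample a mini-batch of rows
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch  # stochastic gradient of f
        eta = 1.0 / (L * np.sqrt(k + 1))              # diminishing step size
        x = soft_threshold(x - eta * g, eta * lam)    # gradient step, then prox
        avg += (x - avg) / (k + 1)             # running average of iterates
    return avg
```

Returning the running average of the iterates, rather than the last iterate, is the standard device behind "mean rate of convergence" statements for stochastic methods of this type.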
About the authors
A. V. Gasnikov
Institute for Information Transmission Problems
Author for correspondence.
Email: gasnikov@yandex.ru
Russian Federation, Bol’shoi Karetnyi per. 19/1, Moscow, 127994
P. E. Dvurechensky
Institute for Information Transmission Problems
Email: gasnikov@yandex.ru
Russian Federation, Bol’shoi Karetnyi per. 19/1, Moscow, 127994
