Stochastic intermediate gradient method for convex optimization problems



Abstract

New first-order methods are introduced for solving convex optimization problems from a fairly broad class. For composite optimization problems with an inexact stochastic oracle, a stochastic intermediate gradient method is proposed that allows the use of an arbitrary norm on the space of variables and an arbitrary prox-function. The mean rate of convergence of this method and the probability of large deviations from this rate are estimated. For problems with a strongly convex objective function, a modification of this method is proposed and its rate of convergence is estimated. The resulting estimates coincide, up to a multiplicative constant, with the lower complexity bounds for the class of composite optimization problems with an inexact stochastic oracle and for all commonly considered subclasses of this class.
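
To fix notation for the statements above, here is a minimal sketch of the standard composite setting with an inexact stochastic oracle; the symbols $f$, $h$, $Q$, $\delta$, $L$, and $\xi$ are illustrative conventions from the widely used inexact $(\delta, L)$-oracle model, not quotations from the paper itself. The problem is

\[
  \min_{x \in Q} \; \bigl\{ \varphi(x) := f(x) + h(x) \bigr\},
\]

where $Q$ is a closed convex set in a space equipped with an arbitrary norm $\|\cdot\|$, $h$ is a simple convex function (its proximal step is assumed cheap), and $f$ is convex but accessible only through a stochastic oracle that, for a random $\xi$, returns values $F_\delta(x,\xi)$ and $G_\delta(x,\xi)$ with $\mathbb{E}_\xi F_\delta(x,\xi) = f_\delta(x)$ and $\mathbb{E}_\xi G_\delta(x,\xi) = g_\delta(x)$, such that

\[
  0 \;\le\; f(y) - f_\delta(x) - \langle g_\delta(x),\, y - x \rangle \;\le\; \frac{L}{2}\,\|y - x\|^2 + \delta
  \qquad \text{for all } x, y \in Q.
\]

In this oracle model the inexactness $\delta$ typically enters convergence estimates as an additive error term, which is why rate bounds for an inexact stochastic oracle are stated separately from the exact-gradient case.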

About the authors

A. Gasnikov

Institute for Information Transmission Problems

Corresponding author.
Email: gasnikov@yandex.ru
Bol’shoi Karetnyi per. 19/1, Moscow, 127994, Russian Federation

P. Dvurechensky

Institute for Information Transmission Problems

Email: gasnikov@yandex.ru
Bol’shoi Karetnyi per. 19/1, Moscow, 127994, Russian Federation

Supplementary files

1. JATS XML

版权所有 © Pleiades Publishing, Ltd., 2016