Gradient-free Federated Learning Methods with l1 and l2-randomization for Non-smooth Convex Stochastic Optimization Problems

Abstract

This paper studies non-smooth problems of convex stochastic optimization. We use a smoothing technique in which the value of the function at a given point is replaced by the function value averaged over a ball (in the l1-norm or the l2-norm) of small radius centered at that point; in this way the original problem is reduced to a smooth one whose gradient Lipschitz constant is inversely proportional to the radius of the ball. An essential property of this smoothing is that an unbiased estimate of the gradient of the smoothed function can be computed using only realizations of the original function. The resulting smooth stochastic optimization problem is proposed to be solved in a distributed federated learning architecture: the problem is solved in parallel, with the nodes making local steps (e.g., of stochastic gradient descent), then communicating all-to-all, after which the procedure is repeated. The goal of the article is to build, on the basis of modern achievements in gradient-free non-smooth optimization and in federated learning, gradient-free methods for solving non-smooth stochastic optimization problems in the federated learning architecture.
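
The scheme described in the abstract can be illustrated with a short sketch. The Python code below is not the algorithm analyzed in the paper, but a minimal example written under the following assumptions: the gradient of the smoothed function is estimated by the standard two-point scheme with l2-randomization (direction uniform on the unit sphere, multiplier d/(2*gamma)), and the federated architecture is modeled by a local-SGD loop in which each node makes several local gradient-free steps and then all nodes average their iterates. All names (two_point_grad_estimate, local_sgd, f_nodes, the toy node objectives) are hypothetical and chosen for illustration only.

import numpy as np


def two_point_grad_estimate(f, x, gamma, rng):
    # Two-point l2-randomized estimate of the gradient of the smoothed function
    # (the average of f over the l2-ball of radius gamma centered at x):
    #     g = d / (2 * gamma) * (f(x + gamma * e) - f(x - gamma * e)) * e,
    # where e is uniform on the unit l2-sphere.  Only realizations of f are used.
    # (The l1-randomized variant instead samples e on the unit l1-sphere and
    # replaces the last factor e by sign(e).)
    d = x.size
    e = rng.normal(size=d)
    e /= np.linalg.norm(e)
    return d / (2.0 * gamma) * (f(x + gamma * e) - f(x - gamma * e)) * e


def local_sgd(f_nodes, x0, gamma, lr, rounds, local_steps, seed=0):
    # Idealized federated loop: every node makes `local_steps` gradient-free
    # steps on its own (noisy, non-smooth) objective, then an all-to-all
    # communication round averages the iterates, and the procedure repeats.
    rng = np.random.default_rng(seed)
    x_avg = np.asarray(x0, dtype=float).copy()
    xs = [x_avg.copy() for _ in f_nodes]
    for _ in range(rounds):
        for m, f in enumerate(f_nodes):
            for _ in range(local_steps):
                xs[m] = xs[m] - lr * two_point_grad_estimate(f, xs[m], gamma, rng)
        x_avg = sum(xs) / len(xs)            # all-to-all averaging
        xs = [x_avg.copy() for _ in f_nodes]
    return x_avg


if __name__ == "__main__":
    # Toy run: four nodes, each with a noisy non-smooth objective ||x - b_m||_1.
    rng = np.random.default_rng(1)
    d, num_nodes = 10, 4
    targets = [rng.normal(size=d) for _ in range(num_nodes)]
    f_nodes = [lambda x, b=b: np.sum(np.abs(x - b)) + 1e-3 * rng.normal()
               for b in targets]
    x_hat = local_sgd(f_nodes, np.zeros(d), gamma=1e-2, lr=0.01,
                      rounds=100, local_steps=10)
    print(x_hat)  # should end up near a componentwise median of the targets

In this toy run the node objectives are noisy l1-distances to node-specific targets, so the averaged iterate should settle near a componentwise median of those targets; the only access to each objective is through its (noisy) function values.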

About the authors

B. A. Alashqar

Moscow Institute of Physics and Technology, Dolgoprudny, Russia

Email: comp_mat@ccas.ru

A. V. Gasnikov

Moscow Institute of Physics and Technology; Institute for Information Transmission Problems of the RAS (Kharkevich Institute); Caucasian Mathematical Center of the Adyghe State University

Email: gasnikov@yandex.ru
Institutskiy per. 9, Dolgoprudny, Moscow Region, 141701, Russia; Bolshoi Karetny per. 19, build. 1, Moscow, 127051, Russia; Pervomaiskaya st. 208, Maykop, Republic of Adygea, 385016, Russia

D. M. Dvinskikh

National Research University Higher School of Economics, Moscow, Russia

Email: comp_mat@ccas.ru

A. V. Lobanov

Moscow Institute of Physics and Technology, Dolgoprudny, Russia; ISP RAS Research Center for Trusted Artificial Intelligence, Moscow, Russia; Moscow Aviation Institute, Moscow, Russia

Author for correspondence.
Email: comp_mat@ccas.ru

Copyright (c) 2023 B.A. Alashqar, A.V. Gasnikov, D.M. Dvinskikh, A.V. Lobanov
