Polynomial Approximations for Several Neural Network Activation Functions

Abstract

Active deployment of machine learning systems poses the task of protecting them against various types of attacks that threaten the confidentiality, integrity, and availability of both the processed data and the trained models. One promising direction for such protection is the development of privacy-preserving machine learning systems that use homomorphic encryption schemes to protect data and models. However, such schemes can evaluate only polynomial functions, so polynomial approximations must be constructed for the nonlinear functions used in neural models. The goal of this paper is to construct precise approximations of several widely used neural network activation functions while limiting the degree of the approximation polynomials, and to evaluate the impact of the approximation precision on the output values of the whole neural network. In contrast to previous publications, we study and compare different ways of constructing polynomial approximations, introduce precision metrics, and present exact formulas for the approximation polynomials as well as exact values of the corresponding precisions. We compare our results with previously published ones. Finally, for a simple convolutional network we experimentally evaluate the impact of the approximation precision on the deviation of the network's output neuron values from the original ones. Our results show that the best approximation for ReLU is obtained with a numeric method, while for the sigmoid and hyperbolic tangent it is obtained with Chebyshev polynomials. At the same time, of the three functions, ReLU admits the best approximation. The results can be used to construct polynomial approximations of activation functions in privacy-preserving machine learning systems.
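As an illustration of the approach described in the abstract, the sketch below (not the authors' code) contrasts the two construction routes compared in the paper: a Chebyshev-series fit for the smooth activations (sigmoid and hyperbolic tangent) and a numeric least-squares fit for ReLU, together with simple precision metrics. The degree budget and the interval [-8, 8] are assumed values chosen for the example, not parameters taken from the paper.

```python
# Minimal sketch: polynomial approximation of activation functions,
# assuming a degree budget of 7 and the interval [-8, 8] (illustrative
# choices, not values from the paper).
import numpy as np
from numpy.polynomial import chebyshev as C

DEGREE = 7            # assumed degree budget for the encrypted circuit
A, B = -8.0, 8.0      # assumed approximation interval

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

x = np.linspace(A, B, 2001)

# Chebyshev interpolation for the smooth functions.
cheb_sigmoid = C.Chebyshev.interpolate(sigmoid, DEGREE, domain=[A, B])
cheb_tanh = C.Chebyshev.interpolate(np.tanh, DEGREE, domain=[A, B])

# Numeric (discrete least-squares) polynomial fit for ReLU.
ls_relu = np.polynomial.Polynomial.fit(x, relu(x), DEGREE)

# Precision metrics: maximum and mean absolute deviation on the interval.
for name, f, p in [("sigmoid", sigmoid, cheb_sigmoid),
                   ("tanh", np.tanh, cheb_tanh),
                   ("ReLU", relu, ls_relu)]:
    err = np.abs(f(x) - p(x))
    print(f"{name:7s} max|err| = {err.max():.4f}, mean|err| = {err.mean():.4f}")
```

The maximum absolute deviation matches the minimax flavor of Chebyshev approximation, while the mean absolute deviation gives a rough sense of the average error that propagates to the network's output neurons.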

About the authors

G. B. Marshalko

Technical committee for standardization "Cryptography and security mechanisms"

Email: marshalko_gb@tc26.ru
Otradnaya St. 2B-1

J. A. Trufanova

Technical committee for standardization "Cryptography and security mechanisms"

Email: trufanova_ua@tc26.ru
Otradnaya St. 2B-1

References

  1. Pitropakis N., Panaousis E., Giannetsos T., Anastasiadis E., Loukas G. A taxonomy and survey of attacks against machine learning // Comput. Sci. Rev. 2019. vol. 34. art. no. 100199.
  2. Dowlin N., Gilad-Bachrach R., Laine K., Lauter K., Naehrig M., Wernsing J. CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy // Proceedings of the 33rd International Conference on Machine Learning (ICML). 2016. pp. 201–210.
  3. Hesamifard E., Takabi H., Ghasemi M. CryptoDL: Deep neural networks over encrypted data // arXiv preprint arXiv:1711.05189. 2017.
  4. Juvekar C., Vaikuntanathan V., Chandrakasan A. GAZELLE: A low latency framework for secure neural network inference // 27th USENIX Security Symposium. USENIX Association. 2018. pp. 1651–1669.
  5. Brakerski Z., Gentry C., Vaikuntanathan V. (Leveled) fully homomorphic encryption without bootstrapping // ACM Trans. Comput. Theory. 2014. vol. 6. pp. 13:1–13:36.
  6. Cheon J.H., Kim A., Kim M., Song Y. Homomorphic encryption for arithmetic of approximate numbers // Proceedings of the International Conference on the Theory and Applications of Cryptology and Information Security. 2017. pp. 409–437.
  7. Crawford L.H., Gentry C., Halevi S., Platt D., Shoup V. Doing real work with FHE: The case of logistic regression // 6th Workshop on Encrypted Computing and Applied Homomorphic Cryptography (WAHC), New York, USA. 2018. pp. 1–12.
  8. Ghodsi Z., Gu T., Garg S. SafetyNets: Verifiable execution of deep neural networks on an untrusted cloud // Advances in Neural Information Processing Systems. 2017. pp. 4672–4681.
  9. Ali R.E., So J., Avestimehr A.S. On polynomial approximations for privacy-preserving and verifiable ReLU networks // arXiv preprint arXiv:2011.05530. 2020.
  10. CryptoDL project repository. URL: https://github.com/inspire-lab/CryptoDL (accessed: 01.09.2021).
  11. Lee J., Lee E., Lee J.-W., Kim Y., Kim Y.-S., No J.-S. Precise approximation of convolutional neural networks for homomorphically encrypted data // arXiv preprint arXiv:2105.10879. 2021.
  12. Krogh F.T. Efficient algorithms for polynomial interpolation and numerical differentiation // Math. Comput. 1970. vol. 24. no. 109. pp. 185–190.
  13. Molchanov I.N. Machine methods for solving applied problems: Algebra, approximation of functions. Kiev: Naukova Dumka, 1987. 288 p.
  14. Taylor J.R. An introduction to error analysis. Mill Valley, California: University Science Books, 1982. 344 p.
  15. LeCun Y., Bottou L., Bengio Y., Haffner P. Gradient-based learning applied to document recognition // Proceedings of the IEEE. 1998. vol. 86. no. 11. pp. 2278–2324.

