Parabola as an Activation Function of Artificial Neural Networks

Abstract

The use of the parabola and its branches as a nonlinearity that expands the logical capabilities of artificial neurons is considered. In particular, we show how parabola branches can be used to construct an s-shaped function suitable for training a neural network by error backpropagation. The implementation of the XOR function on two and three neurons using the proposed approach is demonstrated. The main advantage of the parabola over the sigmoid is its simpler implementation, which speeds up the operation of artificial neural networks.
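The idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact construction: the particular piecewise-quadratic form, the saturation interval [-1, 1], and the hand-picked XOR weights below are all assumptions chosen for clarity.

```python
# Hedged sketch: an s-shaped activation assembled from two parabola branches,
# saturating outside [-1, 1].  The piecewise form and the XOR weights are
# illustrative assumptions, not taken from the paper itself.

def parabolic_sigmoid(x: float) -> float:
    """S-shaped function built from two parabola branches.

    Rises from 0 to 1 on [-1, 1]; it is continuous with a continuous first
    derivative, so it can be used with error backpropagation.
    """
    if x <= -1.0:
        return 0.0
    if x <= 0.0:
        return 0.5 * (x + 1.0) ** 2           # lower parabola branch
    if x <= 1.0:
        return 1.0 - 0.5 * (1.0 - x) ** 2     # upper parabola branch
    return 1.0

def parabolic_sigmoid_grad(x: float) -> float:
    """Derivative of the activation: piecewise linear, so very cheap
    compared with the exponential needed by the classical sigmoid."""
    if -1.0 < x <= 0.0:
        return x + 1.0
    if 0.0 < x <= 1.0:
        return 1.0 - x
    return 0.0

def xor_net(x1: float, x2: float) -> float:
    """XOR on three neurons with hand-picked (hypothetical) weights."""
    h_or  = parabolic_sigmoid(2 * x1 + 2 * x2 - 1)   # acts as x1 OR x2
    h_and = parabolic_sigmoid(2 * x1 + 2 * x2 - 3)   # acts as x1 AND x2
    return parabolic_sigmoid(2 * h_or - 2 * h_and - 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Because the activation saturates exactly at 0 and 1, the three-neuron network above reproduces the XOR truth table exactly rather than only approximately.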

About the authors

Mikhail Khachumov

Ailamazyan Program Systems Institute of the Russian Academy of Sciences; Computer Science and Control Federal Research Center of the Russian Academy of Sciences; Peoples' Friendship University of Russia

Corresponding author.
Email: khmike@inbox.ru

Candidate of Physical and Mathematical Sciences, Senior Researcher; Senior Researcher; Associate Professor

Russia, Veskovo; Moscow; Moscow

Yulia Emelyanova

Ailamazyan Program Systems Institute of the Russian Academy of Sciences

Email: yuliya.emelyanowa2015@yandex.ru

Candidate of Technical Sciences, Researcher

Russia, Veskovo

References

  1. Zakharov A.V., Khachumov V.M. Bit-parallel Representation of Activation Functions for Fast Neural Networks // Proceedings of the 7th International Conference on Pattern Recognition and Image Analysis. 2014. V. 2. P. 568-571.
  2. Arce F., Zamora E., Sossa H., Barrón R. Differential evolution training algorithm for dendrite morphological neural networks // Applied Soft Computing. 2018. V. 68. P. 303-313.
  3. Dimitriadis N., Maragos P. Advances in the training, pruning and enforcement of shape constraints of Morphological Neural Networks using Tropical Algebra // IEEE International Conference on Acoustics, Speech and Signal Processing. 2021. P. 3825-3829.
  4. Limonova E.E., Nikolaev D.P., Alfonso D.M., Arlazarov V.V. Bipolar Morphological Neural Networks: Gate-Efficient Architecture for Computer Vision // IEEE Access. 2021. V. 9. P. 97569-97581.
  5. Limonova E.E., Nikolaev D.P., Arlazarov V.V. Bipolar Morphological U-Net for Document Binarization // Thirteenth International Conference on Machine Vision. 2021. P. 1-9.
  6. Limonova E.E., Nikolaev D.P., Alfonso D.M., Arlazarov V.V. ResNet-like Architecture with Low Hardware Requirements // 25th International Conference on Pattern Recognition. 2021. P. 6204-6211.
  7. Limonova E., Matveev D., Nikolaev D., Arlazarov V. Bipolar morphological neural networks: convolution without multiplication // Twelfth International Conference on Machine Vision. 2020. V. 11433. P. 1-18.
  8. Khachumov V.M. Logicheskie elementy na neyronah [Logical elements on neurons] // Trudy IX Mezhdunarodnoy Conferentcyi “Intellectualnye systemy i comp’uternye nauki” [Proceedings of the 9th International Conference “Intelligent Systems and Computer Science”]. Moscow, 2006. V. 1. P. 297-300.
  9. Kruglov V.V., Borisov V.V. Iskusstvennye neyronnye seti. Teoriya i praktika [Artificial neural networks. Theory and practice]. Moscow: Hotline-Telecom, 2002.
  10. Haykin S. Neyronnye seti: Polnyi kurs [Neural Networks: A Comprehensive Foundation]. Moscow: Williams, 2006.
  11. Callan R. Osnovnye kontseptsii neyronnykh setey [The Essence of Neural Networks]. Moscow: Williams, 2001.

Supplementary files

1. JATS XML
