Deep learning: an overview and main paradigms


Abstract

In the present paper, we examine and analyze the main paradigms of learning in multilayer neural networks, starting with the single-layer perceptron and ending with deep neural networks, which are regarded as a breakthrough in the field of intelligent data processing. The baselessness of some ideas about the capacity of multilayer neural networks is shown, and the transition to deep neural networks is justified. We discuss the principal learning models of deep neural networks based on the restricted Boltzmann machine (RBM), the autoassociative approach, and stochastic gradient descent with a rectified linear unit (ReLU) activation function for the neural elements.
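As an illustration of the RBM-based learning paradigm mentioned above, the following is a minimal sketch, not taken from the paper itself, of a single contrastive-divergence (CD-1) update for a Bernoulli RBM; the function and variable names are this sketch's own assumptions.

```python
# Minimal CD-1 sketch for a Bernoulli restricted Boltzmann machine (NumPy only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, lr=0.1):
    """One contrastive-divergence (CD-1) update on a batch of visible vectors v0."""
    # Positive phase: hidden probabilities given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and back up.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Gradient approximation: <v h>_data - <v h>_model.
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h

# Toy usage: 6 visible units, 3 hidden units, random binary data.
W = 0.01 * rng.standard_normal((6, 3))
b_v, b_h = np.zeros(6), np.zeros(3)
data = (rng.random((16, 6)) < 0.5).astype(float)
for _ in range(100):
    W, b_v, b_h = cd1_step(data, W, b_v, b_h)
```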

About the authors

V. A. Golovko

Brest State Technical University; National Research Nuclear University MEPhI (Moscow Engineering Physics Institute)

Author for correspondence.
Email: gva@bstu.by
Belarus, Brest; Moscow

Copyright (c) 2017 Allerton Press, Inc.