Study of Fault Tolerance Methods for Hardware Implementations of Convolutional Neural Networks


Abstract

This paper concentrates on fault-protection methods for neural networks implemented in hardware operating in fixed-point mode. We explore possible variants of error occurrence as well as ways to eliminate them. For this purpose, networks of identical architecture based on the VGG model have been studied. The VGG SIMPLE neural network chosen for the experiments is a simplified version (with a smaller number of layers) of the well-known VGG16 and VGG19 networks. To mitigate the effect of failures on network accuracy, we propose training neural networks with additional dropout layers; this approach removes extra dependencies between neighboring perceptrons. We also investigate complicating the network architecture to reduce the probability of misclassification caused by neuron failures. The experimental results show that adding dropout layers reduces the effect of failures on the classification ability of error-prone neural networks, while classification accuracy remains the same as that of the reference networks.
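To make the fault model concrete, the sketch below simulates one of the error variants the abstract alludes to: a single bit flip in a fixed-point weight of a toy dot-product layer. The Q-format, bit position, and helper names (`to_fixed`, `inject_bit_flip`) are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

FRAC_BITS = 8  # assumed fixed-point fractional width (Q-format), for illustration only

def to_fixed(x):
    """Quantize a float array to signed fixed-point integers."""
    return np.round(x * (1 << FRAC_BITS)).astype(np.int32)

def from_fixed(q):
    """Convert fixed-point integers back to floats."""
    return q.astype(np.float64) / (1 << FRAC_BITS)

def inject_bit_flip(q, index, bit):
    """Model a transient hardware fault as a single bit flip in one weight."""
    faulty = q.copy()
    faulty[index] ^= (1 << bit)
    return faulty

# Toy layer y = w . x evaluated with fixed-point weights
w = np.array([0.5, -0.25, 1.0])
x = np.array([1.0, 2.0, 0.5])
qw = to_fixed(w)

clean = from_fixed(qw) @ x                        # fault-free output: 0.5
faulty = from_fixed(inject_bit_flip(qw, 2, 7)) @ x  # mid-significance bit flipped: 0.75
print(clean, faulty)
```

Even a single flipped bit shifts the layer output noticeably, which is why the paper studies training-time countermeasures (extra dropout layers) rather than relying on the network's nominal accuracy alone.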

About the authors

R. Solovyev

Institute for Design Problems in Microelectronics, Russian Academy of Sciences

Author for correspondence.
Email: turbo@ippm.ru
Russian Federation, Moscow, 124681

A. Stempkovsky

Institute for Design Problems in Microelectronics, Russian Academy of Sciences

Email: turbo@ippm.ru
Russian Federation, Moscow, 124681

D. Telpukhov

Institute for Design Problems in Microelectronics, Russian Academy of Sciences

Email: turbo@ippm.ru
Russian Federation, Moscow, 124681

Supplementary files

1. JATS XML

版权所有 © Allerton Press, Inc., 2019