A hybrid language model based on a recurrent neural network and probabilistic topic modeling



Abstract

A language model is developed based on features extracted from a recurrent neural network language model combined with a semantic embedding of the current word's left context computed via probabilistic latent semantic analysis (PLSA). To compute this embedding, the left context is treated as a document. This approach mitigates the effect of vanishing gradients in the recurrent neural network. Experiments show that adding the topic-based features reduces perplexity by 10%.
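The idea described in the abstract can be illustrated with a minimal sketch: a PLSA-style topic posterior P(z|d) is inferred by folding-in over the left context treated as a document, and the resulting topic vector is concatenated with the RNN hidden state before the output layer. All names, shapes, and the random placeholder parameters (`phi`, `W_out`, `hidden`) below are assumptions for illustration, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, H = 50, 4, 8  # vocabulary size, number of topics, RNN hidden size (hypothetical)

# Placeholder "pretrained" parameters; in the paper's setting these would come
# from a trained PLSA model and a trained RNN language model.
phi = rng.dirichlet(np.ones(V), size=K)          # P(w|z), shape (K, V)
W_out = rng.normal(scale=0.1, size=(V, H + K))   # output layer over [hidden; topic]

def topic_embedding(context_ids, n_iters=20):
    """Fold-in EM: estimate P(z|d) treating the left context as a document."""
    theta = np.ones(K) / K
    for _ in range(n_iters):
        # E-step: responsibilities P(z|w,d) for each context word
        p = theta[:, None] * phi[:, context_ids]      # shape (K, len(context))
        p /= p.sum(axis=0, keepdims=True)
        # M-step: re-estimate the document-topic distribution
        theta = p.sum(axis=1)
        theta /= theta.sum()
    return theta

def next_word_probs(hidden, context_ids):
    """Softmax over concatenated RNN-hidden and topic features."""
    feats = np.concatenate([hidden, topic_embedding(context_ids)])
    logits = W_out @ feats
    e = np.exp(logits - logits.max())
    return e / e.sum()

hidden = rng.normal(size=H)                  # stand-in for the RNN hidden state
probs = next_word_probs(hidden, [3, 17, 17, 42])
```

Because the topic vector summarizes the entire left context in a fixed-size distribution, it carries long-range information to the output layer without passing through the recurrent dynamics, which is how the hybrid model sidesteps vanishing gradients.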

About the authors

M. S. Kudinov

Federal Research Center Computer Science and Control

Author for correspondence.
Email: mikhailkudinov@gmail.com
Russian Federation, ul. Vavilova 40, Moscow, 119333

A. A. Romanenko

Moscow Institute of Physics and Technology (State University)

Email: mikhailkudinov@gmail.com
Russian Federation, Institutskii pr. 9, Dolgoprudnyi, 141700


Copyright (c) 2016 Pleiades Publishing, Ltd.