A hybrid language model based on a recurrent neural network and probabilistic topic modeling
- Authors: Kudinov M.S. (1), Romanenko A.A. (2)
- Affiliations:
  1. Federal Research Center Computer Science and Control
  2. Moscow Institute of Physics and Technology (State University)
- Issue: Volume 26, No. 3 (2016)
- Pages: 587–592
- Section: Applied Problems
- URL: https://journals.rcsi.science/1054-6618/article/view/194842
- DOI: https://doi.org/10.1134/S1054661816030123
- ID: 194842
Abstract
A language model is developed that combines features extracted from a recurrent neural network language model with a semantic embedding of the current word's left context obtained by probabilistic latent semantic analysis (PLSA). To compute this embedding, the left context is treated as a document. The approach mitigates the effect of vanishing gradients in the recurrent neural network. Experiments show that adding the topic-based features reduces perplexity by 10%.
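To make the idea concrete, below is a minimal sketch of such a hybrid model: a recurrent network whose output layer sees the hidden state concatenated with a topic vector of the left context. This is an illustrative assumption, not the authors' exact architecture; in particular, the class name, layer sizes, and the random stand-in topic vector are hypothetical, whereas the paper obtains the topic vector from PLSA inference over the left context treated as a document.

```python
# Sketch of a hybrid RNN language model with topic-based context features.
# Assumptions: architecture details are illustrative; the topic vector here is
# a random stand-in for a PLSA topic mixture inferred from the left context.
import torch
import torch.nn as nn

class HybridRNNLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128, topic_dim=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.RNN(emb_dim, hidden_dim, batch_first=True)
        # Output layer receives the RNN state concatenated with the topic vector.
        self.out = nn.Linear(hidden_dim + topic_dim, vocab_size)

    def forward(self, tokens, topic_vec):
        # tokens: (batch, seq_len); topic_vec: (batch, topic_dim)
        h, _ = self.rnn(self.embed(tokens))                    # (batch, seq, hidden)
        t = topic_vec.unsqueeze(1).expand(-1, h.size(1), -1)   # broadcast over time
        return self.out(torch.cat([h, t], dim=-1))             # (batch, seq, vocab)

# Usage with random data; a real model would infer topic_vec with PLSA over
# the words to the left of the current position.
vocab_size = 1000
model = HybridRNNLM(vocab_size)
tokens = torch.randint(0, vocab_size, (2, 10))
topic_vec = torch.softmax(torch.randn(2, 20), dim=-1)  # stand-in topic mixture
logits = model(tokens, topic_vec)
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, vocab_size),
                             tokens[:, 1:].reshape(-1))
print(logits.shape, loss.item())
```

The design intuition is that the topic vector summarizes long-range context in a way that does not depend on gradients flowing through many recurrent steps, which is how the combination can offset vanishing gradients.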
About the authors
M. Kudinov
Federal Research Center Computer Science and Control
Corresponding author
Email: mikhailkudinov@gmail.com
Russia, ul. Vavilova 40, Moscow, 119333
A. Romanenko
Moscow Institute of Physics and Technology (State University)
Email: mikhailkudinov@gmail.com
Russia, Institutskii pr. 9, Dolgoprudnyi, 141700