Word Embedding for Semantically Related Words: An Experimental Study


Abstract

The ability to identify semantic relations between words has made the word2vec model widely used in NLP tasks. Word2vec rests on a simple assumption: words that occur in similar contexts receive similar vector representations, so words whose vectors lie close together can be interpreted as semantically similar. This makes it possible to establish semantic relations (synonymy, hypernymy, hyponymy, and others) by automatic extraction. Manual extraction of semantic relations is a time-consuming and biased task that requires substantial expert effort. Unfortunately, the associative list produced by a word2vec model does not consist of semantically related words only. In this paper, we propose additional criteria that may help to solve this problem. Experiments with well-known characteristics, such as word frequency and position in the associative list, show that these criteria can improve the extraction of semantic relations for the Russian language using word embeddings. In the experiments, we use a word2vec model trained on the Flibusta corpus, with pairs from Wiktionary serving as examples of semantic relationships. Semantically related words are applicable to thesauri, ontologies, and intelligent systems for natural language processing.
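The filtering idea sketched in the abstract can be illustrated with a short script. This is a minimal sketch, not the paper's actual pipeline: it assumes a gensim word2vec model saved to disk (the file name flibusta_word2vec.kv, the thresholds topn and min_count, and the helper related_candidates are hypothetical placeholders), and it applies the two criteria mentioned above, position in the associative list and corpus frequency, to prune the raw associative list.

from gensim.models import KeyedVectors

# Hypothetical path: vectors trained on the Flibusta corpus and saved
# beforehand with model.wv.save("flibusta_word2vec.kv").
wv = KeyedVectors.load("flibusta_word2vec.kv")

def related_candidates(word, topn=20, min_count=50):
    """Filter the associative list by position and corpus frequency.

    - Position: most_similar() returns neighbors ordered by cosine
      similarity, so truncating at `topn` applies the rank criterion.
    - Frequency: rare words tend to have noisy vectors, so neighbors
      occurring fewer than `min_count` times are dropped.
    """
    results = []
    for neighbor, similarity in wv.most_similar(word, topn=topn):
        # The 'count' attribute is stored when the vectors come from a
        # trained gensim Word2Vec model; for other formats a separate
        # frequency table would be needed instead.
        if wv.get_vecattr(neighbor, "count") >= min_count:
            results.append((neighbor, similarity))
    return results

# Example query in Russian, matching the paper's setting.
print(related_candidates("собака", topn=10))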

About the authors

M. Karyaeva

Demidov Yaroslavl State University

Author for correspondence.
Email: mari.karyaeva@gmail.com
Russian Federation, Yaroslavl, 150003

P. Braslavski

Ural Federal University

Author for correspondence.
Email: pbras@yandex.ru
Russian Federation, Yekaterinburg, 620002

V. Sokolov

Demidov Yaroslavl State University

Author for correspondence.
Email: sokolov@uniyar.ac.ru
Russian Federation, Yaroslavl, 150003


Copyright © Allerton Press, Inc., 2019