Alternating Least Squares in Generalized Linear Models


Abstract

We derive a convergence result for a sequential procedure known as alternating maximization (minimization) toward the maximum likelihood estimator for a rather broad family of models: Generalized Linear Models. For linear regression, the alternating procedure reduces to the well-known Alternating Least Squares algorithm, because the log-likelihood function L(υ) is quadratic. In the Generalized Linear Models framework we lose the quadraticity of L(υ), but concavity is retained because the error distribution belongs to the exponential family. A concentration property makes the second-order Taylor approximation of L(υ) accurate, which in turn enables the alternating minimization (maximization) technique. Examples and experiments confirm the convergence result, followed by a discussion of the importance of the initial guess.
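To illustrate the classical case the abstract refers to, the following is a minimal sketch (not the paper's method) of Alternating Least Squares for a rank-r matrix factorization, where each half-step fixes one factor and solves an ordinary least-squares problem in the other; the problem sizes and random data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: find U (m x r) and V (n x r) minimizing
# ||A - U V^T||_F^2 by alternating exact least-squares half-steps.
m, n, r = 30, 20, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exact rank r

U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))

for _ in range(50):
    # Fix V, solve min_U ||A - U V^T||_F^2: V @ U.T = A.T in least squares.
    U = np.linalg.lstsq(V, A.T, rcond=None)[0].T
    # Fix U, solve min_V ||A - U V^T||_F^2: U @ V.T = A in least squares.
    V = np.linalg.lstsq(U, A, rcond=None)[0].T

residual = np.linalg.norm(A - U @ V.T)
print(residual)
```

Because each half-step solves its subproblem exactly, the objective is monotonically non-increasing; for a noiseless exact-rank matrix the residual typically drops to numerical zero within a few dozen sweeps.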

About the author

A. Minasyan

Yerevan State University

Corresponding author.
Email: arsh.minasyan@gmail.com
Yerevan, Armenia


版权所有 © Allerton Press, Inc., 2019