Alternating Least Squares in Generalized Linear Models



Abstract

We derive a convergence result for a sequential procedure known as alternating maximization (minimization) to the maximum likelihood estimator for a broad family of models, the Generalized Linear Models. For linear regression, the alternating procedure reduces to the well-known algorithm of Alternating Least Squares, owing to the quadraticity of the log-likelihood function L(υ). In the Generalized Linear Models framework we lose the quadraticity of L(υ), but concavity is retained because the error distribution belongs to the exponential family. A concentration property makes the second-order Taylor approximation of L(υ) accurate and enables the use of the alternating minimization (maximization) technique. Examples and experiments confirm the convergence result, followed by a discussion of the importance of the initial guess.
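To illustrate the alternating scheme the abstract refers to, here is a minimal sketch of Alternating Least Squares on the classical low-rank least-squares problem min_{U,V} ||M − U·Vᵀ||²: with one factor fixed, the objective is quadratic in the other, so each half-step is an ordinary least-squares solve. All variable names and problem sizes below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20, 15, 3

# Synthetic target matrix of exact rank k.
M = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

# Random initial guess -- the abstract stresses that the initial
# guess matters for alternating procedures.
U = rng.standard_normal((m, k))
V = rng.standard_normal((n, k))

for _ in range(100):
    # Fix V: the objective is quadratic in U, so solve the
    # least-squares system V @ X = M.T and set U = X.T.
    U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
    # Fix U: symmetric step, solve U @ X = M for X and set V = X.T.
    V = np.linalg.lstsq(U, M, rcond=None)[0].T

residual = np.linalg.norm(M - U @ V.T)
print(residual)
```

Because each half-step exactly minimizes a convex quadratic, the objective is monotonically non-increasing, which is the mechanism the paper generalizes (via a second-order Taylor approximation) to the non-quadratic but concave GLM log-likelihood.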

About the authors

A. G. Minasyan

Yerevan State University

Author for correspondence.
Email: arsh.minasyan@gmail.com
Armenia, Yerevan


Copyright (c) 2019 Allerton Press, Inc.
