Distributed coordinate descent for generalized linear models with regularization


Abstract

Generalized linear models with L1 and L2 regularization are a widely used technique for solving classification, class probability estimation, and regression problems. With the numbers of both features and examples growing rapidly in fields like text mining and clickstream data analysis, parallelization and the use of cluster architectures become important. We present a novel algorithm for fitting regularized generalized linear models in a distributed environment. The algorithm splits data between nodes by features, uses coordinate descent on each node, and applies a line search to merge results globally. A convergence proof is provided. A modification of the algorithm addresses the slow-node problem. For the important particular case of logistic regression, we empirically compare our program with several state-of-the-art approaches that rely on different algorithmic and data-splitting methods. Experiments demonstrate that our approach is scalable and superior when training on large and sparse datasets.
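
The sketch below illustrates the general scheme described in the abstract on a single machine; it is not the authors' implementation. Features are partitioned into blocks (simulated "nodes"), each block runs coordinate descent on an L2-regularized logistic regression with the other blocks frozen, and the proposed block updates are merged through one global backtracking line search on the full objective. L1 handling, sparse data structures, the distributed communication layer, and the slow-node modification are omitted; all function names and parameters are illustrative assumptions.

import numpy as np


def objective(w, X, y, lam):
    """Logistic loss (labels in {-1, +1}) plus L2 penalty."""
    z = X @ w
    return np.mean(np.logaddexp(0.0, -y * z)) + 0.5 * lam * w @ w


def block_coordinate_descent(w, X, y, lam, block, n_inner=3):
    """Coordinate descent over one feature block, other coordinates fixed."""
    delta = np.zeros_like(w)
    for _ in range(n_inner):
        for j in block:
            z = X @ (w + delta)
            p = 1.0 / (1.0 + np.exp(-y * z))          # probability of the observed label
            g = -np.mean(y * (1.0 - p) * X[:, j]) + lam * (w[j] + delta[j])
            h = np.mean(p * (1.0 - p) * X[:, j] ** 2) + lam
            delta[j] -= g / max(h, 1e-12)             # one-dimensional Newton step
    return delta


def distributed_cd(X, y, lam=0.1, n_nodes=4, n_outer=25):
    n, d = X.shape
    w = np.zeros(d)
    blocks = np.array_split(np.arange(d), n_nodes)    # split data between nodes by features
    for _ in range(n_outer):
        # each simulated node proposes an update for its own feature block
        direction = sum(block_coordinate_descent(w, X, y, lam, b) for b in blocks)
        # merge the proposals globally with a backtracking line search
        f0, step = objective(w, X, y, lam), 1.0
        while step > 1e-4 and objective(w + step * direction, X, y, lam) > f0:
            step *= 0.5
        if objective(w + step * direction, X, y, lam) <= f0:
            w = w + step * direction
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40))
    y = np.sign(X @ rng.normal(size=40) + 0.1 * rng.normal(size=500))
    w_hat = distributed_cd(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w_hat) == y))

Because each node touches only its own columns of the data, the per-node work and memory scale with the size of its feature block, while the single global line search keeps the merged update a descent direction for the full objective.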

About the authors

I. Trofimov

Yandex

Author for correspondence.
Email: trofim@yandex-team.ru
Russian Federation, 16 Leo Tolstoy st., Moscow, 119021

A. Genkin

NYU Langone Medical Center

Email: trofim@yandex-team.ru
United States, 550 First Avenue, New York, NY, 10016


Copyright (c) 2017 Pleiades Publishing, Ltd.