Stabilized Sparse Online Learning for Sparse Data

By Y. Ma et al.
Published on May 9, 2017

Table of Contents

Abstract
1 Introduction
2 Truncated Stochastic Gradient Descent for Sparse Learning
3 Stabilized Truncated SGD for Sparse Learning

Summary

This document summarizes Stabilized Sparse Online Learning for Sparse Data, which addresses the challenges posed by large-scale, high-dimensional sparse datasets. The paper introduces a stabilized truncated stochastic gradient descent (SGD) algorithm that mitigates the instability of sparse online learning methods when applied to such data. The algorithm combines several components designed to reduce the variability of the learned weight vector and to stabilize the set of selected features. The paper provides theoretical analysis, an assessment of computational complexity, and remarks on practical implementation. Numerical experiments demonstrate improved stability and prediction performance on both sparse and dense data.
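To make the truncation idea behind Section 2 concrete, below is a minimal sketch of a truncated-gradient SGD update in the spirit of Langford et al.'s truncated gradient. The logistic loss, learning rate, truncation period K, and shrinkage strength lam are illustrative assumptions; this is not the paper's stabilized variant, which adds further components for variance reduction and feature stabilization.

```python
import numpy as np

def truncated_sgd(X, y, lr=0.1, lam=0.01, K=10, epochs=5):
    """Sketch of truncated-gradient SGD for a sparse linear model.

    After every K stochastic updates, weights are soft-thresholded
    toward zero, which produces a sparse weight vector. Labels y are
    assumed to be in {-1, +1}; all hyperparameters are illustrative.
    """
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in range(n):
            t += 1
            # Stochastic gradient of the logistic loss on one example
            # (margin clipped to avoid overflow in exp).
            margin = np.clip(y[i] * X[i].dot(w), -30.0, 30.0)
            grad = -y[i] * X[i] / (1.0 + np.exp(margin))
            w -= lr * grad
            # Every K steps, truncate: shrink each coordinate toward
            # zero and zero out those that fall below the shrinkage.
            if t % K == 0:
                shrink = lr * lam * K
                w = np.sign(w) * np.maximum(np.abs(w) - shrink, 0.0)
    return w

if __name__ == "__main__":
    # Hypothetical synthetic sparse data for a quick smoke test.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50)) * (rng.random((200, 50)) < 0.1)
    w_true = np.zeros(50)
    w_true[:5] = 1.0
    y = np.where(X.dot(w_true) >= 0, 1.0, -1.0)
    w = truncated_sgd(X, y)
    print("nonzero weights:", np.count_nonzero(w), "of", w.size)
```

The periodic soft-thresholding is what drives sparsity in the learned weights; the stabilization studied in the paper addresses the fact that, on sparse inputs, this truncation can be highly sensitive to the order in which examples arrive.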