Efficient Learning of Sparse Ranking Functions

By Mark Stevens et al.

Table of Contents

Chapter 1 Efficient Learning of Sparse Ranking Functions
1.1 Introduction
1.2 Problem Setting
1.3 An Efficient Coordinate Descent Algorithm
1.4 Extensions
1.5 Experiments

Summary

Algorithms for learning to rank can be inefficient when they employ risk functions that exploit structural information. This paper describes and analyzes a learning algorithm that efficiently learns a ranking function using a domination loss. The algorithm takes a coordinate descent approach whose cost scales linearly with the number of examples. An extension incorporates regularization, carrying Vapnik’s notion of regularized empirical risk minimization over to learning to rank. The paper motivates accurate yet efficiently computable ranking functions in the context of search engines and online advertisements, traces the roots of learning to rank in information retrieval, and reviews how ranking algorithms have evolved over time. Experimental results demonstrate that the algorithm constructs compact models while retaining empirical accuracy.
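To make the summary concrete, the following is a minimal sketch of coordinate descent for a sparse linear ranking function. It is not the paper's algorithm: it substitutes a logistic pairwise surrogate for the domination loss and uses an L1 penalty with per-coordinate soft-thresholding to induce sparsity; the function names, step-size rule, and data layout are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal step for the L1 penalty: shrinks z toward zero.
    return np.sign(z) * max(abs(z) - lam, 0.0)

def coordinate_descent_rank(X_pos, X_neg, lam=0.1, n_epochs=50):
    """Learn a sparse linear scoring vector w by coordinate descent.

    Illustrative stand-in for the paper's method: minimizes a logistic
    pairwise ranking loss (each relevant item should outscore each
    irrelevant one) plus an L1 penalty that promotes sparsity.
    """
    d = X_pos.shape[1]
    # Pairwise difference features: relevant minus irrelevant.
    D = (X_pos[:, None, :] - X_neg[None, :, :]).reshape(-1, d)
    w = np.zeros(d)
    # Per-coordinate step from the logistic curvature bound (1/4).
    step = 1.0 / (0.25 * (D ** 2).mean(axis=0) + 1e-12)
    margins = D @ w
    for _ in range(n_epochs):
        for j in range(d):
            # Gradient of the mean logistic ranking loss w.r.t. w[j].
            p = 1.0 / (1.0 + np.exp(margins))
            g = -(p * D[:, j]).mean()
            w_new = soft_threshold(w[j] - step[j] * g, step[j] * lam)
            # Update cached margins incrementally: linear in #pairs.
            margins += D[:, j] * (w_new - w[j])
            w[j] = w_new
    return w

# Toy usage: feature 0 separates relevant from irrelevant items.
rng = np.random.default_rng(0)
X_pos = rng.normal(0.5, 1.0, (20, 10))
X_pos[:, 0] += 2.0
X_neg = rng.normal(0.0, 1.0, (30, 10))
w = coordinate_descent_rank(X_pos, X_neg, lam=0.1)
```

Caching the pairwise margins and updating them incrementally after each coordinate change is what keeps the per-sweep cost linear in the number of example pairs, which mirrors the linear-scaling claim in the summary.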