Foundations and Trends® in Machine Learning > Vol 8 > Issue 3-4

Convex Optimization: Algorithms and Complexity

By Sébastien Bubeck, Microsoft Research, USA, sebubeck@microsoft.com

 
Suggested Citation
Sébastien Bubeck (2015), "Convex Optimization: Algorithms and Complexity", Foundations and Trends® in Machine Learning: Vol. 8: No. 3-4, pp 231-357. http://dx.doi.org/10.1561/2200000050

Publication Date: 12 Nov 2015
© 2015 S. Bubeck
 
Subjects
Optimization
 

In this article:
1. Introduction
2. Convex optimization in finite dimension
3. Dimension-free convex optimization
4. Almost dimension-free convex optimization in non-Euclidean spaces
5. Beyond the black-box model
6. Convex optimization and randomness
Acknowledgements
References

Abstract

This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov’s seminal book and Nemirovski’s lecture notes, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski’s alternative to Nesterov’s smoothing), and a concise description of interior point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random-walk-based methods.
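To give a flavor of the black-box schemes the abstract mentions, here is a minimal sketch (not taken from the monograph; the toy problem and step size are illustrative assumptions) of projected gradient descent, the basic first-order method whose complexity the text analyzes: minimize f(x) = ||x − b||² over the Euclidean unit ball.

```python
import numpy as np

def project_unit_ball(x):
    """Euclidean projection onto the unit ball {x : ||x|| <= 1}."""
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

def projected_gradient_descent(grad, project, x0, step, iters):
    """Iterate x <- Pi_C(x - step * grad f(x)); the basic black-box scheme."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Toy instance: b lies outside the ball, so the constraint is active
# and the minimizer is the projection of b onto the ball, i.e. (1, 0).
b = np.array([2.0, 0.0])
grad = lambda x: 2 * (x - b)          # gradient of ||x - b||^2 (smooth, L = 2)
x_star = projected_gradient_descent(grad, project_unit_ball,
                                    np.zeros(2), step=0.25, iters=100)
```

With a step size below 1/L the iterates converge to the constrained minimizer; this is the kind of guarantee the black-box complexity theorems make quantitative.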

DOI:10.1561/2200000050
ISBN: 978-1-60198-860-7
244 pp. $95.00
ISBN: 978-1-60198-861-4
244 pp. $250.00


 