By Luca Franceschi, Amazon Web Services, USA | Michele Donini, Helsing, Germany | Valerio Perrone, Amazon Web Services, USA | Aaron Klein, Leipzig University, Germany | Cédric Archambeau, Helsing, Germany | Matthias Seeger, Amazon Web Services, USA | Massimiliano Pontil, Istituto Italiano di Tecnologia, Italy and University College London, UK | Paolo Frasconi, Università di Firenze, Italy, paolo.frasconi@pm.me
Hyperparameters are configuration variables controlling the behavior of machine learning algorithms. They are ubiquitous in machine learning and artificial intelligence, and the choice of their values determines the effectiveness of systems based on these technologies. Manual hyperparameter search is often time-consuming and becomes infeasible when the number of hyperparameters is large. Automating the search is an important step towards advancing, streamlining, and systematizing machine learning, freeing researchers and practitioners alike from the burden of finding a good set of hyperparameters by trial and error. In this monograph, we present a unified treatment of hyperparameter optimization, providing the reader with examples, insights into the state of the art, and numerous links to further reading. We cover the main families of techniques for automating hyperparameter search, a task often referred to as hyperparameter optimization or tuning, including random and quasi-random search, and bandit-, model-, population-, and gradient-based approaches. We further discuss extensions, including online, constrained, and multi-objective formulations; touch upon connections with other fields, such as meta-learning and neural architecture search; and conclude with open questions and future research directions.