Foundations and Trends® in Machine Learning > Vol 14 > Issue 1–2

Advances and Open Problems in Federated Learning

By Peter Kairouz, Google Research, USA, Kairouz@google.com | H. Brendan McMahan, Google Research, USA | Brendan Avent, USC | Aurélien Bellet, INRIA | Mehdi Bennis, University of Oulu | Arjun Nitin Bhagoji, Princeton University | Kallista Bonawitz, Google Research | Zachary Charles, Google Research | Graham Cormode, University of Warwick | Rachel Cummings, Georgia Tech. | Rafael G. L. D’Oliveira, Rutgers University | Hubert Eichner, Google Research | Salim El Rouayheb, Rutgers University | David Evans, University of Virginia | Josh Gardner, University of Washington | Zachary Garrett, Google Research | Adrià Gascón, Google Research | Badih Ghazi, Google Research | Phillip B. Gibbons, CMU | Marco Gruteser, Google Research | Zaid Harchaoui, University of Washington | Chaoyang He, USC | Lie He, EPFL | Zhouyuan Huo, University of Pittsburgh | Ben Hutchinson, Google Research | Justin Hsu, UW–Madison | Martin Jaggi, EPFL | Tara Javidi, UC San Diego | Gauri Joshi, CMU | Mikhail Khodak, CMU | Jakub Konecný, Google Research | Aleksandra Korolova, USC | Farinaz Koushanfar, UC San Diego | Sanmi Koyejo, Google Research | Tancrède Lepoint, Google Research | Yang Liu, NTU | Prateek Mittal, Princeton | Mehryar Mohri, Google Research | Richard Nock, ANU | Ayfer Özgür, Stanford | Rasmus Pagh, Google Research | Hang Qi, Google Research | Daniel Ramage, Google Research | Ramesh Raskar, MIT | Mariana Raykova, Google Research | Dawn Song, UC Berkeley | Weikang Song, Google Research | Sebastian U. Stich, EPFL | Ziteng Sun, Cornell | Ananda Theertha Suresh, Google Research | Florian Tramèr, Stanford | Praneeth Vepakomma, MIT | Jianyu Wang, CMU | Li Xiong, Emory | Zheng Xu, Google Research | Qiang Yang, HKUST | Felix X. Yu, Google Research | Han Yu, NTU | Sen Zhao, Google Research

 
Suggested Citation
Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D’Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu and Sen Zhao (2021), "Advances and Open Problems in Federated Learning", Foundations and Trends® in Machine Learning: Vol. 14: No. 1–2, pp 1-210. http://dx.doi.org/10.1561/2200000083

Publication Date: 23 Jun 2021
© 2021 Peter Kairouz, H. Brendan McMahan, et al.
 
Subjects
Clustering,  Data mining,  Deep learning,  Optimization,  Robustness,  Statistical learning theory,  Access control,  Anonymity,  Artificial intelligence methods in security and privacy,  Big data analytics and privacy,  Distributed systems security and privacy,  Privacy-preserving systems,  Ethics,  Privacy,  Computational complexity,  Database theory,  Distributed computing,  Information retrieval,  Security,  Communication complexity,  Cryptology and data security,  Data compression,  Information theory and computer science,  Information theory and statistics,  Quantization
 

In this article:
1. Introduction
2. Relaxing the Core FL Assumptions: Applications to Emerging Settings and Scenarios
3. Improving Efficiency and Effectiveness
4. Preserving the Privacy of User Data
5. Defending Against Attacks and Failures
6. Ensuring Fairness and Addressing Sources of Bias
7. Addressing System Challenges
8. Concluding Remarks
Acknowledgments
Appendices
References

Abstract

Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this monograph discusses recent advances and presents an extensive collection of open problems and challenges.

DOI:10.1561/2200000083
ISBN: 978-1-68083-788-9
224 pp. $99.00
 
ISBN: 978-1-68083-789-6
224 pp. $280.00

Advances and Open Problems in Federated Learning

The term Federated Learning was coined as recently as 2016 to describe a machine learning setting where multiple entities collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client’s raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective.
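The "focused updates intended for immediate aggregation" can be made concrete with a minimal sketch of federated averaging (FedAvg), the canonical algorithm in this setting: each client runs a few steps of local training on its private data, and the server aggregates only the resulting model parameters, weighted by client data size. The least-squares learner and all hyperparameters below are illustrative assumptions, not prescriptions from the monograph.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    least-squares objective (a stand-in for any local learner).
    Raw data (X, y) never leaves the client; only `w` is returned."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=10):
    """Server loop: broadcast the global model, collect each client's
    updated parameters, and aggregate by a data-size-weighted average."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:                      # in practice, a sampled subset
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        total = sum(sizes)
        global_w = sum(n / total * w for n, w in zip(sizes, updates))
    return global_w
```

On synthetic clients whose data share one underlying linear model, the aggregated parameters converge to that model even though the server never sees any client's examples; real deployments layer client sampling, compression, and privacy mechanisms (discussed in Sections 3 and 4) on top of this skeleton.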

Since then, the topic has gathered much interest across many different disciplines, along with the realization that solving many of these interdisciplinary problems likely requires techniques not just from machine learning but also from distributed optimization, cryptography, security, differential privacy, fairness, compressed sensing, systems, information theory, statistics, and more.

This monograph has contributions from leading experts across these disciplines, who describe the latest state of the art from their perspective. These contributions have been carefully curated into a comprehensive treatment that enables the reader to understand the work that has been done and to find pointers to the problems that must still be solved before Federated Learning can become a reality in practical systems.

Researchers working in the area of distributed systems will find this monograph an enlightening read that may inspire them to work on the many challenging issues that are outlined. This monograph will get the reader up to speed quickly and easily on what is likely to become an increasingly important topic: Federated Learning.
