By Timnit Gebru, The Distributed AI Research Institute (DAIR), USA, timnit@dair-institute.org | Remi Denton, Google Research, USA, dentone@google.com
The field of computer vision is now a multi-billion dollar enterprise, with its use in surveillance applications driving this large market share. In the last six years, computer vision researchers have started to discuss the risks and harms of some of these systems, mostly using the lens of fairness introduced in the machine learning literature to perform this analysis. While this lens is useful to uncover and mitigate a narrow segment of the harms that can be enacted through computer vision systems, it is only one of the toolkits that researchers have available to uncover and mitigate the harms of the systems they build.
In this monograph, we discuss a wide range of risks and harms that can be enacted through the development and deployment of computer vision systems. We also discuss some existing technical approaches to mitigating these harms, as well as the shortcomings of these mitigation strategies. Then, we introduce computer vision researchers to harm mitigation strategies proposed by journalists, human rights activists, individuals harmed by computer vision systems, and researchers in disciplines ranging from sociology to physics. We conclude the monograph by listing principles that researchers can follow to build what we call community-rooted computer vision tools in the public interest, and by giving examples of such research directions. We hope that this monograph can serve as a starting point for researchers exploring the harms of current computer vision systems and attempting to steer the field toward community-rooted work.