By Zoya Bylinskii, Adobe Research, USA, bylinski@adobe.com | Laura Herman, Adobe Inc., USA, lherman@adobe.com | Aaron Hertzmann, Adobe Research, USA, hertzman@adobe.com | Stefanie Hutka, Adobe Inc., USA, stefanie.hutka@gmail.com | Yile Zhang, University of Washington, USA, yz278@uw.edu
Online crowdsourcing platforms have made it increasingly easy to perform evaluations of algorithm outputs with survey questions like “which image is better, A or B?”, leading to their proliferation in vision and graphics research papers. Results of these studies are often used as quantitative evidence in support of a paper’s contributions. On the one hand, we argue that, when conducted hastily as an afterthought, such studies lead to an increase in uninformative and, potentially, misleading conclusions. On the other hand, in these same communities, user research is underutilized in driving project direction and forecasting user needs and reception. We call for increased attention to both the design and reporting of user studies in computer vision and graphics papers towards (1) improved replicability and (2) improved project direction. Together with this call, we offer an overview of methodologies from user experience research (UXR), human-computer interaction (HCI), and applied perception to increase exposure to the available methodologies and best practices. We discuss foundational user research methods (e.g., needfinding) that are presently underutilized in computer vision and graphics research, but can provide valuable project direction. We provide further pointers to the literature for readers interested in exploring other UXR methodologies. Finally, we describe broader open issues and recommendations for the research community.
Most research in computer graphics and image synthesis produces outputs for human consumption. In many cases, these algorithms operate largely automatically; in other cases, interactive tools allow professionals or everyday users to author or edit images, video, textures, geometry, or animation.
Online crowdsourcing platforms have made it increasingly easy to perform evaluations of algorithm outputs with survey questions like “which image is better, A or B?”, leading to their proliferation in vision and graphics research papers. Results of these studies are often used as quantitative evidence in support of a paper’s contributions. When conducted hastily as an afterthought, such studies can lead to an increase in uninformative and, potentially, misleading conclusions. On the other hand, in these same communities, user research is underutilized in driving project direction and forecasting user needs and reception.
Increased attention is needed to both the design and reporting of user studies in computer vision and graphics papers, towards (1) improved replicability and (2) improved project direction. This monograph focuses on these aspects and presents an overview of methodologies from user experience research (UXR), human-computer interaction (HCI), and applied perception, to increase exposure to the available methodologies and best practices. We discuss foundational user research methods (e.g., needfinding) that are presently underutilized in computer vision and graphics research, but can provide valuable project direction. We also provide further pointers to the literature for readers interested in exploring other UXR methodologies, and describe broader open issues and recommendations for the research community.