By Boris Glavic, Illinois Institute of Technology, Chicago, USA, bglavic@iit.edu | Alexandra Meliou, University of Massachusetts Amherst, USA, ameli@cs.umass.edu | Sudeepa Roy, Duke University, USA, sudeepa@cs.duke.edu
Humans reason about the world around them by seeking to understand why and how something occurs. The same principle extends to the technology that so many human activities increasingly rely on. Issues of trust, transparency, and understandability are critical in promoting the adoption and proper use of systems. However, with the increasing complexity of the systems and technologies we use, it is hard or even impossible to comprehend their function and behavior, and to justify surprising observations, through manual investigation alone. Explanation support can ease humans’ interactions with technology: explanations can help users understand a system’s function, justify system results, and increase their trust in automated decisions.
Our goal in this article is to provide an overview of existing work in explanation support for data-driven processes, through a lens that identifies commonalities across varied problem settings and solutions. We suggest a classification of explainability requirements across three dimensions: the target of the explanation (“What”), the audience of the explanation (“Who”), and the purpose of the explanation (“Why”). We identify dominant themes across these dimensions and the high-level desiderata each implies, accompanied by several examples to motivate various problem settings. We discuss explainability solutions through the lens of the “How” dimension: how something is explained (the form of the explanation) and how explanations are derived (methodology). We conclude with a roadmap of possible research directions for the data management community within the field of explainability in data systems.
The increasing complexity of systems and technologies in everyday use makes it hard or even impossible for humans to comprehend their function and behavior, and to justify surprising observations. Explanation support can ease humans’ interactions with technology: explanations can help users understand a system’s function, justify system results, and increase their trust in automated decisions.
In this book, the authors provide an overview of existing work in explanation support for data-driven processes. In doing so, they classify explainability requirements across three dimensions: the target of the explanation (“What”), the audience of the explanation (“Who”), and the purpose of the explanation (“Why”). They identify dominant themes across these dimensions and the high-level desiderata each implies, accompanied by several examples that motivate various problem settings. Finally, they discuss explainability solutions through the lens of the “How” dimension: how something is explained (the form of the explanation) and how explanations are derived (methodology).
This book provides researchers and system developers with a high-level overview of the complex problems encountered when developing better user interaction with modern large-scale data-driven computing systems, and describes a roadmap for solving these issues in the future.