By Aws Albarghouthi, University of Wisconsin–Madison, USA, aws@cs.wisc.edu
Deep learning has transformed the way we think of software and what it can do. But deep neural networks are fragile, and their behavior is often surprising. In many settings, we need to provide formal guarantees on the safety, security, correctness, or robustness of neural networks. This monograph covers foundational ideas from formal verification and their adaptation to reasoning about neural networks and deep learning.
Over the past decade, a number of hardware and software advances have conspired to thrust deep learning and neural networks to the forefront of computing. Deep learning has created a qualitative shift in our conception of what software is and what it can do: Every day we’re seeing new applications of deep learning, from healthcare to art, and it feels like we’re only scratching the surface of a universe of new possibilities.
This book offers the first introduction to foundational ideas from automated verification as applied to deep neural networks and deep learning. It is divided into three parts:
Part 1 defines neural networks as data-flow graphs of operators over real-valued inputs (see the first sketch after this list).
Part 2 discusses constraint-based techniques for verification (second sketch below).
Part 3 discusses abstraction-based techniques for verification (third sketch below).
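To make the Part 1 view concrete, here is a minimal sketch of a neural network as a data-flow graph whose nodes are operators over real numbers. The Node class and the one-neuron example are illustrative assumptions of mine, not the book's formulation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One operator in a data-flow graph over real-valued inputs."""
    op: str                                       # "input", "affine", or "relu"
    inputs: list = field(default_factory=list)    # predecessor nodes
    params: dict = field(default_factory=dict)    # weights, bias, input name

    def eval(self, env):
        xs = [n.eval(env) for n in self.inputs]
        if self.op == "input":
            return env[self.params["name"]]
        if self.op == "affine":                   # w . x + b
            return sum(w * x for w, x in zip(self.params["w"], xs)) + self.params["b"]
        if self.op == "relu":
            return max(0.0, xs[0])
        raise ValueError(f"unknown operator {self.op}")

# A one-neuron network: relu(2*x1 - 3*x2 + 1)
x1 = Node("input", params={"name": "x1"})
x2 = Node("input", params={"name": "x2"})
out = Node("relu", [Node("affine", [x1, x2], {"w": [2.0, -3.0], "b": 1.0})])
print(out.eval({"x1": 1.0, "x2": 0.5}))  # relu(2*1 - 3*0.5 + 1) = 1.5
```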
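In the constraint-based style of Part 2, the network and the negation of the property of interest are encoded as a logical formula and handed to a solver; an unsatisfiable formula means the property holds. Here is a hedged sketch using the Z3 SMT solver on the same one-neuron example; the choice of solver and property are mine, not the book's:

```python
from z3 import Real, If, Solver, unsat

x, y = Real("x"), Real("y")

s = Solver()
s.add(y == If(x >= 0, x, 0))  # exact encoding of y = relu(x)
s.add(x >= 0, x <= 1)         # input region: x in [0, 1]
s.add(y > 1)                  # negation of the property y <= 1

if s.check() == unsat:
    print("verified: y <= 1 for all x in [0, 1]")
else:
    print("counterexample:", s.model())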
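In the abstraction-based style of Part 3, a whole set of inputs is pushed through the network at once by evaluating each operator over an abstract domain. A minimal sketch using the interval domain, reusing the toy network above (again my own example, not the book's):

```python
def affine_interval(intervals, w, b):
    """Sound bounds on w . x + b when each x_i ranges over an interval."""
    lo = hi = b
    for (l, h), wi in zip(intervals, w):
        lo += wi * l if wi >= 0 else wi * h
        hi += wi * h if wi >= 0 else wi * l
    return (lo, hi)

def relu_interval(iv):
    """Image of an interval under relu."""
    l, h = iv
    return (max(0.0, l), max(0.0, h))

# x1 in [0, 1], x2 in [0, 1]; bound relu(2*x1 - 3*x2 + 1)
pre = affine_interval([(0.0, 1.0), (0.0, 1.0)], w=[2.0, -3.0], b=1.0)
print(relu_interval(pre))  # (0.0, 3.0): every concrete output lies here
```

Intervals may over-approximate the true set of outputs; this loss of precision is the price paid for a single, fast pass over the network.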
The book is a self-contained treatment of a topic that sits at the intersection of machine learning and formal verification. It can serve as an introduction to the field for first-year graduate students or senior undergraduates, even if they have not been exposed to deep learning or verification.