SMT-based formal verification of safety properties in neural networks
Though artificial neural networks, or simply neural networks (NNs), have existed since the late 1950s, they were not widely regarded as a practical solution to real-world problems (that is, problems arising outside scientific exploration and research) until the last couple of decades, when advances in deep learning and big data began to produce significant benefits. Applications of NNs continue to grow in number and variety, and as implementations become more practical, they reach safety-critical systems (e.g., autonomous vehicles), making verification of their conformity to safety requirements increasingly important.

One benefit of a NN is that its internals need not be explicitly coded in order to produce reasonable outputs; this is the ``learning'' in machine learning. However, it also creates new challenges for verification. There are many established formal methods of software verification, such as bounded model checking and counterexample-guided abstraction refinement (CEGAR), but these do not directly translate to NN verification. When the behavior of a machine cannot be stored explicitly as every possible path from input to output, ensuring that it functions properly is nontrivial: the nature and sheer number of computations a NN performs make exhaustive enumeration of its behavior prohibitively expensive, so NNs typically cannot be tested exhaustively in any practical way.

Fortunately, in recent years, scholars have found ways to bring traditional verification techniques into the world of NNs. This paper reviews three such methods, compares them, and analyzes their potential for future use: abstraction refinement due to Pulina and Tacchella, pure-SMT solving due to Huang et al., and Reluplex due to Katz et al.