Abstract
We present StarChild and Lazuli, two libraries, implemented in F∗ and Liquid Haskell, that leverage refinement types to verify neural networks. Refinement types are types augmented, or refined, with assertions about the values of that type, e.g., "integers greater than five", which are checked by an SMT solver. Crucially, these assertions are written in the language itself. A user of our libraries can refine the type of neural networks, e.g., to "neural networks that are robust against adversarial attacks", and expect F∗ to handle the verification of this claim for any specific network, without having to change the representation of the network, or even having to learn about SMT solvers. Our initial experiments indicate that our approach could greatly reduce the burden of verifying neural networks. Unfortunately, they also show that SMT solvers do not scale to the sizes required for neural network verification.
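To make the abstract's running example concrete, the sketch below expresses "integers greater than five" as a refinement type in Liquid Haskell. It is a minimal illustration, not code from StarChild or Lazuli; the module and binding names are hypothetical, but the `{-@ ... @-}` annotation syntax is standard Liquid Haskell, and the assertion is discharged by the SMT solver behind the type checker.

```haskell
module GtFiveExample where

-- A refinement type alias for "integers greater than five".
{-@ type GtFive = {v:Int | v > 5} @-}

-- Accepted: the SMT solver discharges the proof obligation 6 > 5.
{-@ six :: GtFive @-}
six :: Int
six = 6

-- Rejected at compile time if uncommented: 3 > 5 is unprovable.
-- {-@ three :: GtFive @-}
-- three :: Int
-- three = 3
```

The libraries described in the paper apply the same idea to richer predicates over whole networks, such as robustness properties, rather than to single integers.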
| Original language | English |
| --- | --- |
| Number of pages | 19 |
| DOIs | |
| Publication status | Published - 24 Nov 2020 |
| Event | The 18th Asian Symposium on Programming Languages and Systems (Online), 29 Nov 2020 → 3 Dec 2020; conference number: 18th |
Conference
| Conference | The 18th Asian Symposium on Programming Languages and Systems |
| --- | --- |
| Abbreviated title | APLAS 2020 |
| Period | 29/11/20 → 3/12/20 |
Keywords
- neural networks
- verification
- refinement types