Neural networks, secure by construction: an exploration of refinement types

Wen Kokke, Ekaterina Komendantskaya, Daniel Kienitz, Bob Atkey, David Aspinall

Research output: Contribution to conference › Paper

Abstract

We present StarChild and Lazuli, two libraries which leverage refinement types to verify neural networks, implemented in F∗ and Liquid Haskell. Refinement types are types augmented, or refined, with assertions about values of that type, e.g., "integers greater than five", which are checked by an SMT solver. Crucially, these assertions are written in the language itself. A user of our library can refine the type of neural networks, e.g., "neural networks which are robust against adversarial attacks", and expect F∗ to handle the verification of this claim for any specific network, without having to change the representation of the network, or even having to learn about SMT solvers. Our initial experiments indicate that our approach could greatly reduce the burden of verifying neural networks. Unfortunately, they also show that SMT solvers do not scale to the sizes required for neural network verification.
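To illustrate the abstract's running example, a refinement such as "integers greater than five" can be written as a Liquid Haskell annotation that an SMT solver checks at compile time. The sketch below is illustrative only; the names (`GtFive`, `clampAboveFive`) are hypothetical and are not taken from the StarChild or Lazuli libraries.

```haskell
-- A sketch of refinement types in Liquid Haskell (hypothetical names,
-- not the Lazuli API). The {-@ ... @-} annotations are verified by an
-- SMT solver when compiled with the LiquidHaskell GHC plugin.

-- "Integers greater than five", as in the abstract's example.
{-@ type GtFive = {v:Int | v > 5} @-}

-- Accepted by the checker, since 6 > 5 holds.
{-@ six :: GtFive @-}
six :: Int
six = 6

-- A refined function type: the result is always greater than five.
-- The solver discharges this by case analysis on the conditional.
{-@ clampAboveFive :: Int -> GtFive @-}
clampAboveFive :: Int -> Int
clampAboveFive n = if n > 5 then n else 6

main :: IO ()
main = print (clampAboveFive 3)
```

Crucially, the refinement is stated in the host language's own terms, and ordinary GHC treats the annotations as comments, so the program compiles and runs unchanged without the plugin. The paper's claim is that the same mechanism scales conceptually (if not yet in solver performance) from "greater than five" to properties like adversarial robustness of a network.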
Original language: English
Number of pages: 19
Publication status: Accepted/In press - 20 Sep 2020
Event: The 18th Asian Symposium on Programming Languages and Systems - Online
Duration: 29 Nov 2020 - 3 Dec 2020
Conference number: 18th

Conference

Conference: The 18th Asian Symposium on Programming Languages and Systems
Abbreviated title: APLAS 2020
Period: 29/11/20 - 3/12/20

Keywords

  • neural networks
  • verification
  • refinement types


Cite this

Kokke, W., Komendantskaya, E., Kienitz, D., Atkey, B., & Aspinall, D. (Accepted/In press). Neural networks, secure by construction: an exploration of refinement types. Paper presented at The 18th Asian Symposium on Programming Languages and Systems.