Automated verification and refutation of quantized neural networks

dc.contributor.advisor1: Cordeiro, Lucas Carvalho
dc.contributor.advisor1Lattes: http://lattes.cnpq.br/5005832876603012 (eng)
dc.contributor.referee1: Lima Filho, Eddie Batista de
dc.contributor.referee2: Santos, Eulanda Miranda dos
dc.creator: Sena, Luiz Henrique Coelho
dc.creator.Lattes: http://lattes.cnpq.br/1493664223350422 (eng)
dc.date.issued: 2022-03-04
dc.description.abstract: Artificial Neural Networks (ANNs) are being deployed in an increasing number of safety-critical applications, including autonomous cars and medical diagnosis. However, concerns about their reliability have been raised due to their black-box nature and apparent fragility to adversarial attacks. These concerns are amplified when ANNs are deployed on restricted systems, which limit the precision of mathematical operations and thus introduce additional quantization errors. Here, we develop and evaluate a novel symbolic verification framework based on software model checking (SMC) and satisfiability modulo theories (SMT) to check for vulnerabilities in ANNs, mainly Multilayer Perceptrons (MLPs). More specifically, we propose several ANN-related optimizations for SMC, including invariant inference via interval analysis, slicing, expression simplification, and discretization of non-linear activation functions. With this verification framework, we can provide formal guarantees on the safe behavior of ANNs implemented in both floating- and fixed-point arithmetic. The approach was able to verify and produce adversarial examples for 52 test cases spanning image classification and general machine learning applications. Furthermore, for small- to medium-sized ANNs, it completes most of its verification runs in minutes. Moreover, in contrast to most state-of-the-art methods, it is not restricted to specific choices of activation functions or non-quantized representations. Experiments show that this approach can analyze larger ANN implementations and substantially reduce the verification time compared to state-of-the-art techniques that use SMT solving. (eng)
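The invariant inference via interval analysis mentioned in the abstract can be illustrated with a minimal sketch (not the dissertation's implementation): propagating input intervals through one dense layer with a ReLU activation to obtain sound output bounds. The weights, biases, and input ranges below are made-up values for illustration only.

```python
# Sketch of interval bound propagation through y = ReLU(W @ x + b).
# Sound but possibly loose bounds like these can serve as invariants
# that prune the search space of an SMC/SMT verifier.

def layer_bounds(W, b, lo, hi):
    """Propagate per-input intervals [lo, hi] through one ReLU layer."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        acc_lo = acc_hi = bias
        for w, l, h in zip(row, lo, hi):
            # A positive weight keeps bound order; a negative weight swaps it.
            if w >= 0:
                acc_lo += w * l
                acc_hi += w * h
            else:
                acc_lo += w * h
                acc_hi += w * l
        # ReLU clamps both bounds at zero.
        out_lo.append(max(0.0, acc_lo))
        out_hi.append(max(0.0, acc_hi))
    return out_lo, out_hi

# Toy 2-input, 2-neuron layer with inputs in [0, 1] x [0, 1].
W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, -1.0]
lo, hi = layer_bounds(W, b, [0.0, 0.0], [1.0, 1.0])
print(lo, hi)  # -> [0.0, 0.0] [1.0, 0.0]
```

Here the second neuron's upper bound is 0, so interval analysis alone proves it is always inactive on this input region; a verifier can then slice that neuron away.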
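The additional quantization error that the abstract attributes to restricted, fixed-point systems can likewise be shown with a small sketch. The Q4.4 format (4 fractional bits) used here is an arbitrary illustrative choice, not the precision studied in the dissertation.

```python
# Sketch: rounding weights, inputs, and products to a fixed-point grid
# introduces an error on top of ordinary floating-point rounding.

FRAC_BITS = 4
SCALE = 1 << FRAC_BITS  # 16 representable steps per unit in Q4.4

def to_fixed(x):
    """Round a real value to the nearest representable Q4.4 value."""
    return round(x * SCALE) / SCALE

w, x = 0.30, 0.70
exact = w * x                                  # ~0.21 in real arithmetic
quant = to_fixed(to_fixed(w) * to_fixed(x))    # every operand and result quantized
print(exact, quant, abs(exact - quant))        # error is roughly 0.0225
```

An adversarial example that is infeasible in floating point may become feasible (or vice versa) once such errors accumulate across layers, which is why the framework verifies the quantized semantics directly.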
dc.format: application/pdf
dc.identifier.citation: SENA, Luiz Henrique Coelho. Automated Verification and Refutation of Quantized Neural Networks. 2022. 55 f. Dissertação (Mestrado em Engenharia Elétrica) - Universidade Federal do Amazonas, Manaus (AM), 2022. (eng)
dc.identifier.uri: https://tede.ufam.edu.br/handle/tede/8845
dc.language: eng
dc.publisher: Universidade Federal do Amazonas (eng)
dc.publisher.country: Brasil (eng)
dc.publisher.department: Faculdade de Tecnologia (eng)
dc.publisher.initials: UFAM (eng)
dc.publisher.program: Programa de Pós-graduação em Engenharia Elétrica (eng)
dc.rights: Acesso Aberto
dc.subject.cnpq: CIENCIAS EXATAS E DA TERRA (eng)
dc.subject.user: Model Checking (por)
dc.subject.user: Neural Networks (por)
dc.subject.user: Quantized Neural Networks (por)
dc.thumbnail.url: https://tede.ufam.edu.br/retrieve/55649/Disserta%c3%a7%c3%a3o_LuizSena_PPGEE.pdf.jpg
dc.title: Automated verification and refutation of quantized neural networks (eng)
dc.type: Dissertação (eng)

Files

Original bundle

Now showing 1 - 4 of 4

Name: carta_encaminhamento.pdf
Size: 300.95 KB
Format: Internal documents
Description: Cover letter signed by the advisor.

Name: dissertacao_luiz_sena.pdf
Size: 1.29 MB
Format: Internal documents
Description: Main document with the approval sheet and the cataloging record included.

Name: 141ª_ata_de_Julgamento_Luiz_Henrique_Coelho_Sena.pdf
Size: 255.13 KB
Format: Internal documents
Description: Defense minutes.

Name: Dissertação_LuizSena_PPGEE.pdf
Size: 1.3 MB
Format: Adobe Portable Document Format

Bundle license

Now showing 1 - 1 of 1

Name: license.txt
Size: 2.32 KB
Format: Item-specific license agreed upon to submission