Digitala Vetenskapliga Arkivet

Iterativa, gradientbaserade adversariella attacker på bildklassificerande neurala nätverk
KTH, School of Engineering Sciences (SCI).
2019 (Swedish). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
Alternative title:
Iterative, Gradient-Based Adversarial Attacks on Neural Network Image Classifiers (English)
Abstract [sv]

(Translated from Swedish.) Deep neural networks (DNNs) are used in a wide range of tasks such as voice recognition, image classification and spam detection. However, it has been shown that the networks can misclassify an input when a small, carefully chosen perturbation is added to it. Many adversarial machine learning attacks have been proposed to create such perturbations, often with the goal of finding the smallest possible one. In this study, DNNs are trained to classify images. Two adversarial methods (IFGM and DeepFool) are then analyzed and compared in terms of finding the smallest perturbation that causes misclassification, with the least computational effort. The two iterative, gradient-based methods are implemented for four distance metrics: L0, L1, L2 and L-infinity. The surprising result is that even though DeepFool uses a more sophisticated optimization strategy, it does not outperform IFGM. Moreover, IFGM finds a smaller perturbation within the same given time span. IFGM also outperforms DeepFool in the targeted regime. Finally, a fast L0 attack is proposed that strives to change as few pixels as possible to cause misclassification.

Abstract [en]

Deep neural networks (DNNs) are used in a wide range of tasks such as voice recognition, image classification and spam detection. However, it has been shown that the networks can misclassify an input when a small, carefully chosen perturbation is added to it. Many adversarial machine learning attacks have been proposed to create such samples, often with the goal of finding the smallest possible perturbation. In this study, three DNN image classification algorithms are constructed. Two adversarial methods (IFGM and DeepFool) are then analyzed and compared in terms of finding the smallest perturbation that causes the networks to misclassify, with the least computational effort. The two iterative, gradient-based methods are implemented for four distance metrics: L0, L1, L2 and L-infinity. The surprising result is that even though DeepFool uses a more sophisticated optimization strategy, it does not perform significantly better than IFGM. Furthermore, IFGM actually finds a smaller perturbation within the same time budget. In the targeted regime, too, IFGM performs better than DeepFool. Lastly, a fast L0 attack is suggested that strives to perturb as few pixels as possible to cause misclassification.
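The iterative, gradient-based idea behind IFGM can be sketched in a few lines. The following is a rough illustration of the L-infinity variant (repeated signed gradient steps, projected back into an epsilon-ball around the input), not the thesis's implementation: the function names `ifgm_linf` and `grad_fn`, the toy linear model, and all step sizes are assumptions made for this sketch.

```python
# Illustrative sketch of an L-infinity IFGM-style attack (not the thesis code).
# The toy linear "classifier" and all parameter values are assumptions.
import numpy as np

def ifgm_linf(x, y, grad_fn, eps=0.1, alpha=0.01, steps=20):
    """Take repeated signed gradient-ascent steps on the loss and
    clip the accumulated perturbation into an eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv, y)                      # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)         # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay in valid pixel range
    return x_adv

# Toy example: logistic loss on a linear model w.x + b, label y in {-1, +1}.
w = np.array([1.0, -2.0, 0.5])
b = 0.0

def grad_fn(x, y):
    # d/dx of log(1 + exp(-y (w.x + b))) = -y * w * sigmoid(-y (w.x + b))
    z = y * (w @ x + b)
    return -y * w / (1.0 + np.exp(z))

x = np.array([0.5, 0.5, 0.5])
x_adv = ifgm_linf(x, y=1.0, grad_fn=grad_fn, eps=0.05)
```

On this toy model the attack pushes the score `w @ x + b` down for the true label +1 while the perturbation stays within the eps-ball; the DeepFool attack compared in the thesis instead steps toward an estimate of the nearest decision boundary rather than along the raw gradient sign.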

Place, publisher, year, edition, pages
2019.
Series
TRITA-SCI-GRU ; 2019:239
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-255824
OAI: oai:DiVA.org:kth-255824
DiVA, id: diva2:1342233
Available from: 2019-08-13. Created: 2019-08-13. Last updated: 2022-06-26. Bibliographically approved.

Open Access in DiVA

fulltext (1332 kB), 1498 downloads
File information
File name: FULLTEXT02.pdf
File size: 1332 kB
Checksum (SHA-512): 8f5d72b9c6fccd95333eaf99e750abf0500ddd07258e1117265db416d365b6a3940d7219ff54ea170ad7e3c89d873ea14b2704a816c85a5434bb258e7f4cde5a
Type: fulltext
Mimetype: application/pdf
