Adversarial Deep Learning Against Intrusion Detection Classifiers
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
2017 (English). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Traditional approaches in network intrusion detection follow a signature-based approach; however, anomaly detection approaches based on machine learning techniques have been studied heavily for the past twenty years. The continuous change in the way attacks appear, the volume of attacks, and the improvements in the big data analytics space make machine learning approaches more alluring than ever. The intention of this thesis is to show that using machine learning in the intrusion detection domain should be accompanied by an evaluation of its robustness against adversaries.

Several adversarial techniques have emerged lately from deep learning research, largely in the area of image classification. These techniques are based on the idea of introducing small changes in the original input data in order to make a machine learning model misclassify it. This thesis follows a big data analytics methodology and explores adversarial machine learning techniques that have emerged from the deep learning domain against machine learning classifiers used for network intrusion detection.

The study looks at several well-known classifiers and studies their performance under attack over several metrics, such as accuracy, F1-score and receiver operating characteristic. The approach used assumes no knowledge of the original classifier and examines both general and targeted misclassification. The results show that, using relatively simple methods for generating adversarial samples, it is possible to lower the detection accuracy of intrusion detection classifiers by 5% to 28%. Performance degradation is achieved using a methodology that is simpler than previous approaches and requires only a 6.25% change between the original and the adversarial sample, making it a candidate for a practical adversarial approach.
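The perturbation idea the abstract describes — introducing small changes to input features so a trained model misclassifies the sample — can be sketched with the fast gradient sign method (FGSM), one of the adversarial techniques that emerged from deep learning research. Note that the thesis itself assumes no knowledge of the classifier (a black-box setting); the sketch below is white-box, uses a toy logistic-regression "detector" with hypothetical weights, and is meant only to illustrate the underlying mechanism, not the thesis's own method.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(v, w, b):
    """Detector's confidence that sample v is malicious (class 1)."""
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)) + b)

def fgsm(x, y, w, b, eps):
    """Perturb each feature of x by eps in the direction that
    increases the cross-entropy loss for the true label y."""
    p = score(x, w, b)
    # Analytic gradient of the loss w.r.t. the input, for logistic regression.
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical detector weights and a "malicious" sample (label 1).
w, b = [1.5, -2.0, 0.7], 0.1
x, y = [0.8, 0.2, 0.5], 1

x_adv = fgsm(x, y, w, b, eps=0.2)
print(round(score(x, w, b), 3), round(score(x_adv, w, b), 3))
# The detector's confidence on the adversarial sample is lower,
# even though no feature moved by more than eps = 0.2.
```

The bounded per-feature change mirrors the abstract's point that only a small difference (6.25% in the thesis's experiments) between the original and the adversarial sample is needed to degrade detection.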

Place, publisher, year, edition, pages
2017, p. 57
Keywords [en]
adversarial machine learning, intrusion detection
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:ltu:diva-64577
OAI: oai:DiVA.org:ltu-64577
DiVA id: diva2:1116037
Subject / course
Student thesis, at least 30 credits
Educational program
Information Security, master's level (120 credits)
Available from: 2017-07-06. Created: 2017-06-27. Last updated: 2017-07-06. Bibliographically approved.

Open Access in DiVA

fulltext (2583 kB), 554 downloads
File information
File name: FULLTEXT01.pdf
File size: 2583 kB
Checksum: SHA-512
c8267ca44c6e5f30fb0ee899fc1888d204fd2eb56496cde95dc94807f9a234aa01b56c9cf93fa66287bc97759c1c14f690619d20f22d1408ef4232f170bb15b1
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor
Rigaki, Maria
By organisation
Computer Science
Computer Systems

Search outside of DiVA

Google, Google Scholar
Total: 554 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.

Total: 2319 hits