
dc.contributor.author: Papadopoulos, Pavlos
dc.contributor.author: Thornewill von Essen, Oliver
dc.contributor.author: Pitropakis, Nikolaos
dc.contributor.author: Chrysoulas, Christos
dc.contributor.author: Mylonas, Alexios
dc.contributor.author: Buchanan, William J.
dc.date.accessioned: 2021-05-13T10:45:01Z
dc.date.available: 2021-05-13T10:45:01Z
dc.date.issued: 2021-06
dc.identifier.citation: Papadopoulos, P., Thornewill von Essen, O., Pitropakis, N., Chrysoulas, C., Mylonas, A. & Buchanan, W. J. 2021, 'Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT', Journal of Cybersecurity and Privacy, vol. 1, no. 2, 1020014, pp. 252-273. https://doi.org/10.3390/jcp1020014
dc.identifier.other: ArXiv: http://arxiv.org/abs/2104.12426v1
dc.identifier.other: ORCID: /0000-0001-8819-5831/work/93854164
dc.identifier.uri: http://hdl.handle.net/2299/24485
dc.description: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/
dc.description.abstract: As the internet continues to be populated with new devices and emerging technologies, the attack surface grows exponentially. Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought. Traditional defence approaches are no longer sufficient to detect both known and unknown attacks with high accuracy. Machine learning intrusion detection systems have proven their success in identifying unknown attacks with high precision. Nevertheless, machine learning models are also vulnerable to attacks. Adversarial examples can be used to evaluate the robustness of a designed model before it is deployed. Further, using adversarial examples is critical to creating a robust model designed for an adversarial environment. Our work evaluates the robustness of both traditional machine learning and deep learning models using the Bot-IoT dataset. Our methodology included two main approaches: first, label poisoning, used to cause the model to misclassify; second, the fast gradient sign method, used to evade detection measures. The experiments demonstrated that an attacker could manipulate or circumvent detection with significant probability.
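The fast gradient sign method named in the abstract perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. As a minimal sketch (not the authors' code), assuming a hypothetical two-feature logistic-regression detector where the input gradient of the binary cross-entropy loss has the closed form (sigmoid(w·x + b) − y)·w:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast gradient sign method against a logistic-regression detector.

    For binary cross-entropy, the gradient of the loss w.r.t. the input is
    (sigmoid(w.x + b) - y) * w; FGSM adds eps times the sign of that
    gradient to each feature. Illustrative only - real NIDS models are
    far larger, but the perturbation rule is the same.
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    residual = sigmoid(score) - y        # scalar factor of the gradient
    def sign(v):
        return (v > 0) - (v < 0)
    return [xi + eps * sign(residual * wi) for wi, xi in zip(w, x)]

# Hypothetical detector: flags traffic as an attack when w.x + b > 0.
w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]                           # scores 1.5 -> flagged as attack
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=1.0)
# x_adv scores -1.5: the perturbed flow now evades the detector.
```

The weights, features, and epsilon above are invented for illustration; in the paper's setting the same rule is applied to features of Bot-IoT network flows against trained detection models.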
dc.format.extent: 22
dc.format.extent: 1047624
dc.language.iso: eng
dc.relation.ispartof: Journal of Cybersecurity and Privacy
dc.subject: adversarial
dc.subject: Internet of Things
dc.subject: machine learning
dc.subject: network IDS
dc.subject: Computer Science (miscellaneous)
dc.subject: Artificial Intelligence
dc.title: Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT
dc.contributor.institution: Department of Computer Science
dc.contributor.institution: School of Physics, Engineering & Computer Science
dc.description.status: Peer reviewed
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85109528727&partnerID=8YFLogxK
rioxxterms.versionofrecord: 10.3390/jcp1020014
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true

