Generation and Detection of Adversarial Attacks in the Power Grid
2022 (English) Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits
Student thesis
Abstract [en]
Machine learning models are vulnerable to adversarial attacks that add perturbations to the input data. Here we model and simulate power flow in a power grid test case and generate adversarial attacks on these measurements in three different ways, in order to compare how attacks of different sizes, constructed with varying levels of knowledge of the model, affect how often the attacks are detected. In the first method the attacker has full knowledge of the model, in the second the attacker has access only to the model's measurements, and in the third the attacker has no knowledge of the model. Comparing these methods by how often they are flagged by a residual-based detection scheme shows that a data-driven attack knowing only the measurements is sufficient to introduce an error without being detected. Using a linearized version of the state estimation is shown to be insufficient for generating attacks with full knowledge of the system, and further research is needed to compare the performance of full-knowledge attacks against data-driven attacks. The attacks generated without any knowledge of the system perform poorly and are easily detected.
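The residual-based detection idea from the abstract can be illustrated with a small sketch. This is not the thesis's actual test case; the measurement matrix H, the noise level, the threshold tau, and the attack vectors are all hypothetical. It shows the standard result for linearized state estimation: an attack of the form a = Hc lies in the column space of H, leaves the measurement residual unchanged, and thus evades a residual-norm detector, while a blind random attack of similar size inflates the residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized (DC) measurement model: z = H x + noise.
# 6 measurements, 3 state variables; none of these numbers come from the thesis.
H = rng.normal(size=(6, 3))
x_true = rng.normal(size=3)
z = H @ x_true + 0.01 * rng.normal(size=6)

def residual_norm(meas):
    # Least-squares state estimate x_hat, then residual r = z - H x_hat.
    x_hat, *_ = np.linalg.lstsq(H, meas, rcond=None)
    return np.linalg.norm(meas - H @ x_hat)

tau = 0.1  # illustrative detection threshold

# Full-knowledge ("stealth") attack: a = H c shifts the state estimate
# by c but leaves the residual exactly unchanged.
c = np.array([0.5, -0.3, 0.2])
stealthy = z + H @ c

# No-knowledge attack: a random perturbation of comparable magnitude,
# which generically has a large component outside the column space of H.
blind = z + 0.5 * rng.normal(size=6)

print("baseline residual:", residual_norm(z))
print("stealth residual: ", residual_norm(stealthy))
print("blind residual:   ", residual_norm(blind))
```

The key point is the middle line: because the estimator absorbs Hc into the state estimate, the stealth residual equals the baseline residual to machine precision, so no threshold can separate them; the random attack is caught because its out-of-column-space component shows up directly in r.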
Place, publisher, year, edition, pages
2022.
Series
UPTEC F, ISSN 1401-5757 ; 22048
Keywords [en]
Machine Learning, Adversarial Learning, Power Systems, State Estimation, Detectability Constraints
National Category
Computer Engineering; Information Systems; Computer Sciences
Identifiers
URN: urn:nbn:se:uu:diva-479474
OAI: oai:DiVA.org:uu-479474
DiVA, id: diva2:1679176
Educational program
Master Programme in Engineering Physics
Supervisors
Examiners
2022-07-01, 2022-06-30, 2022-07-06. Bibliographically approved