Digitala Vetenskapliga Arkivet

Bridging the Gap: Enhancing Explainability in ROCKET
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
2024 (English). Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

The study focuses on using the RandOm Convolutional KErnel Transform (ROCKET) classifier in machine learning for time series, particularly highlighting its generation and use of 'random kernels', which makes it a 'black box' method. By applying Explainable Artificial Intelligence (XAI), the research aims to make this aspect of the model more transparent and understandable. We conducted experiments on the GunPoint dataset to analyze the effect of SHAP values and other intrinsic XAI methods on the algorithm's explainability and transparency. The methods included preprocessing the data, training a ridge regression classifier, and evaluating the model's performance using the Faithfulness and Robustness metrics. The experiments showed that applying XAI methods, such as SHapley Additive exPlanations (SHAP) values and segmentation of key features, enhanced the model's transparency and enabled detailed insights into how various data segments influence the model's predictions. However, the results showed varying Faithfulness values, indicating that although the explanations are stable, they are not always accurate in identifying the most influential data segments. This research highlights the importance of continuing to develop and refine XAI tools to improve their precision and relevance in practical applications. By improving these methods' ability to identify and explain influential data segments accurately, we can increase trust in, and the accessibility of, complex machine learning models. This is especially important in areas where accurate and transparent decision-making is critical.
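The pipeline the abstract describes — random convolutional kernels, two features per kernel, a ridge classifier on top — can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the thesis code: the kernel parameters, the toy two-class series standing in for GunPoint, and all helper names are assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(0)

def make_kernels(n_kernels, input_len):
    """Draw ROCKET-style random kernels: random length, normal weights,
    random bias, and an exponentially sampled dilation."""
    kernels = []
    for _ in range(n_kernels):
        length = int(rng.choice([7, 9, 11]))
        weights = rng.normal(size=length)
        weights -= weights.mean()               # mean-centred, as in ROCKET
        bias = rng.uniform(-1.0, 1.0)
        max_dil = max((input_len - 1) // (length - 1), 1)
        dilation = int(2 ** rng.uniform(0, np.log2(max_dil)))
        kernels.append((weights, bias, dilation))
    return kernels

def apply_kernel(x, weights, bias, dilation):
    """Dilated 'valid' convolution; return the two ROCKET features:
    the maximum activation and the proportion of positive values (PPV)."""
    span = (len(weights) - 1) * dilation
    acts = np.array([
        bias + np.dot(weights, x[i:i + span + 1:dilation])
        for i in range(len(x) - span)
    ])
    return acts.max(), (acts > 0).mean()

def transform(X, kernels):
    return np.array([[f for k in kernels for f in apply_kernel(x, *k)]
                     for x in X])

# Synthetic two-class data (a stand-in for GunPoint): class 1 adds a sharp bump.
t = np.linspace(0.0, 1.0, 100)
def make_series(label):
    x = np.sin(2 * np.pi * 3 * t) + rng.normal(scale=0.2, size=t.size)
    if label == 1:
        x = x + 2.0 * np.exp(-((t - 0.5) ** 2) / 0.002)
    return x

y = np.array([i % 2 for i in range(40)])
X = np.stack([make_series(lbl) for lbl in y])

kernels = make_kernels(50, X.shape[1])
F = transform(X, kernels)                       # 2 features per kernel
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(F[:30], y[:30])
acc = clf.score(F[30:], y[30:])
```

A post-hoc explainer such as SHAP would then be applied to the feature matrix `F` (or, with a segmentation step, to windows of the raw series) to attribute the ridge classifier's decisions back to individual data segments.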

Place, publisher, year, edition, pages
2024.
Keywords [en]
Interpretability, Machine Learning, Model Transparency, ROCKET, SHAP, Time Series Classification, XAI
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:su:diva-242661
OAI: oai:DiVA.org:su-242661
DiVA, id: diva2:1955552
Available from: 2025-04-30 Created: 2025-04-30

Open Access in DiVA

fulltext (1828 kB), 21 downloads
File information
File name: FULLTEXT01.pdf
File size: 1828 kB
Checksum: SHA-512
97ae2b559188fe37d12bbc9f5a38914ffcd5021a9ecc168c62ed20b393f38042574818b5aa57cff4de2c6eb1f2bd624a7ce3a0c91a608f0cfd41a89554cf0fcb
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor
Tikabo, Kamal; Touray, Pamodou
By organisation
Department of Computer and Systems Sciences
Computer Engineering

Total: 21 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
