The study focuses on the RandOm Convolutional KErnel Transform (ROCKET) classifier for time series machine learning, particularly its generation and use of random kernels, which makes it a 'black box' method. By applying Explainable Artificial Intelligence (XAI), the research aims to make this aspect of the model more transparent and understandable. We conducted experiments on the GunPoint dataset to analyze the effect of SHAP values and other intrinsic XAI methods on the algorithm's explainability and transparency. The methodology included preprocessing the data, training a ridge regression classifier on the ROCKET features, and evaluating the resulting explanations using the metrics Faithfulness and Robustness. The experiments showed that applying XAI methods, such as Shapley Additive exPlanations (SHAP) values and segmentation of key features, enhanced the model's transparency and enabled detailed insight into how different data segments influence the model's predictions. However, the results showed varying Faithfulness values, indicating that although the explanations are stable, they do not always accurately identify the most influential data segments. This research highlights the importance of continuing to develop and refine XAI tools to improve their precision and relevance in practical applications. By improving these methods' ability to accurately identify and explain influential data segments, we can increase trust in and accessibility of complex machine learning models. This is especially important in areas where accurate and transparent decision-making is critical.
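To make the described pipeline concrete, the following is a minimal sketch assuming the sktime and shap libraries. The reduced kernel count, the random seed, and the choice of a linear SHAP explainer over the ROCKET feature space are illustrative assumptions; the study's segment-level aggregation and the Faithfulness/Robustness evaluation are not shown here.

```python
# Illustrative sketch only: ROCKET features -> ridge classifier -> SHAP values.
import numpy as np
import shap
from sklearn.linear_model import RidgeClassifierCV
from sktime.datasets import load_gunpoint
from sktime.transformations.panel.rocket import Rocket

# Load the GunPoint train/test splits.
X_train, y_train = load_gunpoint(split="train", return_X_y=True)
X_test, y_test = load_gunpoint(split="test", return_X_y=True)

# Transform the series with random convolutional kernels
# (a reduced kernel count keeps this sketch fast; the seed is arbitrary).
rocket = Rocket(num_kernels=1000, random_state=0)
F_train = rocket.fit_transform(X_train)
F_test = rocket.transform(X_test)

# Ridge regression classifier on the ROCKET features, as in the study.
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(F_train, y_train)
print("test accuracy:", clf.score(F_test, y_test))

# SHAP values for the linear classifier, computed over the ROCKET features;
# mapping these attributions back to time-series segments is a separate step.
explainer = shap.LinearExplainer(clf, F_train)
shap_values = explainer.shap_values(F_test)
```

Note that the SHAP values in this sketch attribute the prediction to individual ROCKET features rather than to raw time points; relating them to segments of the original series requires an additional aggregation step, as discussed in the study.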