A comparative study of deep learning-based network intrusion detection system with explainable artificial intelligence

International Journal of Electrical and Computer Engineering

Abstract

In the rapidly evolving landscape of cybersecurity, robust network intrusion detection systems (NIDS) are crucial to countering increasingly sophisticated cyber threats, including zero-day attacks. Deep learning approaches in NIDS offer promising improvements in intrusion detection rates and reductions in false positives. However, the inherent opacity of deep learning models presents significant challenges, hindering understanding of and trust in their decision-making processes. This study explores the efficacy of explainable artificial intelligence (XAI) techniques, specifically Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME), in enhancing the transparency and trustworthiness of NIDS. A TabNet architecture implemented on the AWID3 dataset achieves a remarkable accuracy of 99.99%. Despite this high performance, concerns regarding the interpretability of the TabNet model's decisions persist. By employing SHAP and LIME, this study aims to elucidate model interpretability, addressing both the global and local aspects of the TabNet model's decision-making processes. Ultimately, this study underscores the pivotal role of XAI in improving understanding and fostering trust in deep learning-based NIDS. The robustness of the model is also evaluated by injecting noise into the datasets at varying signal-to-noise ratios (SNR).
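
The explainability workflow the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the pytorch-tabnet, shap, and lime packages, and synthetic scikit-learn data stands in for the AWID3 dataset; the hyperparameters are placeholders, not the study's settings.

    # Minimal sketch: a TabNet classifier explained globally with SHAP
    # and locally with LIME. Synthetic data stands in for AWID3.
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from pytorch_tabnet.tab_model import TabNetClassifier
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    # Placeholder feature vectors and attack/benign labels.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = TabNetClassifier(verbose=0)
    clf.fit(X_train, y_train, eval_set=[(X_test, y_test)], max_epochs=20)

    # Global view: SHAP values estimated against a background sample.
    background = shap.sample(X_train, 100)
    explainer = shap.KernelExplainer(clf.predict_proba, background)
    shap_values = explainer.shap_values(X_test[:10])

    # Local view: a LIME explanation for a single test instance.
    lime_explainer = LimeTabularExplainer(X_train, mode="classification")
    local_exp = lime_explainer.explain_instance(X_test[0], clf.predict_proba,
                                                num_features=5)
    print(local_exp.as_list())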
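The SNR-based robustness check can likewise be sketched. The abstract does not spell out the noise model, so this hypothetical helper assumes additive Gaussian noise scaled to a target SNR in decibels, reusing clf and the test split from the sketch above.

    import numpy as np

    def add_noise_at_snr(X, snr_db, seed=0):
        """Corrupt a feature matrix with additive Gaussian noise whose
        power satisfies 10 * log10(signal_power / noise_power) == snr_db."""
        rng = np.random.default_rng(seed)
        signal_power = np.mean(X ** 2)
        noise_power = signal_power / (10 ** (snr_db / 10))
        return X + rng.normal(0.0, np.sqrt(noise_power), size=X.shape)

    # Lower SNR means heavier corruption; re-evaluating accuracy at each
    # level traces out the model's robustness curve.
    for snr_db in (30, 20, 10):
        X_noisy = add_noise_at_snr(X_test, snr_db)
        print(snr_db, "dB:", (clf.predict(X_noisy) == y_test).mean())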
