A comprehensive review of interpretable machine learning techniques for phishing attack detection
International Journal of Artificial Intelligence
Abstract
Phishing attacks remain a significant and evolving threat in the digital landscape, demanding continual advancements in detection methodologies. This paper emphasizes the importance of interpretable machine learning models for enhancing transparency and trustworthiness in phishing detection systems. It begins with an overview of phishing attacks, their increasing sophistication, and the challenges faced by conventional detection techniques. A range of interpretable machine learning approaches is surveyed, including rule-based models, decision trees, additive models, and post-hoc attribution methods such as Shapley additive explanations (SHAP). Their applicability to phishing detection is analyzed in terms of computational efficiency, prediction accuracy, and interpretability. The study also explores ways to integrate these methods into existing detection systems to enhance functionality and user experience. By providing insight into the decision-making processes of detection models, interpretable machine learning facilitates human supervision and intervention, strengthening overall system reliability. The paper concludes by outlining future research directions, such as improving the scalability, accuracy, and adaptability of interpretable models to detect emerging phishing techniques. Integrating these models with real-time threat intelligence and deep learning approaches could boost accuracy while preserving transparency. Additionally, user-centric explanations and human-in-the-loop systems may further enhance trust, usability, and resilience in phishing detection frameworks.
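To make the idea of an inherently interpretable detector concrete, the sketch below shows a minimal rule-based phishing URL classifier of the kind the abstract's first category describes. The feature names, rules, and thresholds here are hypothetical illustrations, not taken from the paper; the point is that every prediction comes with the exact rules that fired, which is what enables human supervision and intervention.

```python
# Minimal sketch of a rule-based phishing URL classifier.
# All features, rules, and thresholds are illustrative assumptions,
# not the techniques evaluated in the surveyed literature.

def extract_features(url):
    host = url.split("//")[-1].split("/")[0]
    return {
        "length": len(url),
        "has_at": "@" in url,
        # Crude heuristic: treat a host made of numeric labels as an IP address.
        "has_ip": all(part.isdigit() for part in host.split(".")),
        "num_hyphens": url.count("-"),
    }

# Each rule pairs a human-readable description with a predicate,
# so the model's logic is directly inspectable.
RULES = [
    ("contains '@' symbol", lambda f: f["has_at"]),
    ("host is an IP address", lambda f: f["has_ip"]),
    ("URL longer than 75 characters", lambda f: f["length"] > 75),
    ("more than 3 hyphens", lambda f: f["num_hyphens"] > 3),
]

def classify(url):
    """Return (label, fired_rules): every prediction carries its explanation."""
    features = extract_features(url)
    fired = [name for name, test in RULES if test(features)]
    return ("phishing" if fired else "legitimate", fired)

label, reasons = classify("http://192.168.0.1/secure-login-update-account-verify")
```

Unlike a black-box score, the returned `reasons` list lets an analyst verify or override each flagged URL, which is the transparency property the survey argues for.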