Explainable social media disaster image classification using a lightweight attention-based deep learning approach

International Journal of Artificial Intelligence

Abstract

In recent years, the rapid dissemination of social media content during natural and man-made disasters has created a need for automated, accurate disaster image classification systems. This paper proposes the lightweight explainable attention-based disaster network (LEAD-Net), a deep learning (DL) model designed to classify disaster-related images with high accuracy and interpretability. The system integrates an EfficientNet-B0 backbone enhanced with squeeze-and-excitation (SE) attention modules and a lightweight neural architecture search (NAS-lite) strategy for tuning the classifier head and training hyperparameters. The model was evaluated on two benchmark datasets, the comprehensive disaster dataset (CDD) and the damage multimodal dataset (DMD), achieving 96% and 87% accuracy, respectively, and outperforming several established convolutional neural network (CNN) baselines. To ensure transparency, gradient-weighted class activation mapping (Grad-CAM) was employed to generate visual explanations of the model's decisions, confirming that it attends to semantically relevant image regions.
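The SE attention modules mentioned above follow a squeeze-excitation pattern: global-average-pool each channel, pass the result through a small bottleneck network, and re-scale the feature maps with the resulting per-channel weights. A minimal NumPy sketch of that mechanism, assuming an illustrative reduction ratio and randomly shaped bottleneck weights rather than the paper's exact configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation attention over feature maps x of shape (B, C, H, W).

    w1 (C, C//r) and w2 (C//r, C) are hypothetical bottleneck weights;
    in a trained network they are learned parameters.
    """
    s = x.mean(axis=(2, 3))                  # squeeze: global average pool -> (B, C)
    e = sigmoid(np.maximum(s @ w1, 0) @ w2)  # excitation: ReLU bottleneck, sigmoid gate
    return x * e[:, :, None, None]           # re-scale each channel by its weight in (0, 1)
```

Because the gate values lie in (0, 1), the block can only attenuate channels, which is how it emphasises informative features relative to the rest.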
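Grad-CAM, used above for the visual explanations, weights a convolutional layer's activation maps by the spatially averaged gradients of the target class score and keeps only the positive evidence. A minimal sketch of that combination step, assuming the activations and gradients have already been extracted from the network:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations: feature maps of shape (C, H, W)
    gradients:   d(class score)/d(activations), same shape
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    alpha = gradients.mean(axis=(1, 2))       # per-channel importance weights
    cam = (alpha[:, None, None] * activations).sum(axis=0)
    cam = np.maximum(cam, 0)                  # ReLU: keep positive class evidence
    if cam.max() > 0:
        cam /= cam.max()                      # scale to [0, 1] for visualisation
    return cam
```

The heatmap is typically upsampled to the input resolution and overlaid on the image, which is how one checks that the model focuses on semantically relevant regions.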
