Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

29,922 Article Results

Enhancing predictive maintenance capabilities by integrating artificial intelligence: systematic review

10.11591/ijeecs.v41.i2.pp782-790
Thippeswamy G. N , Neelambike S , Sanjay Pande M. B
Organizations are under pressure to increase productivity and lower operating costs because facility operations and maintenance (O&M) account for a significant portion of a facility's life-cycle cost. By facilitating real-time monitoring and data-driven decision-making, artificial intelligence (AI) has become a promising catalyst for enhancing predictive maintenance. To investigate how AI can be combined with predictive maintenance to lower operational and maintenance overhead, this systematic review examines peer-reviewed studies published in the last five years. Using an evidence-based review methodology and adaptive structuration theory (AST), the study synthesized results from 14 high-quality publications. Three key themes emerged: unbiased maintenance planning, cost-effective resource utilization, and AI-enabled operational visibility. The review finds that AI-driven predictive maintenance greatly increases operational effectiveness and reduces costs; however, successful implementation necessitates better data governance and organizational preparedness.
Volume: 41
Issue: 2
Page: 782-790
Publish at: 2026-02-01

Contextualized clinical anomaly detection with explainable AI and patient modeling

10.11591/ijeecs.v41.i2.pp614-623
Amel Elketroussi , Bachir Djebbar , Ibtissem Bekkouche
This study aims to reduce alarm fatigue and improve the clinical relevance of alerts in intensive care by combining sequential modeling, patient contextualization, explainable artificial intelligence (XAI), and probability calibration. To this end, we leverage the adult cohorts from MIMIC-III/IV, segmented into four-hour windows, explicitly handling missing data and constructing a context vector that integrates demographics, comorbidities, and therapeutic interventions. The approach relies on a tabular autoencoder, a long short-term memory (LSTM) autoencoder, and a transformer, complemented by an adjustment layer based on auditable clinical rules, local explanations (LIME/SHAP), and post-hoc calibration (temperature scaling). Evaluation involves receiver operating characteristic (ROC)/precision-recall (PR) area under the curve (AUC), F1-score, sensitivity and specificity, as well as calibration metrics (expected calibration error (ECE), Brier score), alert burden, ablation studies, robustness tests, and subgroup fairness analyses. Across all experiments, the complete model (+Context+XAI+Calibration) outperforms baselines in AUPRC and F1, reduces alert burden, and improves calibration while providing understandable explanations. Specifically, the proposed model improves ROC AUC from 0.74 to 0.89 and reduces alert burden by approximately one third compared to clinical thresholds.
Volume: 41
Issue: 2
Page: 614-623
Publish at: 2026-02-01
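The post-hoc calibration step named in the abstract above, temperature scaling, is simple enough to sketch. The grid-search fit below is an illustrative stand-in for the paper's procedure (real implementations usually optimize T by gradient descent on a validation set), not the authors' code:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def calibrate(logits, temperature):
    # Temperature scaling: divide logits by T before the sigmoid;
    # T > 1 softens overconfident probabilities, T < 1 sharpens them.
    return [sigmoid(z / temperature) for z in logits]

def fit_temperature(logits, labels, grid=None):
    # Pick the T that minimizes negative log-likelihood on a held-out set
    # (simple grid search for illustration).
    grid = grid or [0.5 + 0.1 * i for i in range(30)]
    def nll(t):
        probs = calibrate(logits, t)
        return -sum(math.log(p if y else 1.0 - p) for p, y in zip(probs, labels))
    return min(grid, key=nll)
```

Because temperature scaling rescales all logits by one scalar, it changes confidence without changing the model's ranking, so ROC AUC is unaffected while ECE can improve.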

The Bender’s decomposition model to optimize temporary waste disposal sites based on general algebraic modeling system

10.11591/ijeecs.v41.i2.pp666-679
Sisca Octarina , Fitri Maya Puspita , Endro Setyo Cahyono , Evi Yuliza , Pebriyanti Simanjuntak , Siti Suzlin Supadi
Waste constitutes a substantial problem in urban and residential areas, as the volume of refuse escalates in tandem with population growth, deteriorating community quality of life. One solution to this problem is to provide temporary waste disposal sites (TWDS). This research discusses optimizing TWDS in the Sukarami Subdistrict, Palembang City, which consists of seven villages. The current TWDS in the Sukarami Subdistrict are irregularly placed, with some sites located close together and others far apart. The optimization problem is solved by formulating set covering problem (SCP) models, namely the set covering location problem (SCLP), the p-Median problem, and the Bender's decomposition model. All models were solved using the general algebraic modeling system (GAMS) software. The research introduces a Bender's decomposition model based on the SCLP model. The Sukarami Subdistrict has 29 TWDS located in only five villages. Using the SCLP and Bender's decomposition models, the study identified 19 optimal TWDS; based on the solution of the p-Median problem, seven TWDS can meet each village's demand. This study recommends the optimal TWDS obtained from the Bender's decomposition model, and additionally recommends adding two TWDS, one each in Sukodadi and Talang Betutu villages.
Volume: 41
Issue: 2
Page: 666-679
Publish at: 2026-02-01
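The set covering objective behind the study above can be illustrated with a toy greedy heuristic. The actual paper solves exact SCLP, p-Median, and Bender's decomposition models in GAMS; this pure-Python sketch, with invented site/demand data, only shows what "cover every demand point with the fewest sites" means:

```python
def greedy_set_cover(universe, candidates):
    # candidates: site -> set of demand points it covers.
    # Repeatedly pick the site covering the most still-uncovered points.
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        gained = candidates[best] & uncovered
        if not gained:
            raise ValueError("universe cannot be covered by the candidate sites")
        chosen.append(best)
        uncovered -= gained
    return chosen

# Toy data: 7 demand points (villages), 3 candidate TWDS sites.
sites = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6, 7}}
picked = greedy_set_cover(range(1, 8), sites)
```

Exact formulations such as SCLP guarantee optimality, which the greedy heuristic does not; that guarantee is why the paper relies on mathematical programming rather than a heuristic like this one.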

Deep feature-based multi-class Alzheimer’s disease classification with statistical performance evaluation

10.11591/ijai.v15.i1.pp695-706
Maysaloon Abed Qasim , Marwa Mawfaq Mohamedsheet Al-Hatab , Lubab H. Albak
This study evaluated the performance of multiple machine learning classifiers for the classification of Alzheimer's disease (AD) stages using deep features extracted from a pre-trained SqueezeNet model. Magnetic resonance imaging (MRI) scans were processed through SqueezeNet to generate high-dimensional feature vectors, which were then used as input to six classifiers: k-nearest neighbors (KNN), decision tree (DT), support vector machine (SVM), neural network (NN), naive Bayes (NB), and logistic regression (LR). Models were assessed using a 70/30% training-testing split and 5-, 10-, and 20-fold stratified cross-validation. Principal component analysis (PCA) was applied to retain 99% of variance. On the original dataset of 6,400 images, KNN achieved 97.48% accuracy and 0.998 area under the curve (AUC); on a larger dataset of 44,000 images, it achieved 94.78% accuracy and an AUC of 0.987, demonstrating the system's robustness across scales. Statistical tests, including paired t-tests and Wilcoxon signed-rank tests, confirmed that KNN benefited significantly from PCA. These outcomes demonstrate that combining deep feature extraction with PCA improved the reliability and efficiency of the classifier for AD stage prediction.
Volume: 15
Issue: 1
Page: 695-706
Publish at: 2026-02-01
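The KNN classifier that performed best in the study above is easy to sketch over feature vectors. This minimal pure-Python version (Euclidean distance, majority vote, toy two-class data) stands in for the full pipeline with SqueezeNet features and PCA:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs; classify the query by
    # majority vote among its k nearest neighbours by Euclidean distance.
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy feature vectors for two hypothetical classes (CN vs. AD).
train = [([0.0, 0.0], "CN"), ([0.0, 1.0], "CN"),
         ([5.0, 5.0], "AD"), ([5.0, 6.0], "AD")]
```

Because KNN distances are computed over every feature, PCA's dimensionality reduction directly cuts its prediction cost, which is consistent with the efficiency gains the abstract reports.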

Detection and forecasting of mental health disorders using machine learning models on social media data

10.11591/ijai.v15.i1.pp672-680
Chaithra Indavara Venkateshagowda , Roopashree Hejjajji Ranganathasharma , Yogeesh Ambalagere Chandrashekaraiah , Narve Lakshminarayan Taranath
The detection and classification of depression and other mental disorders have become crucial in the modern era, particularly with the growing reliance on social media for self-expression. Existing systems often face challenges such as limited prediction accuracy, difficulty forecasting future mental illnesses, and handling both clinical and non-clinical data. This study proposes a novel analytical model that not only screens individuals' current mental health status from social media content but also predicts the likelihood of future mental health issues. The proposed methodology integrates classical machine learning (ML) models, ensemble learning approaches, and pre-trained models for enhanced detection and forecasting accuracy. The results show that pre-trained language models achieved the highest F1-score and significantly better overall performance than conventional ML and ensemble models. The system outperforms existing methods, achieving 90.9% overall accuracy: a 7.2% improvement over traditional ML classifiers, 5.8% over ensemble models, and 11.3% over language models.
Volume: 15
Issue: 1
Page: 672-680
Publish at: 2026-02-01

Multi-scale features assisted knowledge distillation vision transformer for land cover segmentation and classification

10.11591/ijai.v15.i1.pp361-373
Sujata Arjun Gaikwad , Vijaya Musande
The most significant problem in remote sensing interpretation is semantic segmentation, which attempts to assign each pixel in the image a particular class. This work follows several steps: pre-processing, segmentation, and classification. Initially, high-spatial-resolution remote sensing images (RSI) are collected from an open-source dataset. In the pre-processing stage, an improved guided filter (Imp-GF) is used to remove various noises from the images. Next, segmentation is done using a knowledge distillation-based vision transformer integrated with an atrous spatial multi-scale pyramidal module (KD-MuViTPy). Based on the segmented image, land cover classes such as vegetation, urban areas, forest, water bodies, and roads are classified. The proposed method outperformed existing approaches on the Bhuvan satellite dataset, achieving accuracy, precision, recall, F1-score, Dice score, intersection over union (IoU), and Kappa score of 98.01%, 98.99%, 97.49%, 98.23%, 98.23%, 96.55%, and 95.91%, respectively.
Volume: 15
Issue: 1
Page: 361-373
Publish at: 2026-02-01
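Two of the segmentation metrics reported above, Dice score and intersection over union (IoU), reduce to small set operations over pixel labels. The sketch below represents a class's predicted and ground-truth masks as sets of pixel indices:

```python
def iou_and_dice(pred, truth):
    # pred/truth: sets of pixel indices predicted / labelled as one class.
    # IoU = |intersection| / |union|; Dice = 2*|intersection| / (|pred| + |truth|).
    inter = len(pred & truth)
    union = len(pred | truth)
    iou = inter / union if union else 1.0
    dice = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    return iou, dice
```

Dice is always at least as large as IoU for the same masks, which is why papers typically report both: they weight partial overlap differently.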

Single hidden layer feedforward neural networks for indoor air quality prediction

10.11591/ijai.v15.i1.pp322-328
Dwi Marisa Midyanti , Syamsul Bahri , Ilhamsyah Ilhamsyah , Zalikhah Khairunnisa , Hafizhah Insani Midyanti
Indoor air quality (IAQ) has become a concern because it affects human health, comfort, and productivity. Predicting air quality is a complex task due to the simultaneous, dynamic variation of IAQ variables. In this study, single hidden layer feedforward neural network models are used, namely radial basis function (RBF), self-organizing map (SOM)-RBF, and extreme learning machine (ELM), to classify IAQ. The study also observed the effect of the number of neurons in the hidden layer on the accuracy and overfitting of each network. The experimental results show that the number of hidden neurons can affect the accuracy of the RBF and SOM-RBF models. Among the three models, RBF produces very good training accuracy but also the most significant overfitting. The largest overall accuracy, 86.37%, was obtained using SOM-RBF.
Volume: 15
Issue: 1
Page: 322-328
Publish at: 2026-02-01
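The hidden layer of the RBF networks compared above computes Gaussian activations against learned centers. A minimal forward pass looks like this (the centers and gamma value are illustrative, not from the paper):

```python
import math

def rbf_layer(x, centers, gamma):
    # One Gaussian activation per hidden neuron: exp(-gamma * ||x - c||^2).
    # An activation of 1.0 means the input sits exactly on that center.
    return [math.exp(-gamma * sum((xi - ci) ** 2 for xi, ci in zip(x, c)))
            for c in centers]
```

In SOM-RBF, the centers are chosen by a self-organizing map instead of, say, random sampling or k-means, which is the main structural difference between the two models in the study.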

Botnet detection: a system for identifying DGA-based botnets using LightGBM

10.11591/ijeecs.v41.i2.pp833-844
Mumtazimah Mohamad , Nazirah Abd Hamid , Sanaa A. A. Ghaleb , Siti Dhalila Mohd Satar , Suhailan Safei , Wan Mohd Amir Fazamin Wan Hamzah , Lim En En
Botnets present a major challenge for anomaly detection because of domain generation algorithms (DGAs). Botmasters use DGAs to create numerous domain names for communicating with command-and-control servers, complicating detection. Traditional blacklisting struggles to identify anomalous DGA domain names amid the vast number of randomly generated domains, increasing the risk that detection is evaded; although various techniques and attributes have been utilised to categorise different DGA families, the dynamic nature of DGA domain names renders static blacklists ineffective, emphasising the need for machine learning models to improve detection accuracy and enhance cyber defence. This study proposes a robust machine learning-based model for domain name classification to address these challenges. The model leverages the light gradient boosting machine (LightGBM) algorithm and integrates n-gram features to enhance the detection of malicious DGA domains, offering superior accuracy, adaptability, and efficiency and achieving 96% precision when detecting true DGA domains. This system represents a significant advancement in cybersecurity and anomaly detection.
Volume: 41
Issue: 2
Page: 833-844
Publish at: 2026-02-01
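Two features commonly used for DGA detection, and consistent with the n-gram features named above, are character n-grams and the entropy of the domain label. The helpers below are a generic sketch of such feature extraction, not the paper's exact feature set:

```python
import math
from collections import Counter

def char_ngrams(domain, n=2):
    # Character n-grams of the registered label (TLD stripped); these feed
    # a classifier such as LightGBM after vectorisation.
    name = domain.split(".")[0].lower()
    return [name[i:i + n] for i in range(len(name) - n + 1)]

def shannon_entropy(domain):
    # Random-looking DGA names tend to have higher character entropy
    # than human-chosen names.
    name = domain.split(".")[0].lower()
    counts, total = Counter(name), len(name)
    return -sum(c / total * math.log2(c / total) for c in counts.values())
```

Entropy alone is a weak signal (dictionary-based DGAs defeat it), which is why n-gram distributions learned by a boosted-tree model tend to generalise better across DGA families.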

An investigation of different low-power circuits and enhanced energy efficiency in medical applications

10.11591/ijeecs.v41.i2.pp478-493
Prabhu R , Sivakumar Rajagopal
This research investigates the application of low-power circuits in medical devices and imaging systems. The primary goal is to address the growing demand for energy-efficient solutions in medical applications, a need driven by the development of medical technologies, particularly implanted and battery-operated devices. This paper explores the integration of adiabatic logic as a critical enabler for achieving low power consumption in such applications. The study examines several low-power circuit designs and technologies that optimize power usage without sacrificing performance; among these, adiabatic circuits offer a promising substitute for conventional circuitry in low-energy design. Adiabatic logic aims to create energy-efficient digital circuits that consume significantly less power than conventional complementary metal-oxide-semiconductor (CMOS) circuits. This is accomplished by recovering and recycling energy that would otherwise be lost as heat and by carefully controlling energy flows during switching events, making adiabatic logic particularly valuable in battery-operated and energy-constrained devices.
Volume: 41
Issue: 2
Page: 478-493
Publish at: 2026-02-01

A new hybrid model based on machine learning and fuzzy logic for QoS enhancing in IoT

10.11591/ijeecs.v41.i2.pp624-632
Oussama Lagnfdi , Marouane Myyara , Anouar Darif
The fast expansion of internet of things (IoT) devices presents an increasingly complicated scenario for maintaining a stable quality of service (QoS) that guarantees the network's dependable operation. The emergence of ever more complex applications that call for additional devices makes this even more crucial, so adaptive intelligence solutions that guarantee optimal network behavior are required. This paper presents a hybrid optimized solution for a three-layer IoT network that models the application, network, and perception layers using machine learning and fuzzy logic (FL). The method guarantees optimal QoS prediction with improved network adaptability by using fuzzy membership parameters. When the number of devices increases from 100 to 1,500, the hybrid fuzzy logic-genetic algorithm approach (FLGA) keeps average QoS between 95% and 87%, while FL maintains 84% and RANDOM 79%. At the application level, the genetic algorithm (GA) continues to outperform RANDOM by 15.57% and FL by 6.32%. The goal of this paper is to provide a solid network solution that enhances the consistency of QoS performance in the face of increasingly complex IoT network scenarios.
Volume: 41
Issue: 2
Page: 624-632
Publish at: 2026-02-01
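The fuzzy membership parameters mentioned above are commonly triangular functions mapping a crisp value (e.g., network load) to a degree of membership in a linguistic set (e.g., "high"). A generic triangular membership function, not the paper's specific design, looks like this:

```python
def tri_membership(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], rising linearly
    # from a to the peak of 1 at b, then falling linearly to c.
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x == b:
        return 1.0
    return (c - x) / (c - b)
```

In a hybrid FL-GA scheme, the breakpoints (a, b, c) of such functions are natural genes for the genetic algorithm to tune against the QoS objective.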

Intelligent cybersecurity framework for real-time threat detection and data protection

10.11591/ijeecs.v41.i2.pp504-514
Gunti Viswanath , Kurapati Srinivasa Rao
Organizations operating across cloud, mobile, and enterprise environments are increasingly exposed to sophisticated cyberattacks that traditional rule-based security systems struggle to detect in real time. These legacy approaches lack adaptability, making it difficult to continuously monitor distributed networks, identify anomalies, and prevent zero-day threats before sensitive data is compromised. To address these challenges, this paper proposes an intelligent cybersecurity framework that integrates real-time network monitoring with AI/ML-based anomaly detection models. The framework utilizes structured preprocessing, feature engineering, and supervised learning on the UNSW-NB15 dataset (version 2015, Cyber Range Lab) to enhance detection accuracy and reduce response time. The experimental setup evaluates multiple ML classifiers using stratified train-test splitting and 5-fold cross-validation, ensuring robust performance validation. Experimental results show that the random forest (RF) model achieves 94.28% accuracy, a 2.93% false-positive rate, and an average detection time of 0.41 seconds, outperforming other baseline models. In addition to the detection layer, the framework incorporates mobile device management (MDM) controls and cloud-storage policy enforcement to strengthen organizational security posture. The main contributions of this work include: i) a unified AI/ML-driven anomaly detection model, ii) integration of MDM and cloud policy enforcement for end-to-end protection, and iii) improved empirical performance validated using a benchmark cybersecurity dataset. This combined architecture significantly enhances real-time threat identification and reduces alert latency, supporting a more security-aware and resilient enterprise environment.
Volume: 41
Issue: 2
Page: 504-514
Publish at: 2026-02-01

Robust palmprint biometric solution for secure mobile authentication

10.11591/ijeecs.v41.i2.pp680-689
Son Nguyen , Arthorn Luangsodsai , Pattarasinee Bhattarakosol
Smartphones increasingly rely on biometric authentication for access to financial and personal services, creating a need for palmprint recognition that is accurate, fast, and deployable on device. This paper proposes an end-to-end smartphone palmprint authentication framework that integrates guided mobile image capture, landmark-based region-of-interest (ROI) extraction, and compact embedding inference. A ResNet-18 teacher is first trained with self-supervised contrastive learning to reduce dependence on labeled biometric data, then distilled into a lightweight MobileNetV3 student for efficient mobile deployment. The learned embeddings support both on device verification and large-scale identification using an approximate nearest neighbor index (FAISS). Experiments on a public Kaggle palm dataset achieve 99.2% accuracy with a 0.15% equal error rate (EER). On an iPhone 13, the end-to-end pipeline runs in 87.0 ms with a 12.4 MB student model. For a 1 million-entry gallery, FAISS provides 32 ms query latency while maintaining 99.5% Recall@1. Limitations include evaluation under mostly controlled capture conditions and the absence of an explicit liveness or presentation attack detection (PAD) module; future work will address unconstrained testing and anti-spoofing integration.
Volume: 41
Issue: 2
Page: 680-689
Publish at: 2026-02-01
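Identification against a gallery of embeddings, as described above, reduces to nearest-neighbor search over similarity scores. FAISS accelerates this with approximate indexes at the million-entry scale the paper reports, but an exact cosine-similarity search is only a few lines (the gallery below is illustrative):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def identify(gallery, query):
    # Exact nearest-neighbour search: return the best-matching identity
    # and its similarity score.
    return max(((uid, cosine(emb, query)) for uid, emb in gallery.items()),
               key=lambda item: item[1])

gallery = {"user_a": [1.0, 0.0, 0.0], "user_b": [0.0, 1.0, 0.0]}
best_id, score = identify(gallery, [0.9, 0.1, 0.0])
```

Verification (one-to-one) simply compares the score against a threshold tuned for the target equal error rate, while identification (one-to-many) takes the argmax as above.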

Control of multi-level NPC inverters in PV/grid systems using ADRC and MADRC

10.11591/ijeecs.v41.i2.pp456-469
Gherici Dinar , Ahmed Tahour
Grid-connected photovoltaic (PV) systems consist of solar panels that convert sunlight into electrical energy, interconnected directly with the utility grid. These systems comprise several key components: PV arrays, multilevel inverters, controllers, and grid-interface equipment. In this context, five-level inverters are increasingly favoured over three-level inverters due to their ability to reduce total harmonic distortion (THD), improve efficiency, and ensure better power quality in grid-connected applications. This research presents an enhanced control scheme aimed at optimizing the performance of a grid-connected PV system with a five-level inverter. A fractional-order proportional-integral (FOPI) controller is utilized for maximum power point tracking (MPPT) to ensure precise tracking under variable irradiance conditions. At the grid-interface stage, a modified active disturbance rejection controller (MADRC) is developed, featuring an inner loop for DC-link voltage regulation based on Lyapunov theory, leading to improved dynamic performance, lower THD of the grid current, and enhanced efficiency. Simulation results highlight the effectiveness of the proposed system: compared with the FOPI-ADRC in a three-level configuration (0.38% THD), the proposed FOPI-MADRC with a five-level inverter achieves superior performance, with only 0.22% THD. These results confirm the advantages of combining advanced control strategies with multilevel inverter technology in improving both power quality and system efficiency.
Volume: 41
Issue: 2
Page: 456-469
Publish at: 2026-02-01
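The THD figures quoted above follow the standard definition: the RMS of the higher harmonics relative to the fundamental. A quick sketch over measured harmonic amplitudes (the values below are made up for illustration):

```python
import math

def thd_percent(amplitudes):
    # amplitudes[0] is the fundamental; the rest are higher harmonics.
    # THD = sqrt(sum of squared harmonic amplitudes) / fundamental, in %.
    fundamental, harmonics = amplitudes[0], amplitudes[1:]
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental
```

Multilevel inverters lower THD by synthesising a staircase waveform closer to a sinusoid, shrinking the harmonic amplitudes that enter this sum; that is the mechanism behind the 0.38% vs. 0.22% comparison in the abstract.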

Enhancing industrial cybersecurity via IoT device-trusted remote attestation framework with zero trust architecture in brewery operations

10.11591/ijeecs.v41.i2.pp720-730
Muhammad Salman , Alan Budiyanto
The rapid expansion of industrial internet of things (IIoT) adoption in Industry 4.0 has improved automation and real-time control yet simultaneously increased security risks in operational technology (OT) environments, where device integrity and system reliability are critical. Existing attestation approaches such as SAFEHIVE, SEDA, CRA, and ERASMUS provide scalable verification capabilities but still lack continuous hardware-rooted validation and adaptive access control required for real-time industrial systems. To address this gap, this study proposes a hybrid cybersecurity framework that integrates IoT device-trusted remote attestation (ID-TRA) based on trusted platform module (TPM) with zero trust architecture (ZTA) to ensure continuous device trustworthiness in brewery operations. The framework was implemented on an industrial testbed with programmable logic controllers (PLCs), edge devices, and industrial switches, and it was evaluated through measurements of attestation latency, false positive rate, communication overhead, and TPM resource utilization. Experimental results show that the framework achieves an average attestation latency of 250 ms, a false positive rate below 2%, and a communication overhead of only 1.1%, while TPM resource usage remains within acceptable bounds (62% CPU and 48 MB RAM). These outcomes demonstrate that the proposed solution can reliably detect unauthorized firmware modifications, prevent compromised devices from accessing critical network zones, and maintain compatibility with real-time control processes. Overall, the integration of ID-TRA and ZTA enhances device-level assurance and strengthens industrial cybersecurity resilience against firmware tampering, replay attacks, and unauthorized lateral movement.
Volume: 41
Issue: 2
Page: 720-730
Publish at: 2026-02-01
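The hash-based integrity check at the heart of remote attestation, as in the ID-TRA framework above, can be sketched with the standard library. Real TPM-backed attestation also involves signed PCR quotes and anti-replay nonces, which this sketch omits; the firmware bytes here are purely illustrative:

```python
import hashlib
import hmac

def measure(firmware: bytes) -> str:
    # Measurement (hash) of a firmware image, analogous to the value a
    # TPM would extend into a platform configuration register (PCR).
    return hashlib.sha256(firmware).hexdigest()

def attest(reported: str, golden: str) -> bool:
    # Constant-time comparison of the reported measurement against the
    # known-good ("golden") value registered for the device.
    return hmac.compare_digest(reported, golden)

golden = measure(b"plc-firmware-v1.4")
```

In a zero trust deployment, a failed comparison would trigger the policy engine to revoke the device's access to critical network zones rather than merely raise an alert.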

Evaluating test case minimization with DB K-means

10.11591/ijeecs.v41.i2.pp555-563
Sanjay Sharma , Jitendra Choudhary
Software development is not complete without software testing. Enhancing software quality requires executing numerous test cases, a laborious and time-consuming process. Test case minimization approaches are therefore critical in software testing: they choose a subset of test cases that accurately captures the behavior of the entire suite, minimizing duplication, increasing efficiency, optimizing testing resources, and preserving comprehensive coverage. This paper evaluates a new clustering-based method for test case minimization. Clustering groups data sets with the same behavior, so unnecessary and redundant sets can be removed while the minimized sets retain the same coverage as the originals. The proposed method, named DB K-means, separates the data sets into outlier and non-outlier sets, removes redundant test cases from each, and then combines the minimized sets. Experimental results show that the proposed method outperforms the simple clustering method used for test case minimization.
Volume: 41
Issue: 2
Page: 555-563
Publish at: 2026-02-01
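One ingredient of the minimization described above, removing redundant test cases that exercise identical behavior, can be sketched by deduplicating coverage signatures. This is a generic illustration with invented test/requirement data, not the DB K-means algorithm itself:

```python
def remove_redundant(coverage):
    # coverage: test name -> set of requirements/branches it covers.
    # Keep one representative per distinct coverage signature, so the
    # reduced suite still covers exactly what the full suite covered.
    seen, kept = set(), []
    for test, covered in coverage.items():
        signature = frozenset(covered)
        if signature not in seen:
            seen.add(signature)
            kept.append(test)
    return kept

suite = {"t1": {1, 2}, "t2": {1, 2}, "t3": {3}, "t4": {3}}
reduced = remove_redundant(suite)
```

Clustering generalises this exact-duplicate check: tests whose coverage vectors are merely similar (not identical) land in the same cluster, and one representative per cluster is retained.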

Discover Our Library

Embark on a journey through our expansive collection of articles and let curiosity lead your path to innovation.
