Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

28,428 Article Results

Using the ResNet-50 pre-trained model to improve the classification output of a non-image kidney stone dataset

10.11591/ijai.v14.i4.pp3182-3191
Kazeem Oyebode , Anne Ngozi Odoh
Kidney stone detection based on urine samples appears to be a cost-effective way of detecting stone formation. Urine features are usually collected from patients to determine the likelihood of kidney stone formation. Existing machine learning models, such as the support vector machine (SVM) and deep learning (DL) models, can classify whether a stone exists in the kidney. We propose a DL network built on the pre-trained ResNet-50 model, enabling non-image urine features to work with an image-based architecture. Six urine features collected from patients are projected onto 172,800 neurons. This output is then reshaped into a 240 by 240 by 3 tensor, which serves as the input to the ResNet-50. The ResNet-50 output is then passed to a binary classifier to determine whether a kidney stone exists. The proposed model is benchmarked against the SVM, XGBoost, and two variants of DL networks, and it shows improved performance on the AUC-ROC, accuracy, and F1-score metrics. We demonstrate that combining non-image urine features with an image-based pre-trained model improves classification outcomes, highlighting the potential of integrating heterogeneous data sources for enhanced predictive accuracy.
Volume: 14
Issue: 4
Page: 3182-3191
Publish at: 2025-08-01
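The projection step described above is simple to picture in code: a single linear layer maps the six tabular features to 172,800 values, which are reshaped into a 240 by 240 by 3 tensor that ResNet-50 can consume. A minimal PyTorch sketch of that idea, assuming the torchvision ResNet-50 weights; the layer sizes follow the abstract, everything else is illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

class TabularResNet(nn.Module):
    """Sketch: project 6 urine features to a 240x240x3 tensor, then ResNet-50."""
    def __init__(self):
        super().__init__()
        # 6 features -> 172,800 values = 3 * 240 * 240, as in the abstract
        self.project = nn.Linear(6, 3 * 240 * 240)
        self.backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Replace the ImageNet head with a single-logit binary classifier
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):            # x: (batch, 6)
        x = self.project(x)          # (batch, 172800)
        x = x.view(-1, 3, 240, 240)  # image-shaped tensor for the backbone
        return self.backbone(x)      # (batch, 1) logit: stone / no stone

model = TabularResNet()
logit = model(torch.randn(4, 6))     # four synthetic patients
```

Freezing the backbone and training only the projection and head is a common variant of this setup when labeled data is scarce.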

A comparative study of deep learning-based network intrusion detection system with explainable artificial intelligence

10.11591/ijece.v15i4.pp4109-4119
Tan Juan Kai , Lee-Yeng Ong , Meng-Chew Leow
In the rapidly evolving landscape of cybersecurity, robust network intrusion detection systems (NIDS) are crucial to countering increasingly sophisticated cyber threats, including zero-day attacks. Deep learning approaches in NIDS offer promising improvements in intrusion detection rates and reductions in false positives. However, the inherent opacity of deep learning models presents significant challenges, hindering understanding of and trust in their decision-making processes. This study explores the efficacy of explainable artificial intelligence (XAI) techniques, specifically Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME), in enhancing the transparency and trustworthiness of NIDS. A TabNet architecture implemented on the AWID3 dataset achieves a remarkable accuracy of 99.99%. Despite this high performance, concerns about the interpretability of the TabNet model's decisions persist. By employing SHAP and LIME, this study aims to elucidate model interpretability, focusing on both global and local aspects of the TabNet model's decision-making processes. Ultimately, this study underscores the pivotal role of XAI in improving understanding and fostering trust in deep learning-based NIDS. The robustness of the model is also tested by adding noise at varying signal-to-noise ratios (SNR) to the datasets.
Volume: 15
Issue: 4
Page: 4109-4119
Publish at: 2025-08-01
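SHAP attributes each prediction to individual input features, giving both per-instance (local) and averaged (global) views. A minimal sketch of the idea on a stand-in tabular classifier (the paper uses TabNet on AWID3; the gradient-boosting surrogate and synthetic data here are assumptions for brevity):

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for tabular network-traffic features
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # label driven by features 0 and 3

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])   # (100, 10) log-odds contributions

# Global view: mean |SHAP| per feature approximates feature importance
importance = np.abs(shap_values).mean(axis=0)
print("most influential features:", importance.argsort()[::-1][:2])  # expect 0, 3
```

LIME works analogously but fits a local surrogate model around each instance instead of exploiting the tree structure.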

Secure clustering and routing-based adaptive-bald eagle search for wireless sensor networks

10.11591/ijece.v15i4.pp3824-3832
Roopashree Hejjaji Ranganathasharma , Yogeesh Ambalagere Chandrashekaraiah
Wireless sensor networks (WSNs) are self-organizing networks consisting of many tiny sensor nodes for monitoring and tracking applications over extensive areas. Energy consumption and security are two significant challenges in these networks due to their limited resources and open nature. To address these challenges and optimize energy consumption while ensuring security, this research proposes an adaptive-bald eagle search (A-BES) optimization algorithm for secure clustering and routing in WSNs. The A-BES algorithm selects secure cluster heads (SCHs) through several fitness functions, thereby reducing energy consumption across the nodes. Next, secure and optimal routes are chosen using A-BES to prevent malicious nodes from interfering with the communication paths and to enhance the overall network lifetime. The proposed algorithm shows significantly lower energy consumption, with values of 0.27, 0.81, 1.38, 2.27, and 3.01 J as the number of nodes increases from 100 to 300. This demonstrates a clear improvement over the existing residual energy-based data availability approach (REDAA).
Volume: 15
Issue: 4
Page: 3824-3832
Publish at: 2025-08-01
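Metaheuristics like bald eagle search rank candidate nodes by a fitness score; for secure cluster-head selection that score typically blends residual energy, distance, and some trust measure. A sketch of such a fitness function (the weights and the trust term are assumptions; the abstract does not spell out the paper's exact fitness functions):

```python
import numpy as np

def cluster_head_fitness(residual_energy, dist_to_base, trust_score,
                         w_energy=0.5, w_dist=0.3, w_trust=0.2):
    """Higher is better: favor energetic, nearby, trusted nodes.

    Inputs are assumed normalized to [0, 1]; the weights are illustrative,
    not the paper's values.
    """
    return (w_energy * residual_energy
            + w_dist * (1.0 - dist_to_base)
            + w_trust * trust_score)

# Score 100 candidate nodes and pick the best as a secure cluster head
rng = np.random.default_rng(1)
energy, dist, trust = rng.random((3, 100))
best = np.argmax(cluster_head_fitness(energy, dist, trust))
```

In the optimization loop, A-BES would perturb candidate selections and keep those that improve this score, rather than scoring each node independently as above.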

Multilevel and multisource data fusion approach for network intrusion detection system using machine learning techniques

10.11591/ijece.v15i4.pp3938-3948
Harshitha Somashekar , Pramod Halebidu Basavaraju
To enhance the performance of network intrusion detection systems (NIDS), this paper proposes a novel multilevel and multisource data fusion approach, applied to the NSL-KDD and UNSW-NB15 datasets. The proposed approach comprises three levels of operation: feature-level fusion, dimensionality reduction, and prediction-level fusion. In the first level, the features of the NSL-KDD and UNSW-NB15 datasets are fused by applying an inner-join operation on common features such as protocol, service, and label. Once the datasets are fused, linear discriminant analysis is applied to 12 feature columns, reducing them to a single feature column at the second level. Finally, in the third level, prediction-level fusion is applied to two neural network models: one with a single input node, two hidden nodes, and two output nodes, and the other with a single input node, three hidden nodes, and two output nodes. The outputs of these two models are then fused using a prediction fusion technique. The proposed approach achieves a classification accuracy of 97.5%.
Volume: 15
Issue: 4
Page: 3938-3948
Publish at: 2025-08-01
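The second and third levels are straightforward to sketch with scikit-learn: LDA collapses the fused feature columns to one discriminant column (for a two-class problem LDA yields at most one component), and prediction-level fusion combines the class probabilities of the two small networks. A toy sketch under those assumptions (synthetic data stands in for the inner-joined datasets, and probability averaging is an assumed fusion operator, since the abstract does not name one):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))                  # 12 fused feature columns
y = (X[:, :3].sum(axis=1) > 0).astype(int)      # toy attack / normal label

# Level 2: LDA reduces the 12 columns to a single discriminant column
X1 = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)

# Level 3: two small neural networks with different hidden layers,
# fused at prediction level by averaging their class probabilities
m1 = MLPClassifier(hidden_layer_sizes=(2,), max_iter=2000, random_state=0).fit(X1, y)
m2 = MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000, random_state=0).fit(X1, y)
fused_prob = (m1.predict_proba(X1) + m2.predict_proba(X1)) / 2
pred = fused_prob.argmax(axis=1)
```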

Blockchain and internet of things synergy: transforming smart grids for the future

10.11591/ijece.v15i4.pp4239-4248
Mouad Bensalah , Abdellatif Hair , Reda Rabie
Conventional smart grid systems face challenges in security, transparency, and efficiency. This study addresses these limitations by integrating blockchain and internet of things (IoT) technologies, presenting a proof of concept implemented on an Orange Pi 4 single-board computer. The prototype demonstrated secure and transparent energy transaction management, with consistent throughput between 7.45 and 7.81 transactions per second and efficient resource utilization across varying transaction volumes. However, scalability challenges, including a linear increase in processing time with larger block sizes, emphasize the need for optimized consensus mechanisms. The findings underscore the feasibility of blockchain-based smart grids in resource-constrained settings, paving the way for advances in peer-to-peer energy trading, decentralized energy storage, and integration with artificial intelligence for dynamic energy optimization. This work contributes to the development of secure, efficient, and sustainable energy systems.
Volume: 15
Issue: 4
Page: 4239-4248
Publish at: 2025-08-01
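Throughput figures such as the 7.45 to 7.81 transactions per second above are typically obtained by timing how long a fixed batch of transactions takes to commit. A chain-agnostic sketch of that measurement (the toy hash-chained block structure is an illustration, not the paper's implementation):

```python
import hashlib, json, time

def make_block(prev_hash, transactions):
    """Toy hash-chained block: just enough structure to time commits."""
    body = json.dumps({"prev": prev_hash, "txs": transactions}, sort_keys=True)
    return {"hash": hashlib.sha256(body.encode()).hexdigest(), "body": body}

chain = [make_block("0" * 64, [])]          # genesis block
n_txs, batch = 1000, 10

start = time.perf_counter()
for i in range(0, n_txs, batch):
    txs = [{"id": j, "kWh": 1.5} for j in range(i, i + batch)]
    chain.append(make_block(chain[-1]["hash"], txs))
elapsed = time.perf_counter() - start
print(f"throughput: {n_txs / elapsed:.2f} tx/s")
```

On real hardware the consensus mechanism, not hashing, dominates this number, which is why the paper's scalability findings center on block size and consensus cost.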

Machine learning approaches to cybersecurity in the industrial internet of things: a review

10.11591/ijece.v15i4.pp3851-3866
Melanie Heier , Penatiyana W. Chandana Prasad , Md Shohel Sayeed
The industrial internet of things (IIoT) is increasingly used across various sectors to provide innovative business solutions. These technological innovations come with additional cybersecurity risks, and machine learning (ML) is an emerging technology that has been studied as a solution to these complex security challenges. At the time of writing, to the authors' knowledge, a review of recent studies on this topic had not been undertaken. This review therefore aims to provide a comprehensive picture of the current state of ML solutions for IIoT cybersecurity, with insights into what works, to inform future research and real-world solutions. A literature search identified twelve papers, published in 2021 or later, that propose ML solutions to IIoT cybersecurity concerns. The review found that federated learning and semi-supervised learning in particular are promising ML techniques for addressing IIoT cybersecurity concerns. Artificial neural network approaches are also commonly proposed, in various combinations with other techniques, to provide fast and accurate cybersecurity solutions. While there is not yet a consensus on the best ML techniques for IIoT cybersecurity, these findings offer insight into the approaches currently being utilized, along with gaps where further examination is required.
Volume: 15
Issue: 4
Page: 3851-3866
Publish at: 2025-08-01

Breast cancer identification using a hybrid machine learning system

10.11591/ijece.v15i4.pp3928-3937
Toni Arifin , Ignatius Wiseto Prasetyo Agung , Erfian Junianto , Dari Dianata Agustin , Ilham Rachmat Wibowo , Rizal Rachman
Breast cancer remains one of the most prevalent malignancies among women and is frequently diagnosed at an advanced stage. Early detection is critical to improving patient prognosis and survival rates. Messenger ribonucleic acid (mRNA) gene expression data, which captures the molecular alterations in cancer cells, offers a promising avenue for enhancing diagnostic accuracy. The objective of this study is to develop a machine learning-based model for breast cancer detection using mRNA gene expression profiles. To achieve this, we implemented a hybrid machine learning system (HMLS) that integrates classification algorithms with feature selection and extraction techniques. This approach enables effective handling of heterogeneous and high-dimensional genomic data, such as mRNA expression datasets, while reducing dimensionality without sacrificing critical information. The classification algorithms applied in this study include support vector machine (SVM), random forest (RF), naïve Bayes (NB), k-nearest neighbors (KNN), extra trees classifier (ETC), and logistic regression (LR). Feature selection was conducted using analysis of variance (ANOVA), mutual information (MI), ETC, and LR, whereas principal component analysis (PCA) was employed for feature extraction. The performance of the proposed model was evaluated using standard metrics, including recall, F1-score, and accuracy. Experimental results demonstrate that the combination of the SVM classifier with MI feature selection outperformed other configurations and conventional machine learning approaches, achieving a classification accuracy of 99.4%.
Volume: 15
Issue: 4
Page: 3928-3937
Publish at: 2025-08-01
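The winning configuration, mutual information feature selection feeding an SVM, maps directly onto a scikit-learn pipeline. A minimal sketch with synthetic stand-in data (the gene count, k, and kernel are assumptions; the abstract does not give the paper's exact settings):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for high-dimensional mRNA expression data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2000))                 # 200 samples, 2000 genes
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # toy tumor / normal label

# Pipeline keeps feature selection inside each CV fold (no leakage)
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=50),      # MI feature selection
    SVC(kernel="rbf"),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```

Putting the selector inside the pipeline matters: selecting features on the full dataset before cross-validation would leak label information and inflate the reported accuracy.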

A ten-year retrospective (2014-2024): Bibliometric insights into the study of internet of things in engineering education

10.11591/ijece.v15i4.pp4213-4226
Zakiah Mohd Yusoff , Siti Aminah Nordin , Norhalida Othman , Zahari Abu Bakar , Nurlaila Ismail
This article presents a comprehensive ten-year retrospective analysis (2014-2024) of the evolving landscape of internet of things (IoT) studies within engineering education, employing bibliometric insights. The pervasive influence of IoT technologies across diverse domains, including education, underscores the significance of examining its trajectory in engineering education research over the past decade. Recognizing the dynamic nature of this intersection is crucial for educators, researchers, and policymakers to adapt educational strategies to IoT-induced technological shifts. Addressing this imperative, the study conducts a detailed bibliometric review to identify gaps, trends, and areas necessitating further exploration. Methodologically, the study follows a framework involving a comprehensive search of Scopus and Web of Science databases to identify relevant articles. Selected articles undergo bibliometric analysis using the Biblioshiny tool, supplemented by manual verification and additional analysis in Excel. This approach facilitates robust evaluation of citation patterns, co-authorship networks, keyword trends, and publication patterns over the specified timeframe. Anticipated outcomes include the identification of seminal works, key contributors, influential journals, and science mapping. The study aims to unveil emerging themes, track research trends, and provide insights into collaborative networks shaping IoT discourse in engineering education. This analysis offers a roadmap for future research directions, guiding educators and researchers toward fruitful avenues of exploration.
Volume: 15
Issue: 4
Page: 4213-4226
Publish at: 2025-08-01

Adaptive multi-radio quality of service model using neural network approach for robust wireless sensor network transmission in multipath fading environment

10.11591/ijece.v15i4.pp3795-3802
Galang Persada Nurani Hakim , Dian Widi Astuti , Ahmad Firdausi , Huda A. Majid
Loss in wireless sensor network (WSN) data transmission is one of the problems that needs attention. Interference, fading, congestion, and delay are some of the factors that cause loss in wireless data transmission. This paper uses an adaptive multi-radio model to make wireless data transmission more robust to disturbances in a multipath fading environment. A neural network approach was used to generate the adaptive model. With a 433 MHz carrier frequency, 250 kHz bandwidth, and a spreading factor of 12, the signal-to-noise ratio (SNR) at 20 meters is about -9.8 dB. The adaptive model can enhance the SNR of the WSN wireless data transmission to 9 dB by automatically changing the radio configuration to a frequency of 797.1 MHz, a bandwidth of 378.1 kHz, and a spreading factor of 7.111. Based on these results, the wireless data transmission link was successfully enhanced using the proposed adaptive model for WSNs in a multipath fading environment.
Volume: 15
Issue: 4
Page: 3795-3802
Publish at: 2025-08-01
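The adaptive model amounts to a regression network that maps measured link conditions to a radio configuration (carrier frequency, bandwidth, spreading factor). A hedged sketch of that mapping with scikit-learn; the training data and the functional form behind it are invented purely for illustration, since a real model would be trained on field measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy training set: (distance m, measured SNR dB) -> (freq MHz, BW kHz, SF)
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(5, 50, 300),      # link distance
                     rng.uniform(-15, 10, 300)])   # measured SNR
# Assumed mapping for illustration only, not the paper's learned model
y = np.column_stack([433 + 10 * X[:, 1],                     # carrier frequency
                     np.clip(250 + 5 * X[:, 1], 62.5, 500),  # bandwidth
                     np.clip(12 - 0.2 * X[:, 1], 7, 12)])    # spreading factor

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=3000, random_state=0)).fit(X, y)

# Query: a 20 m link at -9.8 dB SNR -> suggested radio configuration
print(model.predict([[20.0, -9.8]]))
```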

Multi-layer convolutional autoencoder for recognizing three-dimensional patterns in attention deficit hyperactivity disorder using resting-state functional magnetic resonance imaging

10.11591/ijece.v15i4.pp3965-3976
Zarina Begum , Kareemulla Shaik
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder typified by impulsivity, hyperactivity, and attention deficits. Recent studies using functional magnetic resonance imaging (fMRI) have shown noticeable changes in patterns of brain activity, particularly in the prefrontal cortex. Machine learning algorithms show promise in distinguishing ADHD subtypes based on these neurobiological signatures. However, the inherent heterogeneity of ADHD complicates consistent classification, while small sample sizes limit the generalizability of findings. Additionally, methodological variability across studies contributes to inconsistent results, and the opaque nature of machine learning models hinders understanding of the underlying mechanisms. We propose a novel deep learning architecture to overcome these issues by combining spatio-temporal feature extraction and classification through a hierarchical residual convolutional noise reduction autoencoder (HRCNRAE) and a 3D convolutional gated memory unit (GMU). This framework effectively reduces spatial dimensions, captures key temporal and spatial features, and utilizes a sigmoid classifier for robust binary classification. Our methodology was rigorously validated on the ADHD-200 dataset across five sites, demonstrating improvements in diagnostic accuracy ranging from 1.26% to 9.6% over existing models. Importantly, this research represents the first application of a 3D convolutional GMU for diagnosing ADHD with fMRI data. The improvements highlight the efficacy of our architecture in capturing complex spatio-temporal features, paving the way for more accurate and reliable ADHD diagnoses.
Volume: 15
Issue: 4
Page: 3965-3976
Publish at: 2025-08-01
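The core of such an architecture is a 3D convolutional autoencoder that compresses each brain volume and learns to reconstruct it from a noisy input. A minimal PyTorch sketch of the denoising-autoencoder part (channel counts, depths, and the toy 32-cubed volume are assumptions; the paper's HRCNRAE and the gated memory unit are not reproduced here):

```python
import torch
import torch.nn as nn

class Conv3dDenoisingAE(nn.Module):
    """Minimal 3D convolutional denoising autoencoder for fMRI volumes."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

vol = torch.randn(2, 1, 32, 32, 32)          # toy single-timepoint volumes
noisy = vol + 0.1 * torch.randn_like(vol)    # denoising target: recover vol
recon = Conv3dDenoisingAE()(noisy)
loss = nn.functional.mse_loss(recon, vol)
```

In the full pipeline, the encoder's compressed volumes would be stacked over time and fed to the 3D convolutional GMU for the temporal modeling the abstract describes.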

Real-time machine learning-based posture correction for enhanced exercise performance

10.11591/ijece.v15i4.pp3843-3850
Anish Khadtare , Vasistha Ved , Himanshu Kotak , Akhil Jain , Pinki Vishwakarma
Poor posture and associated physical health problems have grown more common as technology use increases, especially during workout sessions. Maintaining proper posture is essential to increasing the efficacy of workouts and avoiding injuries. This paper presents the development of a machine learning model designed to provide real-time posture correction and feedback for exercises such as squats and planks. The model uses MediaPipe for precise real-time pose estimation and OpenCV for analyzing video frames. It detects poor posture and provides users with instant corrective feedback by examining the angles between key body parts, such as the arms, knees, back, and hips. This method enables a thorough evaluation of form without requiring face-to-face supervision, making it accessible to a wider audience. The model is trained on real-world workout datasets of people performing exercises in different positions and postures to ensure that posture detection is reliable under various user circumstances. The system demonstrates scalability and adaptability to future exercise types beyond squats and planks. The main goal is to provide users with a model that increases the efficacy of workouts, lowers the risk of injury, and encourages better exercise habits. The model's emphasis on usability and accessibility makes it a potentially vital tool for anyone looking to enhance their posture and general fitness.
Volume: 15
Issue: 4
Page: 3843-3850
Publish at: 2025-08-01
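The angle check at the heart of such a system is a small piece of vector arithmetic: the angle at a joint is the arccosine of the normalized dot product of the two limb vectors meeting there. A sketch, with the squat-depth threshold being an assumed value rather than the paper's:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle ABC in degrees at vertex b, from 2D/3D landmark coordinates."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    ba, bc = a - b, c - b
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

# Example squat check: knee angle from hip, knee, ankle landmarks
# (with MediaPipe Pose these are landmarks 24, 26, 28 on the right side)
hip, knee, ankle = (0.5, 0.4), (0.5, 0.6), (0.55, 0.8)
angle = joint_angle(hip, knee, ankle)
feedback = "good depth" if angle < 100 else "squat lower"   # assumed threshold
print(f"knee angle: {angle:.1f} deg -> {feedback}")
```

Running this per frame on landmarks extracted from the OpenCV video stream yields the real-time corrective feedback the abstract describes.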

Renewable energy impact integration in Moroccan grid-load flow analysis

10.11591/ijece.v15i4.pp3632-3648
Safaa Essaid , Loubna Lazrak , Mouhsine Ghazaoui
This paper analyzes the behavior of the Moroccan electricity transmission system in the presence of integrated renewable energy sources, whose intermittent nature poses a significant challenge. The aim is to evaluate the performance of the transmission system in various situations and possible configurations. The study calculates power flow in the network using the Newton-Raphson method in MATLAB/Simulink. To this end, a series of power flow simulations was conducted on a 5-bus Moroccan electrical network, examining four distinct scenarios. In addition, this article evaluates the power flow performance of the same transmission system with varying percentages of renewable energy penetration. To provide a complete critical analysis, many simulations were conducted to obtain the voltage and active power profiles at different bus locations, as well as an evaluation of the losses in the studied network.
Volume: 15
Issue: 4
Page: 3632-3648
Publish at: 2025-08-01
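For reference, the Newton-Raphson load flow iterates on the standard bus power balance equations; at each bus i, with theta_ij = theta_i - theta_j and Y = G + jB the bus admittance matrix (a textbook formulation, not reproduced from the paper):

```latex
P_i = \sum_{j=1}^{N} |V_i||V_j|\,\bigl(G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij}\bigr), \qquad
Q_i = \sum_{j=1}^{N} |V_i||V_j|\,\bigl(G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij}\bigr)
```

Each iteration solves the Jacobian system for corrections to the voltage angles and magnitudes until the active and reactive power mismatches fall below tolerance.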

Enhancing voltage stability of transmission network using proportional integral controlled high voltage direct current system

10.11591/ijece.v15i4.pp3593-3602
Chibuike Peter Ohanu , Uche C. Ogbuefi , Emenike Ejiogu , Tole Sutikno
The contingencies experienced in transmission power networks often lead to unstable voltage profiles, challenging grid reliability and stability. This research aims to enhance voltage stability using a proportional-integral (PI) controlled high voltage direct current (HVDC) system on a real-life 330 kV network. The Newton-Raphson (NR) method is used for power flow analysis of the test network, and stability analysis identified the Makurdi bus as the candidate for improvement due to its low eigenvalue and damping ratio. Applying a balanced three-phase fault at this bus resulted in a minimum voltage of 0.70 per unit (p.u.), outside the statutory voltage limits of 0.95 to 1.05 p.u. The PI-based HVDC system was then applied along the Makurdi-Jos transmission line, which has a low loading capacity. This optimized the system response to disturbances, significantly improving voltage stability and raising the minimum voltage profile on the network to 0.80 p.u., a 10% improvement over the base case that reaffirms the effectiveness of the PI-based HVDC system in enhancing voltage stability during major disturbances. This research highlights the potential of integrating control systems into power networks to improve voltage stability and ensure reliable operation, even during large disturbances.
Volume: 15
Issue: 4
Page: 3593-3602
Publish at: 2025-08-01
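The PI controller in such a scheme computes the converter command from the bus-voltage error, u(t) = Kp*e(t) + Ki*integral(e). A discrete-time sketch with an assumed first-order toy plant (the gains and plant are illustrative, not the paper's tuned values):

```python
class PIController:
    """Discrete PI controller: u[k] = Kp*e[k] + Ki*sum(e)*dt."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Drive a sagging bus voltage (0.70 p.u.) toward the 1.0 p.u. setpoint
pi = PIController(kp=2.0, ki=5.0, dt=0.01)
v = 0.70
for _ in range(500):
    u = pi.update(1.0, v)          # control signal to the HVDC converter
    v += 0.01 * u                  # toy first-order plant response
print(f"settled voltage: {v:.3f} p.u.")
```

The integral term is what removes the steady-state voltage error that a purely proportional controller would leave behind.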

Enhancing multi-class text classification in biomedical literature by integrating sequential and contextual learning with BERT and LSTM

10.11591/ijece.v15i4.pp4202-4212
Oussama Ndama , Ismail Bensassi , Safae Ndama , El Mokhtar En-Naimi
Classification of sentences in biomedical abstracts into predefined categories is essential for enhancing readability and facilitating information retrieval in scientific literature. We propose a novel hybrid model that integrates bidirectional encoder representations from transformers (BERT) for contextual learning, long short-term memory (LSTM) for sequential processing, and sentence order information to classify sentences from biomedical abstracts. Utilizing the PubMed 200k randomized controlled trial (RCT) dataset, our model achieved an overall accuracy of 88.42%, demonstrating strong performance in identifying methods and results sections while maintaining balanced precision, recall, and F1-scores across all categories. This hybrid approach effectively captures both contextual and sequential patterns of biomedical text, offering a robust solution for improving the segmentation of scientific abstracts. The model's design promotes stability and generalization, making it an effective tool for automatic text classification and information retrieval in biomedical research. These results underscore the model's efficacy in handling overlapping categories and its significant contribution to advancing biomedical text analysis.
Volume: 15
Issue: 4
Page: 4202-4212
Publish at: 2025-08-01
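The hybrid wiring is easy to sketch: BERT produces contextual token embeddings, an LSTM reads them sequentially, and the classifier sees the LSTM state together with a sentence-position signal. A minimal PyTorch/transformers sketch (the model name, hidden sizes, and the way sentence order is injected are assumptions; the five classes correspond to the PubMed 200k RCT section labels):

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertLstmClassifier(nn.Module):
    """Sketch: BERT token embeddings -> BiLSTM -> softmax over section labels."""
    def __init__(self, n_classes=5, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, 128,
                            batch_first=True, bidirectional=True)
        # +1 input for the normalized sentence position within the abstract
        self.head = nn.Linear(2 * 128 + 1, n_classes)

    def forward(self, input_ids, attention_mask, sent_pos):
        tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        _, (h, _) = self.lstm(tokens)                 # h: (2, batch, 128)
        seq = torch.cat([h[0], h[1]], dim=-1)         # (batch, 256)
        return self.head(torch.cat([seq, sent_pos], dim=-1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["Patients were randomized into two groups."],
            return_tensors="pt", padding=True)
logits = BertLstmClassifier()(batch["input_ids"], batch["attention_mask"],
                              torch.tensor([[0.2]]))  # sentence 20% into abstract
```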

Optimized reactive power management system for smart grid architecture

10.11591/ijece.v15i4.pp3707-3716
Manju Jayakumar Raghvin , Manjula R. Bharamagoudra , Ritesh Dash
The Indian power grid is an extensive and mature power system that transfers large amounts of electricity between regions linked by power corridors. The increased reliance on decentralized renewable energy sources (RESs), such as solar power, has led to power system instability and voltage variations. Power quality and dependability in a smart grid (SG) setting can be enhanced by careful tracking and administration of the solar energy generated by panels. This study proposes several reactive power regulation algorithms for smart grids. Kernel debugging is essential to the proposed optimal reactive power management; this research considers a debugging primitive based on physical memory protection (PMP), a security feature. Debugging in the kernel domain requires specialized tools, in contrast to user space, where kernel assistance is available. This research proposes an optimal reactive power management in smart grid using kernel debugging model (ORPM-SG-KDM) for managing reactive power efficiently. The approach achieved 98.5% accuracy in kernel debugging and 99.2% accuracy in optimal reactive power management, increases of 1.8% and 3%, respectively.
Volume: 15
Issue: 4
Page: 3707-3716
Publish at: 2025-08-01

Discover Our Library

Embark on a journey through our expansive collection of articles and let curiosity lead your path to innovation.
