Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

29,922 Article Results

Convolutional neural network model for fingerprint-based gender classification using original and degraded images

10.11591/ijaas.v14.i4.pp1350-1358
Risqy Siwi Pradini , Wahyu Teja Kusuma , Agung Setia Budi
Fingerprint-based gender classification is a crucial component of soft biometrics, providing valuable additional information to narrow the search space in forensic investigations and large-scale identification systems. Although deep learning models, particularly convolutional neural networks (CNNs), have demonstrated significant potential, performance validation is typically performed on high-quality fingerprint images. This creates a gap between laboratory results and real-world applications, where fingerprint evidence is often found in a degraded state, such as smudged, distorted, or partially damaged. This study attempts to bridge this gap by proposing a more realistic training approach. We design a lightweight and computationally efficient CNN and train it on a comprehensive combined dataset. The main contribution of this study lies in the training-data strategy, which explicitly combines real and synthetically modified fingerprint images from the Sokoto Coventry fingerprint (SOCOFing) dataset into a single, unified training set. Experimental results show that the proposed model achieves very high classification accuracy (97.39%) on a test set that also includes a combination of original and degraded images. This finding not only confirms the effectiveness of diverse data-based training to produce more robust models but also establishes a new benchmark for fingerprint-based gender classification research under conditions more representative of practical scenarios.
Volume: 14
Issue: 4
Page: 1350-1358
Publish at: 2025-12-01

Forecasting internet traffic patterns for the campus Metro-E network using a hybrid machine learning model

10.11591/ijaas.v14.i4.pp1433-1443
Norakmar Arbain , Murizah Kassim , Darmawaty Mohd Ali , Shuria Saaidin
Complex traffic patterns make campus Metro-E network management and resource allocation challenging. This paper presents an internet traffic forecasting approach that pre-processes data to offer better bandwidth quality of service (QoS). Traffic data from eight (8) campuses were analysed for predictive modelling using statistical analysis. In the Metro-E campus network, four (4) locations (A, E, F, and H) show a strong correlation between inbound and outbound traffic, with correlation values between 0.4547 and 0.5204. As inbound traffic increases, outbound traffic tends to rise as well. Conversely, locations B, C, and G show weak correlations, indicating more independent traffic patterns. Data outliers were found at locations C and F, where unusual traffic spikes require further network exploration and reveal key trends in the traffic data. Descriptive statistics show notable differences: H has the highest average traffic at about 75 Mbps, while C has the lowest at around 30 Mbps. Location F shows the greatest traffic fluctuation with a standard deviation of 0.4076, whereas location G fluctuates very little with a standard deviation of 0.0240. Overall, these pre-processed data are combined with machine learning (ML) to improve prediction ability for better bandwidth management and real-time handling in digital campus environments.
Volume: 14
Issue: 4
Page: 1433-1443
Publish at: 2025-12-01
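The correlation analysis the abstract describes can be sketched with a plain Pearson correlation between inbound and outbound traffic series. This is a minimal illustration, not the authors' pipeline; the sample values below are hypothetical.

```python
def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical inbound/outbound samples (Mbps) for one campus location.
inbound = [70, 72, 75, 78, 74, 80]
outbound = [30, 31, 33, 36, 32, 37]
r = pearson_correlation(inbound, outbound)
```

A value of `r` between 0.4547 and 0.5204, as the paper reports for locations A, E, F, and H, would indicate a moderately strong positive relationship.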

Quantum-inspired magnetic resonance imaging sequence optimization for detecting neurological diseases

10.11591/ijaas.v14.i4.pp1208-1216
Kotichintala Venkata Narasimha Savan Kumar , Nitin Kumar
According to a research study by the National Institutes of Health, India, magnetic resonance imaging (MRI) achieves 89% diagnostic accuracy for acute stroke, while computed tomography (CT) achieves only 54%. This leaves an 11% margin for improvement in accuracy, and MRI is 84% specific in identifying nerve enlargement. A possible solution is quantum computing, a new era of technology offering advanced design and implementation of computing techniques compared with classical computers. With the goal of improving patient care, this research applies quantum technology to neurological disorders, combining MRI with Microsoft's quantum-inspired algorithms to enhance the detection of neurological disorders. To improve the accuracy of MRI results in less time, an approach called magnetic resonance fingerprinting (MRF) was explored. This paper mainly focuses on optimizing the pulse sequence using the Microsoft Azure simulator. By generating an optimized pulse sequence and mapping it to accurate predefined patterns, we were able to create a solution that improves the diagnostic capability of MRI. Conventional computers take a long time to make this prediction, and accuracy may suffer. The proposed quantum-inspired optimization improved MRI diagnostic accuracy up to 92%, with faster sequence optimization compared to classical methods. This simulation-based proof of concept demonstrates potential for enhanced neurological disease detection while acknowledging current limitations such as simulator dependency and limited datasets.
Volume: 14
Issue: 4
Page: 1208-1216
Publish at: 2025-12-01

Image segmentation using fuzzy clustering for industrial applications

10.11591/ijai.v14.i6.pp4636-4642
Robinson Jiménez-Moreno , Laura María Vargas Duanca , Anny Astrid Espitia-Cubillos
This paper presents a fuzzy logic clustering algorithm oriented to image segmentation and the procedure designed to evaluate its performance by varying two parameters: the number of clusters (c) and the diffusivity parameter (m). The evaluation leads to the conclusion that a small, adjusted number of clusters is sufficient to recognize the main elements of an image, but a more detailed reconstruction requires a higher number of clusters. The diffusivity parameter influences the smoothness of the boundaries between clusters: low values generate a segmentation with more abrupt transitions and sharper contours, while high values smooth the segmentation, and an excessive increase may cause elements to merge, losing details. In general, the balance between these two parameters is key to obtaining an effective segmentation. Three validation scenarios were used: the first two established the most appropriate parameters for segmentation, limiting the clusters to a maximum of 4 and keeping the diffusivity level at 2.0, while the third validated the algorithm with real images of industrial cleaning products, all with noise, establishing the computational cost and processing times for images of 350×350 and 2000×3000 pixels resolution. In conclusion, applications of the algorithm are foreseen in automatic quality control and in inventory control of finished products and raw materials, thanks to its high efficiency and low response time, even in scenarios involving noisy and large images.
Volume: 14
Issue: 6
Page: 4636-4642
Publish at: 2025-12-01
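The role of the two parameters the abstract discusses can be seen in the standard fuzzy c-means membership update. The sketch below computes one pixel's membership in each cluster; the concrete centers are illustrative, not the paper's values.

```python
def fcm_memberships(pixel, centers, m=2.0):
    """Fuzzy c-means membership of one pixel in each cluster center.

    m > 1 is the fuzziness (diffusivity) parameter: values near 1 give
    crisp, abrupt assignments; larger m smooths memberships toward 1/c.
    """
    dists = [abs(pixel - c) for c in centers]
    # A pixel sitting exactly on a center belongs fully to that cluster.
    if 0.0 in dists:
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    exp = 2.0 / (m - 1.0)
    inv = [(1.0 / d) ** exp for d in dists]
    total = sum(inv)
    return [v / total for v in inv]

# Grey value 0.4 against 4 cluster centers, matching the paper's best
# setting (c = 4, m = 2.0); the centers themselves are hypothetical.
u = fcm_memberships(0.4, [0.1, 0.35, 0.6, 0.9], m=2.0)
```

Lowering `m` toward 1 concentrates the membership on the nearest center, producing the sharper contours the abstract describes; raising it spreads membership across clusters, smoothing boundaries.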

Intelligent route optimization for internet of vehicles using federated learning: promoting green and sustainable IoT networks

10.11591/ijai.v14.i6.pp5049-5057
Desidi Narsimha Reddy , Swathi Buragadda , Janjhyam Venkata Naga Ramesh , Garapati Satyanarayana Murthy , Nallathambi Srija , Sarihaddu Kavitha
As the internet of vehicles (IoV) continues to evolve, optimizing vehicle routing becomes increasingly important for enhancing traffic efficiency and minimizing environmental impact. This paper introduces an intelligent vehicle route optimization protocol leveraging federated learning (FL) to achieve green and sustainable IoV systems. By distributing the learning process across multiple edge devices, the proposed protocol minimizes the need for centralized data processing, reducing network congestion, and preserving user privacy. The system optimizes vehicle routes based on real time traffic conditions, fuel efficiency, and carbon emissions, and promoting greener transportation practices. Simulations conducted in a dynamic IoV environment demonstrate significant improvements in route efficiency, fuel consumption, and carbon emissions. The results underscore the potential of FL in transforming IoV routing by balancing performance and sustainability, making it a promising solution for the future of connected transportation.
Volume: 14
Issue: 6
Page: 5049-5057
Publish at: 2025-12-01

Prediction of flood-affected areas based on geographic information system data using machine learning

10.11591/ijai.v14.i6.pp4675-4683
Amrul Faruq , Lailis Syafaah , Muhammad Irfan , Shahrum Shah Abdullah , Shamsul Faisal Mohd Hussein , Fitri Yakub
Flood disasters have become more frequent and severe due to climate variability, posing significant threats to human lives, agriculture, and infrastructure. Effective disaster management and mitigation require accurate identification of flood-prone areas. This study develops an intelligent flood prediction system by integrating machine learning algorithms with geographic information systems (GIS) data to enhance flood risk assessment. The proposed system utilizes two machine learning models, including random forest (RF) and support vector machine (SVM), to predict flood-susceptible areas. The models are trained on historical flood data and GIS-derived features, including elevation, slope, topographic wetness index (TWI), aspect, and curvature. The dataset undergoes preprocessing, including normalization and feature selection, before being divided into training, validation, and test sets. The models are then trained and evaluated based on their predictive performance. Evaluation metrics, particularly the area under the curve (AUC), demonstrate that RF outperforms SVM in predicting flood-prone areas. RF achieves an accuracy of 82%, while SVM records a lower accuracy of 68%. The superior performance of RF is attributed to its ability to handle complex, nonlinear relationships in flood prediction. These results highlight the effectiveness of machine learning algorithms in flood susceptibility modeling and support the integration of data-driven techniques into flood and disaster risk reduction management strategies.
Volume: 14
Issue: 6
Page: 4675-4683
Publish at: 2025-12-01

Enhancing software fault prediction through data balancing techniques and machine learning

10.11591/ijai.v14.i6.pp4787-4801
Akshat Raj , Durva Mahadeo Chavan , Priyal Agarwal , Jestin Gigi , Madhuri Rao , Vinayak Musale , Akshita Chanchlani , Murtaza Shabbirbhai Dholkawala , Kulamala Vinod Kumar
Software fault prediction is essential for ensuring the reliability and quality of software systems by identifying potential defects early in the development lifecycle. However, the presence of imbalanced datasets poses a significant challenge to the effectiveness of fault prediction models. In this paper, we investigate the impact of different data balancing techniques, including generative adversarial networks (GANs), synthetic minority over-sampling technique (SMOTE), and NearMiss, on machine learning (ML) model performance for software fault prediction. Through a comparative analysis across multiple datasets commonly used in software engineering research, we evaluate the efficacy of these techniques in addressing class imbalance and improving predictive accuracy. Our findings provide insights into the most effective approaches for handling imbalanced data in software fault prediction tasks, thereby advancing the state of the art in software engineering research and practice. Extensive experimentation, covering 8 datasets, 4 data balancing techniques, and 4 ML techniques, is performed and analyzed in this study to demonstrate the efficacy of various models in software fault prediction.
Volume: 14
Issue: 6
Page: 4787-4801
Publish at: 2025-12-01
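The core idea of SMOTE, one of the balancing techniques the abstract compares, is to synthesize new minority-class points by interpolating between a minority sample and one of its nearest minority neighbours. A minimal sketch, with hypothetical feature vectors rather than the paper's datasets:

```python
import random

def smote_sample(minority, k=2, rng=None):
    """Generate one synthetic minority sample, SMOTE-style.

    Picks a random minority point, finds its k nearest minority
    neighbours, and interpolates a new point on the segment between
    the point and one of those neighbours.
    """
    rng = rng or random.Random(0)
    base = rng.choice(minority)
    neighbours = sorted(
        (p for p in minority if p is not base),
        key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
    )[:k]
    nb = rng.choice(neighbours)
    gap = rng.random()  # interpolation factor in [0, 1)
    return tuple(a + gap * (b - a) for a, b in zip(base, nb))

# Hypothetical 2-D features of fault-prone modules (minority class).
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3)]
synthetic = smote_sample(minority, k=2)
```

Because the synthetic point lies on a segment between real minority points, it enlarges the minority region without simply duplicating samples, which is what distinguishes SMOTE from naive oversampling.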

A comparative study of large language models with chain-of-thought prompting for automated program repair

10.11591/ijai.v14.i6.pp4579-4589
Eko Darwiyanto , Rizky Akbar Gusnaen , Rio Nurtantyana
Automatic code repair is an important task in software development to reduce bugs efficiently. This research focuses on developing and evaluating a chain-of-thought (CoT) prompting approach to improve the ability of large language models (LLMs) in automated program repair (APR) tasks. CoT prompting is a technique that guides an LLM to generate step-by-step explanations before providing the final answer, which is expected to improve the accuracy and quality of code repair. This research uses the QuixBugs dataset to evaluate the performance of several LLM models, including DeepSeek-V3 and GPT-4o, with two prompting methods, namely standard and CoT prompting. The evaluation is based on the average number of plausible patches generated as well as the estimated token usage cost. The results show that CoT prompting improves performance in most models compared with standard prompting. DeepSeek-V3 recorded the highest performance with an average of 36.6 plausible patches and the lowest cost of $0.006. GPT-4o also showed competitive results with an average of 35.8 plausible patches and a cost of $0.226. These results confirm that CoT prompting is an effective technique to improve LLM reasoning ability in APR tasks.
Volume: 14
Issue: 6
Page: 4579-4589
Publish at: 2025-12-01
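The difference between standard and CoT prompting comes down to how the prompt is worded. The template below illustrates the CoT idea of asking for step-by-step reasoning before the repaired code; the wording is hypothetical, not the exact prompt used in the paper.

```python
def build_cot_repair_prompt(buggy_code: str) -> str:
    """Assemble a chain-of-thought prompt for automated program repair.

    CoT prompting asks the model to reason step by step about the bug
    before emitting the fixed program, in contrast with a standard
    prompt that requests the fix directly.
    """
    return (
        "You are given a buggy program.\n"
        "First, explain step by step what the code is supposed to do,\n"
        "where the bug is, and why it causes incorrect behaviour.\n"
        "Only after that reasoning, output the fully repaired program.\n\n"
        f"Buggy program:\n{buggy_code}\n"
    )

# A QuixBugs-style buggy snippet (illustrative).
prompt = build_cot_repair_prompt("def gcd(a, b):\n    return gcd(a % b, b)")
```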

Impact of smoothing techniques for text classification: implementation in hidden Markov model

10.11591/ijai.v14.i6.pp5183-5192
Norsyela Muhammad Noor Mathivanan , Roziah Mohd Janor , Shukor Abd Razak , Nor Azura Md. Ghani
A hidden Markov model (HMM) is widely used for sequence modeling in various text classification tasks. This study investigates the impact of different smoothing techniques, such as Laplace, absolute discounting, and Gibbs sampling on HMM performance across three distinct domains: e-commerce products, spam filtering, and occupational data mining. Through the comparative analysis, Laplace smoothing consistently outperforms other techniques in handling zero-probability issues, demonstrating superior performance in the e-commerce and SMS spam datasets. The HMM without any smoothing technique achieved the best results for job title classification. This divergence underscores the dataset-specific nature of smoothing requirements, where the simplicity of parameter estimation proves effective in contexts characterized by a limited and repetitive vocabulary. Hence, the findings suggest that tailored smoothing strategies are crucial for optimizing HMM performance in different textual analysis applications.
Volume: 14
Issue: 6
Page: 5183-5192
Publish at: 2025-12-01
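Laplace smoothing, the technique the study finds most consistent, replaces zero emission probabilities with small non-zero ones by adding a pseudo-count to every word. A minimal sketch with toy counts (the vocabulary and values are hypothetical, not from the paper's datasets):

```python
def laplace_smoothed_probs(counts, vocab_size, alpha=1.0):
    """Laplace (add-alpha) smoothing for an HMM emission distribution.

    Unseen words get a small non-zero probability instead of zero,
    which keeps forward/Viterbi computations from collapsing to zero
    the first time an out-of-training word appears.
    """
    total = sum(counts.values())
    return lambda word: (counts.get(word, 0) + alpha) / (total + alpha * vocab_size)

# Toy emission counts for one hidden state in a spam-filtering HMM.
counts = {"free": 3, "winner": 1}
p = laplace_smoothed_probs(counts, vocab_size=4)
```

With a vocabulary of 4 and alpha = 1, an unseen word such as "offer" receives probability (0 + 1) / (4 + 4) = 0.125 rather than zero.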

Recognition of Indonesian sign language using deep learning: convolutional neural network-based approach

10.11591/ijai.v14.i6.pp5008-5016
Olivia Kembuan , Haryanto Haryanto , Mochamad Bruri Triyono
This study focuses on developing an automatic Indonesian sign language (SIBI) recognition system using a convolutional neural network (CNN). Sign language is essential for communication among deaf and hard-of-hearing individuals, and automatic recognition helps improve accessibility and inclusivity. CNNs are chosen for their ability to learn image features automatically, eliminating manual extraction and improving classification accuracy. The SIBI dataset used contains 5,280 images of 26 letters, divided into training and validation sets. In early training, the model achieved low accuracy (3.63% training, 3.33% validation), but after five epochs it improved significantly to 97.58% training and 100% validation accuracy.
Volume: 14
Issue: 6
Page: 5008-5016
Publish at: 2025-12-01

Securing post-quantum cryptography: side-channel resilience in CRYSTALS-Kyber key encapsulation mechanism

10.11591/ijai.v14.i6.pp5251-5267
Shreyas Kasture , Sudhanshu Maurya , Alakshendra Pratap Singh , Amit Shukla , Arnav Kotiyal , Kashish Mirza
This study evaluates side-channel vulnerabilities in hardware implementations of the cryptographic suite for Algebraic lattices (CRYSTALS)-Kyber key encapsulation mechanism (KEM) using correlation and differential power analysis (DPA) techniques. Unprotected field-programmable gate array (FPGA) implementations across all Kyber parameter sets were successfully compromised, revealing significant information leakage. Attack complexity scaled linearly with key size. Additive Boolean masking provided varying protection levels, with 4-bit masking offering a 100× security increase at notable performance cost. Performance characterization showed increased slice utilization and reduced maximum frequency for higher-order masking. A novel hybrid countermeasure combining higher-order masking with controlled time randomization enhanced protection against machine learning-based attacks. Comprehensive power trace analysis using 12-bit precision at 500 MS/s sampling rates was conducted. Statistical evaluation utilized Pearson's correlation and Welch's t-tests with a 0.8 threshold for key recovery. Real world validation in IoT, financial, and satellite scenarios highlighted practical post-quantum cryptography (PQC) deployment challenges. The study provides concrete design guidance for efficiently securing hardware Kyber implementations against side-channel attacks.
Volume: 14
Issue: 6
Page: 5251-5267
Publish at: 2025-12-01
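The Boolean masking countermeasure the study evaluates splits a sensitive value into XOR shares so that no single intermediate value correlates with the secret. A minimal software sketch of the share-splitting idea (the 16-bit coefficient is hypothetical; real Kyber masking operates on hardware datapaths, not Python integers):

```python
import secrets

def boolean_mask(value: int, order: int, bits: int = 16):
    """Split a sensitive value into `order` + 1 Boolean (XOR) shares.

    Each share alone is uniformly random, so power consumed while
    processing any single share is statistically independent of the
    secret; only the XOR of all shares reveals it.
    """
    shares = [secrets.randbits(bits) for _ in range(order)]
    masked = value
    for s in shares:
        masked ^= s
    return shares + [masked]

def unmask(shares):
    """Recombine shares by XOR to recover the original value."""
    out = 0
    for s in shares:
        out ^= s
    return out

# A hypothetical 16-bit coefficient, masked at order 4 (5 shares).
shares = boolean_mask(0x1A2B, order=4)
```

Raising the masking order increases the number of shares an attacker must jointly observe, which is the source of the security-versus-area trade-off the abstract quantifies.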

Classification of regional language dialects using convolutional neural network and multilayer perceptron

10.11591/ijai.v14.i6.pp5017-5026
Fahmi B. Marasabessy , Dwiza Riana , Muji Ernawati
Regional languages are vital for communication and preserving cultural identity, safeguarding local heritage. However, globalization and modernization endanger their existence as they are increasingly replaced by national or global languages. Despite progress in dialect recognition research, particularly for certain languages, further studies are needed to improve model performance and address less-represented dialects, including those in Indonesia. This study enhances a custom-built dataset for dialect recognition through the application of data augmentation techniques, specifically adding noise, time stretching, and pitch shifting. Using Mel-frequency cepstral coefficients (MFCC) for feature extraction, it evaluates the performance of convolutional neural network (CNN) and multilayer perceptron (MLP) in classifying six Indonesian dialects. Results indicate that CNN outperformed, achieving 97.92% accuracy, 97.90% recall, 97.97% precision, 97.92% F1-score, and a kappa score of 97.49% with combined augmentation techniques, setting a foundation for further research.
Volume: 14
Issue: 6
Page: 5017-5026
Publish at: 2025-12-01

A bibliometric analysis of feature selection techniques: trends, innovations, and future directions

10.11591/ijai.v14.i6.pp4403-4414
Oumaima Semmar , Wissal El Habti , Donalson Wilson , Abdellah Azmani
Feature selection techniques have become increasingly important in addressing the challenges of high dimensionality in machine learning and other artificial intelligence domains. In this study, we present a comprehensive bibliometric analysis of research on feature selection techniques over the past decade, focusing on mapping the intellectual structure, identifying emerging trends, and highlighting productive collaborations in the field. Using merged data from Scopus and Web of Science databases, we collected and analyzed 2,079 relevant documents published between 2014 and 2024, applying citation analysis, co-authorship networks, and keyword co-occurrence mapping. Our findings reveal that feature selection methodologies, including supervised, unsupervised, and hybrid approaches across filter, wrapper, and embedded techniques, have been widely applied across various domains. The authors who have most contributed to the development of these methods are primarily affiliated with institutions in China, India, and the USA. The insights provided by this analysis offer researchers and practitioners a valuable foundation for guiding future research directions in feature selection.
Volume: 14
Issue: 6
Page: 4403-4414
Publish at: 2025-12-01

Optimizing brain tumor MRI classification using advanced preprocessing techniques and ensemble learning methods

10.11591/ijai.v14.i6.pp5106-5119
Akim Manaor Hara Pardede , Ahmad Zamsuri , Indi Nuroini , Putrama Alkhairi
Brain tumor classification is a critical task in medical imaging that directly impacts the accuracy of diagnosis and treatment planning. However, the complexity and variability of magnetic resonance imaging (MRI) images pose significant challenges, often resulting in reduced model reliability and generalization. This study addresses these limitations by proposing a novel ResNet+Bagging model, leveraging the strengths of residual networks and ensemble learning to enhance classification performance. Using publicly available brain tumor MRI datasets, including images labeled as benign, malignant, and normal, the study employs advanced preprocessing techniques such as normalization, data augmentation, and noise reduction to ensure high-quality inputs. The proposed model demonstrated significant improvements, achieving the highest testing accuracy of 72%, outperforming other tested models such as LeNet, standard ResNet, GoogleNet, and VGGNet. Precision (0.6010), recall (0.6000), and F1-score (0.5990) metrics further highlight its superior balance in detecting positive and negative classes. The novelty of this research lies in the application of Bagging to ResNet, which effectively mitigates overfitting and enhances predictive stability in complex medical datasets. These findings underscore the proposed model's potential as a robust solution for brain tumor classification, contributing to more accurate and reliable diagnostics.
Volume: 14
Issue: 6
Page: 5106-5119
Publish at: 2025-12-01
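The Bagging half of the proposed ResNet+Bagging model aggregates the predictions of independently trained classifiers by majority vote. A minimal sketch of that aggregation step, with trivially simple stand-in classifiers rather than trained ResNets:

```python
from collections import Counter

def bagging_predict(models, x):
    """Aggregate predictions of independently trained models by majority vote.

    In the paper each model would be a ResNet trained on a bootstrap
    resample of the MRI data; here `models` is any list of callables
    mapping an input to a class label.
    """
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical classifiers disagreeing on one MRI scan.
models = [lambda x: "benign", lambda x: "malignant", lambda x: "malignant"]
label = bagging_predict(models, x=None)
```

Because each base model sees a different bootstrap resample, their errors are partly decorrelated, and the vote smooths out individual overfitting, which is the mechanism behind the stability gain the abstract reports.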

Optimizing sparse ternary compression with thresholds for communication-efficient federated learning

10.11591/ijai.v14.i6.pp4902-4912
Nithyanianjan Murthy Chittaiah , Manjula Sunkadakatte Haladappa
Federated learning (FL) enables decentralized model training while preserving client data privacy, yet suffers from significant communication overhead due to frequent parameter exchanges. This study investigates how varying sparse ternary compression (STC) thresholds impact communication efficiency and model accuracy across the CIFAR-10 and MedMNIST datasets. Experiments tested thresholds ranging from 1.0 to 1.9 and batch sizes of 10, 15, and 20. Results demonstrated that selecting thresholds between 1.2 and 1.5 reduced total communication costs by approximately 10–15%, while maintaining acceptable accuracy levels. These findings suggest that careful threshold tuning can achieve substantial communication savings with minimal compromise in model performance, offering practical guidance for improving the efficiency and scalability of FL systems.
Volume: 14
Issue: 6
Page: 4902-4912
Publish at: 2025-12-01
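One plausible reading of the thresholded STC scheme is: drop update entries whose magnitude falls below the threshold times the mean absolute value, then ternarize the survivors to a common magnitude so only signs and positions need transmitting. The sketch below follows that reading; the paper's exact threshold semantics may differ.

```python
def sparse_ternary_compress(update, threshold):
    """Threshold-based sparse ternary compression of a model update.

    Values below `threshold` times the mean absolute value are
    dropped; survivors are ternarized to a shared magnitude mu,
    so the update can be sent as {-mu, 0, +mu} plus positions.
    """
    mean_abs = sum(abs(v) for v in update) / len(update)
    cut = threshold * mean_abs
    kept = [v for v in update if abs(v) >= cut]
    if not kept:
        return [0.0] * len(update)
    mu = sum(abs(v) for v in kept) / len(kept)
    return [mu if v >= cut else -mu if v <= -cut else 0.0 for v in update]

# A hypothetical 6-element gradient update, compressed at threshold 1.2
# (within the 1.2-1.5 range the study found to balance cost and accuracy).
update = [0.05, -0.9, 0.4, -0.02, 1.3, 0.1]
compressed = sparse_ternary_compress(update, threshold=1.2)
```

Raising the threshold zeroes out more entries, cutting communication cost but discarding more gradient information, which is exactly the trade-off the experiments sweep over.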