Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

29,939 Article Results

Integration of strain gauge sensor in biceps muscle movement detection using LabView

10.11591/ijece.v15i4.pp3696-3706
Desy Kristyawati , Busono Soerowirdjo , Erma Triawati Christina , Robby Kurniawan Harahap
Sports-related muscle injuries can have a serious impact on athletes. Such injuries can be prevented by detecting incorrect movements with a strain gauge sensor attached to the muscle, which in this study is the biceps. The strain gauge detects muscle movement, and its output is converted into voltage and current signals that are processed with machine learning to extract data patterns, so that movements can be classified as correct or incorrect. The strain gauge movement pattern is simulated in LabView using a gauge resistance of 120 Ω, a Quarter Bridge 1 strain configuration, a gauge factor of 2.05, an excitation voltage of 5 V applied to the Wheatstone bridge, and an initial voltage of -180.08 µV. The resulting strain gauge output pattern is exported to Excel, and this data can then be converted into voltage and current.
Volume: 15
Issue: 4
Page: 3696-3706
Publish at: 2025-08-01
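The quarter-bridge parameters listed in the abstract (gauge factor 2.05, 5 V excitation, -180.08 µV initial offset) can be related to strain with the standard first-order bridge approximation. The sketch below uses that textbook linearization, not a relation taken from the paper itself:

```python
# Quarter-bridge strain gauge model (small-strain linearization).
# Numeric parameters come from the abstract; the bridge relation
# Vout/Vex ≈ GF·ε/4 is the standard first-order approximation.
GAUGE_FACTOR = 2.05     # dimensionless gauge factor
V_EXCITATION = 5.0      # excitation voltage across the Wheatstone bridge, V
V_OFFSET = -180.08e-6   # initial (zero-strain) bridge output, V

def bridge_output(strain: float) -> float:
    """Approximate bridge output voltage (V) for a given strain (m/m)."""
    return V_EXCITATION * GAUGE_FACTOR * strain / 4.0 + V_OFFSET

def strain_from_output(v_out: float) -> float:
    """Invert the linearized relation to recover strain from voltage."""
    return 4.0 * (v_out - V_OFFSET) / (V_EXCITATION * GAUGE_FACTOR)

# 1000 microstrain should round-trip through model and inverse.
eps = 1000e-6
v = bridge_output(eps)
print(v, strain_from_output(v))
```

For larger strains the exact quarter-bridge relation is nonlinear, so this linear inverse is only a first-pass estimate.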

Sensitivity factors based computationally efficient approach for evaluation and enhancement of available transfer capability

10.11591/ijece.v15i4.pp3556-3565
Manjula S. Sureban , Shekhappa G. Ankaliki
Available transfer capability (ATC) is an indication of the capability of the transmission system to efficiently increase power transmission for further commercial trading between two areas or two points. ATC plays an important role in operating power systems economically, reliably, and securely. As deregulation in the power system can cause overload in the transmission system, ATC evaluation and enhancement are required for secure and reliable operation. The advancements in power generation techniques and the shift from centralized generation to distributed generation (DG), with more emphasis on renewable sources, have resulted in various approaches to enhance ATC. In this work, a computationally efficient sensitivity-based methodology for evaluating and improving ATC in the presence of renewable generation is proposed. The developed approach is implemented on the IEEE 30-bus system, and the outcome is compared with existing methods in the literature.
Volume: 15
Issue: 4
Page: 3556-3565
Publish at: 2025-08-01
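A sensitivity-factor ATC screening of the kind the abstract describes can be sketched with power transfer distribution factors (PTDFs): the transfer margin of each line is its remaining thermal headroom divided by its sensitivity to the transaction. The limits, flows, and PTDFs below are illustrative placeholders, not values from the paper:

```python
def available_transfer_capability(limits, flows, ptdf):
    """Linear (sensitivity-based) ATC estimate for one transaction.

    ptdf[l] gives the MW flow change on line l per MW of added transfer
    between the chosen source and sink buses. ATC is the largest extra
    transfer that keeps every line within its thermal limit.
    """
    margins = []
    for lim, flow, s in zip(limits, flows, ptdf):
        if s > 1e-9:        # added transfer loads the line forward
            margins.append((lim - flow) / s)
        elif s < -1e-9:     # added transfer loads the line in reverse
            margins.append((-lim - flow) / s)
        # lines with negligible sensitivity do not constrain the transfer
    return min(margins)

# Three illustrative lines: the first one binds at +80 MW of transfer.
print(available_transfer_capability(
    limits=[100.0, 80.0, 60.0],
    flows=[60.0, 20.0, -10.0],
    ptdf=[0.5, 0.3, -0.1]))
```

This linear screen is what makes sensitivity approaches computationally cheap compared with repeated power-flow solutions.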

Indonesian speech emotion recognition: feature extraction and neural network approaches

10.11591/ijece.v15i4.pp3769-3778
Izza Nur Afifah , Tri Budi Santoso , Titon Dutono
This study explored the challenges of emotion recognition in Indonesian speech using deep learning techniques, addressing the complex nuances of emotional expression in spoken language that posed significant difficulties for automatic recognition systems. The research focused on the application of feature extraction methods and the implementation of convolutional neural networks (CNN) and a hybrid convolutional neural networks-long short-term memory (CNN-LSTM) model to identify emotional states from speech data. By analyzing key features of speech signals, including mel frequency cepstral coefficient (MFCC), zero crossing rate (ZCR), root mean square energy (RMSE), pitch, and spectral centroid, the study evaluated the models’ ability to capture both spatial and temporal patterns in the data. Testing was conducted using an Indonesian dataset comprising 200 samples. The CNN model, utilizing four features (MFCC, ZCR, RMSE, and pitch), and the CNN-LSTM model, which used three features (MFCC, ZCR, and RMSE), both achieved an emotion classification accuracy of approximately 88%. The result showed that the CNN-LSTM model achieved comparable performance with a simpler feature set compared to the CNN model. This highlighted the significance of choosing the appropriate techniques in feature extraction and classification to enhance the accuracy of identifying emotions from speech data while also managing computational complexity.
Volume: 15
Issue: 4
Page: 3769-3778
Publish at: 2025-08-01
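Two of the frame-level features the study lists, zero crossing rate (ZCR) and root mean square energy (RMSE), have compact definitions and can be sketched in pure Python (MFCC, pitch, and spectral centroid need a full DSP pipeline and are omitted here):

```python
import math

def zero_crossing_rate(frame):
    """Fraction of consecutive-sample sign changes in one frame."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def rms_energy(frame):
    """Root-mean-square energy of one frame."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

# A pure tone crosses zero regularly; a louder frame has higher RMSE.
tone = [math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]
print(zero_crossing_rate(tone), rms_energy(tone))
```

In practice these are computed per short window (e.g. 20-40 ms) and stacked into the feature sequence fed to the CNN or CNN-LSTM.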

Shearlet-based texture analysis and deep learning for osteoporosis classification in lumbar vertebrae

10.11591/ijece.v15i4.pp4318-4331
Poorvitha Hullukere Ramakrishna , Chandrakala Beturpalya Muddaraju , Bhanushree Kothathi Jayaramu , Shobha Narasimhamurthy
Osteoporosis is a bone disorder characterized by reduced bone density and increased fracture risk, and it poses a significant public-health challenge, notably among the elderly. This research proposes an innovative method that combines Shearlet-transform (ST) spectral analysis with a deep learning neural network (DLNN) and a convolutional neural network (CNN) for osteoporosis classification in lumbar vertebrae (LV) L1-L4 of spine X-ray images. The ST enables precise extraction of texture features by capturing significant information about trabecular bone micro-architecture and the bone mineral density (BMD) variations that appear in osteoporotic regions. These extracted features serve as input to a DLNN for automated classification of osteoporotic and non-osteoporotic vertebrae. In parallel, the ST image is used directly as input to the CNN, without explicit feature extraction, to classify the images. The experimental results highlight the framework's effectiveness, achieving 96% accuracy in osteoporosis image classification with the CNN. Early and precise detection of osteoporosis, particularly in the lumbar vertebrae, is vital for effective treatment and fracture prevention. This study emphasizes the potential and effectiveness of integrating image spectral analysis with neural networks to improve diagnostic accuracy and clinical decision-making in osteoporosis management.
Volume: 15
Issue: 4
Page: 4318-4331
Publish at: 2025-08-01

Enhancing ultrasound image quality using deep structure of residual network

10.11591/ijece.v15i4.pp3779-3794
Ade Iriani Sapitri , Siti Nurmaini , Muhammad Naufal Rachmatullah , Annisa Darmawahyuni , Firdaus Firdaus , Anggun Islami , Bambang Tutuko , Akhiar Wista Arum
Ultrasonography, a medical imaging technique, is often affected by various types of noise and low brightness, which can result in low image quality. These drawbacks can significantly impede accurate interpretation and hinder effective medical diagnoses. Therefore, improving image quality is an essential aspect of the field of ultrasound systems. This study aims to enhance the quality of ultrasound images using deep learning (DL). The experiment is conducted using a custom dataset consisting of 2,175 infant heart ultrasound images collected from Indonesian hospitals, and the model is subsequently generalized using other datasets. We propose enhanced deep residual network combined convolutional neural networks (EDR-CNNs) to improve the image quality. After the enhancement process, our model achieved peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM) scores of 38.35 and 0.92, respectively, outperforming other methods. Benchmarking on other medical ultrasound datasets indicates that our proposed model generalizes well, as evidenced by higher PSNR, higher SSIM, lower mean square error (MSE), and an improved contrast improvement index (CII). In conclusion, this study encapsulates the forthcoming trends in advancing low-illumination image enhancement, along with exploring the prevailing challenges and potential directions for further research.
Volume: 15
Issue: 4
Page: 3779-3794
Publish at: 2025-08-01
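PSNR, the headline metric above, is derived directly from the mean squared error between the enhanced and reference images; a minimal sketch over flattened pixel lists:

```python
import math

def mse(a, b):
    """Mean squared error between two flattened, equal-size images."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / err)

# An image off by one at every 8-bit pixel gives ~48.13 dB.
noisy = [10.0, 20.0, 30.0]
clean = [11.0, 21.0, 31.0]
print(psnr(noisy, clean))
```

SSIM is more involved (local luminance, contrast, and structure terms over sliding windows), which is why libraries are normally used for it.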

Cyber-fraud detection methodology by using machine learning algorithms

10.11591/ijece.v15i4.pp3949-3956
Ahmed Abu-Khadrah , Sahar Al-Washmi , Ali Mohd Ali , Muath Jarrah
Cybercrime covers a wide array of illegal online activities such as hacking and identity theft, while cyber fraud specifically involves deceptive practices like phishing and fraudulent financial transactions. The rise in technology and digital communication has exacerbated cyber fraud. Although prevention technologies are advancing, fraudsters continually adapt, making effective detection methods essential for identifying and addressing fraud when prevention fails. The proposed model aims to reduce online fraud through new detection algorithms. It utilizes statistical and machine learning techniques, including logistic regression, random forest, and naïve Bayes, to identify non-transactional fraud behaviors. By analyzing a meticulously collected and fine-tuned dataset, the study enhances detection capabilities beyond traditional transaction-focused approaches. The algorithms monitor user interactions and device characteristics to create profiles of normal behaviors and detect deviations indicative of fraud. The evaluation of the proposed model showed 100% accuracy. A unified model incorporating all decision-making processes was used, leading to a voting phase and accuracy assessment. This approach consolidates multiple algorithms into a single framework, proving highly effective for comprehensive fraud detection. The research demonstrates the value of integrating machine learning techniques with real-world data to advance fraud detection and emphasizes the importance of continual adaptation to address evolving cyber threats.
Volume: 15
Issue: 4
Page: 3949-3956
Publish at: 2025-08-01
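The voting phase the abstract mentions can be sketched as a hard majority vote over the three classifiers' labels. The classifier names and predictions below are hypothetical placeholders; training the underlying models is omitted:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels for one sample by majority vote.

    `predictions` maps classifier name -> predicted label.
    Ties resolve to the label that reached the top count first.
    """
    counts = Counter(predictions.values())
    return counts.most_common(1)[0][0]

# Hypothetical outputs of the three classifiers on one user session.
votes = {"logistic_regression": "fraud",
         "random_forest": "fraud",
         "naive_bayes": "legit"}
print(majority_vote(votes))  # fraud
```

Soft voting (averaging predicted probabilities) is the usual alternative when the base classifiers expose calibrated scores.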

Enhancing mobile agent protection using a hybrid security framework combining pretty good protocol and code obfuscation

10.11591/ijece.v15i4.pp3913-3927
Jamal Zraqou , Wesam Alkhadour , Mahmoud Baklizi , Khalil Omar , Hussam Fakhouri
The security of mobile agents, which are autonomous software entities capable of migrating between computers to execute tasks, remains a critical concern in modern information technology. Cybersecurity has been a central component of this technological revolution and continues to be one of the most essential requirements for any software or platform. Despite advances in security measures, protecting mobile agents, particularly those carrying sensitive data, while they transmit over networks remains challenging. This research proposes a novel hybrid security technique, abbreviated as pretty good privacy and code obfuscation framework (PGF), which combines pretty good privacy (PGP) with code obfuscation. PGF is designed specifically to protect mobile agents, focusing on systems like Aglets. The technique aims to safeguard the integrity and confidentiality of the agent's data during transmission. Based on the mobile agent Aglets and the PGF technique, the proposed model enhances security by introducing additional protection layers during agent creation and transmission using PGP and code obfuscation. The comparative analysis demonstrated that PGF outperformed other algorithms in terms of time efficiency and security, effectively handling large data sizes through its hybrid cryptographic approach, which combines asymmetric and symmetric encryption. The model was implemented using the Aglets framework in Java development kit (JDK) and NetBeans and showed high reliability and practicality. However, its current design is tailored to Aglets, and future work could focus on adapting the model to other platforms and optimizing its resource efficiency for constrained environments.
Volume: 15
Issue: 4
Page: 3913-3927
Publish at: 2025-08-01

Privacy and confidentiality in internet of things: a literature review

10.11591/ijece.v15i4.pp4249-4258
Hiba Kandil , Hafssa Benaboud
The internet of things (IoT) is a scalable network of interconnected smart devices that aims to improve quality of life, business growth, and efficiency across multiple sectors. Since the IoT is an expanding network, a large amount of data is generated, collected, and exchanged. However, much of this data is personal data that contains private or sensitive information, which makes it a target for several cyber threats due to poor encryption, weak authentication mechanisms, and insecure communications. Therefore, ensuring the privacy and confidentiality of sensitive information remains a critical challenge. This paper presents a comprehensive literature review focusing on privacy and confidentiality issues within the IoT ecosystem. It categorizes existing research into privacy-preserving techniques, authentication and trust mechanisms, and machine learning-based solutions. It begins by detailing the review methodology employed to gather and analyze relevant research. The review then explores recent work on privacy concerns and on authentication and trust mechanisms, emphasizing the various approaches and solutions developed to address these challenges. Finally, the paper delves into machine learning-based solutions that offer innovative methods for enhancing privacy and confidentiality.
Volume: 15
Issue: 4
Page: 4249-4258
Publish at: 2025-08-01

Chaotic red-tailed hawk algorithm to optimize parameter power system stabilizer

10.11591/ijece.v15i4.pp3536-3545
Widi Aribowo , Laith Abualigah , Diego Oliva , Abeer Aljohani , Aliyu Sabo
This article introduces a recently developed adaptation of the red-tailed hawk (RTH) algorithm. The proposed approach is a modified version of the original RTH algorithm, incorporating chaotic elements to enhance its performance. The RTH algorithm emulates the hunting behavior of the red-tailed hawk. The article demonstrates tuning of the power system stabilizer with the proposed technique in a case study involving a single-machine system. The method was validated by benchmarking against known test functions and by evaluating its transient response on the single-machine system, with the original RTH algorithm serving as the baseline for comparison. The simulation results demonstrate that the proposed technique exhibits promising performance.
Volume: 15
Issue: 4
Page: 3536-3545
Publish at: 2025-08-01

Synchronized transform-aggregate model for big data analytics towards in distributed cloud ecosystem

10.11591/ijece.v15i4.pp4259-4267
Rajeshwari Dembala , Kavya Ananthapadmanabha , Shashank Dhananjaya
The massive data generated by technologically advanced cloud-hosted and internet of things (IoT) applications calls for effective management that balances the demands of both service providers and users. Conventional distributed frameworks for such big data management face various ongoing challenges. Hence, this manuscript presents a novel analytical framework for big data that reduces the cost and time required to evaluate distributed big data from multiple data points in the cloud in an optimal way. The core idea of this framework is to achieve synchronized optimality of cost and time for executing big data analytics tasks while complying with task-deadline constraints. The proposed framework fine-tunes the placement of task operations using a transform-and-aggregate strategy, exhibiting 37% lower delay, 41% more efficient task completion, and 28% lower execution time in contrast to existing frameworks.
Volume: 15
Issue: 4
Page: 4259-4267
Publish at: 2025-08-01

Maximum power point tracking technique based on the grey wolf optimization-perturb and observe hybrid algorithm for photovoltaic systems under partial shading conditions

10.11591/ijece.v15i4.pp3566-3582
Leghrib Bilal , Bensiali Nadia , Adjabi Mohamed
Photovoltaic panels represent the most abundant source of renewable energy and the cleanest form of electrical energy derived from the sun. However, partial shading can lead to the appearance of multiple local maximum power points (LMPP) in the power-voltage (P-V) characteristics of solar panels. This situation traps classical power maximization algorithms, such as perturb and observe (P&O) or incremental conductance, as these algorithms tend to deviate from the global maximum power point (GMPP), resulting in reduced electrical energy production. To overcome this major challenge in the electrical industry, we propose in this study a hybrid grey wolf optimization-perturb and observe (GWO-P&O) algorithm, designed to converge to the global maximum power without being trapped in local peaks. To demonstrate its effectiveness, the proposed algorithm was simulated in MATLAB/Simulink under various complex and uniform partial shading conditions. Furthermore, a comparative study was conducted with the P&O and GWO algorithms to evaluate precision, tracking, response time, and efficiency. The simulation results revealed superior performance for the proposed technique, particularly in terms of consistent tracking of the global peak, with efficiencies of 99.95% and 99.98% in the best cases, faster response times (ranging from 0.04 to 0.07 s), and minimal, almost negligible oscillations around the GMPP.
Volume: 15
Issue: 4
Page: 3566-3582
Publish at: 2025-08-01
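The P&O half of the hybrid is a simple hill-climbing rule: keep perturbing the operating voltage in the direction that increased power, reverse otherwise. A minimal sketch of one update step (the GWO phase, which seeds the search near the global peak, is not shown):

```python
def perturb_and_observe(power, prev_power, voltage, prev_voltage,
                        step=0.01):
    """Return the next reference-voltage perturbation (classic P&O).

    If the last perturbation increased power, keep moving in the same
    direction; otherwise reverse. This is the local hill-climbing stage
    that a GWO phase would initialize close to the global peak.
    """
    dP = power - prev_power
    dV = voltage - prev_voltage
    if dV == 0:
        return step  # arbitrary kick when voltage did not change
    # dP and dV share a sign on the left side of the power peak.
    return step if (dP > 0) == (dV > 0) else -step
```

Called every control period, this oscillates around whichever peak it starts near, which is exactly why a global search stage is needed under partial shading.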

Gradient boosting algorithm for predicting student success

10.11591/ijece.v15i4.pp4181-4191
Brahim Jabir , Soukaina Merzouk , Radoine Hamzaoui , Noureddine Falih
The idea of using machine learning techniques to predict student performance on an online learning platform such as Moodle has attracted considerable interest. Machine learning algorithms are capable of correctly interpreting activity data and thus predicting student performance. Algorithms such as gradient boosting machines (GBM) and eXtreme gradient boosting (XGBoost) are highly recommended by most researchers due to their high accuracy and fast training. This research analyzes the effectiveness of the XGBoost algorithm on the Moodle platform for predicting student performance from students' online activities. The proposed algorithm was applied to predict academic performance based on data received from Moodle. The results demonstrate a strong correlation between several activity measures, such as the number of hours spent online, and the achievement of academic goals, with a remarkable prediction rate of 0.949.
Volume: 15
Issue: 4
Page: 4181-4191
Publish at: 2025-08-01
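The core idea behind GBM and XGBoost is to fit each new weak learner to the residuals of the current ensemble. A toy squared-loss version with one feature and depth-1 stumps, where the hours/scores data is a hypothetical illustration (real implementations add regularization, multiple features, and deeper trees):

```python
def fit_stump(x, residuals):
    """Best single-threshold stump minimizing squared error on residuals."""
    best = None
    for t in sorted(set(x))[:-1]:        # largest value splits nothing off
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]                      # (threshold, left_value, right_value)

def gradient_boost(x, y, n_rounds=20, lr=0.5):
    """Squared-loss gradient boosting: each stump fits current residuals."""
    pred = [sum(y) / len(y)] * len(y)    # initial model: the mean
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        t, lv, rv = fit_stump(x, resid)
        pred = [p + lr * (lv if xi <= t else rv)
                for xi, p in zip(x, pred)]
    return pred

# Hypothetical feature: hours online; target: final score.
hours = [1.0, 2.0, 6.0, 8.0]
scores = [40.0, 45.0, 80.0, 85.0]
print(gradient_boost(hours, scores))
```

Each round shrinks the residuals by a factor controlled by the learning rate, which is the shrinkage/number-of-rounds trade-off XGBoost exposes as hyperparameters.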

Non-small cell lung cancer active compounds discovery holding on protein expression using machine learning models

10.11591/ijai.v14.i4.pp2815-2825
Hamza Hanafi , M’hamed Aït Kbir , Badr Dine Rossi Hassani
Computational methods have transformed the field of drug discovery and have significantly helped in the development of new treatments. Nowadays, researchers are exploring a wide range of opportunities to identify new compounds using machine learning. We conducted a comparative study of multiple models capable of predicting compounds that target non-small cell lung cancer, focusing on integrating protein expressions to identify potential compounds with high efficacy against lung cancer cells. A dataset was constructed from the trials available in the ChEMBL database. Molecular descriptors were then calculated to extract structure-activity relationships from the selected compounds and fed into several machine learning models. We compared the performance of various algorithms; the multilayer perceptron model exhibited the highest F1 score, achieving an outstanding value of 0.861. Moreover, we present a list of 10 drugs predicted as active against lung cancer, all of which are supported by relevant scientific evidence in the medical literature. Our study showcases the potential of combining protein expression analysis and machine learning techniques to identify novel drugs. Our analytical approach contributes to the drug discovery pipeline and opens new opportunities to explore and identify new targeted therapies.
Volume: 14
Issue: 4
Page: 2815-2825
Publish at: 2025-08-01
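For reference, the F1 score reported above is the harmonic mean of precision and recall over the activity predictions. A minimal sketch with hypothetical confusion-matrix counts:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = 2PR/(P+R), the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 8 true positives, 2 false positives, 2 false negatives.
print(f1_score(8, 2, 2))  # 0.8
```

F1 is preferred over plain accuracy here because active compounds are typically a small minority of screening datasets.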

Innovative technologies and educational quality: insights from Mongolia and Kazakhstan

10.11591/ijere.v14i4.32777
Kanziya Kabassova , Khosbayar Nyamsuren , Antony D. Miller , Iurii Piven , Yuriy Kravtsov
The study explores the digital potential and prospects of advanced innovative technologies in higher education institutions, compared to global practices. The research is situated within the theoretical framework of socio-constructivist learning theory, emphasizing the role of digital technologies in facilitating collaborative learning environments. Findings indicate that digitalization and inclusive educational practices are evolving into central elements of educational strategies. This study highlighted specific case studies, such as the implementation of virtual reality and e-learning platforms at Otgontenger University, demonstrating their significant impact on enhancing student engagement and learning outcomes.
Volume: 14
Issue: 4
Page: 3345-3354
Publish at: 2025-08-01

High-speed field-programmable gate array implementation for mmWave orthogonal frequency-division multiplexing transmitters: design and evaluation

10.11591/ijece.v15i4.pp3813-3823
Kidsanapong Puntsri , Bussakorn Bunsri , Puripong Suthisopapan
This paper presents a field-programmable gate array (FPGA)-based implementation of an orthogonal frequency-division multiplexing (OFDM) transmitter signal processing chain optimized for high-speed millimeter wave (mmWave) communication systems. The design prioritizes real-time processing efficiency and flexibility. A high-throughput 2048-point inverse fast Fourier transform (IFFT) module, realized using a Radix-2 algorithm, forms the core of the design and showcases efficient hardware resource utilization. The implementation further includes cyclic prefix (CP) insertion and configurable support for various quadrature amplitude modulation (QAM) orders and pilot arrangements. The design is implemented in VHSIC Hardware Description Language (VHDL) using Vivado 2020 and evaluated on the Zynq UltraScale+ RFSoC ZCU111 evaluation kit. The processing pipeline employs eight parallel lanes for concurrent data computation. Experimental results demonstrate a mean squared error (MSE) of only 0.00013 between the FPGA-generated waveform and its MATLAB-simulated counterpart, and post-implementation analysis confirms efficient usage of FPGA resources. These findings validate the efficacy and real-time capability of the proposed FPGA-based OFDM transmitter, which leverages parallelism and a high-speed architecture to efficiently process massive data streams, making it suitable for a wide range of mmWave OFDM applications. In contrast to recent works that focus on lower-order IFFT modules, this paper employs a high-throughput 2048-point IFFT computation suited to high-speed mmWave applications.
Volume: 15
Issue: 4
Page: 3813-3823
Publish at: 2025-08-01
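The radix-2 decimation-in-time structure behind such an IFFT core can be illustrated in software. This is a recursive reference model shown for tiny sizes; the paper's hardware version is an iterative 2048-point pipeline spread across eight parallel lanes:

```python
import cmath

def _ifft_rec(x):
    """Radix-2 decimation-in-time recursion (no 1/N scaling)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = _ifft_rec(x[0::2])
    odd = _ifft_rec(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # The inverse transform uses the +j twiddle factor
        # (conjugate of the forward-FFT twiddle).
        tw = cmath.exp(2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def ifft(x):
    """Inverse FFT of a power-of-two-length sequence, scaled by 1/N."""
    assert len(x) & (len(x) - 1) == 0, "length must be a power of two"
    return [v / len(x) for v in _ifft_rec(x)]

# A flat spectrum maps to an impulse at sample 0.
print(ifft([1, 1, 1, 1]))
```

Hardware versions replace the recursion with log2(N) butterfly stages and fixed-point twiddle ROMs, which is where the resource-utilization trade-offs discussed above arise.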

Discover Our Library

Embark on a journey through our expansive collection of articles and let curiosity lead your path to innovation.
