Articles

Access the latest knowledge in applied science, electrical engineering, computer science and information technology, education, and health.

29,939 Article Results

Blockchain technology for optimizing security and privacy in distributed systems

10.11591/csit.v6i2.p210-220
Wisnu Uriawan , Adrian Putra Pratama , Shafwan Mursyid
Blockchain technology is increasingly recognized as an effective solution for addressing security and privacy challenges in distributed systems. Blockchain ensures information security by validating data and defending against cyber threats, while guaranteeing data integrity through transaction validation and reliable storage. The research involves a literature study, problem identification, analysis of blockchain security and privacy, model development, testing, and analysis of trial results. Furthermore, blockchain enables user anonymity and fosters transparency by utilizing a distributed network, reducing the risk of fraudulent activities. Its decentralized nature ensures high reliability and accessibility, even in the event of node failures. Blockchain enhances security and privacy by offering features such as data immutability, provenance, and reduced reliance on trust. It decentralizes data storage, making tampering or deletion extremely challenging, and ensures the invalidation of subsequent blocks upon any changes. Blockchain finds applications in various domains, including supply chains, finance, healthcare, and government, enabling enhanced security by tracking data origin and ownership. Despite scalability and security challenges, the potential benefits of reduced costs, increased efficiency, and improved transparency position blockchain as a promising technology for the future. In summary, blockchain technology provides secure transaction recording and data storage, thus enhancing security, privacy, and the integrity of sensitive information in distributed systems.
Volume: 6
Issue: 2
Page: 210-220
Publish at: 2025-07-01

Optimizing EfficientNet for imbalanced medical image classification using grey wolf optimization

10.11591/csit.v6i2.p112-121
Khusnul Khotimah , Sugiyarto Surono , Aris Thobirin
The advancement of deep learning in computer vision has resulted in substantial progress, particularly in image classification tasks. However, challenges arise when models are applied to small and unbalanced datasets, such as X-ray data in medical applications. This study aims to improve the classification performance of fracture X-ray images using the EfficientNet architecture optimized with grey wolf optimization (GWO). EfficientNet was chosen for its efficiency in handling small datasets, while GWO was applied to optimize hyperparameters, including learning rate, weight decay, and dropout, to improve model accuracy. Random cropping, rotation, flipping, color jittering, and random erasing were used to expand the diversity of the dataset, and class weighting was applied to overcome class imbalance. The evaluation uses accuracy, precision, recall, and F1-score metrics. The combination of EfficientNetB0 and GWO resulted in an average 4.5% improvement in model performance over baseline methods. This approach benefits the development of deep learning methods for medical image classification, especially in dealing with small and imbalanced datasets.
Volume: 6
Issue: 2
Page: 112-121
Publish at: 2025-07-01
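For readers unfamiliar with grey wolf optimization, the search it performs can be illustrated with a minimal, generic sketch. The objective below is a toy quadratic standing in for validation loss, and the bounds for learning rate, weight decay, and dropout are invented for illustration; none of this is the study's actual code or search space.

```python
import numpy as np

def grey_wolf_optimize(objective, bounds, n_wolves=10, n_iters=50, seed=0):
    """Minimal grey wolf optimizer: wolves move toward the three best
    solutions (alpha, beta, delta), shifting from exploration to
    exploitation as the coefficient a decays from 2 to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    fitness = np.array([objective(w) for w in wolves])
    for t in range(n_iters):
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]  # three best wolves (copies)
        a = 2 - 2 * t / n_iters                 # linearly decays 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a              # step-size coefficient
                C = 2 * r2                      # leader-weighting coefficient
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3, lo, hi)  # average the three pulls
            fitness[i] = objective(wolves[i])
    best = np.argmin(fitness)
    return wolves[best], fitness[best]

# Toy objective standing in for validation loss over hypothetical
# (learning_rate, weight_decay, dropout) values; minimum at `target`.
target = np.array([0.01, 1e-4, 0.3])
bounds = np.array([[1e-4, 0.1], [0.0, 1e-2], [0.0, 0.5]])
best, loss = grey_wolf_optimize(lambda x: np.sum((x - target) ** 2), bounds)
```

In a real tuning loop, the lambda would be replaced by a function that trains the network with the candidate hyperparameters and returns the validation loss.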

An ensemble learning approach for diabetes prediction using the stacking method

10.11591/csit.v6i2.p102-111
Elliot Kojo Attipoe , Alimatu Saadia Yussiff , Maame Gyamfua Asante-Mensah , Emmanuel Dortey Tetteh , Regina Esi Turkson
Diabetes is a severe illness characterized by high blood glucose levels. Machine learning algorithms, with their ability to detect and predict diabetes in its early stages, offer a promising avenue for research. This study sought to enhance the accuracy of predicting diabetes mellitus by employing the stacking method. The stacking method was chosen because it integrates predictions from various base models, resulting in a more precise final prediction, and it enhances accuracy and generalization by utilizing the varied strengths of multiple base models. The Pima Indians diabetes dataset, a widely used benchmark dataset, was utilized in the study. The machine learning models used in the study were logistic regression (LR), naïve Bayes (NB), extreme gradient boost (XGBoost), K-nearest neighbor (KNN), decision tree (DT), and support vector machine (SVM). LR, KNN, and SVM were the best-performing models based on accuracy, F1-score, precision, and area under the curve (AUC) score, and were consequently used as the base models for the stacking method. The LR model was used as the meta-model. The proposed ensemble approach using the stacking method demonstrated a high accuracy of 82.4%, better than the individual models and other ensemble techniques such as bagging or boosting. This study advances diabetes prediction by developing a more accurate early-stage detection model, thereby improving clinical management of the disease.
Volume: 6
Issue: 2
Page: 102-111
Publish at: 2025-07-01
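The stacking setup the abstract describes (LR, KNN, and SVM base models with an LR meta-model) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, since the Pima Indians dataset is not bundled here; the pipeline details are assumptions, not the authors' code.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in with the Pima dataset's shape (768 rows, 8 features).
X, y = make_classification(n_samples=768, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

base_learners = [
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("svm", make_pipeline(StandardScaler(), SVC())),
]
# The meta-model (final_estimator) learns from the base models'
# cross-validated predictions rather than from the raw features.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

Training the meta-model on cross-validated base predictions (the scikit-learn default) is what keeps the stack from simply memorizing its base learners' training-set outputs.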

Effects of hyperparameter tuning on random forest regressor in the beef quality prediction model

10.11591/csit.v6i2.p159-166
Ridwan Raafi'udin , Yohanes Aris Purwanto , Imas Sukaesih Sitanggang , Dewi Apri Astuti
Prediction models for beef quality are necessary because beef production and consumption are substantial and increasing yearly. This study aims to create a prediction model for beef freshness quality using the random forest regressor (RFR) algorithm and to improve the accuracy of the predictions using hyperparameter tuning. Near-infrared spectroscopy (NIRS) is an easy, cheap, and fast technique for predicting beef quality. This study used six meat quality parameters as prediction target variables. The coefficient of determination (R²) was used to evaluate the prediction results and compare the performance of the RFR with default parameters versus the RFR with hyperparameter tuning (RandomizedSearchCV). Using default parameters, the R² values for color (L*), drip loss (%), pH, storage time (hours), total plate colony (TPC in cfu/g), and water moisture (%) were 0.789, 0.839, 0.734, 0.909, 0.845, and 0.544, respectively. After hyperparameter tuning, these R² scores increased to 0.885, 0.931, 0.843, 0.957, 0.903, and 0.739, indicating an overall improvement in the model's performance. The average performance increase across all beef quality parameters is 0.0997, or 14% higher than the default parameters.
Volume: 6
Issue: 2
Page: 159-166
Publish at: 2025-07-01
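The tuning step can be sketched with scikit-learn's RandomizedSearchCV (the randomized search the abstract appears to refer to), scored by R² as in the study. The data and parameter grid below are illustrative stand-ins for the NIRS spectra and the study's actual search space.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic regression data standing in for NIRS spectra -> quality target.
X, y = make_regression(n_samples=300, n_features=20, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hypothetical search space; the study's actual grid is not given here.
param_distributions = {
    "n_estimators": [100, 200, 400],
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": [1, 2, 4],
    "max_features": ["sqrt", 1.0],
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions,
    n_iter=10,          # sample 10 random configurations from the grid
    scoring="r2",       # same metric as the study
    cv=3,
    random_state=0,
)
search.fit(X_tr, y_tr)
test_r2 = search.best_estimator_.score(X_te, y_te)  # held-out R²
```

Compared with an exhaustive grid search, the randomized variant caps the number of configurations tried (`n_iter`), which matters when each fit involves hundreds of trees.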

Artificial intelligence-powered robotics across domains: challenges and future trajectories

10.11591/csit.v6i2.p176-199
Tole Sutikno , Hendril Satrian Purnama , Laksana Talenta Ahmad
The rise of artificial intelligence (AI) in robotic systems raises both challenges and opportunities. This technological change necessitates rethinking workforce skills, creating demand for new qualifications while rendering some jobs obsolete. Advancements in AI-based robots have made operations more efficient and precise, but they also raise ethical issues such as job loss and responsibility for robot decisions. This study explores both the challenges and the future trajectories of AI-powered robotics. As AI in robotics continues to grow, it will be crucial to tackle these issues through strong rules and ethical standards to ensure safe and fair progress. Collaborative robots in manufacturing improve safety and increase productivity by working alongside human employees. Autonomous robots reduce human mistakes during inspections, leading to better product quality and lower operational expenses. In healthcare, robotic helpers improve patient care and medical staff performance by managing routine tasks. Future research should focus on improving efficiency and accuracy, boosting productivity, and creating safe environments for humans and robots to work together. Strong rules and ethical guidelines will be vital for integrating AI-powered robotics into different areas, ensuring technology development aligns with societal values and needs.
Volume: 6
Issue: 2
Page: 176-199
Publish at: 2025-07-01

HepatoScan: ensemble classification learning models for liver cancer disease detection

10.11591/csit.v6i2.p167-175
Tella Sumallika , Raavi Satya Prasad
Liver cancer is a dangerous disease that poses significant risks to human health. The complexity of early detection of liver cancer increases due to the unpredictable growth of cancer cells. This paper introduces HepatoScan, an ensemble classification approach to detect and diagnose liver cancer tumors from liver cancer datasets. The proposed HepatoScan is an integrated approach that classifies the three types of liver cancer: hepatocellular carcinoma, cholangiocarcinoma, and angiosarcoma. In the initial stage, liver cancer is confined to the liver, while in the second stage it spreads from the liver to other parts of the body. Deep learning is an emerging domain that develops advanced learning models to detect and diagnose liver cancers in the early stages. We train the pre-trained model InceptionV3 on liver cancer datasets to identify advanced patterns associated with cancer tumors or cells. For accurate segmentation and classification of liver lesions in computed tomography (CT) scans, the ensemble multi-class classification (EMCC) combines U-Net and mask region-based convolutional network (R-CNN). In this context, the researchers use CT scan images from Kaggle for the experimental analysis of liver cancer tumors. Finally, quantitative results show that the proposed approach achieved an improved disease detection rate, with a mean squared error (MSE) of 11.34 and a peak signal-to-noise ratio (PSNR) of 10.34, outperforming existing models such as fuzzy C-means (FCM) and kernel fuzzy C-means (KFCM). The classification results, with an accuracy of 0.97, specificity of 0.99, recall of 0.99, and F1-score of 0.97, are very high compared with other existing models.
Volume: 6
Issue: 2
Page: 167-175
Publish at: 2025-07-01

Bibliometric analysis and short survey in CT scan image segmentation: identifying ischemic stroke lesion areas

10.11591/csit.v6i2.p91-101
Wahabou K. Taba Chabi , Sèmèvo Arnaud R. M. Ahouandjinou , Manhougbé Probus A. F. Kiki , Adoté François-Xavier Ametepe
Ischemic stroke remains one of the leading causes of mortality and long-term disability worldwide. Accurate segmentation of brain lesions plays a crucial role in ensuring reliable diagnosis and effective treatment planning, both of which are essential for improving clinical outcomes. This paper presents a bibliometric analysis and a concise review of medical image segmentation techniques applied to ischemic stroke lesions, with a focus on tomographic imaging data. A total of 2,014 publications from the Scopus database (2013–2023) were analyzed. Sixty key studies were selected for in-depth examination: 59.9% were journal articles, 29.9% were conference proceedings, and 4.7% were conference reviews. The year 2023 marked the highest volume of publications, representing 17% of the total. The most active countries in this area of research are China, the United States, and India. "Image segmentation" emerged as the most frequently used keyword. The top-performing studies predominantly used pre-trained deep learning models such as U-Net, ResNet, and various convolutional neural networks (CNNs), achieving high accuracy. Overall, the findings show that image segmentation has been widely adopted in stroke research for early detection of clinical signs and post-stroke evaluation, delivering promising outcomes. This study provides an up-to-date synthesis of impactful research, highlighting global trends and recent advancements in ischemic stroke medical image segmentation.
Volume: 6
Issue: 2
Page: 91-101
Publish at: 2025-07-01

Arowana cultivation water quality forecasting with multivariate fuzzy timeseries and internet of things

10.11591/csit.v6i2.p136-146
Alauddin Maulana Hirzan , April Firman Daru , Lenny Margaretta Huizen
Water quality plays a crucial role in the growth and survival of arowana fish, with imbalances in key parameters (pH, temperature, turbidity, dissolved oxygen, and conductivity) leading to increased mortality rates. While previous studies have introduced various monitoring models using Arduino IDE and intrinsic approaches, they lack predictive capabilities, leaving cultivators unable to take proactive measures. To address this gap, this study develops a predictive model integrating the internet of things (IoT) with a fuzzy time series (FTS) algorithm. Through rigorous evaluation and validation, the proposed FTS-multivariate T2 model demonstrated superior performance, achieving an exceptionally low error rate of 0.01704%, outperforming decision tree (0.13410%), FTS-multivariate T1 (0.88397%), and linear regression (20.91791%). These findings confirm that FTS-multivariate T2 not only accurately predicts water quality but also significantly reduces the mean absolute percentage error, providing a robust solution for sustainable arowana aquaculture.
Volume: 6
Issue: 2
Page: 136-146
Publish at: 2025-07-01
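For readers unfamiliar with fuzzy time series, a minimal first-order, univariate sketch in the style of Chen's classic method is shown below on a synthetic pH-like series. The study's FTS-multivariate T2 model is considerably more elaborate; this only illustrates the core fuzzify/relate/forecast cycle.

```python
import numpy as np

def fts_forecast(series, n_intervals=7):
    """First-order fuzzy time series sketch: partition the universe of
    discourse, fuzzify each reading to an interval, collect fuzzy logical
    relationship groups, then forecast with successor-interval midpoints."""
    lo, hi = series.min(), series.max()
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    # Fuzzify: map each observation to the index of its interval.
    states = np.clip(np.searchsorted(edges, series, side="right") - 1,
                     0, n_intervals - 1)
    # Fuzzy logical relationship groups: state i -> set of successor states.
    flrg = {}
    for a, b in zip(states[:-1], states[1:]):
        flrg.setdefault(a, set()).add(b)
    # Forecast each next step as the mean midpoint of the successor group.
    preds = []
    for s in states[:-1]:
        succ = sorted(flrg.get(s, {s}))
        preds.append(mids[list(succ)].mean())
    return np.array(preds)

# Toy pH-like sensor stream; the real model would consume IoT readings
# for pH, temperature, turbidity, dissolved oxygen, and conductivity.
rng = np.random.default_rng(1)
ph = 7.0 + 0.3 * np.sin(np.linspace(0, 6, 60)) + rng.normal(0, 0.02, 60)
pred = fts_forecast(ph)
mape = np.mean(np.abs((ph[1:] - pred) / ph[1:])) * 100  # percent error
```

The mean absolute percentage error computed at the end mirrors the error metric the abstract reports, though the numbers here are for the toy series only.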

Attack detection in internet of things networks with deep learning using deep transfer learning method

10.11591/csit.v6i2.p202-213
Riki Abdillah Hasanuddin , Muhammad Subali
Cybersecurity has become a crucial part of the information management framework for internet of things (IoT) device networks. The large-scale distribution of IoT networks and the complexity of the communication protocols used contribute to the widespread vulnerabilities of IoT devices. Transfer learning models in deep learning can reach optimal performance faster than traditional machine learning models because they leverage knowledge from previous models that already understand the relevant features. The base model was built using the 1-dimensional convolutional neural network (1D-CNN) method, with training and test data from the source domain dataset. Model 1 was constructed using the same method as the base model, with training and test data from the target domain dataset. Model 1 detected known attacks at a rate of 99.352%, but did not perform well in detecting unknown attacks, with an accuracy of 84.645%. Model 2 is an enhancement of model 1 that incorporates transfer learning from the base model, and its results improved significantly over model 1. Model 2 achieves accuracy and precision of 98.86% and 99.17%, respectively, allowing it to detect previously unknown attacks. Even with a slight decrease in normal-traffic detection, most attacks can still be detected.
Volume: 6
Issue: 2
Page: 202-213
Publish at: 2025-07-01

Classification and similarity detection of Indonesian scientific journal articles

10.11591/csit.v6i2.p147-158
Nyimas Sabilina Cahyani , Deris Stiawan , Abdiansah Abdiansah , Nurul Afifah , Dendi Renaldo Permana
Advances in technology are accelerating the search for references to scientific articles and journals related to research topics. One national aggregator service for finding references is Garba Rujukan Digital (GARUDA), developed by the Ministry of Education, Culture, Research, and Technology (Kemendikbudristek) of the Republic of Indonesia. The naïve Bayes method classifies articles into several categories based on titles and abstracts. The system achieves an F1-score of 98%, indicating high classification accuracy, and the classification process takes less than 60 minutes. Article similarity detection is done using the cosine similarity method on the concatenated title and abstract: a similarity score of 0.071 reflects a low degree of similarity, while a score close to 1 indicates higher similarity. The system searches for similar scientific articles based on title and abstract, ranks the results so that articles with the highest similarity scores appear first, and generates article categories. The results show that the proposed method significantly improves the classification and search processes in GARUDA, with accurate and efficient similarity detection.
Volume: 6
Issue: 2
Page: 147-158
Publish at: 2025-07-01
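The similarity step can be sketched as vectorizing the concatenated title and abstract of each article and comparing the vectors with cosine similarity. The TF-IDF weighting below is an assumption (the abstract names only cosine similarity), and the toy corpus is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for concatenated title+abstract strings.
articles = [
    "Naive Bayes classification of Indonesian journal articles",
    "Article similarity detection with cosine similarity",
    "Water quality forecasting for arowana aquaculture",
]
query = "detecting similar journal articles by cosine similarity"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(articles)   # one row per article
query_vec = vectorizer.transform([query])

# Scores close to 1 indicate higher similarity, as the abstract notes.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
ranked = scores.argsort()[::-1]   # indices of articles, most similar first
```

Sorting the score array descending yields exactly the "highest similarity first" ordering the abstract describes.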

Development of a web-based application for real-time eye disease classification system using artificial intelligence

10.11591/ijres.v14.i2.pp558-574
Kennedy Okokpujie , Adekoya Tolulope , Abidemi Orimogunje , Joshua Sokowonci Mommoh , Adaora Princess Ijeh , Mary Oluwafeyisayo Ogundele
The incorporation of artificial intelligence (AI) into medicine has created new strategies for enhancing disease detection, particularly for eye diseases such as glaucoma, diabetic retinopathy, and age-related macular degeneration, which can lead to blindness if not detected and treated early. Driven by the need to combat blindness, which affects approximately 39 million people globally according to the World Health Organization (WHO), this research offers a user-friendly, web-based, real-time approach to classifying eye diseases from fundus images. Three pre-trained convolutional neural network (CNN) models are adopted: ResNet-50, Inception-v3, and MobileNetV3. The models were trained on a dataset of 8,000 fundus images subdivided into four classes: cataract, glaucoma, diabetic retinopathy, and normal eyes. Model performance was evaluated in 3-way (normal eye and two diseases) and 4-way (normal eye and three diseases) classification settings. ResNet-50 achieved the highest performance, with 98% and 97% accuracy in the respective settings, outperforming Inception-v3 and MobileNetV3. Consequently, ResNet-50 was used in an online application that makes real-time diagnoses. The findings reveal the potential of CNNs in the healthcare industry, particularly in reducing over-reliance on specialists and increasing access to quality diagnostic technologies, especially in regions with limited healthcare resources, where the technology can close significant gaps in disease detection and control.
Volume: 14
Issue: 2
Page: 558-574
Publish at: 2025-07-01
