ISSN :2582-9793

Evaluation and Comparison of Machine Learning Algorithms for Effective Image Classification with Fault-Tolerance

Original Research (Published On: 12-Dec-2024)
DOI : https://dx.doi.org/10.54364/AAIML.2024.44174

Sithembiso Dyubele, Lubabalo Mbangata, Noxolo Pretty Cele and Phirime Monyeki

Adv. Artif. Intell. Mach. Learn., 4 (4):3006-3058

Sithembiso Dyubele : Durban University of Technology

Lubabalo Mbangata : Durban University of Technology

Noxolo Pretty Cele : Durban University of Technology

Phirime Monyeki : Durban University of Technology



Article History: Received on: 17-Sep-24, Accepted on: 26-Oct-24, Published on: 12-Dec-24

Corresponding Author: Sithembiso Dyubele

Email: ctheradyubele@gmail.com

Citation: Sithembiso Dyubele, Noxolo Pretty Cele, Lubabalo Mbangata, Phirime Monyeki (2024). Evaluation and Comparison of Machine Learning Algorithms for Effective Image Classification with Fault-Tolerance. Adv. Artif. Intell. Mach. Learn., 4(4):3006-3058.


Abstract


Image classification is critical in computer vision, with applications ranging from e-commerce to medical imaging. This study provides a comprehensive evaluation of traditional machine learning algorithms for image classification, implementing and analysing novel fault-tolerance mechanisms for these algorithms. The authors compared the performance of K-Nearest Neighbors (KNN), Decision Trees, Random Forest, and XGBoost on the Fashion MNIST and CIFAR-10 datasets. The comparison was extended to Support Vector Machine (SVM), Logistic Regression, and Naive Bayes classifiers to broaden the evaluation on these datasets. Key findings demonstrated the superiority of ensemble methods, particularly XGBoost, which achieved 89.31% accuracy on Fashion MNIST and 54.93% on CIFAR-10, consistently outperforming the other models across various configurations. Random Forest exhibited robust performance as the second-best model, reaching 87.42% and 51.64% accuracy on the respective datasets. The significant performance gap between the two datasets highlights the challenges that traditional machine learning models face with complex image data. The fault-tolerance framework implemented in this study also proved remarkably effective, achieving a 94.6% recovery rate while keeping model accuracy within 0.1% of standard implementations. This came with minimal computational overhead (2.3% of training time and 1.8% of memory usage), making it highly practical for production deployments. The system significantly reduced operational failures, decreasing crashes from 5.2 to 0.3 per day and increasing average uptime from 4.3 to 12.0 hours. The study also reveals important insights into model scalability and resource requirements, with memory usage varying significantly across models (325 MB to 8,923 MB).
These findings provide valuable guidance for practitioners selecting and implementing machine learning models for image classification tasks, particularly in scenarios where both performance and system reliability are critical. This research contributes to the field by demonstrating the feasibility of implementing robust fault tolerance in machine learning systems without compromising accuracy, while also providing comprehensive performance comparisons across different model architectures and dataset complexities. The developed framework serves as a foundation for building more reliable machine learning systems for real-world applications.

 
