Zahra Sadeghi, Stan Matwin and Noah Barrett
Adv. Artif. Intell. Mach. Learn., 3 (2):1135-1164
Zahra Sadeghi : Dalhousie University
Stan Matwin : Dalhousie University
Noah Barrett : Dalhousie University
DOI: 10.54364/AAIML.2023.1167
Article History: Received on: 15-May-23, Accepted on: 05-Jun-23, Published on: 21-Jun-23
Corresponding Author: Zahra Sadeghi
Email: zahras@dal.ca
Citation: Zahra Sadeghi, Stan Matwin, Noah Barrett (2023). Evolutionary Augmentation Policy Optimization for Self-Supervised Learning. Adv. Artif. Intell. Mach. Learn., 3 (2):1135-1164
Self-Supervised Learning (SSL) is a machine learning approach for pretraining Deep Neural Networks (DNNs) without requiring manually labeled data. The central idea of this learning technique is an auxiliary stage, known as the pretext task, in which labels are created automatically through data augmentation and exploited for pretraining the DNN. However, the effect of each pretext task is not well studied or compared in the literature. In this paper, we study the contribution of augmentation operators to the performance of self-supervised learning algorithms in a constrained setting. We propose an evolutionary search method for optimizing the data augmentation pipeline of pretext tasks and measure the impact of augmentation operators in several state-of-the-art (SOTA) SSL algorithms. By encoding different combinations of augmentation operators as chromosomes, we seek optimal augmentation policies through an evolutionary optimization mechanism. We further introduce methods for analyzing and explaining the performance of the optimized SSL algorithms. Our results indicate that the proposed method can find solutions that outperform the classification accuracy of baseline SSL algorithms, which confirms the influence of augmentation policy choice on the overall performance of SSL algorithms. We also compare the optimal SSL solutions found by our evolutionary search mechanism and show the effect of pretext-task batch size on two visual datasets.
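To make the idea of chromosome-encoded augmentation policies concrete, the following is a minimal sketch of such an evolutionary search in Python. The operator pool, policy length, genetic-operator settings, and the toy fitness function are illustrative assumptions, not the paper's exact configuration; in the actual method, the fitness of a candidate policy would be the downstream classification accuracy of an SSL model pretrained with that augmentation pipeline.

```python
import random

# Hypothetical pool of augmentation operators (illustrative names,
# not the paper's exact operator set).
OPERATORS = ["crop", "flip", "color_jitter", "grayscale",
             "blur", "rotation", "solarize", "cutout"]

POLICY_LEN = 4     # operators per augmentation pipeline (assumed)
POP_SIZE = 20      # number of chromosomes per generation (assumed)
GENERATIONS = 10
MUT_RATE = 0.2

def random_policy():
    """A chromosome: an ordered list of augmentation operators."""
    return [random.choice(OPERATORS) for _ in range(POLICY_LEN)]

def fitness(policy):
    """Placeholder for the real objective: pretrain an SSL model with
    this augmentation pipeline, then evaluate downstream classification
    accuracy. A toy diversity-plus-noise score stands in here."""
    return len(set(policy)) + random.random()

def crossover(a, b):
    # Single-point crossover between two parent policies.
    point = random.randrange(1, POLICY_LEN)
    return a[:point] + b[point:]

def mutate(policy):
    # Replace each operator with a random one at probability MUT_RATE.
    return [random.choice(OPERATORS) if random.random() < MUT_RATE else op
            for op in policy]

population = [random_policy() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:POP_SIZE // 2]  # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best augmentation policy found:", best)
```

In practice, the fitness evaluation dominates the cost of such a search, since each candidate policy requires a full (or truncated) SSL pretraining run; this is one reason the study is conducted in a constrained setting.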