Abstract
In recent years, self-supervised learning, which does not require large numbers of manual labels, has generated supervision signals from the data itself to achieve representation learning of samples. Self-supervised learning solves the problem of learning semantic features from unlabeled data and enables pre-training of models on large datasets; its significant advantages have been extensively studied by scholars in recent years. Self-supervised learning methods are usually grouped into three types: "Generative, Contrastive, and Generative-Contrastive." Contrastive learning methods use relatively simple models, and their performance on current downstream tasks is comparable to that of supervised learning methods. We therefore propose a conceptual analysis framework consisting of five stages: the data augmentation pipeline, architectures, pretext tasks, contrastive methods, and semi-supervised fine-tuning. Based on this conceptual framework, we qualitatively analyze existing contrastive self-supervised learning methods for computer vision, further analyze their performance at the different stages, and finally summarize the state of research on self-supervised contrastive learning methods in other fields.
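As a concrete illustration of the contrastive stage of such a framework, the sketch below shows an NT-Xent (normalized temperature-scaled cross-entropy) objective of the kind used by SimCLR-style methods, written in PyTorch. It is a minimal, assumed example for orientation only (the function name `nt_xent_loss` and the temperature value are illustrative choices), not the specific implementation of any method surveyed here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views.

    z1, z2: [N, D] projections of two augmentations of the same N images.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # [2N, D]
    sim = torch.mm(z, z.t()) / temperature    # [2N, 2N] cosine similarities
    # Mask out each sample's similarity with itself.
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))
    # The positive for view i is the other augmented view of the same image.
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In this formulation, every other sample in the batch serves as a negative, which is why contrastive methods of this family typically benefit from large batch sizes.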