Abstract
Existing clothing parsing algorithms suffer from low segmentation accuracy against complex backgrounds and depend heavily on human pose estimation. To address these problems, a self-supervised clothing parsing method based on a deep convolutional neural network, Deeplabv2-SSL, is proposed, in which a self-supervised structure-sensitive learning (SSL) algorithm is injected into the Deeplabv2 network. The new algorithm requires no annotated human joint information during training: it learns higher-level information about the human body directly from the pixel labels and uses the learned joint information to better localize the clothing regions to be segmented, reducing the loss incurred in the pose estimation step. Experiments show that the Deeplabv2-SSL network can effectively parse individual body parts as well as clothing regions. On the test set, the overall pixel accuracy is about 83.37% and the mean pixel accuracy is about 52.53%, outperforming the other semantic segmentation models compared.
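The abstract describes the method only at a high level. Below is a minimal PyTorch sketch of the two ingredients it mentions: a structure-sensitive loss in which joint positions derived from the parsing maps re-weight the pixel-wise segmentation loss, and the overall/mean pixel-accuracy metrics used for evaluation. The part-to-joint grouping PART_GROUPS, the centroid-based joint generation, the weight normalisation, and the helper names (part_centroids, structure_sensitive_loss, pixel_accuracies) are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of a structure-sensitive segmentation loss and pixel-accuracy metrics.
# Assumptions: joints are approximated as centroids of hand-picked part-label
# groups, and the joint discrepancy simply scales a per-image cross-entropy loss.
import torch
import torch.nn.functional as F

# Hypothetical grouping: each "joint" is the centroid of a set of part labels.
PART_GROUPS = {
    "head":  [1, 2],    # e.g. hat, hair
    "upper": [5, 6],    # e.g. upper clothes, coat
    "lower": [9, 10],   # e.g. pants, skirt
}

def part_centroids(label_map: torch.Tensor) -> torch.Tensor:
    """Return the (x, y) centroid of every part group in a (H, W) label map.
    Empty groups fall back to the image centre."""
    h, w = label_map.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    centroids = []
    for labels in PART_GROUPS.values():
        mask = torch.zeros_like(label_map, dtype=torch.bool)
        for lbl in labels:
            mask |= label_map == lbl
        if mask.any():
            centroids.append(torch.stack([xs[mask].mean(), ys[mask].mean()]))
        else:
            centroids.append(torch.tensor([w / 2.0, h / 2.0]))
    return torch.stack(centroids)                      # (num_joints, 2)

def structure_sensitive_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """logits: (N, C, H, W) raw scores, target: (N, H, W) ground-truth labels.
    The per-image cross-entropy is scaled by how far the joints generated from
    the predicted parsing drift from the joints generated from the ground truth."""
    n, _, h, w = logits.shape
    ce = F.cross_entropy(logits, target, reduction="none").reshape(n, -1).mean(dim=1)
    pred = logits.argmax(dim=1)                        # predicted parsing, (N, H, W)
    diag = (h ** 2 + w ** 2) ** 0.5
    weights = []
    for i in range(n):
        drift = torch.norm(part_centroids(pred[i]) - part_centroids(target[i]), dim=1)
        weights.append(1.0 + drift.mean() / diag)      # assumed normalisation, always >= 1
    return (torch.stack(weights) * ce).mean()

def pixel_accuracies(pred: torch.Tensor, target: torch.Tensor, num_classes: int):
    """Overall pixel accuracy and mean (per-class) pixel accuracy for (N, H, W) label maps."""
    overall = (pred == target).float().mean().item()
    per_class = []
    for c in range(num_classes):
        gt_c = target == c
        if gt_c.any():
            per_class.append((pred[gt_c] == c).float().mean().item())
    return overall, sum(per_class) / len(per_class)

if __name__ == "__main__":                             # tiny smoke test on random data
    logits = torch.randn(2, 20, 64, 64, requires_grad=True)
    target = torch.randint(0, 20, (2, 64, 64))
    loss = structure_sensitive_loss(logits, target)
    loss.backward()
    print(loss.item(), pixel_accuracies(logits.argmax(dim=1), target, 20))
```

Because the assumed weight is always at least 1, a prediction whose generated joints coincide with the ground-truth joints reduces to plain cross-entropy, while predictions whose implied pose drifts are penalised more heavily; the reported overall (83.37%) and mean (52.53%) pixel accuracies correspond to metrics of the second kind.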
Authors
白美丽
万韬阮
汤汶
朱欣娟
薛涛
BAI Meili; WAN Taoruan; TANG Wen; ZHU Xinjuan; XUE Tao (Shaanxi Key Laboratory of Clothing Intelligence, School of Computer Science, Xi'an Polytechnic University, Xi'an 710048, China; Faculty of Engineering and Informatics, University of Bradford, Bradford BD7 1DP, United Kingdom; Faculty of Science and Technology, Bournemouth University, Poole BH12 5BB, United Kingdom)
Source
《纺织高校基础科学学报》
2019, No. 4, pp. 385-392, 410 (9 pages in total)
Basic Sciences Journal of Textile Universities
Funding
Natural Science Foundation of the Shaanxi Provincial Department of Science and Technology (2016JZ026)
International Science and Technology Cooperation and Exchange Program of the Shaanxi Provincial Department of Science and Technology (2016KW-043)
Keywords
clothing parsing
semantic segmentation
deep convolutional neural network
self-supervised learning
pose estimation