Abstract
To address the limited accuracy of the predicted appearance flow and the poor generalization ability of PF-AFN, an improved virtual try-on network is proposed. First, a target-body prediction module is added: by predicting a parsing map of the person wearing the target clothes, shape is decoupled from texture. Second, based on the collinearity-preserving property of affine transformations, a collinearity loss term is added to constrain the deformation process, and a distance loss based on the characteristics of the appearance flow compensates for PF-AFN's insufficient constraints on local regions. Finally, the generated human parsing map is concatenated channel-wise with the original input and fed to a ResNet-based, UNet++-like image generation network to produce the final try-on image. Comparative experiments on the VITON dataset against four other state-of-the-art methods show that the proposed method improves the image-similarity metrics SSIM, FID, and LPIPS by 1.2%, 11.1%, and 5.8%, respectively, over the best competing method, while its image clarity and diversity score (Inception Score, IS) are comparable to the current best method. Overall, the proposed method alleviates the problems of the original network and achieves better visual results.
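Two of the abstract's ingredients can be made concrete with a small sketch: the channel-wise concatenation of the parsing map with the original input, and a collinearity-style penalty motivated by the fact that affine transformations keep collinear points collinear. This is a minimal NumPy illustration, not the paper's implementation; the array shapes, the function names, and the concrete loss form (a second-order difference over the warped coordinates of three horizontally adjacent pixels) are all assumptions made for the example.

```python
import numpy as np

def concat_inputs(person_img, cloth_img, parsing_map):
    """Channel-wise concatenation of the predicted parsing map with the
    original inputs, forming the generator input (H x W x C arrays
    assumed)."""
    return np.concatenate([person_img, cloth_img, parsing_map], axis=-1)

def collinearity_loss(flow):
    """Toy collinearity penalty on a dense appearance flow (H x W x 2).

    Affine warps map straight lines to straight lines, so the warped
    positions of three horizontally adjacent pixels should stay on a
    line; the second-order difference of the warped coordinates
    measures the deviation. (Assumed form, for illustration only.)"""
    h, w, _ = flow.shape
    # Warped absolute coordinates: grid position plus predicted offset.
    ys, xs = np.mgrid[0:h, 0:w]
    warped = flow + np.stack([xs, ys], axis=-1).astype(flow.dtype)
    # Second-order difference along x: p[j-1] - 2*p[j] + p[j+1].
    d2 = warped[:, :-2] - 2.0 * warped[:, 1:-1] + warped[:, 2:]
    return float(np.mean(np.abs(d2)))
```

Under this toy loss, any affine flow (e.g. a pure translation, `flow = np.ones((h, w, 2))`) incurs zero penalty, while a non-affine ripple in the flow field is penalized.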
Authors
Han Chaoyuan; Li Jian; Wang Zezhen (School of Electronic Information and Artificial Intelligence, Shaanxi University of Science & Technology, Xi'an 710000)
Source
Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》)
Indexed in EI, CSCD, and the Peking University Core Journal list (北大核心)
2023, No. 10, pp. 1500-1509 (10 pages)
Funding
National Natural Science Foundation of China (61871260).
Keywords
virtual try-on
appearance flow
image generation network
human parsing