Abstract
To address the substantial loss of detail, such as clothing and faces, in human pose transfer results, and the fact that existing virtual try-on algorithms do not support multi-pose transfer, a new pose transfer network is proposed. A parse map generation network is designed to generate the parse map of the target pose; a regularization constraint is then imposed on the Spatial Transformer Network (STN) that warps the target clothing; finally, an optimized fusion network fuses the clothing and the body to produce the final pose transfer image. Experimental results demonstrate the high robustness and reliability of the method.
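The abstract only states that an STN warps the target clothing under a regularization constraint, without giving the architecture. The following is a minimal illustrative sketch of one plausible reading, assuming an affine STN in PyTorch: the localization network, the affine parameterization, and the identity-deviation penalty are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClothingWarpSTN(nn.Module):
    """Hypothetical affine STN that warps a clothing image toward the target pose.

    Sketch only: the paper does not specify the STN design, so every layer
    below is an assumption made for illustration.
    """

    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Localization network: predicts 6 affine parameters from the clothing image.
        self.localization = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 6)
        # Initialize to the identity transform so warping starts from "no change".
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, clothing: torch.Tensor):
        theta = self.fc(self.localization(clothing).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, clothing.size(), align_corners=False)
        warped = F.grid_sample(clothing, grid, align_corners=False)
        # Regularization (assumed form): penalize deviation from the identity
        # transform, discouraging extreme distortion of the garment.
        identity = torch.tensor([[1, 0, 0], [0, 1, 0]],
                                dtype=theta.dtype, device=theta.device)
        reg_loss = F.mse_loss(theta, identity.expand_as(theta))
        return warped, reg_loss
```

In training, `reg_loss` would be added to the main objective with a weighting coefficient, so the warp stays close to a mild deformation unless the data demands otherwise; the actual regularization term used by the authors may differ.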
Authors
CHEN Ya-dong; DU Cheng-hu; YU Feng; JIANG Ming-hua (School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei 430200, China; Engineering Research Center of Hubei Province for Clothing Information, Wuhan, Hubei 430200, China)
Source
Journal of Wuhan Textile University
2022, No. 1, pp. 3-9 (7 pages)
Funding
Young Talent Project of the Science and Technology Research Program of the Hubei Provincial Department of Education (Q20201709)
Open Project of the Engineering Research Center of Hubei Province for Clothing Information (900204)
Hubei Key Research and Development Program (2021BAA042)
Keywords
pose transfer
virtual try-on
parse map
Spatial Transformer Network(STN)