Abstract
To address the problem that existing point-cloud-based neural rendering networks cannot render high-quality hair with temporal stability, a neural rendering method based on depth peeling of hair point clouds, with a temporal-stability refinement stage, is proposed. The method projects the input point cloud model layer by layer to obtain per-layer feature information, then fuses the layered results to account for the translucent nature of hair. The rendered result is fed into a temporal refinement network: this module reprojects the point cloud between adjacent frames to obtain the dependency between the current frame and the preceding frames, and generates the final result for the current frame, thereby ensuring temporal stability of the output. Experiments on a high-quality hair dataset generated by ray tracing show that, compared with existing methods, the proposed method achieves better temporal stability and rendering quality.
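The abstract's first stage, depth peeling followed by layer fusion, can be illustrated with a minimal sketch. The code below is not the paper's network; it is a hypothetical NumPy illustration of the underlying idea: project the point cloud, keep the K nearest points per pixel as separate layers, and composite the layers front to back so translucent hair strands behind other strands still contribute. All function and parameter names are invented for illustration.

```python
import numpy as np

def depth_peel_points(points, colors, alphas, K, width, height, view_proj):
    """Split a point cloud into K per-pixel depth layers (nearest first).

    points: (N, 3) positions; colors: (N, 3); alphas: (N,);
    view_proj: 4x4 combined view-projection matrix.
    """
    n = points.shape[0]
    # Project to clip space, then normalized device coordinates, then pixels.
    hom = np.hstack([points, np.ones((n, 1))]) @ view_proj.T
    ndc = hom[:, :3] / hom[:, 3:4]
    px = ((ndc[:, 0] * 0.5 + 0.5) * (width - 1)).astype(int)
    py = ((ndc[:, 1] * 0.5 + 0.5) * (height - 1)).astype(int)
    depth = ndc[:, 2]

    layer_rgb = np.zeros((K, height, width, 3))
    layer_a = np.zeros((K, height, width))
    count = np.zeros((height, width), dtype=int)  # layers filled per pixel

    # Visit points near-to-far so layer k holds the k-th nearest point.
    for i in np.argsort(depth):
        x, y = px[i], py[i]
        if 0 <= x < width and 0 <= y < height and count[y, x] < K:
            k = count[y, x]
            layer_rgb[k, y, x] = colors[i]
            layer_a[k, y, x] = alphas[i]
            count[y, x] = k + 1
    return layer_rgb, layer_a

def composite_layers(layer_rgb, layer_a):
    """Fuse the peeled layers by front-to-back alpha compositing."""
    K, H, W, _ = layer_rgb.shape
    out = np.zeros((H, W, 3))
    trans = np.ones((H, W, 1))  # accumulated transmittance
    for k in range(K):
        a = layer_a[k][..., None]
        out += trans * a * layer_rgb[k]
        trans *= 1.0 - a
    return out
```

In the paper the fusion is learned by the network from per-layer features; the fixed alpha compositing here only stands in for that step to show why separate layers are needed for translucency.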
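The temporal refinement stage relies on reprojection: because the same point cloud is visible in adjacent frames, each point's pixel position in the previous frame can be mapped to its position in the current frame, carrying previous-frame information forward. A minimal sketch of that warping step, with invented names and a simple nearest-pixel scatter (the paper's network consumes such warped features; the blending itself is learned):

```python
import numpy as np

def reproject_features(prev_feat, prev_uv, curr_uv, height, width):
    """Warp per-point features from the previous frame's image into the
    current frame's pixel grid, using known point correspondences.

    prev_feat: (H, W, C) feature image from the previous frame.
    prev_uv, curr_uv: per-point (x, y) pixel positions in the two frames.
    Returns the warped feature image and a validity mask.
    """
    warped = np.zeros((height, width, prev_feat.shape[-1]))
    valid = np.zeros((height, width), dtype=bool)
    for (px, py), (cx, cy) in zip(prev_uv, curr_uv):
        inside = (0 <= px < width and 0 <= py < height
                  and 0 <= cx < width and 0 <= cy < height)
        if inside:
            warped[cy, cx] = prev_feat[py, px]
            valid[cy, cx] = True
    return warped, valid
```

Pixels with no correspondence (disocclusions, points leaving the frame) stay invalid, which is why a learned module, rather than a fixed blend, is needed to decide how much history to trust per pixel.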
Authors
Ye Keyang (叶可扬); Pan Quli (潘曲利); Ren Zhong (任重)
State Key Laboratory of CAD & CG, Zhejiang University, Hangzhou 310058
Source
Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》)
Indexed in: EI; CSCD; PKU Core (北大核心)
2023, Issue 5, pp. 676-684 (9 pages)
Keywords
neural rendering
hair point cloud
depth peeling
temporal stability