Abstract: Objective: To address the generally unsatisfactory overall quality of multi-view stereo (MVS) reconstruction, this paper studies the feature extraction and cost-volume regularization modules of MVS 3D reconstruction and proposes an attention-based end-to-end deep learning architecture. Method: First, deep features are extracted from the input source and reference images, with an attention layer added at every level of the feature extraction module to capture the long-range dependencies of the depth inference task. Then the feature volumes of the reference frustum are built via differentiable homography warping and assembled into a cost volume. Finally, the cost volume is regularized with a multi-level U-Net architecture, and the final refined depth map is generated by regression combined with the edge information of the reference image. Results: Tested on the DTU (Technical University of Denmark) dataset against several existing methods: compared with Colmap, Gipuma, and Tola, the overall metric improves by 8.5%, 13.1%, and 31.9%, and the completeness metric by 20.7%, 41.6%, and 73.3%, respectively; compared with Camp, Furu, and SurfaceNet, the overall metric improves by 24.8%, 33%, and 29.8%, the accuracy metric by 39.8%, 17.6%, and 1.3%, and the completeness metric by 9.7%, 48.4%, and 58.3%, respectively; compared with PruMVSNet, the overall metric improves by 1.7% and the accuracy metric by 5.8%; compared with MVSNet, the overall metric improves by 1.5% and the completeness metric by 7%. Conclusion: The results on the DTU dataset show that the proposed architecture achieves the current best overall metric and substantially improves the completeness and accuracy metrics, yielding better 3D reconstruction quality.
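The cost-volume step this abstract describes warps source-view features into the reference frustum at sampled depths. Below is a minimal PyTorch sketch of one such differentiable homography warp; the function name, the fronto-parallel plane assumption, and the single-depth interface are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def homography_warp(src_feat, K_src, K_ref, R, t, depth):
    """Warp source-view features onto the reference view at one depth
    hypothesis via the plane-induced homography
    H = K_src (R + t n^T / d) K_ref^{-1}, with a fronto-parallel plane
    n = [0, 0, 1]^T at depth d (a common MVSNet-style convention)."""
    B, C, H, W = src_feat.shape
    device = src_feat.device
    # Pixel grid of the reference view in homogeneous coordinates.
    y, x = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([x, y, torch.ones_like(x)], dim=0).reshape(3, -1)
    # Plane-induced homography for this depth hypothesis.
    n = torch.tensor([[0.0, 0.0, 1.0]], device=device)          # (1, 3)
    Hmat = K_src @ (R + t.reshape(3, 1) @ n / depth) @ torch.inverse(K_ref)
    warped = Hmat @ pix                                          # (3, H*W)
    warped = warped[:2] / warped[2:].clamp(min=1e-6)             # (2, H*W)
    # Normalize coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * warped[0] / (W - 1) - 1.0
    gy = 2.0 * warped[1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(1, H, W, 2)
    grid = grid.expand(B, -1, -1, -1)
    return F.grid_sample(src_feat, grid, align_corners=True)
```

Stacking such warps over a set of depth hypotheses and aggregating across views (for example by feature variance) yields the cost volume that the multi-level U-Net then regularizes.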
Funding: supported by the Zhengzhou Collaborative Innovation Major Project under Grant No. 20XTZX06013 and the Henan Provincial Key Scientific Research Project of China under Grant No. 22A520042.
Abstract: Traditional neural radiance fields for rendering novel views require dense input images and per-scene optimization, which limits their practical applications. We propose SG-NeRF (Sparse-Input Generalized Neural Radiance Fields), a generalization method that infers scenes from input images and performs high-quality rendering without per-scene optimization. First, we construct an improved multi-view stereo structure based on convolutional attention and a multi-level fusion mechanism to obtain the geometric and appearance features of the scene from the sparse input images; these features are then aggregated by multi-head attention as the input of the neural radiance fields. This strategy of having the neural radiance fields decode scene features, rather than map positions and orientations, enables cross-scene training and inference, allowing the radiance fields to generalize to novel view synthesis on unseen scenes. We tested generalization on the DTU dataset, where our PSNR (peak signal-to-noise ratio) improves by 3.14 dB over the baseline method under the same input conditions. In addition, if dense input views of a scene are available, a short further refinement training improves the average PSNR by another 1.04 dB and yields higher-quality renderings.
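The key move here is conditioning the radiance field on aggregated per-view features rather than raw positions. A minimal sketch of such an aggregator follows, assuming a learned query token and PyTorch's nn.MultiheadAttention; the actual SG-NeRF aggregation details are not specified in the abstract.

```python
import torch
import torch.nn as nn

class ViewFeatureAggregator(nn.Module):
    """Aggregate per-view geometry/appearance features for a 3D sample
    point into one conditioning vector via multi-head attention, so the
    radiance-field MLP decodes scene features instead of coordinates."""

    def __init__(self, feat_dim=64, num_heads=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, feat_dim))  # learned query
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.out = nn.Linear(feat_dim, feat_dim)

    def forward(self, view_feats):
        # view_feats: (num_points, num_views, feat_dim), sampled from each
        # input view at the point's projected image location.
        N = view_feats.shape[0]
        q = self.query.expand(N, -1, -1)               # (N, 1, D)
        agg, _ = self.attn(q, view_feats, view_feats)  # attend over views
        return self.out(agg.squeeze(1))                # (N, D)

# Example: 1024 sample points observed from 3 sparse input views.
feats = torch.randn(1024, 3, 64)
print(ViewFeatureAggregator()(feats).shape)  # torch.Size([1024, 64])
```

Because the attention operates over a variable-length set of views, the same aggregator works whether a scene offers three input images or many, which is what makes cross-scene inference possible.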
Abstract: Existing deep multi-view stereo (MVS) methods introduce Transformers into cascade networks to achieve high-resolution depth estimation and thereby highly accurate and complete 3D reconstruction. However, Transformer-based methods are limited by computational cost and cannot be extended to the finer stages. To this end, a novel cross-scale Transformer MVS network is proposed that processes the feature representations of different stages without additional computation. An adaptive matching-aware Transformer (AMT) is introduced that applies different combinations of interactive attention at multiple scales. This combination strategy enables the network to capture contextual information within each image and to strengthen feature relationships between images. In addition, dual-feature guided aggregation (DFGA) is designed to embed coarse global semantic information into the finer cost-volume construction, further enhancing the perception of global and local features. Meanwhile, a feature metric loss is designed to evaluate the feature deviation before and after the transformation, reducing the impact of feature mismatches on depth estimation. Experimental results show that on the DTU dataset the network achieves completeness and overall metrics of 0.264 and 0.302, and on the two large-scale scene sets of the Tanks and Temples benchmark its average reconstruction scores reach 64.28 and 38.03, respectively.
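The abstract does not give the formula for the feature metric loss, only that it measures feature deviation before and after the Transformer. One plausible reading, sketched below as a cosine-deviation penalty, is a hypothetical illustration rather than the paper's definition.

```python
import torch
import torch.nn.functional as F

def feature_metric_loss(feat_before, feat_after, valid_mask=None):
    """Penalize deviation between features before and after the Transformer,
    encouraging attention to refine rather than distort matching features
    (a hypothetical cosine-deviation form; the paper defines its own loss).
    feat_before, feat_after: (B, C, H, W); valid_mask: optional (B, H, W)."""
    # Compare L2-normalized features so the loss measures direction, not scale.
    fb = F.normalize(feat_before, dim=1)
    fa = F.normalize(feat_after, dim=1)
    dev = 1.0 - (fb * fa).sum(dim=1)   # per-pixel cosine deviation, (B, H, W)
    if valid_mask is not None:
        dev = dev[valid_mask]
    return dev.mean()
```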
Abstract: PatchMatch-based multi-view stereo (MVS) methods estimate scene depth from multiple input images and have been applied to large-scale 3D scene reconstruction. However, because feature matching is unstable and photometric consistency alone is unreliable, existing methods suffer from low accuracy and completeness of depth estimation in weakly textured regions. To address this, an MVS method assisted by a quadtree prior is proposed. First, local texture is obtained from image pixel values. Second, a coarse depth map is obtained with the adaptive checkerboard sampling based PatchMatch multi-view stereo method (ACMH), and, drawing on the structural information in weakly textured regions, quadtree segmentation is used to generate prior plane hypotheses. Third, the above information is fused into a new multi-view matching cost function that guides weakly textured regions toward optimal depth hypotheses, improving the accuracy of stereo matching. Finally, comparative experiments against a range of existing traditional MVS methods are carried out on the ETH3D and Tanks and Temples benchmarks and on the Chinese Academy of Sciences ancient-architecture dataset. The results show that the proposed method performs better; in particular, on the ETH3D test set with an error threshold of 2 cm, its F1 score and completeness improve by 1.29 and 2.38 percentage points, respectively, over the state-of-the-art multi-scale plane-prior-assisted method ACMMP.
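The core idea is to let the quadtree plane prior take over the matching cost only where photometric evidence is weak. The sketch below shows one hypothetical way to blend the two terms; the weighting scheme, parameter names, and thresholds are assumptions, since the paper defines its own multi-view cost function.

```python
import numpy as np

def combined_matching_cost(photo_cost, depth_hyp, prior_depth, texture_var,
                           tau=0.01, lam=0.2):
    """Blend photometric cost with a quadtree-prior depth term in
    low-texture pixels (illustrative form only). All arguments are
    per-pixel arrays for one depth hypothesis; tau is the texture-variance
    threshold and lam the prior weight."""
    # Relative deviation of the hypothesis from the prior plane's depth.
    prior_cost = np.abs(depth_hyp - prior_depth) / np.maximum(prior_depth, 1e-6)
    # Trust the prior only where local texture variance falls below tau.
    w = (texture_var < tau).astype(photo_cost.dtype)
    return (1.0 - lam * w) * photo_cost + lam * w * prior_cost
```

In well-textured pixels the weight w is zero and the cost reduces to the usual photometric term, so the prior cannot degrade regions where matching is already reliable.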
Funding: partly supported by JSPS KAKENHI Grants JP15K16027, JP26700013, JP15H05918, and JP19H04138, JST CREST JP179423, and the Foundation for Nara Institute of Science and Technology.
Abstract: In this paper, we present a practical method for reconstructing the bidirectional reflectance distribution function (BRDF) from multiple images of a real object composed of a homogeneous material. The key idea is that the BRDF can be sampled after geometry estimation using multi-view stereo (MVS) techniques. Our contribution is the selection of reliable samples of lighting, surface-normal, and viewing directions for robustness against MVS estimation errors. Our method is evaluated quantitatively on synthesized images, and its effectiveness is shown in real-world experiments.
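The reliability of a BRDF sample degrades near grazing configurations, where small MVS normal errors change the sample geometry drastically. The sketch below illustrates one simple angular-margin criterion for discarding such samples; the threshold and the exact selection rules are assumptions, not the paper's method.

```python
import numpy as np

def select_reliable_samples(normals, view_dirs, light_dirs, min_cos=0.2):
    """Keep only BRDF samples whose MVS-estimated surface normal faces both
    the camera and the light by a safe margin, discarding grazing samples
    where normal-estimation error dominates (illustrative criterion).
    All inputs are (N, 3) arrays of unit vectors; returns a boolean mask."""
    cos_view = np.einsum("ij,ij->i", normals, view_dirs)
    cos_light = np.einsum("ij,ij->i", normals, light_dirs)
    return (cos_view > min_cos) & (cos_light > min_cos)
```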