Abstract
Color constancy is an important prerequisite for computer vision tasks such as object detection, three-dimensional object reconstruction, and autonomous driving. To make full use of the feature information at different scales in an image when estimating the illuminant, a progressive multi-scale feature cascade fusion color constancy algorithm is proposed. Three convolutional network branches extract features from the image at different scales, and feature fusion yields richer feature information; cascading the shallow edge information with the deep fine-grained features further improves the accuracy of the color constancy algorithm. The progressive network structure, trained with a weighted cumulative angular error loss function, improves the robustness of illuminant estimation under extreme scene lighting. Experimental results on the reprocessed ColorChecker and NUS-8 datasets show that the proposed algorithm outperforms current color constancy algorithms on all evaluation metrics and can be applied to other computer vision tasks that require color constancy preprocessing.
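The evaluation metric behind the abstract's "weighted cumulative angular error loss" is the standard color-constancy angular error between the estimated and ground-truth illuminant vectors. A minimal NumPy sketch follows; note that the paper's actual per-stage weighting scheme is not given in the abstract, so `weighted_cumulative_loss` and its weights are an illustrative assumption, not the authors' exact loss.

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angular error in degrees between an estimated and a ground-truth
    illuminant RGB vector; the standard color-constancy metric."""
    est, gt = np.asarray(est, dtype=float), np.asarray(gt, dtype=float)
    cos_sim = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0))))

def weighted_cumulative_loss(stage_errors, stage_weights):
    """Hypothetical weighted accumulation of per-stage angular errors,
    mimicking a progressive (multi-stage) training objective; the paper's
    actual weights are not specified in the abstract."""
    return float(np.dot(stage_weights, stage_errors))
```

Because the angular error is scale-invariant, an estimate that differs from the ground truth only by overall brightness (e.g. `[2, 2, 2]` vs. `[1, 1, 1]`) incurs zero error, which is why it is the preferred illuminant-estimation metric.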
Authors
Yang Zepeng; Xie Kai; Li Tong (School of Information Engineering, Beijing Institute of Graphic Communication, Beijing 102600, China)
Source
Acta Optica Sinica (《光学学报》)
Indexed in: EI, CAS, CSCD, Peking University Core Journals (北大核心)
2022, No. 5, pp. 244-256 (13 pages in total)
Funding
Scientific Research Project of the Beijing Municipal Education Commission (KM201810015011).
Keywords
visual optics
color constancy
illumination estimation
multi-scale
feature fusion
weighted cumulative loss