Abstract
Practical low-light denoising and enhancement solutions need to be computationally fast and memory efficient while producing visually high-quality restoration results. Most existing methods aim at restoration quality and compromise on speed and memory requirements, which largely limits their practicality. This paper proposes a new deep denoising architecture, a re-parameterized multi-scale fusion network for extreme low-light raw image denoising, which speeds up inference and reduces memory overhead without sacrificing denoising performance. Specifically, image features are extracted in a multi-scale space, and a lightweight spatial-channel parallel attention module dynamically and adaptively focuses on the core features in both the spatial and channel dimensions. A re-parameterized convolutional unit further enriches the representational capacity of the model without adding any computation at inference. The proposed model restores an ultra-high-definition 4K-resolution image in about 1 s on a common CPU (e.g., Intel i7-7700K) and runs at 24 fps on an ordinary GPU (e.g., NVIDIA GTX 1080Ti), almost four times faster than existing state-of-the-art methods (e.g., UNet) while maintaining competitive restoration quality.
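To make the claim "a re-parameterized convolutional unit without increasing computational cost at inference" concrete, below is a minimal PyTorch sketch of structural re-parameterization, assuming a RepVGG-style branch layout: a training-time block with parallel 3x3 conv, 1x1 conv, and identity branches is folded into a single 3x3 convolution for inference. The class and method names (RepConvUnit, merge) are hypothetical illustrations, not the paper's exact layer definitions.

```python
# Minimal sketch of structural re-parameterization, assuming a RepVGG-style
# branch layout (parallel 3x3 conv, 1x1 conv and identity at training time,
# folded into a single 3x3 conv at inference). Class and method names are
# hypothetical illustrations, not the paper's exact layer definitions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RepConvUnit(nn.Module):
    """Training-time multi-branch block that can be merged for inference."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3x3 = nn.Conv2d(channels, channels, 3, padding=1, bias=True)
        self.conv1x1 = nn.Conv2d(channels, channels, 1, bias=True)
        # The identity branch is parameter-free; it appears only in forward().

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Training-time forward: sum of the three parallel branches.
        return self.conv3x3(x) + self.conv1x1(x) + x

    @torch.no_grad()
    def merge(self) -> nn.Conv2d:
        """Fold all branches into one 3x3 conv; inference cost stays unchanged."""
        c = self.conv3x3.out_channels
        # Pad the 1x1 kernel to 3x3 so it can be summed with the 3x3 kernel.
        k1x1 = F.pad(self.conv1x1.weight, [1, 1, 1, 1])
        # Express the identity branch as a 3x3 kernel with 1 at the centre
        # of each matching input/output channel pair.
        k_id = torch.zeros_like(self.conv3x3.weight)
        for i in range(c):
            k_id[i, i, 1, 1] = 1.0
        merged = nn.Conv2d(c, c, 3, padding=1, bias=True)
        merged.weight.copy_(self.conv3x3.weight + k1x1 + k_id)
        merged.bias.copy_(self.conv3x3.bias + self.conv1x1.bias)
        return merged


if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)
    block = RepConvUnit(16).eval()
    with torch.no_grad():
        # The merged single conv reproduces the multi-branch output.
        assert torch.allclose(block(x), block.merge()(x), atol=1e-5)
```

The same folding idea extends to branches that include batch normalization by first absorbing the BN statistics into the convolution weights and biases before summing the kernels.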
Authors
WEI Kai-xuan; FU Ying (School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China)
Source
Computer Science (《计算机科学》)
CSCD
Peking University Core Journal (北大核心)
2022, No. 8, pp. 120-126 (7 pages)
Funding
National Natural Science Foundation of China (62171038, 61827901, 62088101).
Keywords
Re-parameterized convolutional unit
Multi-scale fusion
Spatial-channel parallel attention module
Extreme low-light denoising