Abstract
Single image deraining aims to recover a rain-free image from its rain-degraded counterpart. Most existing deep-learning-based deraining methods do not effectively exploit the global information of the rainy image, so the restored image loses detail and structural information. To address this, this paper proposes a single image deraining algorithm based on the shifted-window Transformer (Swin Transformer). The network consists of a shallow feature extraction module and a deep feature extraction network. The former applies a context information aggregation module to the input to adapt to the diverse distribution of rain streaks and extracts shallow features of the rainy image. The latter uses Swin Transformer to capture global information and long-range dependencies between pixels, combined with residual convolution and dense connections to strengthen feature learning; the derained image is finally produced through a global residual convolution. In addition, a composite loss function that simultaneously constrains the similarity of image edges and regions is proposed to further improve the quality of the derained image. Extensive experiments show that, compared with the two state-of-the-art single image deraining methods MSPFN and MPRNet, the proposed algorithm raises the average peak signal-to-noise ratio of derained images by 0.19 dB and 2.17 dB and the average structural similarity by 3.433% and 1.412%, while reducing the number of model parameters by 84.59% and 34.53% and the average forward-propagation time by 21.25% and 26.67%.
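The core mechanism the abstract relies on, Swin Transformer's windowed self-attention, rests on partitioning the feature map into non-overlapping windows and cyclically shifting the map between successive blocks so attention can cross window borders. The paper's own implementation is not given here; the following is a minimal numpy sketch of just that partition/shift bookkeeping (function names and the `ws // 2` shift follow the common Swin convention, not this paper's code):

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping ws x ws windows."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    # Reorder to (h_block, w_block, ws, ws, C), then flatten the block axes.
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws, ws, C)

def window_reverse(windows, ws, H, W):
    """Inverse of window_partition: reassemble windows into an (H, W, C) map."""
    C = windows.shape[-1]
    x = windows.reshape(H // ws, W // ws, ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

def shifted_windows(x, ws):
    """The 'shift' in Swin: cyclically roll the map by ws // 2 before
    partitioning, so the next block attends across previous window borders."""
    shifted = np.roll(x, shift=(-(ws // 2), -(ws // 2)), axis=(0, 1))
    return window_partition(shifted, ws)
```

Self-attention is then computed independently inside each `(ws, ws)` window, which is what keeps the cost linear in image size rather than quadratic, while the alternating shift restores the global receptive field the abstract emphasizes.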
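The abstract states that the composite loss constrains edge and region similarity simultaneously, but does not give its exact form. As a rough illustration only, the numpy sketch below combines an edge term (Sobel gradient magnitudes) with a region term (plain MSE as a stand-in), weighted by a hypothetical coefficient `lam`; none of these choices are taken from the paper:

```python
import numpy as np

def sobel_edges(img):
    """Edge-magnitude map of a grayscale (H, W) image via Sobel gradients."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    H, W = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, H - 1):          # skip the 1-pixel border
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.sqrt(gx ** 2 + gy ** 2)

def composite_loss(pred, target, lam=0.1):
    """Region similarity (MSE) plus edge similarity, with hypothetical weight lam."""
    region = np.mean((pred - target) ** 2)
    edge = np.mean(np.abs(sobel_edges(pred) - sobel_edges(target)))
    return region + lam * edge
```

The design intuition matches the abstract's claim: the region term alone tends to over-smooth, while the added edge term penalizes blurred rain-streak boundaries and so preserves the structural detail that plain pixel losses lose.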
Authors
GAO Tao; WEN Yuanbo; CHEN Ting; ZHANG Jing (School of Information Engineering, Chang'an University, Xi'an 710064, China; College of Engineering and Computer Science, Australian National University, Canberra 2600, ACT, Australia)
Source
Journal of Shanghai Jiaotong University, 2023, No. 5, pp. 613-623 (11 pages)
Indexed in: EI, CAS, CSCD, Peking University Core Journal List
Funding
National Key R&D Program of China (2019YFE0108300)
National Natural Science Foundation of China (52172379, 62001058)
Key R&D Program of Shaanxi Province (2019GY-039)
Fundamental Research Funds for the Central Universities (300102242901, 300102112601)