Abstract
To address speckle noise interference, loss of detail information, and blurred target boundaries in medical ultrasound images, an ultrasound image segmentation network based on feature fusion and an attention mechanism is proposed, with an encoder-decoder architecture as its overall structure. First, the encoder module extracts contextual features from the image, capturing global feature information. Then, a multi-scale feature extraction module is designed to capture a wider range of semantic information. Finally, a dual attention mechanism is added to the decoder module to refine feature information along the spatial and channel dimensions and to strengthen attention on the left-ventricle region of echocardiographic images, making the model robust to noisy input. Experimental results show that the proposed network achieves a Dice coefficient of 93.11% and a mean Intersection over Union (mIoU) of 86.80% on an apical four-chamber echocardiography dataset, improvements of 3.06 and 3.95 percentage points, respectively, over the traditional U-Net convolutional neural network. The network effectively captures the detail and boundary information of the left-ventricle region and achieves good segmentation results.
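The Dice coefficient and mIoU reported above are standard overlap metrics for binary segmentation masks. As a generic illustration (not the paper's evaluation code), they can be computed as follows; the function names are my own:

```python
import numpy as np

def dice(pred, gt):
    # Dice = 2|P ∩ G| / (|P| + |G|), for binary masks
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def miou(pred, gt):
    # mean IoU averaged over background (0) and foreground (1) classes
    ious = []
    for cls in (0, 1):
        p, g = (pred == cls), (gt == cls)
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

For example, a prediction covering two pixels against a one-pixel ground truth with one pixel of overlap yields Dice = 2/3, while mIoU additionally averages in the background IoU, which is why the two scores in the abstract differ.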
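The dual attention idea described in the abstract (refining features along the channel and then the spatial dimension) can be sketched minimally. This is an illustrative simplification assuming average-pooled descriptors and sigmoid gates, not the paper's actual module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(feat):
    # feat: (C, H, W) feature map; gates are hypothetical stand-ins
    # for learned channel- and spatial-attention branches.
    # Channel attention: squeeze spatial dims, gate each channel.
    chan_desc = feat.mean(axis=(1, 2))           # (C,)
    feat = feat * sigmoid(chan_desc)[:, None, None]
    # Spatial attention: squeeze channel dim, gate each location.
    spat_desc = feat.mean(axis=0)                # (H, W)
    return feat * sigmoid(spat_desc)[None, :, :]
```

Because both gates lie in (0, 1), each feature value is rescaled rather than replaced, which is what lets such a module suppress speckle-noise responses while keeping the left-ventricle region emphasized.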
Authors
WANG Lu; YAO Yu (Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu, Sichuan 610041, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China)
Source
Journal of Computer Applications (《计算机应用》)
Indexed in: CSCD; Peking University Core Journals
2022, No. S02, pp. 230-236 (7 pages)
Funding
Sichuan Provincial Key Research and Development Project (2021YFS0019).
Keywords
deep learning
image segmentation
echocardiography
feature fusion
attention mechanism