Abstract
Objective Accurate delineation of organs at risk (OARs) is a key step in radiation therapy for cancer. Manual delineation is not only time-consuming and labor-intensive, but its accuracy is also easily affected by image quality and the subjective experience of physicians. This paper proposes a 2D cascade convolutional neural network (CNN) model for the automatic segmentation of OARs in radiotherapy. Method The model consists of two main parts: a classifier and a segmentation network. The classifier uses VGG (visual geometry group) 16 as its backbone and greatly reduces the number of parameters and the computational complexity by removing convolutional layers and adding global pooling. The segmentation network is based on U-Net, replaces transposed convolution with bilinear interpolation for upsampling the feature maps, and introduces Dropout layers to alleviate overfitting. In the prediction stage, the classifier first selects the slices containing the target organ from the input images, the segmentation network then segments the selected slices, and the segmentation results are finally refined by post-processing steps such as removing small connected components. Results The dataset contains abdominal and pelvic CT (computed tomography) images of 89 cervical cancer patients, with manual delineations provided by several radiation oncologists from The First Affiliated Hospital of the University of Science and Technology of China serving as the gold standard for evaluation. In the experiments, the proposed classifier achieved an average classification accuracy, precision, recall, and F1-score of 98.36%, 96.64%, 94.1%, and 95.34%, respectively, on six OARs (left and right femurs, left and right femoral heads, bladder, and rectum). Based on this classification performance, the proposed segmentation method achieved an average Dice coefficient of 92.94% on the test set. Conclusion Compared with existing CNN segmentation models, the proposed method achieves the best segmentation performance, and the classify-then-segment strategy effectively avoids the sparse-annotation problem and reduces false-positive segmentation results. Moreover, the segmentation results of the proposed method show good agreement with those of professional radiation oncologists, which helps to achieve more accurate and faster OAR segmentation in clinical practice.
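As an illustration of the classify-then-segment inference described above, the following is a minimal PyTorch sketch of the two-stage pipeline. The model names (`classifier`, `seg_net`), the 0.5 probability thresholds, and the minimum connected-component size are illustrative assumptions rather than values from the paper.

```python
# A minimal sketch of the two-stage inference pipeline: a slice classifier
# filters slices containing the target organ, a 2D segmentation network
# segments the selected slices, and small connected components are removed.
import numpy as np
import torch
from skimage.morphology import remove_small_objects

@torch.no_grad()
def segment_volume(slices, classifier, seg_net, device="cpu", min_area=64):
    """slices: (N, 1, H, W) float tensor of CT slices for one patient."""
    classifier.eval()
    seg_net.eval()
    masks = np.zeros((slices.shape[0],) + tuple(slices.shape[2:]), dtype=bool)
    for i, s in enumerate(slices):
        x = s.unsqueeze(0).to(device)                      # (1, 1, H, W)
        # Stage 1: skip slices predicted not to contain the target organ.
        if torch.sigmoid(classifier(x)).item() < 0.5:      # assumed single-logit output
            continue
        # Stage 2: pixel-wise segmentation on the selected slice.
        prob = torch.sigmoid(seg_net(x))[0, 0].cpu().numpy()
        mask = prob > 0.5
        # Post-processing: discard small connected components (false positives).
        masks[i] = remove_small_objects(mask, min_size=min_area)
    return masks
```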
Objective Accurate delineation of organs at risk (OARs) is an essential step in radiation therapy for cancer. However, this procedure is frequently time-consuming and error-prone because of the large anatomical variation across patients, differences in observer experience, and the poor soft-tissue contrast of computed tomography (CT) scans. A computer-aided analysis system for OAR auto-segmentation from CT images would reduce the burden on doctors, decrease subjective errors, and improve the effectiveness of radiotherapy. In earlier years, atlas-based methods were extremely popular and widely used in anatomy segmentation. However, the performance of atlas-based segmentation can easily be affected by various factors, such as the quality of the atlas and the registration method. Recently, benefiting from the rapid growth of computing power and the amount of available data, deep learning, especially deep convolutional neural networks (CNNs), has shown great potential in the field of image analysis. For most medical image segmentation tasks, CNN-based algorithms outperform traditional methods. As a special fully convolutional network, U-Net adopts an encoder-decoder design and fuses high- and low-level features through skip connections to realize pixel-wise segmentation. Given the outstanding performance of U-Net, numerous derivatives of the U-Net architecture have gradually been developed for various organ segmentation tasks. V-Net was proposed as an improvement of U-Net to address the difficulties of processing 3D data. V-Net can fully utilize the 3D characteristics of images, but it is unsuitable for 3D medical image datasets with few samples and for the segmentation of small organs. Therefore, a two-step 2D CNN model is proposed for the automatic and accurate segmentation of OARs in radiotherapy. Method In this study, we propose a novel cascade CNN model that mainly includes a slice classifier and a 2D organ segmentation network. Visual geometry group (VGG) 16 is used as the backbone structure of the classifier and is modified by reducing the number of convolutional layers and adding global pooling, which greatly decreases the number of parameters and the computational complexity.
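The classifier modification described in the Method (a reduced VGG-style backbone followed by global pooling) could look roughly like the following PyTorch sketch. The channel widths, network depth, and single-logit output are illustrative assumptions, not the configuration reported in the paper.

```python
# A minimal sketch of a slice classifier in the spirit described above: a
# reduced VGG-style convolutional backbone followed by global average pooling.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, n_convs):
    # VGG-style block: stacked 3x3 convolutions followed by 2x2 max pooling.
    layers = []
    for _ in range(n_convs):
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class SliceClassifier(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        # Fewer convolutional stages and channels than the original VGG16.
        self.features = nn.Sequential(
            conv_block(1, 32, 2),
            conv_block(32, 64, 2),
            conv_block(64, 128, 2),
        )
        # Global average pooling replaces the large fully connected layers,
        # greatly reducing the parameter count and computation.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.fc(x)  # one logit per slice for the "contains organ" decision
```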
Authors
石军
赵敏帆
薛旭东
郝晓宇
金旭
安虹
张红雁
Shi Jun; Zhao Minfan; Xue Xudong; Hao Xiaoyu; Jin Xu; An Hong; Zhang Hongyan (School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China; Department of Radiation Oncology, The First Affiliated Hospital of USTC, University of Science and Technology of China, Hefei 230001, China; Department of Radiation Oncology, Hubei Cancer Hospital, Wuhan 430079, China)
Source
《中国图象图形学报》
CSCD
Peking University Core Journals (北大核心)
2020, No. 10, pp. 2110-2118 (9 pages)
Journal of Image and Graphics
Funding
National Key Research and Development Program of China (2016YFB1000403)
The Fundamental Research Funds for the Central Universities.
Keywords
segmentation of organs at risk
convolutional neural network(CNN)
cascade model
radiation therapy
cervical cancer