Nuclear magnetic resonance imaging of breasts often presents complex backgrounds. Breast tumors exhibit varying sizes, uneven intensity, and indistinct boundaries. These characteristics can lead to challenges such as low accuracy and incorrect segmentation during tumor segmentation. Thus, we propose a two-stage breast tumor segmentation method leveraging multi-scale features and boundary attention mechanisms. Initially, the breast region of interest is extracted to isolate the breast area from surrounding tissues and organs. Subsequently, we devise a fusion network incorporating multi-scale features and boundary attention mechanisms for breast tumor segmentation. We incorporate multi-scale parallel dilated convolution modules into the network, enhancing its capability to segment tumors of various sizes through multi-scale convolution and novel fusion techniques. Additionally, attention and boundary detection modules are included to augment the network's capacity to locate tumors by capturing non-local dependencies in both the spatial and channel domains. Furthermore, a hybrid loss function with boundary weight is employed to address sample class imbalance and enhance the network's boundary maintenance capability through an additional loss term. The method was evaluated on breast data from 207 patients at Ruijin Hospital, yielding a 6.64% increase in Dice similarity coefficient over the benchmark U-Net. Experimental results demonstrate the superiority of the method over other segmentation techniques, with fewer model parameters.
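A hybrid loss with boundary weighting, as described above, can be sketched as a weighted cross-entropy term plus a soft-Dice term. This is a minimal NumPy illustration under assumed conventions (the `boundary_weight` map and the `alpha` mixing factor are hypothetical names, not the paper's implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Soft Dice similarity between a predicted probability map and a binary mask.
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def hybrid_boundary_loss(pred, target, boundary_weight, alpha=0.5):
    # Cross-entropy term weighted per pixel: pixels near the tumor boundary
    # receive larger weights via `boundary_weight` (same shape as the mask),
    # so the network is penalized more for boundary mistakes.
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weighted_bce = np.mean(boundary_weight * bce)
    # Dice term counters foreground/background class imbalance.
    dice_loss = 1.0 - dice_coefficient(pred, target)
    return alpha * weighted_bce + (1 - alpha) * dice_loss
```

In practice the boundary weight map would be derived from the ground-truth mask, e.g. by a distance transform that peaks at the tumor contour.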
Liver cancer has the second highest incidence rate among all types of malignant tumors, and its diagnosis currently depends heavily on doctors' manual labeling of CT scan images, a process that is time-consuming and susceptible to subjective errors. To address these issues, we propose an automatic liver and tumor segmentation model called Res2Swin Unet, based on the Unet architecture. The model combines Attention-Res2 and Swin Transformer modules for liver and tumor segmentation, respectively. Attention-Res2 merges multiple feature map parts with an attention gate via skip connections, while the Swin Transformer captures long-range dependencies and models the input globally. The model also uses deep supervision and a hybrid loss function for faster convergence. On the LiTS2017 dataset, it achieves better segmentation performance than other models, with an average Dice coefficient of 97.0% for liver segmentation and 81.2% for tumor segmentation.
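Deep supervision, mentioned above as a convergence aid, attaches an auxiliary loss to each decoder stage. A minimal sketch of the idea (the function names and uniform stage weights are illustrative assumptions, not the Res2Swin Unet implementation):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # Plain binary cross-entropy, used here as the per-stage base loss.
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))

def deep_supervision_loss(side_outputs, target, base_loss=bce_loss, weights=None):
    # Deep supervision: every decoder stage produces an auxiliary prediction,
    # and the total loss is a weighted sum of per-stage losses. This injects
    # gradient directly into intermediate layers and speeds convergence.
    if weights is None:
        weights = [1.0 / len(side_outputs)] * len(side_outputs)
    return sum(w * base_loss(p, target) for w, p in zip(weights, side_outputs))
```

During inference only the final stage's output is kept; the auxiliary heads exist solely to shape training.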
The high similarity of shellfish images and unbalanced samples are key factors affecting the accuracy of shellfish recognition. This study proposes a new shellfish recognition method, FL_Net, based on a Convolutional Neural Network (CNN). We first establish a shellfish image (SI) dataset with 68 species and 93,574 images, and then propose a filter pruning and repairing model driven by output entropy and orthogonality measurements to recognize shellfish with highly similar features, improving the feature expression ability for valid information. For shellfish recognition with unbalanced samples, a hybrid loss function including a regularization term and a focal loss term is employed to reduce the weight of easily classified samples by controlling each sample species' shared weight in the total loss. Experimental results show that the proposed method achieves a shellfish recognition accuracy of 93.95%, 13.68% higher than the benchmark network (VGG16), and improves accuracy by 0.46%, 17.41%, 17.36%, 4.46%, 1.67%, and 1.03% over AlexNet, GoogLeNet, ResNet50, SN_Net, MutualNet, and ResNeSt, respectively, verifying the efficiency of the proposed method.
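The focal loss term referred to above down-weights easily classified samples via a modulating factor. A self-contained NumPy sketch of the standard focal loss (the `gamma` and `alpha` defaults are the commonly used values, not necessarily FL_Net's settings):

```python
import numpy as np

def focal_loss(pred, target, gamma=2.0, alpha=0.25, eps=1e-7):
    # Focal loss: the modulating factor (1 - p_t)^gamma shrinks the loss of
    # well-classified samples, so training focuses on hard / minority-class
    # samples -- the mechanism used here to handle unbalanced shellfish species.
    pred = np.clip(pred, eps, 1 - eps)
    p_t = np.where(target == 1, pred, 1 - pred)          # prob of the true class
    alpha_t = np.where(target == 1, alpha, 1 - alpha)    # class-balance weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

With `gamma=0` this reduces to alpha-weighted cross-entropy; raising `gamma` suppresses the contribution of confident predictions more aggressively.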
Retinal vessel segmentation in fundus images plays an essential role in the screening, diagnosis, and treatment of many diseases. Acquired fundus images generally suffer from uneven illumination, high noise, and complex structure, which makes vessel segmentation very challenging. Previous retinal vessel segmentation methods mainly use convolutional neural networks based on the U-Network (U-Net) model, and they have limitations such as the loss of microvascular details at the ends of vessels. We address the limitations of convolution by introducing the transformer into retinal vessel segmentation. We therefore propose a hybrid method for retinal vessel segmentation based on modulated deformable convolution and the transformer, named DT-Net. First, multi-scale image features are extracted by deformable convolution and multi-head self-attention (MHSA). Second, image information is recovered and vessel morphology is refined by the proposed transformer decoder block. Finally, local prediction results are obtained by the side output layer. The accuracy of vessel segmentation is further improved by a hybrid loss function. Experimental results show that our method obtains good segmentation performance in terms of Specificity (SP), Sensitivity (SE), Accuracy (ACC), Area Under the Curve (AUC), and F1-score on three publicly available fundus datasets: DRIVE, STARE, and CHASE_DB1.
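The MHSA component mentioned above lets every spatial position attend to every other one, which is what compensates for convolution's limited receptive field. A minimal NumPy sketch of standard multi-head self-attention over a flattened feature sequence (shapes and weight layout are generic assumptions, not DT-Net's exact design):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, num_heads):
    # x: (seq_len, d_model) flattened image features; wq/wk/wv/wo are
    # (d_model, d_model) projections. Each head attends over the whole
    # sequence, capturing long-range dependencies plain convolution misses.
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    split = lambda t: t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(x @ wq), split(x @ wk), split(x @ wv)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))  # (heads, seq, seq)
    out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ wo
```

Each attention row is a probability distribution over all positions, so a thin vessel ending can draw context from distant, better-contrasted vessel segments.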
As an important part of the new generation of information technology, the Internet of Things (IoT) has attracted wide attention and is regarded as an enabling technology for the next generation of healthcare systems. Fundus photography equipment is connected to the cloud platform through the IoT, enabling real-time uploading of fundus images and rapid issuance of diagnostic suggestions by artificial intelligence. At the same time, important security and privacy issues have emerged: the data uploaded to the cloud platform involves patients' personal attributes, health status, and medical application data. Once leaked, abused, or improperly disclosed, personal information security is violated. It is therefore important to address the security and privacy issues raised by connecting massive numbers of medical and healthcare devices to the IoT healthcare infrastructure. To meet this challenge, we propose MIA-UNet, a multi-scale iterative aggregation U-network that aims to achieve accurate and efficient retinal vessel segmentation for ophthalmic auxiliary diagnosis while keeping computational complexity low enough for mobile terminals. In this way, users do not need to upload data to the cloud platform and can analyze and process fundus images on their own mobile terminals, eliminating the leakage of personal information. Specifically, the interconnection between encoder and decoder, as well as the internal connections between decoder subnetworks in the classic U-Net, are redefined and redesigned. Furthermore, we propose a hybrid loss function to smooth the gradient and deal with the imbalance between foreground and background. Compared with the U-Net, the segmentation performance of the proposed network is significantly improved while the number of parameters is increased by only 2%. When applied to three publicly available datasets, DRIVE, STARE, and CHASE_DB1, the proposed network achieves accuracy/F1-scores of 96.33%/84.34%, 97.12%/83.17%, and 97.06%/84.10%, respectively.
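The accuracy/F1 figures quoted across these abstracts are pixel-wise metrics computed from the binary confusion counts. A small sketch of how such scores are derived from a predicted mask and a ground-truth mask (a generic formulation, not any one paper's evaluation script):

```python
import numpy as np

def accuracy_and_f1(pred, target):
    # Pixel-wise accuracy and F1-score for binary vessel masks (0/1 arrays).
    tp = int(np.sum((pred == 1) & (target == 1)))  # vessel predicted as vessel
    tn = int(np.sum((pred == 0) & (target == 0)))  # background as background
    fp = int(np.sum((pred == 1) & (target == 0)))  # false vessel pixels
    fn = int(np.sum((pred == 0) & (target == 1)))  # missed vessel pixels
    acc = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1
```

Because vessel pixels are a small minority of each fundus image, accuracy alone is inflated by the background class; reporting F1 alongside it, as these papers do, reflects vessel-level quality.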
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61172167 and the Science Fund Project of Heilongjiang Province (LH2020F035).
Funding: Jointly supported by the National Key R&D Program Blue Granary Technology Innovation Key Special Project (2020YFD0900204) and the Yantai Key R&D Project (2019XDHZ084).
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61972267, the Natural Science Foundation of Hebei Province under Grant F2018210148, and the University Science Research Project of Hebei Province under Grant ZD2021334.
Funding: This work was supported in part by the National Natural Science Foundation of China (Nos. 62072074, 62076054, 62027827, 61902054), the Frontier Science and Technology Innovation Projects of the National Key R&D Program (No. 2019QY1405), the Sichuan Science and Technology Innovation Platform and Talent Plan (No. 2020JDJQ0020), the Sichuan Science and Technology Support Plan (No. 2020YFSY0010), and the Natural Science Foundation of Guangdong Province (No. 2018A030313354).