
Neural network-based image reconstruction in swept-source optical coherence tomography using undersampled spectral data (Cited by: 2)

Abstract: Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images from undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics processing units (GPUs), removing the spatial aliasing artifacts caused by spectral undersampling while closely matching the images of the same samples reconstructed from the full spectral OCT data (i.e., 1280 spectral points per A-line). We also demonstrate that this framework can be extended to process 3× undersampled spectral data per A-line, with some degradation in reconstructed image quality compared to 2× spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improved the overall imaging performance while using fewer spectral data points per A-line than the 2× and 3× spectral undersampling schemes. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution or signal-to-noise ratio.
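The aliasing problem the abstract refers to can be illustrated with a toy numerical sketch (not the paper's method — the reflector depth and signal model below are hypothetical). In SS-OCT, each A-line is obtained by Fourier-transforming the spectral interferogram, so taking every other spectral point halves the unambiguous depth range: a reflector beyond half depth folds back into the image, which is exactly the artifact the trained network is asked to remove.

```python
import numpy as np

N_FULL = 1280   # spectral points per A-line at full sampling (as in the paper)
N_UNDER = 640   # 2x undersampled

# Hypothetical single reflector: its interference fringe frequency is
# proportional to depth. Pick a depth within the full-sampling range
# (< N_FULL/2) but beyond the undersampled Nyquist limit (>= N_UNDER/2).
depth = 400
k = np.arange(N_FULL)
spectrum = 1.0 + np.cos(2 * np.pi * depth * k / N_FULL)  # DC + fringe

# Full reconstruction: FFT of the spectrum gives the depth profile (A-line).
a_full = np.abs(np.fft.fft(spectrum))
peak_full = np.argmax(a_full[1:N_FULL // 2]) + 1   # skip the DC bin

# 2x undersampled reconstruction: keep every other spectral point.
a_under = np.abs(np.fft.fft(spectrum[::2]))
peak_under = np.argmax(a_under[1:N_UNDER // 2]) + 1

print(peak_full)   # 400: reflector recovered at its true depth
print(peak_under)  # 240: reflector aliased (folded) to N_UNDER - depth
```

The undersampled A-line places the reflector at bin 640 - 400 = 240, i.e., a ghost structure at the wrong depth; the paper's contribution is a network that reconstructs the artifact-free A-line from such undersampled spectra.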
Published in: Light: Science & Applications (SCIE, EI, CAS, CSCD), 2021, Issue 9, pp. 1658–1671 (14 pages)
Funding: The Ozcan Lab at UCLA acknowledges the support of NSF and HHMI. The Larin Lab at UH acknowledges the support of NIH (R01AA028406, R01HD096335, R01EB027099, and R01HL146745).
Keywords: GRAPHICS, HARDWARE, IMAGE