Light field imaging technology can obtain three-dimensional (3D) information of a test surface in a single exposure. Traditional light field reconstruction algorithms not only take a long time to trace rays back to the original image, but also require the exact parameters of the light field system, such as the position and posture of the microlens array (MLA); if these parameters cannot be obtained precisely, errors appear in the reconstructed image. This paper proposes a reconstruction algorithm for light field imaging based on the point spread function (PSF) that does not require prior knowledge of the system. An accurate PSF derivation for a light field system is presented, and modeling and simulation were conducted to obtain the relationship between the spatial distribution characteristics and the PSF of the light field system. A morphology-based method is proposed to analyze the overlapping areas of the subimages of light field images and thereby identify the accurate spatial location of the MLA, which is then used to refocus the light field image accurately. A light field system was built to verify the algorithm's effectiveness. Experimental results on a step-height standard show that the measurement accuracy is improved by more than 41.0% compared with the traditional method. A microstructure measurement further shows parameter-accuracy improvements of 25.4% in peak-to-valley value and 23.5% in root-mean-square value. These results validate that the algorithm can improve both the refocusing efficiency and the accuracy of light field imaging results, with the advantage of refocusing without prior knowledge of the system. The proposed method provides a new solution for fast and accurate 3D measurement based on a light field.
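The paper's morphology-based analysis of subimage overlap is not reproduced here, but the basic idea of locating MLA subimage centers from a uniformly illuminated ("white") light field image can be sketched with a simplified stand-in: projecting the image onto its axes and finding the peaks of the resulting profiles. Everything below (the synthetic image, the 20-pixel pitch, the `profile_peaks` helper) is a hypothetical illustration, not the authors' algorithm.

```python
import numpy as np

# Synthetic "white image": a 3x3 grid of bright circular subimages
# (radius 6) with a 20-pixel pitch, mimicking the MLA subimage pattern.
img = np.zeros((70, 70))
yy, xx = np.mgrid[0:70, 0:70]
for cy in (15, 35, 55):
    for cx in (15, 35, 55):
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 36] = 1.0

def profile_peaks(profile, thresh_ratio=0.5):
    """Indices of strict local maxima above a fraction of the global max."""
    p = profile
    is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]) & \
              (p[1:-1] > thresh_ratio * p.max())
    return np.flatnonzero(is_peak) + 1

# Row/column sums peak at the lenslet centre lines; their spacing
# estimates the MLA pitch in pixels.
rows = profile_peaks(img.sum(axis=1))
cols = profile_peaks(img.sum(axis=0))
pitch = np.diff(cols).mean()
```

On this synthetic image the recovered centers fall on the true 20-pixel grid; a real system would additionally need to handle rotation and non-integer pitch, which is where the paper's morphological overlap analysis comes in.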
Most approaches that estimate a scene's 3D depth from a single image model the point spread function (PSF) as a 2D Gaussian function. However, such methods suffer from noise and have difficulty achieving high-quality depth recovery. We present a simple yet effective approach to estimate the amount of spatially varying defocus blur at edges, based on a Cauchy distribution model for the PSF. The raw image is re-blurred twice using two known Cauchy distribution kernels, and the defocus blur amount at each edge is derived from the gradient ratio between the two re-blurred images. By propagating the blur amount at edge locations to the entire image using matting interpolation, a full depth map is then recovered. Experimental results on several real images demonstrate both the feasibility and the effectiveness of our non-Gaussian PSF model in providing a better estimate of the defocus map from a single uncalibrated defocused image. The results also show that our method is robust to image noise, inaccurate edge localization, and interference from neighboring edges, and that it generates more accurate scene depth maps than most existing methods based on a Gaussian PSF model.
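The gradient-ratio step can be sketched in 1D. The sketch below is an illustration under stated assumptions, not the authors' implementation: it uses a synthetic step edge, discrete truncated Cauchy kernels, and the fact that Cauchy distributions are closed under convolution (their scale parameters add), so the gradient of a blurred step peaks at 1/(pi*(gamma + g_i)) and the unknown blur gamma has a closed-form solution from the ratio R of the two re-blurred gradients.

```python
import numpy as np

def cauchy_kernel(gamma, radius=100):
    """Discrete, truncated, renormalized 1D Cauchy kernel."""
    x = np.arange(-radius, radius + 1)
    k = gamma / (np.pi * (x ** 2 + gamma ** 2))
    return k / k.sum()

# Simulate an edge defocused by an unknown Cauchy blur (gamma = 3).
gamma_true = 3.0
step = np.zeros(401)
step[200:] = 1.0
observed = np.convolve(step, cauchy_kernel(gamma_true), mode="same")

# Re-blur twice with known kernels (g1 < g2).
g1, g2 = 1.0, 2.0
r1 = np.convolve(observed, cauchy_kernel(g1), mode="same")
r2 = np.convolve(observed, cauchy_kernel(g2), mode="same")

# At the edge, gradients scale as 1/(gamma + g_i), so
# R = (gamma + g2) / (gamma + g1)  =>  gamma = (g2 - R*g1) / (R - 1).
edge = np.argmax(np.abs(np.gradient(observed)))
R = np.abs(np.gradient(r1))[edge] / np.abs(np.gradient(r2))[edge]
gamma_est = (g2 - R * g1) / (R - 1)
```

With these parameters `gamma_est` recovers the true blur to within a few percent; the residual error comes from truncating the heavy Cauchy tails. The paper's full method additionally propagates the per-edge estimates to a dense defocus map via matting interpolation.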
Funding: This work was partially supported by the National Key R&D Program of China (No. 2017YFA0701200), the National Natural Science Foundation of China (Grant No. 52075100), and a Shanghai Science and Technology Committee Innovation Grant (19ZR1404600).
Funding: the National High-Tech Research and Development Plan of China (863 Program) under Grant No. 2006AA012324, the Aeronautical Science Foundation of China (No. 20060853010), and the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20040699034.