Abstract
Single-shot 2D optical imaging of transient scenes is indispensable for numerous areas of study. Among existing techniques, compressed optical-streaking ultrahigh-speed photography (COSUP) uses a cost-efficient design to endow off-the-shelf CCD and CMOS cameras with ultrahigh frame rates. Thus far, COSUP's application scope has been limited by the long processing time and unstable image quality of existing analytical-modeling-based video reconstruction. To overcome these problems, we have developed a snapshot-to-video autoencoder (S2V-AE), a deep neural network that maps a compressively recorded 2D image to a movie. The S2V-AE preserves spatiotemporal coherence in reconstructed videos and has a flexible structure that tolerates changes in input data. Implemented in compressed ultrahigh-speed imaging, the S2V-AE enables the development of single-shot machine-learning-assisted real-time (SMART) COSUP, which features a reconstruction time of 60 ms and a large sequence depth of 100 frames. SMART-COSUP is applied to wide-field multiple-particle tracking at 20,000 frames per second. As a universal computational framework, the S2V-AE is readily adaptable to other modalities in high-dimensional compressed sensing. SMART-COSUP is also expected to find wide applications in applied and fundamental sciences.
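As a minimal sketch of the compressive recording the abstract describes, optical streaking can be modeled as shifting each frame of a video cube along one spatial axis in proportion to its time index and summing the sheared frames onto a single 2D snapshot; the S2V-AE then learns the inverse of such a mapping. The shift rate and array sizes below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def streak_forward(video, shift_per_frame=1):
    """Toy optical-streaking forward model.

    video: array of shape (T, H, W); each frame t is shifted by
    t * shift_per_frame pixels along the width axis, and all shifted
    frames are summed into one 2D snapshot (compressed measurement).
    """
    T, H, W = video.shape
    snapshot = np.zeros((H, W + (T - 1) * shift_per_frame))
    for t, frame in enumerate(video):
        s = t * shift_per_frame
        snapshot[:, s:s + W] += frame
    return snapshot

# Toy example with a 100-frame cube (matching the paper's sequence depth)
video = np.random.rand(100, 32, 32)
snapshot = streak_forward(video)
print(snapshot.shape)  # (32, 131)
```

Reconstruction then amounts to recovering the (T, H, W) cube from this single 2D measurement, which is ill-posed and is what motivates the learned S2V-AE inverse.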
Funding
Natural Sciences and Engineering Research Council of Canada(CRDPJ-532304-18,I2IPJ-555593-20,RGPAS-507845-2017,RGPIN-2017-05959)
Canada Foundation for Innovation and Ministère de l’économie et de l’Innovation du Québec(37146)
Fonds de recherche du Québec–Nature et technologies(2019-NC-252960)
Fonds de Recherche du Québec–Santé(267406,280229)
Ministère des Relations internationales et de la Francophonie du Québec
Compute Canada
Calcul Québec