Abstract: To address the problem of restoring missing information in large damaged regions that contain rich texture and complex structure, an image inpainting algorithm based on partitioning feature sub-regions is proposed. First, according to the different features contained in the image, feature extraction is performed with feature formulas, and feature sub-regions are partitioned by statistics of the feature values, which speeds up inpainting. Second, the priority computation of the original Criminisi algorithm is improved by increasing the influence of the structure term, which avoids structure discontinuities. Then, the selection of sample patches is jointly constrained by the target patch and its best neighboring similar patch to determine the optimal set of sample patches. Finally, the optimal sample patch is synthesized with a weight-allocation method. Experimental results show that, compared with the original Criminisi algorithm, the proposed algorithm improves the peak signal-to-noise ratio (PSNR) by 2–3 dB, and compared with the patch-priority-weight algorithm based on sparse representation, its inpainting efficiency is clearly higher. The proposed algorithm is suitable not only for repairing ordinary small-scale damaged images, but also performs better on large damaged images with rich texture and complex structure, and the restored images better preserve perceived visual connectivity.
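The filling order described above follows the Criminisi priority scheme with the structure (data) term given greater influence. Below is a minimal, hypothetical sketch of such a priority computation; the weighted-sum form and the values of alpha and beta are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def patch_priority(confidence, grad_x, grad_y, normal_x, normal_y,
                   fill_front, patch_radius=4, alpha=0.3, beta=0.7):
    """Illustrative Criminisi-style priority with a re-weighted structure term.

    confidence : 2-D array, current confidence C of every pixel (1 = known).
    grad_x/y   : isophote components (image gradient rotated 90 degrees).
    normal_x/y : unit normal of the fill front at each pixel.
    fill_front : list of (row, col) pixels on the boundary of the hole.
    alpha/beta : hypothetical weights; beta > alpha strengthens the
                 structure (data) term, as the abstract suggests.
    """
    priorities = {}
    r = patch_radius
    for (i, j) in fill_front:
        # Confidence term: mean confidence inside the candidate patch.
        window = confidence[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
        c_term = window.mean()
        # Data (structure) term: isophote strength along the front normal.
        d_term = abs(grad_x[i, j] * normal_x[i, j] +
                     grad_y[i, j] * normal_y[i, j]) / 255.0
        # Weighted sum instead of the original product C(p) * D(p),
        # so strong structures dominate the filling order.
        priorities[(i, j)] = alpha * c_term + beta * d_term
    return priorities
```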
Funding: Supported by the National Natural Science Foundation of China (67441830108 and 41871224).
Abstract: Spectral and spatial features in remotely sensed data play an irreplaceable role in classifying crop types for precision agriculture. Despite the wide adoption of handcrafted features, designing or selecting features valid for specific crop types requires prior knowledge and thus remains an open challenge. Convolutional neural networks (CNNs) can effectively overcome this issue with their ability to generate high-level features automatically, but they are still less adept at mining spectral features than spatial features. This study proposed an enhanced spectral feature called the Stacked Spectral Feature Space Patch (SSFSP) for CNN-based crop classification. SSFSP is a stack of two-dimensional (2D) gridded spectral feature images that record the spatial and intensity distribution characteristics of various crop types in a 2D feature space spanned by two spectral bands. SSFSP can be input into 2D-CNNs to support the simultaneous mining of spectral and spatial features, as the spectral features are converted to 2D images that a CNN can process. We tested the performance of SSFSP by using it as the input to seven CNN models and one multilayer perceptron model for crop type classification, compared to using conventional spectral features as input. Using high spatial resolution hyperspectral datasets at three sites, the comparative study demonstrated that SSFSP outperforms conventional spectral features in classification accuracy, robustness, and training efficiency. The theoretical analysis identifies three reasons for this performance. First, SSFSP mines the spectral interrelationship with feature generality, which reduces the required number of training samples. Second, the intra-class variance can be largely reduced by grid partitioning. Third, SSFSP is a highly sparse feature, which reduces the dependence on the CNN model structure and enables early and fast convergence in model training. In conclusion, SSFSP has great potential for practical crop classification.
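As a rough illustration of the idea behind a single SSFSP layer, the sketch below bins the values of two spectral bands within a local neighbourhood into a 2D gridded feature space; the band pair, window size, and grid resolution are hypothetical choices, not the construction used in the study.

```python
import numpy as np

def spectral_feature_patch(cube, row, col, band_a, band_b,
                           window=5, grid_size=32):
    """Sketch of one 2-D gridded spectral-feature image (one SSFSP-like layer).

    cube      : hyperspectral image, shape (H, W, bands), values in [0, 1].
    row, col  : centre pixel whose neighbourhood is summarised.
    band_a/b  : the two spectral bands spanning the 2-D feature space.
    window    : half-size of the spatial neighbourhood (illustrative).
    grid_size : resolution of the gridded feature space (illustrative).
    """
    h, w, _ = cube.shape
    r0, r1 = max(row - window, 0), min(row + window + 1, h)
    c0, c1 = max(col - window, 0), min(col + window + 1, w)
    a = cube[r0:r1, c0:c1, band_a].ravel()
    b = cube[r0:r1, c0:c1, band_b].ravel()
    # Map each neighbourhood pixel into a cell of the band_a-band_b feature
    # space and accumulate occupancy counts, giving a sparse 2-D image that
    # a 2D-CNN can convolve over.
    patch, _, _ = np.histogram2d(a, b, bins=grid_size, range=[[0, 1], [0, 1]])
    return patch / max(patch.max(), 1e-8)

# Stacking such patches over several band pairs would yield a multi-channel
# SSFSP-like input for a 2D-CNN.
```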
Funding: This work was supported by the Sichuan Science and Technology Program (2019JDJQ0002, 2019YFG0496, 2021016, 2020JDTD0020) and partially supported by the National Science Foundation of China (42075142).
Abstract: Recently, deep learning-based image outpainting has made notable progress in the computer vision field. However, because image information is not fully exploited, existing methods often generate unnatural and blurry outpainting results. To solve this issue, we propose a perceptual image outpainting method that takes advantage of low-level feature fusion and a multi-patch discriminator. Specifically, we first fuse the texture information in the low-level feature maps of the encoder and reuse these aggregated features together with the semantic (or structural) information of the deep feature maps, so that richer texture information can be exploited to generate more authentic outpainting images. We then introduce a multi-patch discriminator to enhance the generated texture; it judges the generated image from features at different levels and pushes the network to produce more natural and clearer outpainting results. Moreover, we introduce a perceptual loss and a style loss to further improve the texture and style of the outpainting images. Compared with existing methods, our method produces finer outpainting results. Experimental results on the Places2 and Paris StreetView datasets demonstrate the effectiveness of our method for image outpainting.
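For readers unfamiliar with the loss terms mentioned above, the following PyTorch-style sketch shows one common way to combine a perceptual loss with a Gram-matrix style loss over feature maps; the feature extractor and loss weights are assumptions for illustration, not the paper's configuration.

```python
import torch

def gram_matrix(feat):
    """Channel-wise Gram matrix used for the style loss."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def perceptual_and_style_loss(gen_feats, real_feats,
                              w_perc=0.1, w_style=250.0):
    """Illustrative combination of perceptual and style losses.

    gen_feats / real_feats : lists of feature maps (e.g. from a fixed VGG)
                             for the generated and ground-truth images.
    w_perc, w_style        : hypothetical weights, not the paper's values.
    """
    # Perceptual loss: feature-space distance between generated and real images.
    perc = sum(torch.nn.functional.l1_loss(g, r)
               for g, r in zip(gen_feats, real_feats))
    # Style loss: distance between Gram matrices of the same feature maps.
    style = sum(torch.nn.functional.l1_loss(gram_matrix(g), gram_matrix(r))
                for g, r in zip(gen_feats, real_feats))
    return w_perc * perc + w_style * style
```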
Funding: Supported by the National Natural Science Foundation (61472196, 61672305), the Natural Science Foundation of Shandong Province (BS2015DX010, ZR2015FM012), and the Key Research and Development Foundation of Shandong Province (2017GGX10133).
Abstract: To address the problem that fixed features and a single appearance model have difficulty adapting to complex scenarios, a kernelized correlation filter target tracking algorithm based on online saliency feature selection and fusion is proposed. It combines the correlation filter tracking framework with a salient-feature model of the target. During tracking, the maximum kernel correlation filter response values of the different feature models are computed separately, and the response weights are set dynamically according to the saliency of each feature. The final target position is obtained from the fused filter response, which improves localization accuracy. The target model is updated online according to the feature saliency measurements. Experimental results show that the proposed method effectively exploits discriminative feature fusion to improve tracking in complex environments.
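A minimal sketch of the saliency-weighted response fusion described above, assuming each feature model already yields a correlation-filter response map; using the peak response as the saliency weight is an illustrative choice, not necessarily the measure used in the paper.

```python
import numpy as np

def fuse_responses(responses):
    """Fuse per-feature correlation-filter response maps by their saliency.

    responses : dict mapping feature name -> 2-D response map produced by a
                kernelized correlation filter trained on that feature.
    Returns the fused map and the normalized weights, where each weight is
    the peak response of that feature (an illustrative saliency measure; a
    peak-to-sidelobe ratio could be used instead).
    """
    weights = {name: float(r.max()) for name, r in responses.items()}
    total = sum(weights.values()) + 1e-8
    weights = {name: w / total for name, w in weights.items()}
    fused = sum(w * responses[name] for name, w in weights.items())
    return fused, weights

def locate_target(fused_response):
    """Target position = location of the maximum fused response."""
    idx = np.unravel_index(np.argmax(fused_response), fused_response.shape)
    return idx  # (row, col) offset within the search window
```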