Funding: The National Natural Science Foundation of China (Nos. 61201344, 61271312, 61401085, 11301074), the Research Fund for the Doctoral Program of Higher Education (No. 20120092120036), the Program for Special Talents in Six Fields of Jiangsu Province (No. DZXX-031), the Industry-University-Research Cooperation Project of Jiangsu Province (No. BY2014127-11), the "333" Project (No. BRA2015288), the High-End Foreign Experts Recruitment Program (No. GDT20153200043), and the Open Fund of the Jiangsu Engineering Center of Network Monitoring (No. KJR1404).
Abstract: In order to classify nonlinear features with a linear classifier and improve classification accuracy, a deep learning network named the kernel principal component analysis network (KPCANet) is proposed. First, the data is mapped into a higher-dimensional space with kernel principal component analysis to make it linearly separable. Then a two-layer KPCANet is built to obtain the principal components of the image. Finally, the principal components are classified with a linear classifier. Experimental results show that the proposed KPCANet is effective in face recognition, object recognition, and handwritten digit recognition, and that it generally outperforms the principal component analysis network (PCANet). Moreover, KPCANet is invariant to illumination and stable under occlusion and slight deformation.
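The core KPCA step above — mapping data into a higher-dimensional feature space so that it becomes linearly separable — can be sketched in NumPy. This is an illustrative sketch, not the paper's implementation: the RBF kernel, the `gamma` value, and the two-ring toy data are assumptions chosen to show the idea.

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=0.5):
    """Project samples onto their top kernel principal components."""
    # Pairwise squared Euclidean distances -> RBF Gram matrix
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)

    # Center the Gram matrix in feature space
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one

    # Eigendecomposition; keep the leading components
    vals, vecs = np.linalg.eigh(Kc)                 # ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                              # projected samples

# Two concentric rings: not linearly separable in the input space
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 100)
radii = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)])
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
Z = rbf_kernel_pca(X, n_components=2)
print(Z.shape)  # (100, 2)
```

After such a projection, the two rings become separable by a hyperplane, which is what lets the subsequent linear classifier succeed.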
Abstract: Deep learning is a powerful technique that is widely applied to image recognition and natural language processing, among many other tasks. In this work, we propose an efficient technique that utilizes pre-trained Convolutional Neural Network (CNN) architectures to extract powerful features from images for object recognition. We build on the existing concept of extending the learning of pre-trained CNNs to new databases through their activations, by considering multiple deep layers. We exploit the progressive learning that happens at the various intermediate layers of a CNN to construct Deep Multi-Layer (DM-L) feature extraction vectors that achieve excellent object recognition performance. Two popular pre-trained CNN architectures, VGG_16 and VGG_19, are used to extract feature sets from three deep fully connected layers, namely "fc6", "fc7", and "fc8". Using principal component analysis (PCA), the dimensionality of the DM-L feature vectors is reduced to form compact feature vectors, which are fed to an external classifier ensemble instead of the Softmax-based classification layers of the two original pre-trained CNN models. The proposed DM-L technique has been applied to the benchmark Caltech-101 object recognition database. Conventional wisdom may suggest that features extracted from the deepest layer, "fc8", rather than "fc6", would yield the best recognition performance, but our results prove otherwise for the two considered models: the "fc6"-based feature vectors achieve the best recognition performance. State-of-the-art recognition performances of 91.17% and 91.35% are achieved with the "fc6"-based feature vectors for the VGG_16 and VGG_19 models, respectively.
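The reduction-and-ensemble stage of the DM-L pipeline can be sketched with synthetic data. Everything below is a hedged illustration: the random matrices stand in for fc6/fc7/fc8 activations (real use would forward images through pre-trained VGG_16/VGG_19), `pca_reduce` is a plain SVD-based PCA, and the nearest-centroid voter is a simple stand-in for the paper's classifier ensemble.

```python
import numpy as np

def pca_reduce(F, k):
    """Project row-wise features onto their top-k principal components via SVD."""
    Fc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:k].T

def nearest_centroid(train, y, test):
    """Stand-in base classifier: assign each test row to the closest class mean."""
    classes = np.unique(y)
    cents = np.stack([train[y == c].mean(axis=0) for c in classes])
    d = ((test[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 10)
# Synthetic stand-ins for fc6/fc7/fc8 activations (4096-, 4096-, 1000-dim,
# matching VGG layer widths); class-1 rows are shifted to make them separable.
layers = [rng.normal(size=(20, d)) + y[:, None] for d in (4096, 4096, 1000)]

votes = []
for F in layers:
    Z = pca_reduce(F, 8)                        # per-layer dimensionality reduction
    votes.append(nearest_centroid(Z, y, Z))     # train == test here; sketch only
pred = (np.sum(votes, axis=0) >= 2).astype(int)  # majority vote over 3 layers
print("vote accuracy:", (pred == y).mean())
```

In the actual pipeline each reduced layer feeds a trained base classifier and the ensemble combines their decisions; the majority vote above merely shows the shape of that combination.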
Abstract: Linear octrees offer a volume representation of 3-D objects that is quite compact and lends itself to traditional object processing operations. However, the linear octree generated from three orthogonal silhouettes by the volume intersection technique depends on the viewpoints. Recognition by matching object representations to model representations requires that the representations be independent of viewpoint. To obtain viewpoint-independent representations, the three principal axes of the object are obtained from the moment of inertia matrix by computing its eigenvectors. The linear octree is projected onto the image planes of the three principal views (along the principal axes) to obtain three normalized linear quadtrees. The object matching procedure has two phases: the first matches the normalized linear quadtrees of the unknown object against the models in a library, using a measure of symmetric difference to select a subset of candidate models; the second generates the normalized linear octrees of the object and the selected models, and matches the normalized linear octree of the unknown object against each, choosing the model with the minimum symmetric difference.
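The normalization step above — recovering viewpoint-independent principal axes as eigenvectors of the object's moment matrix — can be sketched in NumPy. This is an illustrative sketch under stated assumptions: it uses the second-moment (scatter) matrix of the occupied voxel centers, whose eigenvectors coincide with the principal axes of the inertia tensor, and a toy axis-aligned box as the object.

```python
import numpy as np

def principal_axes(voxels):
    """Principal axes of a voxel set from its 3x3 second-moment matrix."""
    P = voxels - voxels.mean(axis=0)      # translate centroid to the origin
    M = P.T @ P / len(P)                  # second-moment (scatter) matrix
    vals, vecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    return vecs[:, ::-1]                  # columns = axes, largest moment first

# An axis-aligned 10x4x2 box elongated along x: the first principal axis
# should come out as +/- the x direction.
xs, ys, zs = np.meshgrid(np.arange(10), np.arange(4), np.arange(2), indexing="ij")
vox = np.c_[xs.ravel(), ys.ravel(), zs.ravel()].astype(float)
axes = principal_axes(vox)
print(np.abs(axes[:, 0]))  # first principal axis, up to sign: ~x direction
```

Projecting the octree onto the planes orthogonal to these three axes then yields quadtrees that do not change when the object is viewed from a different orientation, which is what makes the two-phase matching well defined.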