Based on least-squares minimization, a computationally efficient learning algorithm for Principal Component Analysis (PCA) is derived. Dual learning-rate parameters are introduced adaptively so that the proposed algorithm achieves both fast convergence and high accuracy in extracting all the principal components. It is shown that all the information needed for PCA can be completely represented by the unnormalized weight vector, which is updated based only on the corresponding neuron's input-output product. The convergence performance of the proposed algorithm is briefly analyzed. The relation between Oja's rule and the least-squares learning rule is also established. Finally, a simulation example is given to illustrate the effectiveness of this algorithm for PCA.
Funding: Supported by the National Natural Science Foundation of China and the Science Foundation of Guangxi Educational Administration.
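The abstract does not give the update equations of the proposed least-squares algorithm, but since the paper relates it to Oja's rule, a minimal sketch of the classical Oja update with deflation (a standard way to extract all principal components one neuron at a time) may help fix ideas. The function name `oja_pca`, the fixed learning rate `lr`, and the deflation step below are illustrative assumptions; they stand in for, and do not reproduce, the paper's dual adaptive learning-rate scheme.

```python
import numpy as np

def oja_pca(X, n_components, lr=0.01, n_epochs=50, rng=None):
    """Sequentially extract principal components with Oja's rule and deflation.

    X : (n_samples, n_features) data matrix, assumed zero-mean.
    Returns an (n_components, n_features) array of unit-norm weight vectors.
    """
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    components = np.zeros((n_components, n_features))
    residual = X.copy()

    for k in range(n_components):
        # Random unit initialization for the k-th neuron's weight vector.
        w = rng.normal(size=n_features)
        w /= np.linalg.norm(w)
        for _ in range(n_epochs):
            for x in residual:
                y = w @ x                  # neuron output for this sample
                w += lr * y * (x - y * w)  # Oja's rule: input-output product with decay term
        w /= np.linalg.norm(w)
        components[k] = w
        # Deflation: remove the extracted direction so the next neuron
        # converges to the next principal component.
        residual = residual - np.outer(residual @ w, w)
    return components

if __name__ == "__main__":
    # Toy usage: data with a clear dominant direction along the first axes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
    X -= X.mean(axis=0)
    W = oja_pca(X, n_components=2, rng=0)
    print(W)
```

In this sketch the weights are renormalized after training each component; the paper's algorithm, by contrast, works with the unnormalized weight vector and adapts two learning-rate parameters, details that are not recoverable from the abstract alone.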