In machine learning, selecting useful features and rejecting redundant ones is a prerequisite for better modeling and prediction. In this paper, we first study representative feature selection methods based on correlation analysis and demonstrate that, although they work well for static systems, they do not work well for time series. A theoretical analysis for linear time series is then carried out to show why they fail. Based on these observations, we propose a new correlation-based feature selection method. The main idea is that features highly correlated with the progressive response but weakly correlated with other features should be selected, and among groups of selected features with similar residuals, the group with fewer features should be preferred. For both linear and nonlinear time series, the proposed method achieves high accuracy in feature selection and feature rejection.
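As a rough illustration of this selection rule (a minimal sketch, not the authors' method), the C code below greedily keeps a feature when its Pearson correlation with the response exceeds a relevance threshold and its correlation with every already-kept feature stays below a redundancy threshold. The thresholds t_rel and t_red, the fixed series length N_SAMPLES, and the greedy ordering are illustrative assumptions not specified in the abstract.

```c
#include <math.h>

#define N_SAMPLES 200   /* illustrative series length (assumption) */

/* Pearson correlation between two series of length n. */
static double pearson(const double *x, const double *y, int n)
{
    double sx = 0.0, sy = 0.0, sxx = 0.0, syy = 0.0, sxy = 0.0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];        sy  += y[i];
        sxx += x[i] * x[i]; syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    double cov = sxy - sx * sy / n;
    double vx  = sxx - sx * sx / n;
    double vy  = syy - sy * sy / n;
    return cov / sqrt(vx * vy);   /* assumes non-constant series */
}

/* Greedy correlation-based selection (sketch): keep feature j when
 * |corr(x_j, y)| >= t_rel and |corr(x_j, x_k)| <= t_red for every
 * feature k already kept.  Returns the number of selected features;
 * their indices are written to `selected`. */
static int select_features(const double X[][N_SAMPLES], int n_feat,
                           const double *y, double t_rel, double t_red,
                           int *selected)
{
    int n_sel = 0;
    for (int j = 0; j < n_feat; j++) {
        if (fabs(pearson(X[j], y, N_SAMPLES)) < t_rel)
            continue;                         /* weakly related to response */
        int redundant = 0;
        for (int k = 0; k < n_sel; k++) {
            if (fabs(pearson(X[j], X[selected[k]], N_SAMPLES)) > t_red) {
                redundant = 1;                /* too similar to a kept feature */
                break;
            }
        }
        if (!redundant)
            selected[n_sel++] = j;
    }
    return n_sel;
}
```

When two candidate groups leave similar residuals, the rule stated in the abstract favors the smaller group; in this sketch that would simply mean preferring the run of select_features that returns fewer indices.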
This paper presents prediction with a secondary grey relational analysis algorithm based on Long Range Alpha Detection (LRAD) experiments, and examines the influence on the experiment of six main factors: the diameter of the pipe under test, pipe length, measurement distance, the pipeline, wind speed, and air flow rate. The entire prediction process is implemented in the C programming language, and the completeness and correctness of the program are finally verified against the measured data.
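For reference, a compact C sketch of the grey relational analysis step is given below. It is consistent with the abstract's use of C but is not the authors' program: the reference/comparison sequence layout, the distinguishing coefficient rho = 0.5, and the fixed sequence length N_POINTS are illustrative assumptions, and all sequences are assumed to be normalised to a common scale beforehand.

```c
#include <math.h>

#define N_POINTS 6   /* illustrative number of data points per sequence */

/* Grey relational grades of m comparison sequences against one reference
 * sequence, with distinguishing coefficient rho (0.5 is customary). */
static void grey_relational_grades(const double ref[N_POINTS],
                                   const double cmp[][N_POINTS], int m,
                                   double rho, double grade[])
{
    double dmin = INFINITY, dmax = 0.0;

    /* global minimum / maximum absolute difference over all sequences */
    for (int i = 0; i < m; i++) {
        for (int k = 0; k < N_POINTS; k++) {
            double d = fabs(ref[k] - cmp[i][k]);
            if (d < dmin) dmin = d;
            if (d > dmax) dmax = d;
        }
    }

    /* relational coefficient at each point, averaged into a grade */
    for (int i = 0; i < m; i++) {
        double s = 0.0;
        for (int k = 0; k < N_POINTS; k++) {
            double d = fabs(ref[k] - cmp[i][k]);
            s += (dmin + rho * dmax) / (d + rho * dmax);
        }
        grade[i] = s / N_POINTS;
    }
}
```

Ranking the six factors by their grades would indicate which of them most strongly tracks the measured response; the secondary (second-pass) step described in the abstract is not reproduced in this sketch.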