Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10871146, 10771015).
Abstract: This paper discusses the admissibility of estimators in a class of linear models that includes the following common models: the univariate and multivariate linear models, the growth curve model, the extended growth curve model, the seemingly unrelated regression equations, the variance components model, and so on. It is proved that admissible estimators of functions of the regression coefficient β in the class of linear models with multivariate t error terms, called Model II, remain admissible when the error terms have a multivariate normal distribution, under a strictly convex loss function or a matrix loss function. It is also proved under Model II that the usual estimators of β are admissible for p ≤ 2 with a quadratic loss function, and for any p with a matrix loss function, where p is the dimension of β.
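For readers outside decision theory, here is a minimal sketch of the standard definitions behind these admissibility claims; the notation R, L, d, Q is ours, not the paper's:

```latex
% Risk of an estimator d of g(\beta) under a loss function L:
R(d, \beta) = \mathrm{E}_{\beta}\, L\bigl(d(Y),\, g(\beta)\bigr).
% d is admissible if no estimator d' satisfies
% R(d', \beta) \le R(d, \beta) for all \beta with strict inequality for some \beta.
% A quadratic loss takes the form
L\bigl(d, g(\beta)\bigr) = \bigl(d - g(\beta)\bigr)'\, Q\, \bigl(d - g(\beta)\bigr),
\qquad Q \ \text{nonnegative definite.}
```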
Abstract: We continue our study of classification learning algorithms generated by Tikhonov regularization schemes associated with Gaussian kernels and general convex loss functions. The main purpose of this paper is to improve error bounds by presenting a new comparison theorem associated with general convex loss functions and Tsybakov noise conditions. Some concrete examples are provided to illustrate the improved learning rates, which demonstrate the effect of various loss functions on learning algorithms. In our analysis, the convexity of the loss functions plays a central role.
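To make the scheme under study concrete, here is a minimal sketch, assuming the logistic loss as the convex surrogate and plain gradient descent on the representer coefficients; it is not the paper's algorithm, and sigma, lam, lr and iters are illustrative choices:

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Gram matrix of the Gaussian kernel exp(-|x - z|^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit(X, y, lam=0.1, sigma=1.0, lr=0.1, iters=500):
    """Minimize (1/n) sum_i log(1 + exp(-y_i f(x_i))) + lam ||f||_K^2
    over f = sum_j alpha_j K(., x_j), by gradient descent on alpha."""
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(n)
    for _ in range(iters):
        m = y * (K @ alpha)                     # margins y_i f(x_i)
        grad_loss = -(y / (1 + np.exp(m))) / n  # derivative of logistic loss in f
        alpha -= lr * (K @ grad_loss + 2 * lam * (K @ alpha))
    return alpha

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])
alpha = fit(X, y)
pred = np.sign(gaussian_kernel(X, X) @ alpha)
```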
Funding: Supported by the National Natural Science Foundation of China (51875457), the Key Research Project of Shaanxi Province (2022GY-050, 2022GY-028), the Natural Science Foundation of Shaanxi Province of China (2022JQ-636, 2022JQ-705, 2021JQ-714), and the Shaanxi Youth Talent Lifting Plan of the Shaanxi Association for Science and Technology (20220129).
Abstract: As a way of training a single hidden layer feedforward network (SLFN), the extreme learning machine (ELM) is rapidly becoming popular due to its efficiency. However, ELM tends to overfit, which makes the model sensitive to noise and outliers. To solve this problem, the L_(2,1)-norm is introduced to ELM and an L_(2,1)-norm robust regularized ELM (L_(2,1)-RRELM) is proposed. L_(2,1)-RRELM gives constant penalties to outliers to reduce their adverse effects by replacing the least-squares loss function with a non-convex loss function. In light of the non-convexity of L_(2,1)-RRELM, the concave-convex procedure (CCCP) is applied to solve the model. The convergence of L_(2,1)-RRELM is also established to show its robustness. To further verify the effectiveness of L_(2,1)-RRELM, it is compared with three popular extreme learning algorithms on an artificial dataset and University of California Irvine (UCI) datasets, and each algorithm is tested in different noise environments with two evaluation criteria, root mean square error (RMSE) and fitness. The simulation results indicate that L_(2,1)-RRELM has smaller RMSE and greater fitness under different noise settings. Numerical analysis shows that L_(2,1)-RRELM has better generalization performance, stronger robustness, and higher anti-noise ability and fitness.
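For context, a minimal sketch of the standard ridge-regularized ELM that L_(2,1)-RRELM modifies; this is the conventional baseline, not the proposed robust variant, and n_hidden and C are illustrative:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, C=1.0, seed=0):
    # Random input weights and biases are fixed, never trained.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix
    # Output weights from a regularized least-squares problem:
    # min_beta |H beta - y|^2 + |beta|^2 / C
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

L_(2,1)-RRELM replaces the squared loss above with a bounded non-convex loss and an L_(2,1) penalty, which is why CCCP is needed to solve it.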
Funding: Research reported in this article was partially funded through a Patient-Centered Outcomes Research Institute (PCORI) Award [ME-1409-21219]. The second author's research was also partially supported by the Chinese 111 Project [B14019] and the US National Science Foundation [grant number DMS-1612873].
Abstract: In personalised medicine, the goal is to make a treatment recommendation for each patient with a given set of covariates to maximise the treatment benefit measured by the patient's response to the treatment. In application, such a treatment assignment rule is constructed using training data consisting of patients' responses and covariates. Instead of modelling responses using treatments and covariates, an alternative approach is maximising a response-weighted target function whose value directly reflects the effectiveness of treatment assignments. Since the target function involves a loss function, efforts have been made recently on the choice of the loss function to ensure a computationally feasible and theoretically sound solution. We propose to use a smooth hinge loss function so that the target function is convex and differentiable, which possesses good asymptotic properties and numerical advantages. To further simplify the computation and interpretability, we focus on rules that are linear functions of the covariates and discuss their asymptotic properties. We also examine the performance of our method with simulation studies and real data analysis.
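The abstract does not spell out the paper's exact smoothing, but a standard smoothed hinge with smoothing parameter δ > 0 looks as follows; it is convex, continuously differentiable, and recovers the hinge loss as δ → 0:

```latex
\phi_{\delta}(t) =
\begin{cases}
1 - t - \delta/2, & t \le 1 - \delta, \\
(1 - t)^{2}/(2\delta), & 1 - \delta < t < 1, \\
0, & t \ge 1.
\end{cases}
```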
Abstract: Existing support vector machines (SVMs) for large-scale data classification are sensitive to noisy samples. To address this problem, a new soft kernel convex hull support vector machine for large-scale noisy datasets (SCH-SVM) is proposed by defining the soft kernel convex hull and introducing the pinball loss function. SCH-SVM first defines the concept of the soft kernel convex hull, then selects the soft kernel convex hull vectors that can represent the geometric contour of the samples in the kernel space, takes the corresponding original-space samples as training samples, and seeks the maximum quantile distance between the two classes' soft kernel convex hulls based on the pinball loss function. Theoretical analysis and experimental results demonstrate the effectiveness of the proposed classifier in terms of training time, noise resistance, and the number of support vectors.
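For reference, the pinball (quantile) loss with parameter τ ∈ [0, 1], as used in pinball-loss SVMs, is defined by (a standard definition, not quoted from the paper):

```latex
L_{\tau}(u) =
\begin{cases}
u, & u \ge 0, \\
-\tau u, & u < 0.
\end{cases}
```

Applied to u = 1 − y f(x), it also penalises (with slope τ) points classified correctly with large margin, which is what turns margin maximisation into quantile-distance maximisation and reduces sensitivity to noise near the boundary.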
Abstract: Throughout this note, the following notations are used. For matrices A and B, A > B means that A − B is positive definite symmetric; A⊗B denotes the Kronecker product of A and B; R(A), A' and A^- stand for the column space, the transpose and any g-inverse of A, respectively; P_A = A(A'A)^-A'; for an s×t matrix B = (b_1, …, b_t), vec(B) denotes the st-dimensional vector (b_1', b_2', …, b_t')'; trA stands for the trace of the square matrix A.
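These are standard conventions; as a quick illustration of how vec and the Kronecker product interact (our example, not part of the note), the well-known identity vec(AXB) = (B'⊗A)vec(X) can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3))
X = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 2))

vec = lambda M: M.reshape(-1, order="F")  # stack columns, matching the note's vec

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```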
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10871226, 11001247 and 61179041) and the Natural Science Foundation of Zhejiang Province (Grant No. Y6100096).
Abstract: In the present paper, we investigate the learning rate of l2-coefficient regularized classification with strong loss and data-dependent kernel functional spaces. The results show that the learning rate is influenced by the strong convexity.
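For orientation, a typical l2-coefficient regularization scheme over a sample z = {(x_i, y_i)}_{i=1}^m with loss φ has the form below; this is a common formulation, and the paper's normalization may differ:

```latex
f_{\mathbf{z}} = \sum_{i=1}^{m} \alpha_i^{\mathbf{z}} K(x_i, \cdot), \qquad
\boldsymbol{\alpha}^{\mathbf{z}} = \arg\min_{\alpha \in \mathbb{R}^{m}}
\frac{1}{m} \sum_{i=1}^{m} \phi\Bigl( y_i \sum_{j=1}^{m} \alpha_j K(x_j, x_i) \Bigr)
+ \lambda \sum_{j=1}^{m} \alpha_j^{2}.
```

Penalizing the coefficients α_j directly, rather than the RKHS norm of f, is what makes the hypothesis space data-dependent.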
Funding: This is a plenary report on the International Symposium on Approximation Theory and Remote Sensing Applications held in Kunming, China, in April 2006. Supported in part by the NSF of China under grants 10571010 and 10171007, and by a Startup Grant for Doctoral Research of Beijing University of Technology.
Abstract: Neyman-Pearson classification has been studied in several articles before, but they all proceeded in classes of indicator functions with the indicator function as the loss function, which makes the calculation difficult. This paper investigates Neyman-Pearson classification with a convex loss function in an arbitrary class of real measurable functions. A general condition is given under which Neyman-Pearson classification with a convex loss function has the same classifier as that with the indicator loss function. We analyse NP-ERM with a convex loss function and prove its performance guarantees. An example of a complexity penalty pair for the convex-loss risk in terms of Rademacher averages is studied, which produces a tight PAC bound for NP-ERM with a convex loss function.
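As background, the Neyman-Pearson problem and its empirical (NP-ERM) counterpart can be written as follows, where R_0 is the type I (false positive) error, R_1 the type II error, α the prescribed level, and κ a tolerance; the notation is ours:

```latex
\min_{f \in \mathcal{F}} R_1(f) \ \ \text{s.t.}\ \ R_0(f) \le \alpha
\qquad \rightsquigarrow \qquad
\hat{f} = \arg\min_{f \in \mathcal{F}} \widehat{R}_1(f) \ \ \text{s.t.}\ \ \widehat{R}_0(f) \le \alpha + \kappa,
```

where the empirical errors are computed separately from the class-0 and class-1 samples; the convex-loss version studied here replaces the indicator inside these risks with a convex surrogate.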
Funding: Supported by the National Natural Science Foundation of China (11571198, 11701319).
Abstract: In this note we establish some appropriate conditions for stochastic equality of two random variables/vectors which are ordered with respect to convex ordering or with respect to supermodular ordering. Multivariate extensions of this result are also considered.
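For completeness, the two orders in question have the following standard definitions:

```latex
X \le_{\mathrm{cx}} Y \iff \mathrm{E}\,\varphi(X) \le \mathrm{E}\,\varphi(Y)
\ \text{for all convex } \varphi \text{ for which the expectations exist;}
\qquad
\mathbf{X} \le_{\mathrm{sm}} \mathbf{Y} \iff \mathrm{E}\,\varphi(\mathbf{X}) \le \mathrm{E}\,\varphi(\mathbf{Y})
\ \text{for all supermodular } \varphi .
```

A function φ is supermodular if φ(x ∨ y) + φ(x ∧ y) ≥ φ(x) + φ(y), where ∨ and ∧ denote the componentwise maximum and minimum.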