Funding: Supported by the National Science Foundation (12001142), the Harbin Normal University Doctoral Initiation Fund (XKB201812), and the Science Foundation Grant of Heilongjiang Province (LH2019A017).
Abstract: This article continues, in depth, the study of the research suggestions made by M. Z. Nashed and G. F. Votruba in the journal "Bull. Amer. Math. Soc." in 1974. Concerned with the pricing of non-reachable "contingent claims" in an incomplete financial market, by constructing a specific bounded linear operator A: l_1^n → l_2 from the non-reflexive Banach space l_1^n to the Hilbert space l_2, the problem of pricing non-reachable "contingent claims" is reduced to studying the (single-valued) selections of the (set-valued) metric generalized inverse A^∂ of the operator A. In this paper, using the structure theory of Banach spaces and the generalized inverse method for operators, we obtain a bounded linear single-valued selection A^σ = A^+ of A^∂.
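For reference, a minimal LaTeX sketch of the standard notions the abstract relies on, following Nashed's terminology. The symbol A^∂ for the set-valued metric generalized inverse is an assumption where the source character was garbled, and the definition below is the usual one (minimal-norm extremal solutions), not a quotation from the paper.

```latex
% Sketch of the standard definitions (assumed notation, not quoted from the paper).
% A : X -> Y is a bounded linear operator between Banach spaces.
\[
  x_0 \in X \text{ is an \emph{extremal solution} of } Ax = b
  \iff \|Ax_0 - b\| = \inf_{x \in X} \|Ax - b\|,
\]
\[
  A^{\partial}(b) = \bigl\{\, x_0 : x_0 \text{ is an extremal solution of } Ax = b
  \text{ and } \|x_0\| = \min\{\|x\| : x \text{ is an extremal solution}\} \,\bigr\}.
\]
% A (single-valued) selection of the set-valued map A^{\partial} is any map
% \sigma with \sigma(b) \in A^{\partial}(b) for all b in its domain;
% the paper exhibits a bounded linear selection A^{\sigma} = A^{+}.
```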
Abstract: To estimate the central dimension-reduction space in multivariate nonparametric regression, Sliced Inverse Regression (SIR), Sliced Average Variance Estimation (SAVE), and Slicing Average Third-moment Estimation (SAT) have been developed. Since slicing estimation has very different asymptotic behavior for SIR and SAVE, the relevant study has been made case by case, whereas the kernel estimators of SIR and SAVE share similar asymptotic properties. In this paper, we also investigate the kernel estimation of SAT. We prove its asymptotic normality and show that, compared with the existing results, kernel smoothing for SIR, SAVE, and SAT has very similar asymptotic behavior.
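To make the slicing idea concrete, here is a minimal sketch of the classical slicing estimator for SIR (not the paper's kernel-smoothed variant) in Python with numpy; `n_slices` and `n_directions` are illustrative parameters, not values from the paper.

```python
# Minimal sketch of the slicing estimator for SIR (numpy only).
# The paper studies kernel-smoothed estimators; this shows the slicing baseline.
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=1):
    n, p = X.shape
    # Standardize the predictors: Z = (X - mean(X)) Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sigma_inv_sqrt
    # Slice the sample by the order statistics of y and average Z in each slice
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)                # slice mean of standardized X
        M += (len(idx) / n) * np.outer(m, m)   # weighted SIR kernel matrix
    # Leading eigenvectors of M, mapped back to the original X scale
    _, evecs_M = np.linalg.eigh(M)
    return Sigma_inv_sqrt @ evecs_M[:, ::-1][:, :n_directions]

# Toy check: y depends on X only through one linear combination
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
y = (X @ np.array([1.0, 2.0, 0.0, 0.0])) ** 3 + 0.1 * rng.standard_normal(500)
B = sir_directions(X, y)   # should be roughly proportional to (1, 2, 0, 0)
```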
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2020YFA0714200.
Abstract: Extreme learning machine (ELM) is a feedforward neural network with a single layer of hidden nodes, where the weights and biases connecting the input to the hidden nodes are randomly assigned. The output weights between the hidden nodes and the outputs are learned by a linear model. It is interesting to ask whether the training error of ELM is significantly affected by the hidden layer output matrix H, because a positive answer would enable us to obtain a smaller training error from a better H. For a single hidden layer feedforward neural network (SLFN) with one input neuron, there is a significant difference between the training errors of different Hs. We find a reliably strong negative rank correlation between the training errors and some singular values of the Moore-Penrose generalized inverse of H. Based on this rank correlation, a selection algorithm is proposed to choose a robust and appropriate H that achieves a smaller training error among numerous Hs. Extensive experiments, including tests on real data sets, are carried out to validate the selection algorithm. The results show that it achieves better performance in validity, speed, and robustness.
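Below is a minimal sketch, in Python with numpy, of the ELM training step and a singular-value-based selection loop in the spirit of the abstract. The sigmoid activation, the number of candidates, and the use of the largest singular value of pinv(H) as the ranking statistic are illustrative assumptions; the paper determines which singular values actually correlate with the training error.

```python
# Minimal ELM sketch plus a selection loop over candidate H matrices (numpy only).
# Activation, candidate count, and ranking statistic are illustrative assumptions.
import numpy as np

def elm_fit(X, T, n_hidden, rng):
    """Single-hidden-layer ELM: random input layer, pinv-solved output layer."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden layer output matrix
    H_pinv = np.linalg.pinv(H)                       # Moore-Penrose inverse of H
    beta = H_pinv @ T                                # output weights (linear model)
    train_err = np.linalg.norm(H @ beta - T)
    return H_pinv, train_err

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))            # one input neuron, as in the paper
T = np.sin(3.0 * X).ravel()                          # toy regression target

# Rank candidate hidden layers by a singular value of pinv(H); per the reported
# negative rank correlation, a larger value is taken to predict a smaller error.
candidates = [elm_fit(X, T, n_hidden=20, rng=rng) for _ in range(30)]
scores = [np.linalg.svd(Hp, compute_uv=False)[0] for Hp, _ in candidates]
best = int(np.argmax(scores))
print("chosen candidate:", best, "training error:", candidates[best][1])
```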