Abstract: The past two decades have witnessed the active development of a rich probability theory of Studentized statistics, or self-normalized processes, typified by Student's t-statistic as introduced by W. S. Gosset more than a century ago, and their applications to statistical problems in high dimensions, including feature selection and ranking, large-scale multiple testing, and sparse, high-dimensional signal detection. Many of these applications rely on the robustness of Studentization/self-normalization against heavy-tailed sampling distributions. This paper gives an overview of the salient progress of self-normalized limit theory, from Student's t-statistic to more general Studentized nonlinear statistics. Prototypical examples include Studentized one- and two-sample U-statistics. Furthermore, we go beyond independence and glimpse some very recent advances in self-normalized moderate deviations under dependence.
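For orientation (a standard identity, not a result of the survey itself), write S_n = X_1 + ... + X_n and V_n^2 = X_1^2 + ... + X_n^2. Student's t-statistic is a deterministic, monotone transform of the self-normalized sum S_n/V_n:
\[
t_n = \frac{\bar X_n}{s_n/\sqrt{n}} = \frac{S_n/V_n}{\sqrt{\bigl(n-(S_n/V_n)^2\bigr)/(n-1)}},
\qquad s_n^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar X_n)^2,
\]
so that, for x ≥ 0, {t_n ≥ x} = {S_n/V_n ≥ x (n/(n+x^2-1))^{1/2}}; this is why limit theorems for S_n/V_n transfer directly to the t-statistic.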
Funding: Project supported by the National Natural Science Foundation of China, an NSERC Canada grant of M. Csorgo at Carleton University of Canada, the Fok Yingtung Education Foundation, and an NSERC Canada Scientific Exchange Award at Carleton University.
Abstract: Using suitable self-normalization for partial sums of i.i.d. random variables, Griffin and Kuelbs established the law of the iterated logarithm for all distributions in the domain of attraction of a normal law. We obtain the corresponding results for Studentized increments of partial sums under the same condition.
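For reference, the Griffin-Kuelbs self-normalized law of the iterated logarithm invoked here can be stated as follows (in its standard form; the present paper concerns the corresponding Studentized increments): if EX = 0 and X is in the domain of attraction of the normal law, then, with S_n = ∑_{i≤n} X_i and V_n^2 = ∑_{i≤n} X_i^2,
\[
\limsup_{n\to\infty}\frac{S_n}{V_n\sqrt{2\log\log n}} = 1 \quad \text{a.s.}
\]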
Funding: Grants from the National Natural Science Foundation of China (No. 11225104), the 973 Program (No. 2015CB352302), and the Fundamental Research Funds for the Central Universities.
Abstract: The sub-linear expectation, also called G-expectation, is a non-linear expectation with the advantage of modeling non-additive probability problems and volatility uncertainty in finance. Let {X_n; n ≥ 1} be a sequence of independent random variables in a sub-linear expectation space (Ω, H, Ê). Denote S_n = ∑_{k=1}^n X_k and V_n^2 = ∑_{k=1}^n X_k^2. In this paper, a moderate deviation for self-normalized sums, that is, the asymptotic capacity of the event {S_n/V_n ≥ x_n} for x_n = o(√n), is found both for identically distributed random variables and for independent but not necessarily identically distributed random variables. As an application, self-normalized laws of the iterated logarithm are obtained. A Bernstein-type inequality is also established for proving the law of the iterated logarithm.
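As a point of comparison (the classical probabilistic counterpart, not a statement taken from this paper): for i.i.d. random variables with EX = 0 in the domain of attraction of the normal law, Shao's self-normalized moderate deviation asserts that for any x_n → ∞ with x_n = o(√n),
\[
\lim_{n\to\infty}\frac{1}{x_n^{2}}\log P\!\left(\frac{S_n}{V_n}\ge x_n\right) = -\frac{1}{2}.
\]
The abstract above concerns the analogue in which probability is replaced by capacity under the sub-linear expectation.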
Abstract: With the continuous growth of online news articles, there arises the necessity for an efficient abstractive summarization technique to deal with information overload. Abstractive summarization is highly complex and requires a deeper understanding and proper reasoning to come up with its own summary outline. The abstractive summarization task is framed as seq2seq modeling. Existing seq2seq methods perform better on short sequences; for long sequences, however, performance degrades due to the high computational cost. Hence, a two-phase self-normalized deep neural document summarization model, consisting of an improvised extractive cosine-normalization phase and a seq2seq abstractive phase, is proposed in this paper. The novelty is to parallelize the sequence computation during training by incorporating a feed-forward, self-normalized neural network in the extractive phase using Intra Cosine Attention Similarity (Ext-ICAS) with sentence dependency position; it does not require any explicit normalization technique. The proposed abstractive Bidirectional Long Short-Term Memory (Bi-LSTM) encoder sequence model performs better than the Bidirectional Gated Recurrent Unit (Bi-GRU) encoder, with lower training loss and faster convergence. The proposed model was evaluated on the Cable News Network (CNN)/Daily Mail dataset; an average ROUGE score of 0.435 was achieved, and computation during extractive-phase training was reduced by 59% in terms of the average number of similarity computations.
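The Ext-ICAS component is not specified in detail in this abstract, so the snippet below is only a minimal sketch, in plain NumPy, of the underlying idea of scoring sentences by intra-document cosine similarity without any learned normalization; the function name, array shapes, and the sum-of-similarities scoring rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def intra_cosine_attention(sent_vecs: np.ndarray) -> np.ndarray:
    """Score each sentence by its total cosine similarity to the other sentences.

    sent_vecs: array of shape (n_sentences, dim) holding sentence embeddings.
    Returns an array of shape (n_sentences,) of salience scores; no softmax or
    other explicit normalization is applied.
    """
    # L2-normalize each sentence vector (epsilon guards against zero vectors).
    norms = np.linalg.norm(sent_vecs, axis=1, keepdims=True) + 1e-12
    unit = sent_vecs / norms
    # Pairwise cosine similarities, computed in a single matrix product.
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)      # exclude self-similarity
    return sim.sum(axis=1)          # higher score = more central sentence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(5, 8))       # 5 sentences, 8-dim embeddings
    print(intra_cosine_attention(embeddings))
```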
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11301481, 11371321 and 10901138), the National Statistical Science Research Project of China (Grant No. 2012LY174), the Zhejiang Provincial Natural Science Foundation of China (Grant No. LQ12A01018), the Fundamental Research Funds for the Central Universities, and the Zhejiang Provincial Key Research Base for Humanities and Social Science Research (Statistics).
Abstract: Let {X, X_n; n ≥ 0} be a sequence of independent and identically distributed random variables with EX = 0, and assume that EX^2 I(|X| ≤ x) is slowly varying as x → ∞, i.e., X is in the domain of attraction of the normal law. In this paper, a self-normalized law of the iterated logarithm for the geometrically weighted random series ∑_{n=0}^∞ β^n X_n (0 < β < 1) is obtained under some minimal conditions.
Funding: Research supported by grants from the National Natural Science Foundation of China (No. 11225104), the 973 Program (No. 2015CB352302), and the Fundamental Research Funds for the Central Universities.
Abstract: G-Brownian motion has a very rich and interesting new structure that nontrivially generalizes classical Brownian motion. Its quadratic variation process is also a continuous process with independent and stationary increments. We prove a self-normalized functional central limit theorem for independent and identically distributed random variables under the sub-linear expectation, with the limit process being a G-Brownian motion self-normalized by its quadratic variation. To prove the self-normalized central limit theorem, we also establish a new Donsker's invariance principle with the limit process being a generalized G-Brownian motion.
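For context, the classical (linear-expectation) self-normalized central limit theorem that this G-expectation result generalizes states that, for i.i.d. random variables X, X_1, X_2, ... with S_n = ∑_{i≤n} X_i and V_n^2 = ∑_{i≤n} X_i^2,
\[
\frac{S_n}{V_n}\ \xrightarrow{\ d\ }\ N(0,1)
\]
holds if and only if EX = 0 and X is in the domain of attraction of the normal law (Giné, Götze and Mason, 1997).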
Abstract: In the case of Z_+^d (d ≥ 2), the positive d-dimensional lattice points with partial ordering ≤, let {X_k, k ∈ Z_+^d} be i.i.d. random variables with mean 0, and set S_n = ∑_{k≤n} X_k and V_n^2 = ∑_{k≤n} X_k^2. The precise asymptotics, as ε ↓ 0, of ∑_n 1/(|n|(log|n|)^d) P(|S_n/V_n| ≥ ε√(log log|n|)) and ∑_n (log|n|)^b/(|n|(log|n|)^{d-1}) P(|S_n/V_n| ≥ ε√(log|n|)) are established.
Funding: Supported by the Singapore Ministry of Education Academic Research Fund Tier 2 (Grant No. MOE2018-T2-2-076).
Abstract: The Berry-Esseen bound provides an upper bound on the Kolmogorov distance between a random variable and the normal distribution. In this paper, we establish Berry-Esseen bounds with optimal rates for self-normalized sums of locally dependent random variables, assuming only a second-moment condition. Our proof leverages Stein's method and introduces a novel randomized concentration inequality, which may also be of independent interest for other applications. Our main results are applied to self-normalized sums of m-dependent random variables and graph dependency models.
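For comparison with the independent case (a classical benchmark, not a result of this paper): for i.i.d. X_1, ..., X_n with EX_1 = 0, σ^2 = EX_1^2 and E|X_1|^3 < ∞, there is an absolute constant A such that
\[
\sup_{x\in\mathbb{R}}\left|P\!\left(\frac{S_n}{V_n}\le x\right)-\Phi(x)\right| \le A\,\frac{E|X_1|^{3}}{\sigma^{3}\sqrt{n}},
\]
a self-normalized Berry-Esseen bound of the same optimal order n^{-1/2} as in the classical normalized case.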
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11971063).
Abstract: In this paper, we establish normalized and self-normalized Cramér-type moderate deviations for the Euler-Maruyama scheme for SDEs. As consequences of our results, Berry-Esseen bounds and moderate deviation principles are also obtained. Our normalized Cramér-type moderate deviations refine the recent work of Lu et al. (2022).
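For orientation (a standard formulation rather than a quotation from this paper), a self-normalized Cramér-type moderate deviation result for a statistic T_n asserts that
\[
\frac{P(T_n \ge x)}{1-\Phi(x)} = 1 + o(1)
\]
uniformly over a growing range of x; in the i.i.d. case with finite third moment, T_n = S_n/V_n attains the range 0 ≤ x ≤ o(n^{1/6}) (Shao, 1999).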
Funding: Supported by the National Natural Science Foundation of China (Nos. 10971081, 11101180).
Abstract: Let X, X_1, X_2, ... be a sequence of nondegenerate i.i.d. random variables with zero means which is in the domain of attraction of the normal law. Let {a_{ni}, 1 ≤ i ≤ n, n ≥ 1} be an array of real numbers satisfying some suitable conditions. In this paper, we show that a central limit theorem for self-normalized weighted sums holds. We also deduce a version of the ASCLT (almost sure central limit theorem) for self-normalized weighted sums.
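For readers unfamiliar with the ASCLT, its unweighted self-normalized form (stated here only for orientation; the weighted version in this paper is more general) asserts that for every real x,
\[
\lim_{n\to\infty}\frac{1}{\log n}\sum_{k=1}^{n}\frac{1}{k}\,\mathbf{1}\!\left\{\frac{S_k}{V_k}\le x\right\} = \Phi(x)\quad\text{a.s.},
\]
where S_k and V_k^2 denote the partial sum and the sum of squares of X_1, ..., X_k.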
Abstract: A finite group G is called a J N J-group if every proper subgroup H of G is either subnormal in G or self-normalizing. We determine the structure of non-J N J-groups in which all proper subgroups are J N J-groups.
Funding: National Natural Science Foundation of China (Nos. 10671176, 10771192).
Abstract: Let {X, X_n, n ≥ 1} be a sequence of strictly stationary φ-mixing positive random variables in the domain of attraction of the normal law. Under some suitable conditions, the principle for self-normalized products of partial sums is obtained.
Funding: National Natural Science Foundation of China (Nos. 10471126, 10671176).
Abstract: In this article, the unit root test for the AR(p) model with GARCH errors is considered. The Dickey-Fuller test statistics are rewritten in the form of self-normalized sums, and the asymptotic distribution of the test statistics is derived under weak conditions.
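As background (the classical i.i.d.-error baseline, not the GARCH-error result of this paper), consider the AR(1) model y_t = ρ y_{t-1} + ε_t with y_0 = 0 and i.i.d. errors of mean zero and finite variance. The OLS estimator ρ̂ = ∑ y_{t-1} y_t / ∑ y_{t-1}^2 satisfies, under the unit-root null ρ = 1,
\[
n(\hat\rho-1) = \frac{n^{-1}\sum_{t=1}^{n} y_{t-1}\varepsilon_t}{n^{-2}\sum_{t=1}^{n} y_{t-1}^{2}}\ \Rightarrow\ \frac{W(1)^{2}-1}{2\int_{0}^{1}W(s)^{2}\,ds},
\]
where W is a standard Brownian motion; the paper rewrites the Dickey-Fuller statistics as self-normalized sums to derive the analogous limits when the errors follow a GARCH process.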