Abstract: Expounded in this survey article is a series of refinements and generalizations of Hilbert's inequalities, mostly published during the years 1990 through 2002. The inequalities concerned may be classified into several types (discrete, integral, etc.), and various related results obtained by L. C. Hsu, M. Z. Gao, B. C. Yang, J. C. Kuang, Hu Ke, H. Hong, et al. are described in some detail. Moreover, earlier and recent extensions of Hilbert-type inequalities are stated for reference, and new trends and directions for further research are pointed out.
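For orientation, the classical Hilbert double-series inequality (the p = q = 2 case) that these refinements and generalizations build on reads:

```latex
\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{a_m b_n}{m+n}
< \pi\left(\sum_{m=1}^{\infty}a_m^2\right)^{1/2}
      \left(\sum_{n=1}^{\infty}b_n^2\right)^{1/2},
```

valid for non-negative sequences $\{a_m\}$, $\{b_n\}$ not identically zero, where the constant $\pi$ is best possible; the refinements surveyed here sharpen the constant factor or generalize the kernel $1/(m+n)$.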
Abstract: To improve the noise robustness of conventional compressed sensing (CS) recovery algorithms, an adaptive compressed sensing (ACS) algorithm is proposed, combining measurement-matrix optimization with adaptive sensing. The algorithm allocates all measurement energy to the support positions estimated by a conventional CS recovery algorithm; since the estimated support set contains the true support positions, this effectively raises the measurement signal-to-noise ratio (SNR). The optimal new measurement vector is then derived from the viewpoint of measurement-matrix optimization, with its nonzero part designed as an eigenvector of the Gram matrix. Simulation results show that, as the number of measurements grows, the energy of the off-diagonal entries of the Gram matrix increases more slowly than under conventional CS; under identical numbers of measurements, sparsity levels, and SNR, the normalized mean-square reconstruction error of the proposed algorithm is more than 10 dB below that of conventional CS recovery algorithms and more than 5 dB below that of a typical Bayesian method. Analysis shows that the proposed adaptive sensing mechanism effectively improves the measurement-energy efficiency and noise robustness of conventional CS recovery algorithms.
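A minimal sketch of the adaptive measurement step described above: a crude support estimate stands in for the full CS recovery algorithm, and the new measurement vector is built to be nonzero only on that estimated support, with the nonzero part taken as an eigenvector of the Gram matrix restricted there. The sizes, the correlation-based support proxy, and the choice of the leading eigenvector are all illustrative assumptions, not the paper's exact derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 20, 3          # signal length, initial measurements, sparsity (assumed)

# Sparse signal and an initial Gaussian measurement matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Step 1: crude support estimate (largest correlations, standing in
# for a full conventional CS recovery algorithm)
est_support = np.argsort(np.abs(Phi.T @ y))[-2 * k:]

# Step 2: new measurement vector, nonzero only on the estimated support;
# its nonzero part is an eigenvector of the restricted Gram matrix
# (which eigenvector is optimal follows from the paper's derivation;
# the leading one is used here purely for illustration)
G = Phi[:, est_support].T @ Phi[:, est_support]     # Gram matrix on the support
eigvals, eigvecs = np.linalg.eigh(G)
phi_new = np.zeros(n)
phi_new[est_support] = eigvecs[:, -1]               # unit norm by construction

print("new measurement vector norm:", np.linalg.norm(phi_new))
```

All measurement energy of `phi_new` thus lands on the estimated support, which is what raises the measurement SNR in the scheme described above.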
Abstract: The measurement matrix is one of the three core components of compressed sensing theory and directly affects the application of compressed sensing to image fusion. To address the difficulty of implementing random measurement matrices in hardware, this paper designs a measurement matrix whose entries take only the three values -1, 0, and 1, and uses a Gram-matrix-based optimization method to make it as incoherent as possible with the sparse transform matrix. Experimental results show that the proposed measurement matrix not only improves the PSNR (Peak Signal-to-Noise Ratio) of reconstructed images, but also achieves good fusion quality in compressed-sensing-based image fusion at a sampling rate of only 50% of the uncompressed domain.
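A sketch of the two ingredients above: a ternary {-1, 0, 1} measurement matrix, and a Gram-matrix coherence check against a sparse transform (an orthonormal DCT basis as a stand-in). The entry probabilities and the DCT choice are assumptions for illustration; the paper's optimization iteratively reduces the off-diagonal Gram entries, whereas this sketch only measures the resulting mutual coherence.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 64   # hypothetical measurement count and signal length

# Ternary measurement matrix: entries drawn from {-1, 0, 1}
# (the probabilities are an assumed sparse-ternary choice, not the paper's)
Phi = rng.choice([-1, 0, 1], size=(m, n), p=[0.25, 0.5, 0.25]).astype(float)

# Sparse transform: orthonormal DCT-II basis (rows are basis vectors)
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
Psi = np.cos(np.pi * (2 * j + 1) * i / (2 * n))
Psi[0] /= np.sqrt(2)
Psi *= np.sqrt(2.0 / n)

# Mutual coherence via the Gram matrix of the column-normalized
# effective dictionary D = Phi * Psi^T
D = Phi @ Psi.T
D /= np.linalg.norm(D, axis=0, keepdims=True)
G = D.T @ D
mu = np.max(np.abs(G - np.eye(n)))   # largest off-diagonal Gram entry
print(f"mutual coherence: {mu:.3f}")
```

Lowering `mu` (i.e. shrinking the off-diagonal Gram entries) is the objective the Gram-matrix-based optimization pursues, while the ternary entries keep the matrix multiply hardware-friendly: only additions, subtractions, and skips.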
Funding: supported by a gift to Princeton University from iFlytek and by the Office of Naval Research (ONR) (Grant No. N00014-13-1-0338).
Abstract: A fairly comprehensive analysis is presented for the gradient descent dynamics of training two-layer neural network models in the situation when the parameters in both layers are updated. General initialization schemes as well as general regimes for the network width and training data size are considered. In the overparametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels. In addition, it is proved that throughout the training process the functions represented by the neural network model are uniformly close to those of a kernel method. For general values of the network width and training data size, sharp estimates of the generalization error are established for target functions in the appropriate reproducing kernel Hilbert space.
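The overparametrized claim above can be illustrated numerically: a wide two-layer ReLU network, with both layers trained by gradient descent, drives the training loss to near zero even on random labels. The sizes, scaling, and step size below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5, 3, 512          # samples, input dim, hidden width (overparametrized)

X = rng.standard_normal((n, d)) / np.sqrt(d)
y = rng.standard_normal(n)    # random labels: fit regardless of label quality

# Two-layer network f(x) = a^T relu(W x) / sqrt(m); both layers are trained
W = rng.standard_normal((m, d))
a = rng.standard_normal(m)

def forward(W, a, X):
    H = np.maximum(X @ W.T, 0.0)          # ReLU features, shape (n, m)
    return H @ a / np.sqrt(m), H

lr = 0.5
for step in range(5000):
    pred, H = forward(W, a, X)
    r = pred - y                           # residuals
    # gradients of the mean squared loss (1/2n) * ||pred - y||^2
    grad_a = H.T @ r / (n * np.sqrt(m))
    grad_W = ((np.outer(r, a) * (H > 0)).T @ X) / (n * np.sqrt(m))
    a -= lr * grad_a
    W -= lr * grad_W

loss = 0.5 * np.mean((forward(W, a, X)[0] - y) ** 2)
print(f"final training loss: {loss:.2e}")
```

In this regime the network's function stays close to that of the associated kernel (NTK-type) predictor throughout training, which is the mechanism behind the exponential convergence stated in the abstract.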