Funding: This work was partially supported by the NNSF of China (10441001), the Project Sponsored by SRF for ROCS (SEM), and the NSF of Hubei Province. The second author's research was also partially supported by the Pre-studies Project of NBRP (2003CCA2400).
Abstract: When the number of runs is large, searching for uniform designs in the sense of low discrepancy is an NP-hard problem, and most available uniform designs have a small number of runs (≤50). In this article, the authors employ a Hamming-distance method to construct two- and three-level uniform designs, some of which have a large number of runs. Several infinite classes of uniform designs in which the Hamming distance between any two distinct rows is the same are also obtained. Two measures of uniformity, the centered L2-discrepancy (CD) and the wrap-around L2-discrepancy (WD), are employed.
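The centered and wrap-around L2-discrepancies mentioned above have closed-form expressions due to Hickernell, and the equal-Hamming-distance property is easy to check directly. The following Python sketch illustrates both computations for a small two-level design; the mapping of level q of an s-level factor to the point (2q − 1)/(2s) in [0, 1) and all function names are illustrative assumptions, not constructions taken from the article.

```python
import numpy as np

def levels_to_points(design, n_levels):
    """Map level q in {1, ..., s} of an s-level factor to (2q - 1) / (2s) in [0, 1)."""
    return (2.0 * design - 1.0) / (2.0 * n_levels)

def centered_l2_discrepancy(x):
    """Squared centered L2-discrepancy CD^2 of points x in [0, 1)^m, x of shape (n, m)."""
    n, m = x.shape
    d = np.abs(x - 0.5)
    term1 = np.prod(1.0 + 0.5 * d - 0.5 * d ** 2, axis=1).sum()
    diff = np.abs(x[:, None, :] - x[None, :, :])          # pairwise |x_ik - x_jk|
    term2 = np.prod(1.0 + 0.5 * d[:, None, :] + 0.5 * d[None, :, :] - 0.5 * diff, axis=2).sum()
    return (13.0 / 12.0) ** m - (2.0 / n) * term1 + term2 / n ** 2

def wraparound_l2_discrepancy(x):
    """Squared wrap-around L2-discrepancy WD^2 of points x in [0, 1)^m."""
    n, m = x.shape
    diff = np.abs(x[:, None, :] - x[None, :, :])
    term = np.prod(1.5 - diff * (1.0 - diff), axis=2).sum()
    return -(4.0 / 3.0) ** m + term / n ** 2

def pairwise_hamming_distances(design):
    """Hamming distances between all pairs of distinct rows of a factorial design."""
    neq = design[:, None, :] != design[None, :, :]
    h = neq.sum(axis=2)
    return h[np.triu_indices(design.shape[0], k=1)]

# Example: a 4-run, 3-factor, two-level design (levels coded 1 and 2).
D = np.array([[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]])
X = levels_to_points(D, n_levels=2)
print(centered_l2_discrepancy(X), wraparound_l2_discrepancy(X))
print(pairwise_hamming_distances(D))
```

For the 4 × 3 design in the example, every pair of distinct rows has Hamming distance 2, which is the equidistance property the article exploits.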
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 12131001, 12226343, 12371260, and 12371261, the National Ten Thousand Talents Program of China, and the 111 Project under Grant No. B20016.
Abstract: The theory of uniform design has received increasing interest because of its wide application in computer experiments. The generalized discrete discrepancy has been proposed to evaluate the uniformity of mixed-level factorial designs. In this paper, the authors give a lower bound on the generalized discrete discrepancy and provide construction methods for optimal mixed-level uniform designs that achieve this lower bound. These methods are all deterministic, avoiding the complexity of stochastic algorithms. Both saturated and supersaturated mixed-level uniform designs can be obtained with them. Moreover, the resulting designs are also χ²-optimal and minimum moment aberration designs.
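The χ²-optimality mentioned above compares, for every pair of columns, the observed counts of level combinations with the uniform frequency n/(q_k q_l). The following sketch computes the resulting pairwise χ² statistic, averaged over column pairs, in the standard form used for (super)saturated designs; the averaging convention and the example design are assumptions for illustration, and the generalized discrete discrepancy itself is not reproduced here.

```python
import numpy as np
from itertools import combinations, product

def chi_square_criterion(design, levels):
    """Average pairwise chi-square statistic of a mixed-level design.

    design : (n, m) array whose column j takes values in {0, ..., levels[j] - 1}
    levels : list of the numbers of levels q_1, ..., q_m
    """
    n, m = design.shape
    total, n_pairs = 0.0, 0
    for k, l in combinations(range(m), 2):
        expected = n / (levels[k] * levels[l])      # uniform count for each level pair
        for a, b in product(range(levels[k]), range(levels[l])):
            observed = np.sum((design[:, k] == a) & (design[:, l] == b))
            total += (observed - expected) ** 2 / expected
        n_pairs += 1
    return total / n_pairs

# Example: a 6-run mixed-level design with one 3-level and two 2-level columns.
D = np.array([[0, 0, 0],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 0],
              [2, 0, 0],
              [2, 1, 1]])
print(chi_square_criterion(D, levels=[3, 2, 2]))
```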
Abstract: The rapid improvement in convolutional neural network performance comes at the cost of ever-deeper stacks of layers and multiplying parameter counts and storage requirements, which not only causes problems such as overfitting during training but also hinders deployment on resource-constrained embedded devices. Model compression has been proposed to address these problems, and this paper focuses on feature distillation within model compression. To address the problem that guiding the student network with the teacher network's feature maps does not adequately exercise the student network's feature-fitting ability, a feature-distribution distillation algorithm is proposed. The algorithm uses the concept of conditional mutual information to construct a probability distribution over the model's feature space and introduces the maximum mean discrepancy (MMD) to design a loss function that minimizes the distance between the feature distributions of the teacher and student networks. On top of knowledge distillation, a Toeplitz matrix is used for weight sharing in the student network, further reducing the model's storage footprint. To verify the student network's feature-fitting ability under the feature-distribution distillation algorithm, experiments were conducted on three image-processing tasks: image classification, object detection, and semantic segmentation. The experiments show that the proposed algorithm outperforms the compared algorithms on all three tasks and supports distillation across different network architectures.
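The loss described above is built on the maximum mean discrepancy between teacher and student feature distributions. The following NumPy sketch shows the standard biased empirical estimate of MMD² with a Gaussian (RBF) kernel on flattened feature vectors; the bandwidth, the flattening step, and the function names are assumptions, and the paper's conditional-mutual-information construction and Toeplitz weight sharing are not reproduced.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth):
    """Gaussian (RBF) kernel matrix between the rows of a and the rows of b."""
    sq_dists = (np.sum(a ** 2, axis=1)[:, None]
                + np.sum(b ** 2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_squared(teacher_feats, student_feats, bandwidth=1.0):
    """Biased empirical estimate of MMD^2 between two batches of flattened features."""
    k_tt = rbf_kernel(teacher_feats, teacher_feats, bandwidth)
    k_ss = rbf_kernel(student_feats, student_feats, bandwidth)
    k_ts = rbf_kernel(teacher_feats, student_feats, bandwidth)
    return k_tt.mean() + k_ss.mean() - 2.0 * k_ts.mean()

# Example: batches of feature maps flattened to (batch, channels * height * width).
rng = np.random.default_rng(0)
t = rng.normal(size=(8, 64)).astype(np.float32)   # stand-in for teacher features
s = rng.normal(size=(8, 64)).astype(np.float32)   # stand-in for student features
print(mmd_squared(t, s, bandwidth=4.0))
```

In an actual distillation setup this term would be implemented in an autodiff framework and added to the task loss, pulling the student's feature distribution toward the teacher's.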