Abstract
Deep neural networks are both memory-intensive and computation-intensive, which prevents them from meeting the storage constraints of mobile devices. To address this, a storage compression method for deep neural networks is proposed. The method applies singular value decomposition (SVD) to the weight matrices, supplemented by network pruning: only a small number of neurons are retained during operation, and the number of connections per neuron is limited, reducing the scale of the weights. Through this combination of techniques, the deep neural network model is greatly compressed with as little loss of accuracy as possible, achieving compression ratios of 5× and 15×.
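The two techniques named in the abstract, truncated SVD of a weight matrix followed by magnitude-based pruning of the small connections, can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the layer size (256×512), the retained rank k = 32, and the 80% pruning ratio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))  # a hypothetical fully-connected weight matrix

# --- Step 1: low-rank compression via truncated SVD ---
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 32                         # retained rank (tunable)
U_k = U[:, :k] * s[:k]         # fold singular values into the left factor
V_k = Vt[:k, :]
W_approx = U_k @ V_k           # rank-k approximation of W

# Storage cost drops from m*n to k*(m+n):
params_before = W.size                # 256 * 512 = 131072
params_after = U_k.size + V_k.size    # 32 * (256 + 512) = 24576, roughly 5x smaller

# --- Step 2: magnitude-based pruning of the remaining weights ---
threshold = np.quantile(np.abs(W_approx), 0.8)  # drop the smallest 80% of weights
W_pruned = np.where(np.abs(W_approx) >= threshold, W_approx, 0.0)
sparsity = 1.0 - np.count_nonzero(W_pruned) / W_pruned.size
```

Storing the two factors `U_k` and `V_k` instead of `W` gives the low-rank saving, and the pruned matrix can additionally be stored in a sparse format; the product of the two effects is what yields the kind of multi-fold compression ratios the abstract reports.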
Author
CHEN Xiao-xiang (陈小祥), College of Information Technology, Anqing Vocational and Technical College, Anqing, Anhui 246003, China
Source
Journal of Guiyang University (Natural Sciences) 《贵阳学院学报(自然科学版)》
2024, No. 1, pp. 64-68 (5 pages)
Funding
2018 Anhui Provincial Universities Natural Science Research Project "Development and Application of a Teaching Process Support System" (Project No. KJ2018ZD067).
Keywords
deep neural network
storage compression method
network pruning technology
matrix singular value decomposition (SVD) method