Abstract
With the widespread deployment of Internet of Things devices, massive amounts of data are sent directly to the Internet, degrading network service quality. Although advances in deep neural network technology provide technical support for intelligent data identification and classification, the large number of parameters and computations makes these models difficult to apply on resource-constrained gateway devices. To this end, this paper proposes an approach that combines knowledge distillation with model early exit, reducing the computational cost of deep neural networks while preserving accuracy and offering selectable detection and recognition speeds. Experimental results on a real IPv6 network show that deploying the optimized model on gateway devices reduces network bandwidth usage by nearly 90%.
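The abstract names the two techniques but gives no implementation details. The following is a minimal PyTorch-style sketch, not the authors' code, of how knowledge distillation and an early-exit branch could be combined; the layer sizes, temperature, confidence threshold, and all function and class names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitStudent(nn.Module):
    """Small student network with one early (fast) exit and one final exit."""
    def __init__(self, in_dim=64, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.exit1 = nn.Linear(128, num_classes)   # early exit: cheap prediction
        self.block2 = nn.Sequential(nn.Linear(128, 128), nn.ReLU())
        self.exit2 = nn.Linear(128, num_classes)   # final exit: full-depth prediction

    def forward(self, x):
        h = self.block1(x)
        out1 = self.exit1(h)
        out2 = self.exit2(self.block2(h))
        return out1, out2

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target distillation loss (KL to the teacher) plus hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

@torch.no_grad()
def predict_with_early_exit(model, x, threshold=0.9):
    """Use the early exit when its softmax confidence is high enough;
    otherwise fall through to the final exit."""
    out1, out2 = model(x)
    conf, early_pred = F.softmax(out1, dim=1).max(dim=1)
    final_pred = out2.argmax(dim=1)
    return torch.where(conf > threshold, early_pred, final_pred)
```

In such a setup, training would apply the distillation loss at both exits against the teacher's logits, and at inference the confidence threshold trades detection speed against accuracy on the gateway device.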
Authors
XU Wenyuan (许文元), FANG Weiwei (方维维), MENG Na (孟娜)
School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044
Source
Modern Computer (《现代计算机》), 2021, Issue 15, pp. 66-71 (6 pages)
Funding
CERNET Next Generation Internet Technology Innovation Project (No. NGII20190308).
Keywords
Internet of Things
IPv6 Network
Deep Neural Network
Knowledge Distillation
Model Early Exit