Abstract
Tail-vegetable waste from kitchen preparation and post-meal waste differ considerably in texture. If the pre-treatment stage fails to identify the type of kitchen waste and to set suitable operating parameters accordingly, the volume-reduction equipment often performs poorly or even fails to run. Exploiting the visual differences between the two waste types, images of tail-vegetable waste from vegetables of different seasons and of post-meal waste from different dietary styles were collected and processed. On this basis, ResNet18 was adopted as the backbone network, and an attention mechanism was introduced to design a new deep learning model for kitchen waste recognition; comparisons were made among the ResNet18, ECANET+ResNet18, SENET+ResNet18, and SANET+ResNet18 models. The results show that all four networks achieve high accuracy: 96.73%, 97.10%, 97.28%, and 96.92%, respectively, with loss rates of 4.35%, 4.11%, 3.76%, and 4.17%. In terms of training time, ECANET+ResNet18 was the fastest, 350 s shorter than ResNet18. The ECANET+ResNet18 network effectively improves the performance of ResNet18, attaining the highest accuracy and the lowest loss rate, and can meet the requirements of machine recognition of kitchen waste.
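The paper does not publish its implementation, but the architecture named in the abstract (ResNet18 with efficient channel attention) can be sketched as follows. This is a minimal illustration only, assuming PyTorch with torchvision; the placement of the ECA block after each residual stage, the two-class output head, and the function name eca_resnet18 are assumptions, not the authors' published design.

```python
# Illustrative sketch only -- the paper does not release code.
# Assumes torchvision >= 0.13 (weights=None API); ECA placement after
# each residual stage and the 2-class head are assumptions.
import math
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling, a local 1-D
    convolution across channels, then a sigmoid gate on each channel."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapted to the channel count (ECA-Net heuristic),
        # rounded to the nearest odd integer.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (N, C, H, W) -> (N, C, 1, 1) channel descriptor
        y = self.pool(x)
        # Treat the C channels as a 1-D sequence for the local convolution.
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        return x * self.gate(y)

def eca_resnet18(num_classes: int = 2) -> nn.Module:
    """ResNet18 with an ECA block appended after each residual stage
    (hypothetical placement; the paper may insert attention elsewhere)."""
    net = resnet18(weights=None)
    for name in ("layer1", "layer2", "layer3", "layer4"):
        stage = getattr(net, name)
        channels = stage[-1].conv2.out_channels  # BasicBlock output width
        setattr(net, name, nn.Sequential(stage, ECA(channels)))
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

model = eca_resnet18(num_classes=2)           # e.g. tail-vegetable vs. post-meal
logits = model(torch.randn(1, 3, 224, 224))   # standard ImageNet-size input
print(logits.shape)                           # torch.Size([1, 2])
```

The same wrapper pattern would accommodate the other compared variants, e.g. swapping the ECA class for an SE or shuffle-attention (SA) block.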
Authors
LIU Zhi; GAO Dongming (College of Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China)
Source
Environmental Engineering (《环境工程》)
Indexed in: CAS, CSCD
2024, No. 3, pp. 254-260 (7 pages)
Funding
National Key Research and Development Program of China, "Research and Development of Technology and Equipment for the Treatment of Central Kitchen Dish Packaging and Processing Residues" (2018YFD0400804)
Keywords
food waste
deep learning
image processing
type identification
attention mechanism