Abstract: An attention-based multimodal human action recognition algorithm is proposed. To address the effective fusion of multimodal features, a two-stream feature-fusion convolutional network based on an attention mechanism (TAM3DNet, two-stream attention mechanism 3D network) is designed. The backbone is an attention 3D network (AM3DNet, attention mechanism 3D network) that incorporates an attention mechanism: feature maps are weighted by attention maps to obtain weighted action features, so that the network focuses on limb-motion regions and weakens the influence of the background and static limb regions. The color and depth modalities of RGB-D data are fed separately into the two streams, color and depth action features are obtained from the two branch networks, and the fused features are then classified to produce the human action recognition result.
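A minimal sketch of the attention-weighted two-stream fusion described above, written in PyTorch under stated assumptions: the module names, channel sizes, sigmoid attention map, and concatenation-based fusion are illustrative choices, since the abstract does not specify the exact layers or fusion operator.

```python
import torch
import torch.nn as nn

class AM3DBlock(nn.Module):
    """Sketch of the AM3DNet idea: 3D conv features weighted by an attention map,
    so limb-motion regions are emphasized and static/background regions suppressed.
    Channel sizes are illustrative, not taken from the paper."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.attn = nn.Sequential(
            nn.Conv3d(out_ch, 1, kernel_size=1),  # single-channel attention map
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = torch.relu(self.conv(x))
        attn_map = self.attn(feat)        # (N, 1, T, H, W)
        return feat * attn_map            # attention-weighted action features


class TwoStreamFusionNet(nn.Module):
    """Two-stream network: separate RGB and depth branches, fused before the classifier."""
    def __init__(self, num_classes):
        super().__init__()
        self.rgb_branch = AM3DBlock(3, 64)    # color frames: 3 input channels
        self.depth_branch = AM3DBlock(1, 64)  # depth maps: 1 input channel
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.classifier = nn.Linear(64 * 2, num_classes)

    def forward(self, rgb, depth):
        f_rgb = self.pool(self.rgb_branch(rgb)).flatten(1)
        f_depth = self.pool(self.depth_branch(depth)).flatten(1)
        fused = torch.cat([f_rgb, f_depth], dim=1)  # concatenation as one plausible fusion
        return self.classifier(fused)


# Example: a batch of 2 clips, 8 frames of 112x112
rgb = torch.randn(2, 3, 8, 112, 112)
depth = torch.randn(2, 1, 8, 112, 112)
logits = TwoStreamFusionNet(num_classes=10)(rgb, depth)
print(logits.shape)  # torch.Size([2, 10])
```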
Abstract: Recently, video-based flame detection has become an important approach for early detection of fire under complex circumstances. However, the detection accuracy of most existing methods remains unsatisfactory. In this paper, we develop a new algorithm that can significantly improve the accuracy of flame detection in video images. The algorithm segments a video image and obtains areas that may contain flames by combining a two-step clustering-based approach with the RGB color model. A few new dynamic and hierarchical features associated with the suspected regions, including the flicker frequency of flames, are then extracted and analyzed. The algorithm determines whether a suspected region contains flames by processing the color and dynamic features of the area together with a classifier, which can be a BP neural network, a k-nearest-neighbor classifier, or a support vector machine. Testing results show that the algorithm is robust and efficient, and significantly reduces the probability of false alarms.
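A minimal sketch of the kind of pipeline the abstract describes, assuming NumPy and scikit-learn: a single k-means pass stands in for the paper's two-step clustering, the RGB color rule and thresholds are illustrative, and the feature set (region area statistics plus an FFT-based flicker-frequency estimate) is only a stand-in for the paper's dynamic and hierarchical features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def candidate_flame_mask(frame, n_clusters=4):
    """Segment one RGB frame (H, W, 3) and keep clusters whose mean color looks
    flame-like under a simple RGB rule (bright, with R >= G >= B).
    One k-means pass approximates the paper's two-step clustering."""
    h, w, _ = frame.shape
    pixels = frame.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    mask = np.zeros(h * w, dtype=bool)
    for c in range(n_clusters):
        r, g, b = pixels[labels == c].mean(axis=0)
        if r > 150 and r >= g >= b:          # illustrative RGB flame rule
            mask |= labels == c
    return mask.reshape(h, w)

def flicker_frequency(region_areas, fps):
    """Estimate the dominant temporal frequency of a suspected region's area
    across frames; real flames typically flicker at a few Hz to ~12 Hz."""
    x = np.asarray(region_areas, dtype=np.float64)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC component

def clip_features(frames, fps):
    """One feature vector per clip: mean flame-colored area, its temporal
    variance, and the estimated flicker frequency (illustrative feature set)."""
    areas = [candidate_flame_mask(f).sum() for f in frames]
    return np.array([np.mean(areas), np.var(areas), flicker_frequency(areas, fps)])

# Hypothetical usage with an SVM, one of the classifiers the abstract mentions;
# `train_clips` (lists of frames) and `train_labels` (1 = flame) are placeholders.
# X = np.stack([clip_features(clip, fps=25) for clip in train_clips])
# clf = SVC(kernel="rbf").fit(X, train_labels)
# prediction = clf.predict(clip_features(test_clip, fps=25).reshape(1, -1))
```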