Funding: Supported, in part, by the National Natural Science Foundation of China under Grant Numbers 62272236 and 62376128, and, in part, by the Natural Science Foundation of Jiangsu Province under Grant Numbers BK20201136 and BK20191401.
Abstract: Fall behavior is closely related to high mortality in the elderly, so fall detection has become an important and urgent research area. However, existing fall detection methods are difficult to apply in daily life because of their heavy computational load and poor detection accuracy. To address these problems, this paper proposes a dense spatial-temporal graph convolutional network based on lightweight OpenPose. Lightweight OpenPose uses MobileNet as its feature extraction network, and its prediction layer adopts a bottleneck-asymmetric structure, which reduces the amount of network computation. The bottleneck-asymmetric structure compresses the number of input channels of the feature maps with a 1×1 convolution and replaces the 7×7 convolution with an asymmetric structure of 1×7, 7×1, and 7×7 convolutions in parallel. The spatial-temporal graph convolutional network divides the multi-layer convolution into dense blocks, and the convolutional layers within each dense block are densely connected, which improves feature transitivity, enhances the network's ability to extract features, and thus improves detection accuracy. Two representative datasets, the Multiple Cameras Fall dataset (MCF) and the Nanyang Technological University Red Green Blue + Depth Action Recognition dataset (NTU RGB+D), are selected for our experiments; NTU RGB+D has two evaluation benchmarks. The results show that the proposed model is superior to current fall detection models. The accuracy of the network on the MCF dataset is 96.3%, and the accuracies on the two evaluation benchmarks of the NTU RGB+D dataset are 85.6% and 93.5%, respectively.
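The abstract above describes the bottleneck-asymmetric structure only at a high level. The following is a minimal sketch, assuming a PyTorch implementation: a 1×1 convolution compresses the input channels, and three parallel branches (1×7, 7×1, and 7×7 convolutions) replace the single 7×7 convolution. The class name, channel widths, and the merge-by-summation of the parallel branches are illustrative assumptions not taken from the paper.

```python
# Hypothetical sketch of the bottleneck-asymmetric block described in the abstract.
import torch
import torch.nn as nn


class BottleneckAsymmetricBlock(nn.Module):
    def __init__(self, in_channels: int, mid_channels: int, out_channels: int):
        super().__init__()
        # 1x1 convolution that compresses the number of input channels
        self.compress = nn.Conv2d(in_channels, mid_channels, kernel_size=1)
        # Three parallel branches replacing a single 7x7 convolution
        self.branch_1x7 = nn.Conv2d(mid_channels, out_channels,
                                    kernel_size=(1, 7), padding=(0, 3))
        self.branch_7x1 = nn.Conv2d(mid_channels, out_channels,
                                    kernel_size=(7, 1), padding=(3, 0))
        self.branch_7x7 = nn.Conv2d(mid_channels, out_channels,
                                    kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.compress(x)
        # Assumed merge: element-wise sum of the three parallel branches
        # (the abstract does not state how the branches are combined)
        return self.branch_1x7(x) + self.branch_7x1(x) + self.branch_7x7(x)


if __name__ == "__main__":
    block = BottleneckAsymmetricBlock(in_channels=128, mid_channels=32,
                                      out_channels=128)
    out = block(torch.randn(1, 128, 46, 46))
    print(out.shape)  # torch.Size([1, 128, 46, 46])
```

Because all three branches use "same" padding and share the output channel count, their feature maps align and can be summed directly; the 1×7 and 7×1 branches capture the same receptive field as the 7×7 branch at a fraction of its parameter cost.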
Funding: Supported by the National Philosophy and Social Sciences Foundation (Grant No. 20BTQ065).
Abstract: (Aim) Chinese sign language is an essential tool for hearing-impaired people to live, learn, and communicate in deaf communities. Moreover, Chinese sign language plays a significant role in speech therapy and rehabilitation. Chinese sign language identification can provide convenience for hearing-impaired people and eliminate the communication barrier between the deaf community and the rest of society. As in many areas of biomedical image processing (such as automatic chest radiograph processing and diagnosis of chest radiological images), the rapid development of artificial intelligence, especially deep learning technologies and algorithms, has greatly advanced sign language image recognition. This study aims to propose a novel sign language image recognition method based on an optimized convolutional neural network. (Method) Three different block combinations, Conv-BN-ReLU-Pooling, Conv-BN-ReLU, and Conv-BN-ReLU-BN, were employed, together with advanced techniques such as batch normalization, dropout, and Leaky ReLU. We proposed an optimized convolutional neural network, called the CNN-CB method, to identify 1320 sign language images. Ten runs were carried out in total, with the hold-out split set randomly for each run. (Results) The results indicate that our CNN-CB method achieved an overall accuracy of 94.88 ± 0.99%. (Conclusion) Our CNN-CB method is superior to thirteen state-of-the-art methods: eight traditional machine learning approaches and five modern convolutional neural network approaches.
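As a concrete illustration of the three block types named in the abstract (Conv-BN-ReLU-Pooling, Conv-BN-ReLU, Conv-BN-ReLU-BN), the sketch below builds one block of each kind in PyTorch, using Leaky ReLU as the activation. The kernel sizes, channel widths, dropout rate, class count, and the way the blocks are stacked into a classifier are assumptions for illustration only, not the authors' CNN-CB configuration.

```python
# Hypothetical sketch of the three block variants named in the abstract.
import torch
import torch.nn as nn


def conv_bn_relu_pool(in_ch: int, out_ch: int) -> nn.Sequential:
    # Conv-BN-ReLU-Pooling block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.01),
        nn.MaxPool2d(kernel_size=2),
    )


def conv_bn_relu(in_ch: int, out_ch: int) -> nn.Sequential:
    # Conv-BN-ReLU block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.01),
    )


def conv_bn_relu_bn(in_ch: int, out_ch: int) -> nn.Sequential:
    # Conv-BN-ReLU-BN block (a second batch-norm after the activation)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.01),
        nn.BatchNorm2d(out_ch),
    )


# Illustrative stack: one block of each type, then dropout and a linear head.
# The number of output classes (30) is an assumption, not taken from the paper.
model = nn.Sequential(
    conv_bn_relu_pool(3, 32),
    conv_bn_relu(32, 64),
    conv_bn_relu_bn(64, 64),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Dropout(0.5),
    nn.Linear(64, 30),
)

print(model(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 30])
```

The sketch only shows how the named block combinations compose; the actual CNN-CB depth, ordering, and hyperparameters would follow the paper.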