Funding: Supported by the Ministry of Science and Technology of China (2023YFB4402301), the NSFC for Distinguished Young Scholars (No. 52025022), the NSFC Program (Nos. 11974072, U19A2091, 62004016, 52072065, 52372137, U23A20568), the '111' Project (No. B13013), the Fundamental Research Funds for the Central Universities (No. 2412023YQ004), and funds from Jilin Province (Nos. YDZJ202101ZYTS021, 2412021ZD003, 20220502002GH, 20230402072GH).
Abstract: The integration of sensory information from different modalities, such as touch and vision, is essential for organisms to perform behavioral functions such as decision-making, learning, and memory. Artificial implementation of human multi-sensory perception in electronic hardware is of great significance for achieving efficient human–machine interaction. Thanks to their structural and functional similarity to biological synapses, memristors are emerging as promising nanodevices for developing artificial neuromorphic perception. Memristive devices can sense multidimensional signals including light, pressure, and sound, and their in-sensor computing architecture represents an ideal platform for efficient multimodal perception. We review recent progress in multimodal memristive technology and its application to the neuromorphic perception of complex stimuli carrying visual, olfactory, auditory, and tactile information. At the device level, we also introduce the operation models and underlying mechanisms. Finally, we discuss the challenges and prospects associated with this rapidly progressing field of research.
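The device-level operation models the review refers to are typically state-variable models of conductance change. Purely as an illustration of what such a model looks like (it is not taken from the review itself), the sketch below implements the textbook HP linear ion-drift memristor model with a Joglekar window, showing how voltage pulses potentiate and depress the device conductance like a synaptic weight. All parameter values are hypothetical.

```python
# Minimal sketch of the textbook HP linear ion-drift memristor model
# (Strukov et al., 2008). Illustrative only; parameters are hypothetical.
import numpy as np

R_ON, R_OFF = 100.0, 16e3  # ohms: fully doped / fully undoped resistance
D = 10e-9                  # m: device thickness
MU_V = 1e-14               # m^2 s^-1 V^-1: dopant mobility
P = 10                     # Joglekar window exponent

def window(x, p=P):
    """Joglekar window: suppresses ion drift near the state boundaries."""
    return 1.0 - (2.0 * x - 1.0) ** (2 * p)

def simulate(voltage, dt, x0=0.1):
    """Integrate the state variable x = w/D under a voltage waveform."""
    x = x0
    conductance = []
    for v in voltage:
        r = R_ON * x + R_OFF * (1.0 - x)    # series resistance of the regions
        i = v / r                           # device current
        x += MU_V * R_ON / D**2 * i * window(x) * dt
        x = min(max(x, 1e-3), 1.0 - 1e-3)   # avoid Joglekar boundary lock-up
        conductance.append(1.0 / r)
    return np.array(conductance)

# Synapse-like behavior: positive pulses raise conductance (potentiation),
# negative pulses lower it (depression).
dt = 1e-3
pulses = np.concatenate([np.full(2000, 1.0), np.full(2000, -1.0)])
g = simulate(pulses, dt)
print(f"G after potentiation: {g[1999]:.2e} S; after depression: {g[-1]:.2e} S")
```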
Funding: Supported by the Beijing Key Laboratory of Behavior and Mental Health, Peking University.
Abstract: The fusion technique is key to the multimodal emotion recognition task. Recently, cross-modal attention-based fusion methods have demonstrated high performance and strong robustness. However, cross-modal attention suffers from redundant features and does not capture complementary features well. We find that it is not necessary to use the entire information of one modality to reinforce the other during cross-modal interaction, and that the features capable of reinforcing a modality may constitute only a part of it. To this end, we design an innovative Transformer-based Adaptive Cross-modal Fusion Network (TACFN). Specifically, to address redundant features, we have one modality perform intra-modal feature selection through a self-attention mechanism, so that the selected features can adaptively and efficiently interact with the other modality. To better capture the complementary information between the modalities, we obtain a fused weight vector by concatenation and use this weight vector to reinforce the features of each modality. We apply TACFN to the RAVDESS and IEMOCAP datasets. For a fair comparison, we use the same unimodal representations to validate the effectiveness of the proposed fusion method. The experimental results show that TACFN brings a significant performance improvement over other methods and achieves state-of-the-art performance. All code and models are available at https://github.com/shuzihuaiyu/TACFN.
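Since only the abstract is reproduced here, the following is a hedged sketch of what a TACFN-style fusion block could look like in PyTorch: intra-modal self-attention for feature selection, cross-modal attention for reinforcement, and a concatenation-derived weight vector for gating. The layer sizes, module names, and gating form are assumptions, not the authors' implementation; the actual code is at the GitHub link above.

```python
# Hedged sketch of a TACFN-style fusion block, reconstructed from the
# abstract alone; dimensions, layers, and names are assumptions.
import torch
import torch.nn as nn

class AdaptiveCrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Intra-modal self-attention: lets each modality select its own
        # salient features before interacting with the other modality.
        self.select_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.select_v = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Cross-modal attention: each modality attends to the *selected*
        # features of the other, avoiding redundant interaction.
        self.cross_av = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_va = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Fused weight vector from the concatenated representations,
        # squashed to (0, 1) and used to gate (reinforce) the features.
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.Sigmoid())

    def forward(self, audio, video):
        # audio, video: (batch, seq_len, dim) unimodal representations
        a_sel, _ = self.select_a(audio, audio, audio)  # feature selection
        v_sel, _ = self.select_v(video, video, video)
        a_re, _ = self.cross_av(audio, v_sel, v_sel)   # cross reinforcement
        v_re, _ = self.cross_va(video, a_sel, a_sel)
        fused = torch.cat([a_re.mean(1), v_re.mean(1)], dim=-1)  # concatenate
        w = self.gate(fused)                                     # weight vector
        return fused * w                                         # reinforcement

# Example: fuse dummy audio/visual sequences of different lengths.
fusion = AdaptiveCrossModalFusion()
out = fusion(torch.randn(8, 50, 256), torch.randn(8, 30, 256))
print(out.shape)  # torch.Size([8, 512])
```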