With the continuous development of deep learning and artificial neural networks (ANNs), algorithmic composition has gradually become a popular research field. To address the music-style problem in generating chord music, a multi-style chord music generation (MSCMG) network is proposed, building on a previous ANN for composition. The network adds a music-style extraction module and a style extractor to the original architecture: the music-style extraction module divides the entire musical representation into two parts, the music-style information M_style and the music content information M_content, while the style extractor removes the music-style information entangled in the music content information. This paper compares the similarity of music generated by different models and evaluates whether the model can learn composition rules from the database. Experiments show that the proposed model can generate musical works in the expected style; compared with the long short-term memory (LSTM) network, the MSCMG network achieves a measurable improvement in rendering music styles.
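The style/content split described in the abstract can be illustrated with a toy sketch. Note that this is purely an illustrative assumption: the function name, the vector representation, and the projection-based "extractor" below are hypothetical stand-ins, not the MSCMG network's learned modules.

```python
import math

def extract_style(m_full, style_dir):
    """Split a feature vector into (M_style, M_content) by projecting
    out the component lying along a known style direction.
    Toy sketch: the actual MSCMG network learns this disentanglement."""
    # Normalize the style direction to unit length.
    norm = math.sqrt(sum(s * s for s in style_dir))
    unit = [s / norm for s in style_dir]
    # Projection coefficient of the full vector onto the style axis.
    coeff = sum(f * u for f, u in zip(m_full, unit))
    # M_style is the component along the style axis;
    # M_content is the residual, with the style information removed.
    m_style = [coeff * u for u in unit]
    m_content = [f - s for f, s in zip(m_full, m_style)]
    return m_style, m_content

m_style, m_content = extract_style([3.0, 4.0], [1.0, 0.0])
print(m_style, m_content)
```

The key property the sketch preserves is that recombining the two parts recovers the original representation, which is what lets a generator swap in a different M_style while keeping M_content fixed.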
With the rapid development of artificial intelligence and natural language processing (NLP), research on music retrieval has gained importance. Music conveys emotional signals, and the emotional classification of music helps in conveniently organizing and retrieving it; it is also a prerequisite for using music in psychological intervention and physiological adjustment. A new chord-to-vector method is proposed, which converts the chord information of a piece into a chord vector and combines the weighted Mel-frequency cepstral coefficients (MFCCs) and residual phase (RP) with the feature fusion of a cochleagram. Music emotion recognition and classification are trained using a fusion of a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM). In addition, the proposed model is compared with other model structures on a self-collected dataset. The results show that the proposed method achieves higher recognition accuracy than the other models.
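The abstract does not specify how chords are vectorized, so the sketch below is only one plausible scheme: encoding a chord symbol as a 12-dimensional multi-hot vector over pitch classes. All names and the interval table are illustrative assumptions, not the paper's actual chord-to-vector model.

```python
# Toy chord-to-vector sketch: a chord becomes a 12-dim multi-hot
# vector over pitch classes (C=0 ... B=11). Hypothetical encoding,
# not the paper's method.

PITCH_CLASS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
               "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

# Chord qualities as semitone intervals above the root.
CHORD_INTERVALS = {"maj": (0, 4, 7), "min": (0, 3, 7), "dom7": (0, 4, 7, 10)}

def chord_to_vector(root, quality):
    """Return a 12-dimensional multi-hot vector for the given chord."""
    vec = [0] * 12
    for interval in CHORD_INTERVALS[quality]:
        vec[(PITCH_CLASS[root] + interval) % 12] = 1
    return vec

print(chord_to_vector("C", "maj"))  # sets indices 0, 4, 7 (C, E, G)
```

A fixed-length vector like this could then be concatenated with frame-level audio features (MFCC, RP, cochleagram) before being fed to a CNN-BiLSTM classifier.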
Funding: National Natural Science Foundation of China (No. 61801106).