
Identifying disaster related social media for rapid response: a visual-textual fused CNN architecture (Cited by: 1)

Abstract: In recent years, social media platforms have played a critical role in mitigating a wide range of disasters. Highly up-to-date social responses and vast spatial coverage from millions of citizen sensors enable timely and comprehensive disaster investigation. However, automatic retrieval of on-topic social media posts, especially one that considers both their visual and textual information, remains a challenge. This paper presents an automatic approach to labeling on-topic social media posts using visual-textual fused features. Two convolutional neural networks (CNNs), an Inception-V3 CNN and a word-embedding CNN, are applied to extract visual and textual features, respectively, from social media posts. After the two CNNs are trained on our training sets, the extracted visual and textual features are concatenated into a fused feature that feeds the final classification step. The results suggest that both CNNs perform remarkably well in learning visual and textual features. The fused feature shows that adding the visual feature yields more robust classification than using the textual feature alone. The on-topic posts, classified automatically by their texts and pictures, provide timely disaster documentation during an event. Coupled with rich spatial context when geotagged, social media could greatly aid a variety of disaster mitigation approaches.
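The fusion step described in the abstract — concatenating the visual feature vector from Inception-V3 with the textual feature vector from the word-embedding CNN before classification — can be sketched minimally as follows. This is an illustration under assumptions, not the authors' code: the 2048-d visual dimension matches Inception-V3's pooled output, but the 128-d textual dimension and the random stand-in vectors are hypothetical.

```python
import numpy as np

# Assumed feature dimensions: Inception-V3's global-pooled output is 2048-d;
# the text CNN's 128-d output is a hypothetical choice for illustration.
VIS_DIM, TXT_DIM = 2048, 128

def fuse_features(visual: np.ndarray, textual: np.ndarray) -> np.ndarray:
    """Concatenate visual and textual feature vectors into one fused feature."""
    return np.concatenate([visual, textual], axis=-1)

rng = np.random.default_rng(0)
v = rng.standard_normal(VIS_DIM)   # stand-in for an Inception-V3 image feature
t = rng.standard_normal(TXT_DIM)   # stand-in for a word-embedding CNN text feature
fused = fuse_features(v, t)
print(fused.shape)  # (2176,)
```

The fused 2176-d vector would then be passed to a final classifier (e.g. a fully connected layer) to label the post as on-topic or off-topic.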
Source: International Journal of Digital Earth (SCIE), 2020, Issue 9, pp. 1017-1039 (23 pages).
Funding: University of South Carolina [grant number 13540-18-48955].