Publication: Multimodal Sentiment Analysis of Social Media through Deep Learning Approach
Date: 2020-06
Authors: An, Jieyu
Abstract
Multimodal data, characterized by its inherent complexity and heterogeneity, presents computational challenges in comprehending social media content. Conventional approaches to sentiment analysis often rely on unimodal pre-trained models for feature extraction from each modality, neglecting the intrinsic connections of semantic information between modalities, as these models are typically trained on unimodal data. Additionally, existing multimodal sentiment analysis methods primarily focus on acquiring image representations while disregarding the rich semantic information contained within the images. Furthermore, current methods often overlook the significance of color information, which provides valuable insights and significantly influences sentiment classification. Addressing these gaps, this thesis explores deep learning-based methods for multimodal sentiment analysis, emphasizing the semantic association between multimodal data, information interaction, and color sentiment modeling from the perspectives of the multimodal representation layer, the multimodal interaction layer, and the color information integration layer. To mitigate the overlooked semantic interrelations between modalities, the thesis introduces "joint representation learning for multimodal sentiment analysis" within the representation layer. This method, validated by rigorous experiments, showcases a marked improvement in accuracy, achieving 76.44% on the MVSA-Single and 72.29% on the MVSA-Multiple datasets, surpassing existing methodologies. In the multimodal interaction layer, …
Keywords: Multimodal sentiment analysis of social media
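Since only the abstract is available on this page, the sketch below illustrates the general idea behind the "joint representation learning" it describes: drawing image and text features from a single multimodal pre-trained encoder, whose embeddings already share a semantic space, instead of from two separately trained unimodal models. The choice of CLIP, fusion by concatenation, and the linear classifier head are illustrative assumptions, not the thesis's actual architecture.

    # Minimal sketch of joint image-text sentiment classification.
    # Assumptions (not from the thesis): CLIP as the multimodal encoder,
    # concatenation fusion, a linear head over 3 sentiment classes.
    import torch
    import torch.nn as nn
    from transformers import CLIPModel, CLIPProcessor
    from PIL import Image

    class JointSentimentClassifier(nn.Module):
        def __init__(self, num_classes: int = 3):
            super().__init__()
            # CLIP is pre-trained on paired image-text data, so its image and
            # text embeddings live in one shared semantic space -- the property
            # the abstract argues unimodal encoders lack.
            self.clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
            embed_dim = self.clip.config.projection_dim  # 512 for this checkpoint
            # Simple fusion: concatenate the two projected embeddings.
            self.head = nn.Linear(2 * embed_dim, num_classes)

        def forward(self, pixel_values, input_ids, attention_mask):
            out = self.clip(pixel_values=pixel_values,
                            input_ids=input_ids,
                            attention_mask=attention_mask)
            joint = torch.cat([out.image_embeds, out.text_embeds], dim=-1)
            return self.head(joint)  # logits over {negative, neutral, positive}

    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    model = JointSentimentClassifier()
    # Toy input: one blank image paired with one caption.
    batch = processor(text=["what a beautiful sunset!"],
                      images=Image.new("RGB", (224, 224)),
                      return_tensors="pt", padding=True)
    logits = model(batch["pixel_values"], batch["input_ids"],
                   batch["attention_mask"])

In practice the classifier head would be fine-tuned on a labeled image-text sentiment dataset such as MVSA-Single or MVSA-Multiple, the benchmarks the abstract reports results on.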