Advancing Automotive Business Strategy through Multimodal Aspect-Based Sentiment Analysis Using SSLU-GRU and YOLO
Abstract
Sentiment analysis (SA) has become a key tool for understanding consumer feedback in the automotive industry. However, most existing models are limited to unimodal data and fail to capture fine-grained, aspect-level sentiment from multimodal sources such as text, images, and video. Privacy concerns around user-generated content also remain under-addressed. This study proposes a novel Multimodal Aspect-Based Sentiment Analysis (MASA) framework that integrates textual, visual, and video data to support business decision-making in the automotive sector. The framework comprises a BERT-based aspect dictionary for extracting domain-specific features, SCV-YOLOv5 for object segmentation in images and videos, and a GRU model enhanced with the Sinu-Sigmoidal Linear Unit (SSLU) activation function for sentiment classification. A K-Anonymity method augmented with Kendall's Tau and Spearman's Rank Correlation protects user privacy in the sentiment data. The framework was evaluated on the MuSe-CaR dataset, covering more than 60 car brands with 10,000 data samples per brand. The proposed model achieved 98.94% classification accuracy, outperforming baseline models such as BiLSTM and CNN with a Mean Absolute Error of 0.14, an RMSE of 1.01, and an F1-score of 98.15%. Privacy-preservation tests likewise showed superior performance, with a 98% privacy-preserving rate and lower information loss than traditional methods. The results demonstrate that combining multimodal input with deep learning and privacy-aware techniques significantly improves the accuracy and reliability of sentiment analysis in automotive business contexts, enabling closer alignment of consumer feedback with strategic decisions such as product development and targeted marketing.
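The abstract does not give the SSLU formula; the full paper defines it. Purely as an illustrative sketch, the code below assumes a hypothetical SiLU-like form with a sinusoidal term, f(x) = x · σ(x) + sin(x) (this composition is the author of this sketch's assumption, not the published definition), and shows how such an activation could be applied to a GRU's final hidden state ahead of a three-class sentiment head.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sslu(x):
    # Hypothetical Sinu-Sigmoidal Linear Unit: a SiLU term (x * sigmoid(x))
    # plus a sinusoidal component. The paper's exact formula may differ.
    return x * sigmoid(x) + np.sin(x)

# Toy example: transform the last GRU hidden state, then project to
# logits for three sentiment classes (positive / neutral / negative).
rng = np.random.default_rng(0)
h = rng.standard_normal(8)        # final GRU hidden state (toy size 8)
W = rng.standard_normal((3, 8))   # classifier weights for 3 classes
logits = W @ sslu(h)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over classes
```

In practice the activation would be registered inside the recurrent cell or the classification head of a deep-learning framework; the numpy version above only demonstrates the elementwise shape of the assumed function.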
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.